Rise of the Machines: A.I. Can Now Tell When You’re Lying

I think it’s been at least a month since I’ve posted something in our “Rise of the Machines” segment, and for that I apologize.  It’s just that movies and television have kept me so busy lately that I haven’t devoted enough time to scanning the internet for stories proving that we are eventually going to be taken over by machines.  Today I found such a story, and it goes to show that we’re all basically dead meat; we just don’t know it yet.   According to Geek:

What if an AI could read a user’s mouse movements and determine if they were being truthful? That was the question Giuseppe Sartori, a forensic neuroscientist and study author, wanted to answer. To test this, he asked a number of volunteers to either memorize a fake identity or be truthful about themselves. Subjects were then asked a series of yes-or-no questions on a computer test. The questions were simple, like “Were you born in [year]?” But mixed in with those simple ones were slightly more complex ones, like “Is [x] your zodiac sign?”

The hope is to throw off would-be identity thieves just by catching them off guard. A fraud might memorize the basics – name, birthdate, address – but not the connecting details. If you were born in Oklahoma and you’re pretending to be someone in California, and someone asks you the capital of your home state, you’ll have to stop and think for a moment. That hesitation was evident in cursor movements.

Experimenters found that when they fed their machine learning algorithms data on the subjects’ mouse paths, they were able to catch liars an incredible 95% of the time. It’s a lot like Google’s new “I am not a robot” button. Many bots tend to move in straight, clean lines, while humans are a lot more… imprecise. By reading basic cursor data, it’s not hard to sort out the humans from the software. If the team can make it just a bit more accurate, this could be a valuable new tool in the fight against identity theft.
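The article doesn’t spell out which mouse-path measurements the researchers actually fed their algorithms, but here’s a rough, illustrative sketch of what “basic cursor data” might look like in practice: from a series of timestamped cursor positions, you can compute how direct the movement was and how often the user paused – exactly the kind of hesitation signal described above. The feature choices and the pause threshold here are my own assumptions, not details from the study.

```python
import math

def extract_features(samples, pause_threshold=0.25):
    """Compute two simple cursor features from (time, x, y) samples:
    straightness (straight-line distance divided by total path length,
    where 1.0 means a perfectly direct movement) and the number of
    gaps between samples longer than pause_threshold seconds.
    These are illustrative features, not the ones used in the study."""
    path_len = 0.0
    pauses = 0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        path_len += math.hypot(x1 - x0, y1 - y0)
        if t1 - t0 > pause_threshold:
            pauses += 1
    (_, xs, ys), (_, xe, ye) = samples[0], samples[-1]
    direct = math.hypot(xe - xs, ye - ys)
    straightness = direct / path_len if path_len else 1.0
    return straightness, pauses

# A confident, direct movement versus a hesitant, wandering one
# (hypothetical trajectories for illustration).
direct_path = [(0.0, 0, 0), (0.1, 50, 50), (0.2, 100, 100)]
hesitant_path = [(0.0, 0, 0), (0.5, 30, 80), (1.2, 60, 20), (1.3, 100, 100)]
```

A real system would collect many such trajectories per answer and hand the feature vectors to a trained classifier; the 95% figure presumably comes from a setup along those lines, though the article gives no specifics.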

In theory this is actually a very positive development for A.I.  Being able to tell whether a person is lying would be excellent for solving crimes and other heinous acts committed online and elsewhere.  However, like all great developments, it could have awful consequences if and when (more like when) machines become as intelligent as humans.  I don’t know, all of this stuff scares the hell out of me.
