Stanford Professor: Face-Reading AI Will Detect Your Political and Sexual Orientation

Opinions about facial recognition technology and artificial intelligence vary widely. Combining these two powerful technologies could have remarkable or disastrous consequences. Stanford University professor Michal Kosinski predicts that face-reading AI will one day be able to detect a person's IQ and even their political orientation. If true, that is a deeply unsettling prospect.

Face-Reading AIs Can Become a Big Problem

On paper, combining facial recognition technology with artificial intelligence makes a lot of sense. From a security standpoint, it would let law enforcement agencies track criminals or suspects proactively rather than simply following clues left behind. From a privacy standpoint, however, the two should not be combined, at least not until proper laws are drafted to ensure this "unholy marriage" does not result in an invasion of consumer privacy.

Stanford University professor Michal Kosinski sees an interesting, albeit frightening, future ahead when these two technologies converge. In his opinion, artificial intelligence will at some point be able to detect a person's sexual orientation from photos. That claim set off a storm on social media, where many people were understandably displeased. More troubling still, there is little these combined technologies could not plausibly claim to infer, which makes the prospect far scarier than most people realize.

Kosinski goes further, suggesting that a face-reading artificial intelligence could determine other aspects of a person's private life. It would, in theory, be possible to infer someone's political leanings from a photo. Since politics is a perennial source of controversy in the United States, such a tool would be quite valuable to interested parties. The same technology could purportedly be used to estimate people's IQs, predict criminal behavior, and reveal other very personal traits.
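The worrying part is how little machinery such predictions require once photos can be turned into numerical features. The minimal sketch below, offered purely as an illustration and not as Kosinski's actual method or data, shows the general pattern these systems follow: a pretrained face-recognition model converts photos into embedding vectors, and a simple classifier is trained on top of them to predict a sensitive label. All the data here is randomly generated placeholder material.

```python
# Hypothetical sketch: the generic "face embedding + simple classifier" pattern.
# Real systems would replace the random numbers below with features extracted
# from photos by a pretrained face-recognition network; everything here is a
# placeholder for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for face embeddings: 1,000 "faces", 128 numbers each.
embeddings = rng.normal(size=(1000, 128))
# Stand-in for a sensitive label (e.g., self-reported political affiliation).
labels = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0
)

# A simple linear classifier on top of the embeddings is all it takes to
# repurpose a face-recognition pipeline for trait prediction.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.50 on random data
```

On random placeholder data the accuracy hovers around chance, but the point stands: whether such a classifier finds a real signal depends entirely on the data fed into it, not on any exotic new technology.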

Combining AI with facial recognition tools would create severe privacy risks. If all of that information could be derived accurately from a single photo, the U.S. would drift toward something resembling a police state. That is not an outcome anyone should look forward to, for obvious reasons. Facial recognition technology can already produce disturbing side effects in ways most people do not fully grasp, and it remains a direct threat to everyone's privacy even in its current "dumb" form.

This also raises the question of whether facial recognition technology should be coupled with artificial intelligence in the first place. Making an already invasive system smarter and more thorough can yield positive and negative results alike. Human faces reveal a great deal about who we are and what we do. Technology that can build nearly complete profiles of people from their facial features alone is not something any of us should want. With photos more public than ever, mainly thanks to social media, it would take little effort to turn our freely shared information against us.

No one will be surprised to hear that Kosinski's research is considered highly controversial. It will undoubtedly spark new debate about issues that genuinely matter. Tools like these could easily be put to work for specific political or social agendas, something everyone should actively oppose. It is even possible that governments and other bad actors already have such technology at their disposal. If so, no one knows how it may be used or what the future holds.