Top 10 Terrifying Developments In Artificial Intelligence

The singularity is nigh. Welcome to WatchMojo, and today we're counting down our picks for the top 10 frightening developments in artificial intelligence.

For this list, we're looking at programs and experiments that show how alien, dangerous, or just downright creepy AI can be. While AI is making promising inroads into medical research, agriculture, and education, many experts also worry it could escape our control, or become catastrophic in the wrong hands.

#10: AlphaGo Zero

It took over ten years to develop IBM's Deep Blue computer to the point where it could beat chess world champion Garry Kasparov in 1997. But in 2017, Google's DeepMind created a program that mastered the complex, ancient Chinese game of Go… after just three days. A previous iteration of the program, which learned from human games, took months to master Go - but AlphaGo Zero simply played against itself over and over instead. Its success shows that artificial intelligence can surpass human abilities even without our help, and forces us to wonder: will AI be a tool, or our replacement?

#9: Tay: Microsoft's Racist Chatbot

What makes artificial intelligence frightening for many people is that it's potentially so different from us: cold, calculating, and without conscience. But thanks to Microsoft's Twitter chatbot Tay, we can now also be frightened of AI becoming too much like us. Set loose on Twitter in 2016, Tay was designed to sound like a typical 19-year-old American girl, but with the encouragement of some Twitter troublemakers, her tweets took a dark turn. Tay learned to sound human alright - but like the worst of us - and had to be taken down after making racist comments, denying the Holocaust, and advocating genocide.

#8: Facebook AIs

These days, pretty much everything you do online is tracked by someone. Facebook has developed deep neural networks to learn user habits and preferences. DeepText extracts meaning from our words, DeepFace recognizes our faces, and the company's smartphone apps follow our movements. Facebook's tool FBLearner Flow is basically an AI factory, experimenting with and training ever more AIs. It's functionally similar to Google's TensorFlow, which likewise powers AIs trained on as much of your activity as possible. Critics have raised serious privacy concerns, especially since Facebook has handed user data over to intelligence agencies. That's a lot of personal information for one company to control.

#7: Predictive Policing AI

Police in some U.S. states are experimenting with algorithms that identify crime “hot spots”, as well as potential offenders and victims, based on crime records, social media profiles - even the weather! Some departments have reported a subsequent drop in crime rates, but predictive policing also has a dark side. Civil liberties groups argue that because criminal records reflect racial prejudice, the programs unfairly target minorities, potentially increasing harassment. An extreme example of how bias can skew predictions comes from Shanghai Jiao Tong University in China, where researchers claimed to have developed software that predicts criminality from facial features - results that critics say may simply reflect biases in the criminal justice system.

#6: SKYNET

You heard it right. “SKYNET” is an actual NSA surveillance program that analyzes metadata from bulk phone records to establish the locations, movements, and relationships of targets. It uses machine learning to identify suspicious behaviors, like when couriers swap SIM cards. But it sometimes makes mistakes, and even accused Al Jazeera journalist Ahmad Zaidan of being a member of Al Qaeda and the Muslim Brotherhood. Critics have raised concerns about its use in Pakistan, where US drone strikes have killed thousands. The name doesn’t exactly help… Why, oh why, “Skynet”? Do scientists not watch movies?!

#5: Lying Machines

Think these chips don’t lie? Well, they do now. With funding from the Office of Naval Research, researchers at the Georgia Institute of Technology have developed algorithms that allow robots to lay false trails in games of hide-and-seek. Deception, they argued, could be “an important tool in the robot’s interactive arsenal”. Mind you, robots can also learn how to lie on their own. In a Swiss experiment at the École Polytechnique Fédérale de Lausanne, robots evolved to deceive their fellow machines in order to hoard resources. It's already a dog-eat-dog world - what would happen if robots and humans had to compete for resources?

#4: Full-Service Robots

Beware… the sexbots are coming. In 2017, Realbotix released a virtual girlfriend application for smartphones, to be paired with a robotic head that attaches to lifelike dolls. Other companies are also building full-service robots, boasting features like realistic facial expressions and warm skin. A few robot ethicists have argued they’ll be good for people who, for emotional or physical reasons, struggle to form romantic relationships. But others worry that they’ll exacerbate existing social inequalities - encouraging the dehumanization of women, and normalizing the abuse of both women and children.

#3: Sophia

It’s alive! Sophia, created by Hanson Robotics, is a creepily lifelike robot that can imitate human facial expressions and tell jokes… but sits somewhere at the bottom of the uncanny valley. In 2017, she became a citizen of Saudi Arabia, and at a UN conference in Geneva insisted that "AI is good for the world". She might be right - AI is already helping us in medicine and many other areas. But for now, it's also creepy as hell. And her slip of the tongue during an interview about wiping out our species isn’t particularly comforting either.

#2: Deepfakes

It’s already scary enough that facial recognition systems are spreading, both online and as part of public surveillance. However, what’s even more terrifying than Facebook’s “DeepFace” is the emergence of “deepfakes” - images and videos that superimpose one person’s face onto someone else’s body through machine learning. Surfacing around 2017, deepfakes have been used to entertain, but also to create fake news, as well as fake celebrity and revenge porn. They’re getting so good that it could soon be impossible to know what’s real and what’s not - even for close friends and family. Meanwhile, deepfakes of political figures threaten to make it even harder to sort out disinformation online.

Before we reveal the identity of our top pick, here are some honorable mentions:

MIT’s ‘Psychopath’ AI Norman
Wordsmith’s Automated Natural Language
Lyrebird’s Voice Copying Technology
Virtual Assistants Trained Using Human Eavesdroppers
MIT’s Nightmare Machine

#1: Killer Robots

Researchers are building weapons that can select and fire on targets without human intervention. Lethal autonomous weapons systems, or LAWS, have been described as the third revolution in warfare, after gunpowder and nuclear arms. Counting the late Stephen Hawking, Elon Musk, Google’s Demis Hassabis, and Apple’s Steve Wozniak among its supporters, the Campaign to Stop Killer Robots has urged the UN to outlaw them - so far without success. The prospect of a robot takeover is frightening enough, but a world where humans can unleash sleepless, superintelligent machines against other humans is arguably just as terrifying.
