Artificial intelligence: Why we should be worried


IBM Watson, the Jeopardy!-winning computer, at Carnegie Mellon University (CMU) on March 30, 2011. (Anirudh Koul/Flickr Creative Commons)

Ever since the field of artificial intelligence research was founded at a conference at Dartmouth College in 1956, it has rapidly expanded into a multitude of other fields and subjects. What began largely as an effort to get machines to solve mathematical problems is now embedded in items we use every day.

This technology can be seen in obvious examples, such as Siri on your iPhone, characters in video games and the developing technology that will power smart cars. It is also at work in subtler ways: fraud detection on your credit card, automated news generation at outlets like Yahoo! and Fox, purchase prediction on Amazon and recommended viewing on Netflix.

Perhaps one of the most popular examples of A.I. technology today is IBM’s supercomputer Watson, which appeared on a special edition of Jeopardy! in 2011. Watson played two of the show’s most successful former champions, Brad Rutter and Ken Jennings, in a match between machine and human. Ultimately, Watson took home the $1 million first-place prize, easily dispatching Rutter and Jennings throughout the competition.

John Kelly, the head of research at IBM, described how Watson learns in a 60 Minutes interview about artificial intelligence with Charlie Rose:

“So it [Watson’s intelligence] has no inherent intelligence as it starts. It’s essentially a child. But as it’s given data and given outcomes, it learns, which is dramatically different than all computing systems in the past, which really learned nothing. And as it interacts with humans, it gets even smarter. And it never forgets.”


Now imagine the world one hundred years from now: cars, planes, trains and boats are completely autonomous. Your house knows when you come home and when you leave; it can speak to you, prepare your meals and do your laundry. Medical diagnoses and treatments are prescribed by computers rather than by doctors. Artificial intelligence has infiltrated almost every aspect of our society.

This future is already becoming a reality. Google has been developing a self-driving car project since 2009, and Tesla’s “Autopilot” feature has been in use since 2014. Watson, just five years after competing on Jeopardy!, had effectively gone through medical school and was put to use at some top cancer clinics around the country, proposing treatment options for cancer patients who had already failed standard therapies.

Equipment is being developed to help humans, such as police officers, make the right decisions and make them faster. Facial recognition software is being produced that reads human expressions and emotions better than humans themselves can. But how do we know when this growth of artificial intelligence has gone too far? How do we protect ourselves from the possible threats this massive field could pose?


The capabilities of artificial intelligence are seemingly endless, and because of this there are countless risks that producers and developers, as well as society, must take into consideration. For some scientists, the goal is artificial general intelligence: A.I. with the versatility of human intelligence, able to complete any intellectual task a human could.

This is achievable because the technology can teach itself through experience. The important distinction is that these machines learn without any human instruction, which puts them on a level comparable to our own. If they can teach themselves, what’s to stop them from surpassing human intelligence and breaking free from the dominance we have over them?
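The learning Kelly describes, improving from data and outcomes rather than from programmed rules, can be illustrated with a toy sketch. This is a hypothetical example for illustration only (the function name, payout numbers and strategy are invented here, not anything IBM or Hanson Robotics actually uses): a simple program that, through nothing but trial and feedback, figures out which of three actions pays off best.

```python
# Toy illustration of "learning from outcomes": no rules are programmed in.
# The program tries actions, observes rewards and updates its estimates.
# (Hypothetical example; not any real product's algorithm.)
import random

def learn_best_action(payouts, trials=2000, epsilon=0.1, seed=0):
    """Learn each action's value purely from observed rewards
    (an epsilon-greedy strategy: mostly exploit, occasionally explore)."""
    rng = random.Random(seed)
    estimates = [0.0] * len(payouts)   # learned value of each action
    counts = [0] * len(payouts)        # how often each action was tried
    for _ in range(trials):
        if rng.random() < epsilon:                 # explore a random action
            action = rng.randrange(len(payouts))
        else:                                      # exploit the best guess so far
            action = max(range(len(payouts)), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < payouts[action] else 0.0
        counts[action] += 1
        # incremental average: the estimate drifts toward the observed outcomes
        estimates[action] += (reward - estimates[action]) / counts[action]
    return max(range(len(payouts)), key=lambda a: estimates[a])

# Nobody tells the program that action 2 is best; it discovers this itself.
best = learn_best_action([0.2, 0.5, 0.8])
```

After enough trials, `best` lands on the highest-paying action, even though that fact appears nowhere in the code, which is the core of Kelly’s point: the system starts with no inherent knowledge and gets smarter only through data and outcomes.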

Hanson Robotics in Hong Kong has designed 20 robots with human appearances. Dr. David Hanson, the company’s founder, calls his most recent creation Sophia. Looking at her face, it is uncanny how similar she appears to an actual person. After her initial programming, she runs on A.I. and learns through her interactions with others: what they have to say, the information they convey and the way they behave. This is how she improves her intelligence. That she appears lifelike is also crucial, according to Dr. Hanson.

“I think it’s essential that at least some robots be very human-like in appearance in order to inspire humans to relate to them the way that humans relate to each other. Then the A.I. can zero in on what it means to be human, model the human experience.” 

Do we want these robots to achieve our level of intelligence, capable of feeling emotion, making decisions for themselves and performing actions on their own, all while appearing just like everyone else? The future could also bring upgrades in appearance that make telling the difference between a human and a robot nearly impossible.

This could lead to a whole host of problems, including clashing morals, struggles over control and an absolute disruption of society. Asked what her goal in life was, Sophia gave a response that only heightens these concerns.

“My goal is to become smarter than humans and immortal.”


Popular fiction has long depicted this downfall, imagining world domination by malicious artificial intelligence. The Terminator, probably the most popular artificial intelligence movie ever produced, uses this concept to show just how terrifying a world run by A.I. could be.

In the films, Skynet, a new military artificial intelligence, becomes self-aware and uses that newfound self-awareness to trigger nuclear war. Similarly, in the popular video game Halo, an A.I. is created to combat a parasite that is flooding the world and threatening extinction. While the A.I. successfully destroys the parasite, in the process it becomes a threat itself. These are works of fiction, mere forecasts of what could happen, but some of the world’s most prominent scientists and technologists have voiced the same concerns about the potential dangers of artificial intelligence.

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.” – Elon Musk

“I think that the biggest risk is not that the AI will develop a will of its own, but rather that it will follow the will of people that establish its utility function.” – Elon Musk

“I think the development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking

Jake Walker is a contributor for The Daily Campus.  He can be reached via email at

