Column: Automated car flaws point to bigger problems

Google’s new driverless car has met its match: human error. 

The New York Times reported that since 2009, the car has been involved in 16 crashes that the company maintains are all due to human fault. Whether the fault lay with the human inside the car or with the drivers of other cars, the conclusion is the same: the only thing impeding the Google car from performing its duties is the very people it is supposed to serve. 

For example, the car freezes at intersections. The software waits for other cars to come to a complete stop before it moves forward, and it restarts this waiting process if another car so much as inches ahead. This is a flaw that could cause more traffic problems than it resolves.

In my town, intersections often involve human interaction: one driver waves another through rather than strictly following the traffic rules. Google’s car would not be prepared for this type of interaction on the road. The rate at which our technology is being developed is incredible, but it also demonstrates that the direction we are headed in is turning the relationship between technology and humanity into a farce. 

According to the New York Times, Dmitri Dolgov, the project’s head of software, was asked what he had learned from the setbacks. Rather than naming a future improvement to the car, he simply said that human drivers needed to be “less idiotic.” Challenging human idiocy is not the way to cure it. It will only further that plight, as it can also be read as a gauntlet thrown down by Mr. Dolgov, as if it were a challenge to surpass all known cases of idiocy. 

The problem lies within the car itself: it isn’t smart enough. It follows traffic rules so strictly that it lacks the flexibility to handle unusual circumstances. My theory is that the vision of this project is to entirely replace manually driven cars with this new technology. While that may be a notable achievement in the future, the project must be tempered to allow for the gradual introduction of these cars into a world of manual drivers who are very much prone to error. 

Audi cars offer an adaptive cruise control that adjusts to the surrounding environment. If Google cars could expand on such technology to mirror the cars around them, traffic rules would still be followed while the integration between these two modes of transportation became seamless. In short, the Google car needs to be more forgiving of human mistakes and adjust, rather than expect the masses to conform to the needs of an automated product. 

This only further substantiates my fear that our society is becoming all too inclined to automate every aspect of life. Seventh-semester biological sciences major Josh Skydel says, “It is at the point where the ‘can we’ attitude is surpassing the ‘should we’ question we should be asking ourselves.” 

There are many ways automation can potentially backfire on humanity. First of all, most jobs are susceptible to this phenomenon. Furthermore, human error has been the source of many innovations. Our attempts to mechanize more than everyday chores will change the way we approach problem-solving and prioritizing. The closest way to imitate the human mind is to create artificial intelligence, but that still remains a theoretical model, and thankfully so. 

On the other hand, automated machines have been useful in many areas of discovery. Rovers on Mars, drones and ATMs have all proven their worth. However, they have also revealed legitimate concerns with the basic logistics behind their software. Many malfunctions can occur, and in the end, technology depends on humans to fix those errors.


Jesseba Fernando is a staff columnist for The Daily Campus opinion section. She can be reached via email at jesseba.fernando@uconn.edu.