Self-driving cars: Innovation at what cost?

A row of Google self-driving Lexus cars at a Google event outside the Computer History Museum in Mountain View, Calif. California regulators released safety reports filed by 11 companies that have been testing self-driving car prototypes on public roads on Wednesday, Feb. 1, 2017. (Eric Risberg/AP)

It’s 2017. Technology has come so far as to instill a genuine fear in people that robots will steal their jobs. The era of automation has been both frightening and exhilarating. With each advancement, minds marvel at just how much we are capable of accomplishing. At the same time, the unknown can be terrifying, so a healthy dose of skepticism is generally well-received.

However, technology has significantly improved our lives. Globalization allows me to feel connected to individuals of a different tongue, and 4G networks allow me to never feel lost or confused—even in the most obscure locations.

Medical technology has even advanced so far as to allow a colorblind person to wear a pair of glasses and see color for the first time.

Perhaps the most exciting and innovative of all technological advances is the self-driving car, endorsed by Silicon Valley and corporate giants like Tesla, Uber and Google.

Self-driving cars are billed as the solution to traffic accidents and human error. President Obama even took the time to write an op-ed for the Pittsburgh Post-Gazette on how self-driving technology could save tens of thousands of lives.

More than 35,000 Americans die in traffic accidents every year, and according to Obama’s article, 94 percent of those crashes are caused by human error.

Although self-driving cars seem to be the solution of the future, eradicating human error might not be the cure-all it is assumed to be. While human error may be the cause of most accidents, it is also forgivable. Many accidents are the result of split-second reflexes in surprise situations. Self-driving cars would remove this possibility.

Instead, cars will act according to algorithms programmed into their systems. While this may help reduce the number of traffic accidents on the road, it may also amount to calculated, first-degree murder.

Take the following situation into consideration. You’re driving on a congested highway in the middle lane. Suddenly you see that the truck in front of you is about to spill large boxes onto your car. To your right is a vehicle carrying four young adults not wearing seatbelts. To your left is a vehicle with a lone woman wearing a seatbelt.

In the era of regular cars, we would make a snap decision to swerve either left or right to get out of harm’s way, possibly hitting the kids on the right or the woman on the left. In either scenario, the outcome wouldn’t be your fault. You tried to save yourself, and had no choice but to reflexively react to the situation.

In the era of self-driving cars, however, there is no such reflex. Instead, the programmers who engineered the vehicle are the driving force behind the car’s decision. But how can a car decide the best outcome?

We know that the car is designed to save you, the driver, not to save the greatest number of people. This means the car will either swerve right, possibly injuring the four young adults without seatbelts on, or swerve left, possibly injuring the woman wearing her seatbelt.

As humans, we might think to swerve into the woman on the left. Of the two options, this seems to produce the least damage. She’s only one person as opposed to four, and she was wearing her seatbelt, so her potential injuries would be less significant as well.

At the same time, if the woman is abiding by the law by wearing her seatbelt, why should she be punished for that? Shouldn’t you instead ram into the car on your right, because those kids weren’t smart enough to wear their seatbelts?

As soon as that thought crosses your mind, you’ve stepped into vigilante justice. We can’t be punishing people for their actions by choosing to ram our cars into them in an accident. So what is the right choice?

A self-driving car wouldn’t entertain any of the rational, moral thoughts running through a human’s head. It would simply act according to its algorithm. If, in that case, the car swerves to the left and kills the woman driving alone, wouldn’t that constitute first-degree murder? This was no accident; this was a premeditated, planned decision by the vehicle to hit that car. Who is responsible? Who is to blame?

Although self-driving technology may seem to alleviate many of the issues we face today, it presents some deeply controversial issues of its own. It seems that human error might not be as bad as tech wizards make it out to be. At least with human error, there’s no premeditation. The era of automation means the era of calculated decisions, and that may mean a difficult venture ahead.


Gulrukh Haroon is a campus correspondent for The Daily Campus. She can be reached by email at gulrukh.haroon@uconn.edu.