Facing humanity in our machines



Artificial intelligence, if not handled correctly, could have serious consequences.

Photo by Alex Knight from Pexels

While the rise of technology has provided humanity with novel questions surrounding ethics, power and privacy, it has also accentuated familiar topics that have plagued humanity since the beginning of civilization. One would like to believe that a machine makes rational, calculated decisions firmly based in facts. However, code is unfortunately only as impartial as its coder. Facial recognition software is laced with racial bias, posing significant challenges to minority groups that threaten to worsen if we are not intentional in recognizing these problems and combating them. 

Facial recognition software is increasingly used for everyday tasks such as phone security and ID verification at airports. The technology is also becoming more widespread in police departments, where officers scan for a person's face in a database of driver's license photos. This allows them to identify potential suspects or locate witnesses.

In a recent study, federal researchers found evidence of racial bias in almost 200 facial recognition programs. These flaws can cause a system to match two different faces as the same person or fail to recognize the same face, producing false positives and false negatives that can lead to the wrong person being accused.


The researchers reported that Native Americans had the highest false-positive rates and that Asian and African American people are "up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search." They also noted that African American women were more likely to be falsely identified in police database searches aiming to pinpoint suspects. These mistakes can not only lead to inconvenient wastes of time and tense law enforcement interactions but can also increase the risk of minorities being arrested, independent of the officers' personal beliefs.

The problem is likely even worse than the study portrays. Amazon supplies a facial recognition service that is growing in popularity across police departments. However, Amazon declined to take part in the study, claiming that its algorithms did not adequately fit the study's experimental methods. Anyone who blindly accepts this answer without further elaboration is gullible to the point of danger, putting many groups in the crossfire of racism through their indifference.

Many politicians are appropriately alarmed by this study and are urging the Trump administration to take action. President Trump has been planning to expand the use of facial recognition software across the country and along its borders, an immoral and unconstitutional decision in light of this new research. Meanwhile, Sen. Bernie Sanders has promised to eliminate the use of facial recognition in police work, and Sen. Elizabeth Warren has made plans to regulate the technology.


Artificial intelligence, if not handled correctly, could have serious consequences.

Photo by Markus Spiske temporausch.com from Pexels

No regulations currently exist to monitor facial recognition devices for accuracy before their widespread use. Joy Buolamwini, a researcher who gained fame after her 2018 spoken-word piece "AI, Ain't I a Woman?", spoke before Congress about racial bias in technology and was pleasantly surprised by the politicians' responses. There are supporters of increased regulation in both major parties. However, politicians in general do not have the expertise to review the fairness of algorithms. Buolamwini recommends the creation of a committee of experts to review software before it is distributed, similar to how the FDA ensures (in concept) the safety of food products before they are put on the market. It is shocking that high-stakes methodologies used to identify criminals are currently being treated with such negligence; however, this should not paralyze us but rather stir constituents and their elected officials to action.

The future is technology, and the greatest predictor of the future is the past. In order to properly use new devices for the good of society, we must face our problems head-on and remain aware of their influence on all aspects of life, not just in the communities we create for ourselves but also in the machines we build to help eliminate human error. Unfortunately, disparities in society reflect themselves in research, enabling the creation of products that further magnify underlying racism.

Disclaimer: The views and opinions expressed by individual writers in the opinion section do not reflect the views and opinions of The Daily Campus or other staff members. Only articles labeled “Editorial” are the official opinions of The Daily Campus.


Katherine Lee is a staff columnist for The Daily Campus. She can be reached at katherine.lee@uconn.edu.
