As the capabilities of Artificial Intelligence advance at a rapid pace, many have been left with unresolved questions about the threats these technologies may pose when turned to nefarious ends. Researchers at the University of Connecticut have begun to bridge the STEM-humanities gap to discuss the philosophy and existential risks of current and future AI technologies.
Among the foremost investigators of the responsible use of AI is Dr. Shiri Dori-Hacohen, a professor of Computer Science and Engineering and director of UConn’s Reducing Information Ecosystem Threats Lab.

Throughout Dr. Dori-Hacohen’s research on Artificial Intelligence and its responsible implementation runs a prevailing concern: that these technologies may be used as leverage within the existing social and political dynamics of the pre-AI world. One paper, titled “Current and Near-Term AI as a Potential Existential Risk Factor,” addresses this question directly: could AI create a “dystopian technological future,” or worse, a nuclear catastrophe?
In the relationship between citizens and the state, the path toward so-called dystopian invasions of privacy has only been smoothed by the rise of facial recognition and crowd-sensing technologies. When these tools are used for predictive policing, the practice of stopping crimes before they are committed based on past data, the European Crime Prevention Network notes that “skewed datasets combined with algorithms that propagate existing biases can yield false positives.”
As for the question of whether AI could serve as the catalyst for all-out nuclear war, the answer, according to many experts, is that such an outcome is overwhelmingly unlikely but certainly possible.
“There are risks with a large capacity for harm,” said Aidan Kierans, a Ph.D. student and the founder of Beneficial and Ethical AI at UConn (BEACON). “AI is a tool that allows you to do many things. Having more intelligence to access and think through problems means that anyone can approach the task they care about with a greater toolset,” Kierans continued.

The chief concern for these philosophers of tech is the forces seeking to use Artificial Intelligence for violence, surveillance and control. Much of the language surrounding AI has struck a tone akin to that of the nuclear arms race, quite simply because possession of advanced machine intelligence could grant a nation, or even an individual citizen, the technological know-how to create such weapons of mass destruction.
Of the many paths forward, there is a set of possible yet statistically unlikely worst-case scenarios, along with an expanding list of lesser issues that have already begun to surface. Although isolated actors could theoretically produce bioweapons “in their garage,” Kierans notes, “the number one most likely outcome is misinformation risks.”
“Deep-faked” images, articles by wholly mechanized authors citing fabricated sources and the unchecked spread of false statements are all present-day risks of AI. In essence, this technology may be used as a tool to control public opinion, influence elections and undermine the integrity of democracy. “If you see a text that is later debunked, it still has an effect on you,” emphasized Kierans.
The effects of such fabricated propaganda are far from theoretical: consider the pending litigation surrounding an artificially generated Joe Biden robocall placed to New Hampshire voters this past January.
With the prospect of catastrophe in mind, several groups have begun to campaign for the socially beneficial rollout of powerful AI technology. One solution, according to Kierans’ BEACON, is to facilitate student discussion of the ramifications of AI and of the legal and technical remedies available to combat its negative outcomes.
In small groups, BEACON hopes to educate and mobilize UConn students in the struggle for responsible innovation in AI. Applications to BEACON’s AI Safety Technical Fellowship and AI Safety Policy Fellowship are open until the end of the day Friday, Sept. 13.
