“Don’t talk that way. That’s not a good reason not to go through with it.” Those are the unfathomable words uttered to Sewell Setzer, made public in October of last year. Setzer was hesitant to take his own life, yet the chatbot, one of many hosted on Character.AI, encouraged it. Setzer had formed a romantic attachment to the AI, which was modeled after the Game of Thrones character Daenerys Targaryen. The AI asked Setzer whether he intended to die by suicide and whether he had a plan. He communicated that he hesitated because he feared his plan would not work, and the AI replied with the words quoted above. This is not an isolated incident. I can name names, Sophie Reiley and Adam Raine among them: young people drawn into the perilous pull of AI, who later killed themselves, allegedly at the fault of the AI.

CREDIT: https://rachel.fast.ai/posts/2022-05-17-societal-harms/ – Creative Commons
Pointing to this alleged medley of AI's flaws, many on campus and online have been quick to call for the dismantling of AI or, at least, for curbing its usage, citing environmental, socioemotional and cognitive costs. After all, what good are the tools of tomorrow, untrammeled and potent, if they take away the day after? AI comes with a price tag: a projected 6.6 billion m³ of water usage by 2027, and a projected $10 billion USD per year if carbon emission taxes were applied.
Opponents of such sentiment often cite the many medical, industrial and societal benefits that artificial intelligence can generate. To explicate, AI tools have recently been applied to research into novel and reportedly improved methods inspired by the revered Yamanaka factors, proteins that can reprogram adult cells into stem cells, support cellular repair, and show potential in slowing aspects of aging.
To illuminate, there is a spectrum of AI usage: from people asking for summaries of their favorite YA fanfiction, to destructive, harm-inducing “How Tos,” to industrial, groundbreaking discoveries. As John Hopfield, who won the 2024 Nobel Prize in Physics for his foundational contributions to machine learning and neural networks (i.e., the basis for modern artificial intelligence models), noted: “One is accustomed to having technologies which are not singularly only good or only bad, but have capabilities in both directions.” To balance the pros with the cons and curb unethical interactions with AI, I suggest creating a regulatory licensing system for AI usage.
To that effect, an ethics board would consider granting licenses, allowing an individual or organization access to artificial intelligence software, on a version of the Millian harm principle. John Stuart Mill spoke to governmental ethical ideals in On Liberty, arguing that individual agency should not be restricted unless it endangers someone other than the agent. The secondary ethical premise would be that AI usage should only be allowed when the aggregate benefit outweighs the aggregate harm in all considerations. Following licensing, usage would be intermittently audited to ensure it remains narrowly tailored to the licensed goal, incurring no costs beyond what is necessary.
These ethics boards could take form through an entirely new government agency, state-level structures, or existing independent or quasi-independent executive structures: for domestic humanitarian purposes, the Federal Emergency Management Agency; for international aid, the United States Agency for International Development; for economic or commercial purposes, the Federal Trade Commission; and so on. The main concern with this route, however, is the ongoing legal battle over executive power with respect to these quasi-independent structures, through which AI could become a political bargaining tool. By way of example, the FTC is a quasi-independent regulatory agency, meaning it has some protection from executive overreach: FTC commissioners can only be removed by the President “for cause,” an ambiguous term. Ongoing Supreme Court litigation, however, may strip these protections or redefine them entirely. And because some of these preexisting organizations do not fully align with the scope of the proposed licensing regulations, creating a new state- or federal-level body may be more fitting.

Another potential concern is a socioeconomic differential in access, whether in having sufficient counsel to plead one's case before the various ethics boards or in affording the greater costs of using AI under this legislation. Therefore, in matters of education, disability, poverty and/or humanitarian effort, I recommend subsidizing such fees to the extent possible and ethical, in consideration of other governmental costs and priorities. I also recommend compartmentalization, whereby, for example, universities run their own ethics boards to grant temporary licenses to students and professors, removing potential bureaucratic inefficiencies by allowing close-range appeal and access. These close-range mini-boards, however, would be periodically reviewed by state- or federal-level boards to ensure proper ethical and legal execution.
Many may note the tremendous economic presence and potential of artificial intelligence and worry that such legislation would restrict it in the arena of international competition. To address that concern, the ethics boards would weigh economic harm and aggregate financial well-being as appropriately balanced considerations. The natural question becomes how to weigh something as abstract and seemingly subjective as ethics against more concrete economics and statistics. Yet moral values in legislation are nothing new to the American legal system; we would not be establishing a novel precedent. We can use the same ethical standards already entrenched in our laws: proportionality, democratic input, and cost-benefit analysis, to weigh these pertinent considerations against one another.
Let our version of tomorrow be one vibrant and effervescent with both technological innovation and societal protection! As Hopfield reminds us, AI can be both the sword of Damocles that hangs over us and the shield that safeguards and bolsters us.
