
A new course at the University of Connecticut is trying to take a “forward-looking” approach to its subject matter: artificial intelligence. Nicknamed “AI 4 All,” it’s a product of the College of Engineering, which hopes to integrate the course into the university’s general education core.
“This course introduces generative AI, its real-world applications and ethical considerations, helping students develop a foundational understanding of AI’s role in learning, creativity and problem-solving,” the class’s catalog description states.
The literacy course began being offered in the Fall 2025 semester, with 20 sections listed as in-person on the main campus, each with a capacity of 25 students and scheduled for an hour and 15 minutes. It’s the brainchild of Arash Esmaili Zaghi, a civil and environmental engineering professor, who is also listed as the instructor for every section.
The class is currently classified as ENGR 1195, a “special topics” designation used “to pilot a course that may eventually be added to the catalog,” according to the Registrar. It appears that “AI 4 All” is just the first step in UConn’s initiatives to integrate AI-powered learning into students’ plans of study, the Daily Campus has learned through public records requests.
“We should offer it as an elective special topics class at the 1000 or 2000 level with no prerequisites, and ultimately we would offer it as a common curriculum class,” said then-Provost Anne D’Alleva in an email to JC Zhao, the dean of the College of Engineering, in December of 2024.
The “common curriculum” refers to UConn’s general education requirements, which now feature six “topics of inquiry” instead of the long-running “content area” system. The first new category, titled “Creativity: Design, Expression, Innovation,” already features two AI courses: DMD 2030, “Generative AI for Creative Minds: The Future of Work,” and ENGL 2616, “Artificial Intelligence: Creative and Critical Approaches.”
“Also in my ideal world…I would love to offer it [for] Fall 2025,” D’Alleva said of the “AI for All” course in her email message. “If there’s a faculty member willing to pivot and take this on, I would fund a course release this spring for them and also provide some extra professional development funding to recognize the extra effort this is taking.”
Zhao tasked Zaghi with expanding his curriculum proposal for non-engineering students; in the same email, D’Alleva asked that the course’s design accommodate an asynchronous online modality.
“Rather than viewing AI as a mere add-on, the committee advocates a holistic approach that embeds foundational literacy, ethical considerations and discipline-specific applications into a cohesive learning progression,” Zaghi’s team wrote in their 2024 proposal. “The proposed model positions future engineers not only to excel in an AI-driven workforce but also to influence the evolution of technology in creative, responsible ways.”
In his team’s recommendation, Zaghi also suggested heavily integrating AI into multiple existing computer science and engineering courses at UConn, developing discipline-specific AI courses tailored to every department in the College of Engineering and training faculty in the use of AI tools.
The report also recommends students and faculty be provided free access to AI tools and platforms — with the potential to explore “AI mentors or teaching assistants (TAs).”
“By leveraging AI to address routine queries and provide targeted resources, these tools can enhance the learning experience, improve accessibility and allow instructors and TAs to focus on more complex, higher-level tasks,” the proposal states.
The “AI 4 All” class currently has students watch “micro-lecture” video modules, with “hands-on” AI usage during the weekly in-person sessions. The videos, which are all publicly available on YouTube, prominently feature AI-generated scripts, voice-overs and visuals.
“Welcome! You’re about to start an exciting journey into the world of artificial intelligence, a rapidly growing technology becoming integral to nearly every aspect of our lives,” an upbeat AI-generated voice says at the beginning of the course’s first video. “Our hope is that by the end of the semester, you’ll confidently and ethically integrate AI tools into your studies and beyond.”

Most videos in the course include visuals with misspellings, a common AI artifact, and others have voices engaging in simulated conversations, complete with filler words like “um” and “uh.”
Another course module, on the “historical perspective of fear of new technology,” uses general descriptions of historical fears of writing, rail travel and telephones as a backdrop to encourage students to adopt an all-in attitude toward AI, with uncited references to studies that purport to support the script’s points.
“AI anxiety, characterized by worries about job losses, invasions of privacy, or even existential threats…[is] understandable…[but] can lead to avoidance, hesitation and ultimately stagnation, both individually and collectively,” an AI voice orates in the course lecture, over generated pictures of Studio Ghibli characters.
Numerous lectures also feature AI-generated visuals of characters from Sesame Street, Peanuts, Lego and SpongeBob SquarePants, among others. Intellectual property owners, including Studio Ghibli, have asked firms like OpenAI to stop using their copyrighted materials to train models.
The course video’s script asserts that a person’s attitude toward AI will “determine whether your career sinks, merely survives or truly thrives.”
“The path we choose between anxiety and optimism [toward AI] will determine not only our individual futures, but the broader trajectory of our society,” the voice says later in the video.
The initial response to the class material appears to be mixed.
“The only reason I don’t like it is because of how repetitive it is,” a user, going by the screen name Willthrp4, commented last week on a UConn forum on Reddit, claiming to be in the class. “It’s a lot of writing in the post lecture assignments and in class… It is an easy two credits, but it’s just annoying and that one class during the week that’s the thorn in my side.”
“We basically don’t know much about it either,” commented another user, TargetDeep3182, claiming to be a TA for the course. “Expect some simple assignments mainly reflecting about the uses and functionality of AI, as well as attempting to get AI to accomplish tasks for you.”
Zaghi, who said in an interview with UConn Magazine that he uses AI eight or more hours a day, has also said he believes AI will have an unprecedented impact on society as a “new literacy.” Outside of his work with UConn, he also runs an AI-infused Eastern spirituality YouTube channel.
“You cannot declare war against AI. That’s a losing battle…Education is our only tool,” he said to UConn Magazine’s Brad Tuttle, another UConn professor who integrates AI into coursework.
Use of AI in many industries, particularly education, frequently comes under fire, however, due to the technology’s propensity to confidently output false information, its use to replace jobs and its potential to foster dependence. Numerous copyright infringement lawsuits, featuring big names like Disney, Universal Studios and Warner Bros., allege that generative AI models train on and replicate their intellectual property.
The trend in higher education, by and large, is to restrict AI usage unless it’s an authorized component of teaching. Incidents like a Northeastern University student filing for a refund over a professor’s use of ChatGPT to develop lessons have raised national questions as to whether tuition should be lowered for all-online and AI-infused courses.
People and companies have also been lambasted for using AI to partially or completely replace traditionally creative arts, in fields like media production, books and video games, and even in everyday social media use.
“The genie is out of the bottle and you’re not going to put it back in,” Pratik Thakar, a Coca-Cola vice president also known as the corporation’s “head of generative AI,” said recently.
The soft drink company has drawn criticism for employing AI in its holiday advertising campaigns, in an effort to remake a fondly remembered commercial from the ’90s.
“The only explanation people can think of is that Coca-Cola is running [the AI-generated] campaigns for rage engagement, but that seems bizarrely off brand, and particularly for a Christmas ad,” opined art and design technology journalist Joe Foley.
“FUN FACT: @CocaCola is ‘red’ because it’s made from the blood of out-of-work artists! #HolidayFactz,” “Gravity Falls” animator Alex Hirsch posted on Twitter in response to last year’s Coke ads.
Zaghi would seem to go along with the corporate answer, even if, according to Pew Research polls, most people disagree.
“This is where we are,” he said to UConn Magazine. “There’s no way we can go back. We can either embrace it and stay ahead of it, or we can put our heads in the sand and become irrelevant.”
