
Artificial intelligence has rapidly infiltrated nearly every aspect of life in the last two years. Its influence has expanded so far and so fast that its engineers were named TIME magazine’s 2025 Person of the Year, a title given to the person or group who most dominated media discourse that year. However, what was created as a supplemental tool for improving efficiency has swiftly become a replacement for human thought. The Editorial Board condemns the use of AI in journalism, and in response, the Daily Campus has implemented protocols for identifying and penalizing its use.
The Daily Campus has zero tolerance for AI use across all written and visual content sections. To determine whether a writer, photographer or artist has used AI, both section editors and the executive team analyze the work, and the content is scanned through an AI detector to verify the authenticity of quotes and other facts. Unpaid contributors receive two strikes before being prohibited from contributing to the Daily Campus; all paid staff are dismissed upon a first offense. The full AI Policy can be found on the Daily Campus’ website.
Journalism is one of the last fully human jobs, but even that is quickly withering away. Major newsrooms including the Associated Press, The New York Times and Al Jazeera have already begun integrating AI into their workflows, most citing improved efficiency. These newsrooms use AI to write headlines and article summaries, to sift through thick stacks of government documents and to generate translated articles. All of them claim to use a “human-in, human-out” approach, meaning a human editor or writer feeds the AI bot the information and then fact-checks and edits the result.
However, each of these newsrooms has, at one point or another, published a piece warning readers of the dangers of AI due to the risk of inaccuracy and the loss of the human element in writing. It is ironic for a newsroom to warn of the dangers of AI-generated content while also using it to write headlines, the most-seen part of an article. In December 2025, the NYT published an article whose first paragraph was written by AI. The rest of the article criticized the writing, saying it makes the reader feel “on alert,” and noted that the language was impersonal, flat and overly clichéd. How can readers form a concrete opinion on the use of AI in journalism, or place their trust in news organizations to continue doing their due diligence, when those same organizations send mixed signals about their own stance?

The Daily Campus does not want to cast these same doubts onto its readers, especially as an entirely student-run news organization. It is vital that we hold our writers, photographers and artists to a zero-AI standard so that they are well equipped in their careers to conduct meaningful interviews, analyze hefty documents and create fulfilling content, and to do all of that efficiently.
AI should not have a place in journalism, and especially not in the training of the next generation of journalists. The Daily Campus will continue to hold a zero-tolerance policy toward AI use in any form, for the sake of creating a self-sufficient cohort of journalists and providing them with an arsenal of skills to capture compelling stories.
