
Over the past couple of years, as artificial intelligence (AI) has become more prevalent in society, the journalism industry has been adapting to the change. Venerable publications like The New York Times and The Washington Post have disclosed their use of AI technology, including drafting initial headlines and generating article summaries. This acceptance of AI, especially the generative side of the technology, may seem relatively innocuous, but it is the Editorial Board’s position that the field of journalism should limit its AI use as much as possible and resist its normalization.
The first reason news organizations should be cautious about AI is its massive environmental impact. Training a new generative AI model requires enormous computational power, straining the electric grid and increasing carbon emissions. Running the technology once trained also consumes large amounts of energy. According to an article by MIT News, “researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple Google search.” Beyond electricity, training and using AI models also drains supplies of water, a resource already under heavy strain. The computing hardware in data centers must be cooled with water, and because AI’s energy demands far exceed those of most other uses, its water demands rise accordingly. Data centers can evaporate anywhere from one to nine liters of water per kilowatt-hour of energy consumed to keep their systems cool. In this light, using AI for simple journalistic tasks looks far less innocent: when an AI model spits out a generated summary, the environmental cost is far greater than if a human had simply typed one out.
Another reason to be wary of AI is the ethical concern over stolen data and information. AI models require vast amounts of training data to function, drawn from books, movies, TV shows, art and journalistic articles themselves. Every company racing to build the best generative AI model scrapes data from the world’s creatives in this way, and the practice amounts to theft. In fact, The New York Times is suing OpenAI, one of the pioneers of generative AI, and Microsoft for copyright infringement. It makes no sense for news outlets to take legal action against AI companies for stealing their work and, in the next breath, use the technology those same companies built to streamline the journalistic process.
The most detrimental impact of AI use in journalism may be the simplest: the human element is at risk of fading away. Letting AI creep into newsrooms through even small, mundane tasks ignores the fact that someone in the office could just as easily handle them. It is true that news outlets have real people check AI-created elements, but that is not the same as writing those elements with one’s own mind. And outlets across the journalistic sphere are doing more than generating summaries or brainstorming headlines. For instance, the editor of Suncoast Searchlight, a nonprofit newsroom based in Florida, was caught by her reporters using generative AI to edit their work. Meanwhile, Business Insider, a major publication focused on finance, has gone “all-in on AI,” according to a May 2025 memo to its employees. In the process, it laid off 21% of its staff and boasted that its goal was for every employee to use ChatGPT. The effects of journalism’s acquiescence to the AI revolution are clear: while newsroom staff face layoffs, AI will continue to spread through the field, whether via a rogue editor or a company-wide decision.
Journalism is an inherently human practice. No matter how capable machines become, they cannot fully replace the skill of the written word or the reporting that underpins the stories. Letting AI into journalism and normalizing it is deeply harmful both to the people who call the news industry home and to the worldwide audience that depends on them for reliable information. The press must rethink its shift toward AI and consider the damage this technology could do to our future. Keep the human-driven journalistic endeavor 100% human.
