
We all know who Stephen King is — he’s the king of horror, and he’s written some marvelous novels such as “Carrie” and “The Shining.” There’s also Mark Twain, the “father of American literature,” who wrote fabulous stories including “The Adventures of Tom Sawyer” and “The Adventures of Huckleberry Finn.” These stories came from the authors’ own minds. They didn’t use Google to help write their stories, and they certainly didn’t use artificial intelligence models like ChatGPT.
Many writers today use ChatGPT for numerous aspects of creative writing, from brainstorming ideas to drafting entire stories. But how creative is ChatGPT when it writes? ChatGPT is certainly no King or Twain. In other words, AI’s writing process is not as creative as the way King, Twain and other professional authors write stories.
Before going into detail, let’s define creative writing. Creative writing is a form of writing that uses ideas, emotions and thoughts to craft a story. When writing stories, authors often base the emotions and content of the story on their own personal experiences. ChatGPT has no memories or personal experiences to draw on when writing stories. As a result, the stories AI creates often lack humor, flashbacks or both, even when a user explicitly asks for those elements. Humor is classified as an emotion, so if ChatGPT’s writing lacks humor, then it lacks an emotion. And if it lacks an emotion, then its stories can’t be as creative as King’s and Twain’s writing.
Besides lacking personal experience to put into a story, ChatGPT is not very creative when coming up with prompts. AI models rely on outside sources to answer any statement or question a user throws at them. So, the prompts that ChatGPT hands a user aren’t coming from ChatGPT’s own mind. Instead, they’re coming from outside sources, which suggests that ChatGPT isn’t engaging in creative writing at all. Instead, it’s engaging in plagiarism.
Not only does ChatGPT lack originality when coming up with prompts, but the prompts it creates are also very vague. Specifically, when a user asks ChatGPT to create an entire outline for a story, ChatGPT will give the user an outline with an undefined plot. It will also give generic advice that isn’t helpful when trying to write a story.
When a user asks ChatGPT to create a scene, the platform responds with a scene that is dull and lacks figurative language, emotion and drama. An excellent scene is supposed to grab the reader’s attention and contain some sort of conflict that keeps the reader engaged throughout the story. But when ChatGPT tries to write a scene, the result lacks exactly those qualities.
Writer Celeste Kallio ran an experiment to test this. She asked ChatGPT to write a scene in which “a man asks a woman about her mother.” Instead of getting a scene full of rich details, conflict and figurative language, Kallio received ten bland, boring paragraphs. She then asked the platform to continue the scene, this time with the woman’s mother harboring a dark secret. ChatGPT still gave Kallio an output that lacked figurative language, had no drama, contained no conflict of any kind and used word choices on par with a kindergartener’s. So, if ChatGPT can’t make one simple scene interesting to read, how can we expect it to write an entire short story or novel fascinating enough to win a Pulitzer Prize?
ChatGPT is convenient. It lets users skip strenuous tasks such as brainstorming ideas for a story or writing the story themselves. But if ChatGPT can’t create original ideas, inject humor into a story or craft a scene that rises above a toddler’s imagination, then what’s the point? If you’re struggling to come up with a story, you might as well leave ChatGPT alone and ask a friend or a professor instead.

Thank you. This article provides evidence that LLMs are not the threat to writers’ incomes that famous authors claim they are.
The article basically claims that ChatGPT is not truly creative and that its outputs amount to plagiarism because it generates responses based on outside sources, which is bullshit. Humans stand on the shoulders of giants too, not just LLMs, yet we don’t consider George RR Martin (for example) a plagiarist when he has said he was influenced by what he read in childhood.