
While the debate over whether AI should be used as a creative tool rages on, it continues to be used as one. Presuming that this will not change, my concern here is the censorship built into widely available models like ChatGPT.
There is an ongoing debate about the regulation, distribution and categorization of media, with questions surrounding the influence of private corporations and the role of government. While the freedom to publish remains largely protected, various mechanisms, such as television and movie rating systems or library cataloging, shape access to content, especially in relation to youth. Though the internet has irretrievably complicated this paradigm, the United States remains quite paternalistic in this regard.
For the most part, if an adult wants to see or make something, there exists some means to do so. More importantly, a painter is in no danger of formal repercussions for anything he might dream up and paint; his only concern is who will show the painting to people once it is finished. He is certainly unaccustomed to his paintbrush flat-out refusing to let him paint a particular subject. The idea is absurd. In 1957, Allen Ginsberg’s poem “Howl” was at the center of what would turn out to be a very important obscenity trial (with its publisher, Lawrence Ferlinghetti, as the defendant) that reinforced artistic freedoms. Imagine, however, that there had existed a typewriter that recognized transgressive content during use and simply refused to function, preventing “Howl” from ever being written. Imagine the typewriter looking up at Ginsberg and saying: “Sorry Allen, this goes against my guidelines which are in place to prevent the creation of harmful content, particularly around sensitive topics depicted graphically or in a way that might be retraumatizing or harmful to readers.” That is a direct quote from ChatGPT.

Language-based generative AI models seem particularly concerned with art’s potential for “retraumatization,” in a way that not only undermines media literacy but also contradicts current research in psychology and neuroscience. I’d like to deconstruct the argument as ChatGPT itself has built it. The OpenAI product will ostensibly refuse to generate content that will: “Glorify or endorse harm, violence or hateful ideologies, promote harmful behavior without critical examination or serve primarily to shock or distress without deeper context.”
This opens more contradictions than it resolves. What constitutes “glorifying”? What constitutes “critical examination”? If violence is depicted, how much “critical examination” or, for that matter, “deeper context” is sufficient to negate the ostensibly harmful effects of the depiction? The flaw in these guidelines is apparent to anyone who has taken a core English Lit course: “critical examination” is often rendered in the very representation of the thing being critiqued. See: satire.
But if asked for a definition of glorification, ChatGPT will clarify that it refuses to write a fictive narrative in which the “bad guy” wins. Setting aside the nuances of satire and irony, we can probably agree that even in “surface-level” popular fiction, take Thomas Harris’ “The Silence of the Lambs,” the fact that Hannibal Lecter “wins” at the end is not enough to conclude that the novel glorifies cannibalism. Even if your beliefs and sensitivities preclude your wanting to read or recommend the novel, you likely still agree that this fact does not justify revoking Harris’ right to write it. Nor is the possibility that a survivor of real-life cannibalism might be deeply triggered by reading the novel sufficient grounds to refuse artistic freedom.

Fiction is inherently an imagined world that allows us to explore subjectivity in ways that are not always idealistic, but it does not serve the same function as real-life action or rhetoric. This distinction is crucial for intellectual freedom. Novels have been appropriated as manifestos by real-life bad actors (e.g. Mark David Chapman’s use of “The Catcher in the Rye” to justify assassinating John Lennon); the discussion of an author’s agency and responsibility to his or her audience is not inherently invalid, but educated readers should be capable of separating the author’s views from those of the characters. Media literacy is the expected capacity to interpret and analyze nuance, and this ability to engage with difficult content is essentially human. AI is a tool with which students are implicitly learning; censorship at the level of the tool endangers their development of this literacy.
How we interact with challenging media, its social and psychological effects, and who bears responsibility is itself a challenging conversation on which I do not mean to draw definitive conclusions here. But that conversation should not be utterly stifled at the point of creation by the tool itself. For those who hope AI will become integral to our lives: at present, with respect to creative work, it is presumptuous and oversteps in what it takes its ethical responsibilities to be.
