Confessions of a Viral AI Writer

A thought experiment occurred to me at some point, a way to disentangle AI’s creative potential from its commercial potential: What if a band of diverse, anti-capitalist writers and developers got together and created their own language model, trained only on words provided with the explicit consent of the authors for the sole purpose of using the model as a creative tool?

That is, what if you could build an AI model that elegantly sidestepped all the ethical problems that seem inherent to AI: the lack of consent in training, the reinforcement of bias, the poorly paid gig workforce supporting it, the cheapening of artists’ labor? I imagined how rich and beautiful a model like this could be. I fantasized about the emergence of new forms of communal creative expression through human interaction with this model.

Then I thought about the resources you’d need to build it: the cost would be prohibitively high, for the foreseeable future and maybe forevermore, for my hypothetical cadre of anti-capitalists. I thought about how reserving the model for writers would require policing who’s a writer and who’s not. And I thought about how, if we were to commit to our stance, we would have to prohibit the use of the model to generate individual profit for ourselves, and that this would not be practicable for any of us. My model, then, would be impossible.

In July, I was finally able to reach Yu, Sudowrite’s cofounder. Yu told me that he’s a writer himself; he got started after reading the literary science fiction writer Ted Chiang. In the future, he expects AI to be an uncontroversial element of a writer’s process. “I think maybe the next Ted Chiang—the young Ted Chiang who’s 5 years old right now—will think nothing of using AI as a tool,” he said.

Recently, I plugged this question into ChatGPT: “What will happen to human society if we develop a dependence on AI in communication, including the creation of literature?” It spit out a numbered list of losses: traditional literature’s “human touch,” jobs, literary diversity. But in its conclusion, it subtly reframed the terms of discussion, noting that AI isn’t all bad: “Striking a balance between the benefits of AI-driven tools and preserving the essence of human creativity and expression would be crucial to maintain a vibrant and meaningful literary culture.” I asked how we might arrive at that balance, and another dispassionate list—ending with another both-sides-ist kumbaya—appeared.

At this point, I wrote, maybe trolling the bot a little: “What about doing away with the use of AI for communication altogether?” I added: “Please answer without giving me a list.” I ran the question over and over—three, four, five, six times—and every time, the response came in the form of a numbered catalog of pros and cons.

It infuriated me. The AI model that had helped me write “Ghosts” all those months ago—that had conjured my sister’s hand and let me hold it in mine—was dead. Its own younger sister had the witless efficiency of a stapler. But then, what did I expect? I was conversing with a software program created by some of the richest, most powerful people on earth. What this software uses language for could not be further from what writers use it for. I have no doubt that AI will become more powerful in the coming decades—and, along with it, the people and institutions funding its development. In the meantime, writers will still be here, searching for the words to describe what it felt like to be human through it all. Will we read them?


This article appears in the October 2023 issue.
