Writing in the AI Era

On February 14, 2019, the non-profit research company OpenAI dropped a bombshell of an announcement: they had developed a language model called GPT-2, capable of producing writing nearly indistinguishable from human writing. As an example, when given the sentence below as a prompt, the model returned these paragraphs:


Prompt: A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

Output: The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

"The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation", said Tom Hicks, the U.S. Energy Secretary, in a statement. "Our top priority is to secure the theft and ensure it doesn't happen again."


This is just one example. When provided with other completely different prompts, GPT-2 produces equally plausible results. The model was trained on millions of lines of human-written text scraped from all over the internet, so perhaps it's not surprising that it can produce something that looks like a news article. Like all machine learning results, these outputs must have been cherry-picked to some extent--but the underlying coherence of these artificial sentences is convincing. It's starting to feel like there's something there, embodied within these sentences.

Enough has been written about the ethics of OpenAI's choice not to release the full model, so that won't be my focus here. Soon enough there will be more AI breakthroughs and these sorts of models will become accessible to the wider public. My interest stems from the larger context of this breakthrough and how it fits in with the thousands of years of writers, diligently scribbling away, that got us here.

The writers of the Western tradition of literature, which roughly started with the ancient Greeks and slowly evolved over thousands of years into its current state, have always been deeply influenced by their predecessors. Dante's Divine Comedy could not have been written were it not for Virgil's Aeneid, and the Aeneid would not exist without Homer's Iliad and Odyssey. No doubt Homer wouldn't have written those poems were it not for his Mediterranean poet forebears. Of course some other book might have been written--Dante might have written a poem called the Glorious Burlesque--but it would not have been the same book.

There would have been something definite missing from Dante's brain had he never read the Aeneid. But what exactly would be missing? How can we quantify it and give it a name? Others have tried to name it: genius; inspiration; the heavenly muse. These are fine phrases, but they're vague and unscientific. Perhaps if we try to quantify this we can start to get some idea of what's really going on behind the scenes.

So how did Dante learn to write the way he did? How did he gain his knowledge of the Aeneid? Easy--he read it. He probably read it many times, which had a great impact on his writing style and modes of thought. He probably even learned to recite many parts of it. The sequences of words and ideas he read changed the patterns of words and ideas he thought. He reinforced those patterns of words in his mind by reading and reflecting upon them.

So what is the act of reading? Is it just sitting hunched in a chair squinting at dead tree pulp? Well yes, but it's more than that. Something inside our brain is making sense of the squiggly lines we see. In the deepest phases of reading our brain is engaged in a mental juggling act. Symbols, in the form of written words, are connected and created and forgotten while our minds go through the arduous task of making sense of the world. Our brains are symbolic computers that extract meaning from the world in order to better adapt to our environment. Our language models haven't yet become that subtle--right now they're still at the stage of mimicking human-written text--but there's no reason to think they won't get there.


In a very real way our brains are trained by the things we see and experience. The connections of vast neural networks within our brains are subtly shifted as we go through life, always changing and adapting to survive in the environments we are faced with. From the Inuit hunter to the Amazonian tribesman to the New York banker--no other complex species has been able to adapt to as wide a variety of habitats as humans have, and it's largely because of our language abilities that human civilization has been able to prosper. But what is language? Going by all the breakthroughs in NLP research in the past five years, it's likely not what we think it is.

Sentences are habits of mind. Our thoughts are the patterns we create for ourselves by emphasizing certain ways of thinking and ignoring others. They are symbolic short-hand for sequences of ideas that usually take many years to reinforce themselves. Because of this, algorithms like GPT-2 are able to exploit the probabilities of word order and write coherent text. My theory is that the reason these texts are so coherent is that this is also what our brains are doing when we produce language.
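To make "exploiting the probabilities of word order" concrete, here is a toy sketch--nothing like GPT-2's neural architecture, just a bigram Markov model--that learns which words tend to follow which, then generates text by sampling from those learned follower lists. The tiny corpus and function names are my own invention for illustration.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10, seed=0):
    """Walk the model: repeatedly sample a plausible next word."""
    random.seed(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:  # dead end: no word ever followed this one
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A hypothetical miniature corpus, echoing the stolen-train example.
corpus = ("the train was stolen in Cincinnati today and the train "
          "was found near the station in Cincinnati by officials")
model = train_bigram_model(corpus)
print(generate(model, "the", length=8))
```

The output is locally plausible but drifts globally, which is exactly the gap modern language models close: GPT-2 conditions on far more context than the single previous word, but the underlying idea--predict the next word from what came before--is the same.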

Since early in the history of computers, AI researchers have been trying to replicate human thought. Because computers are really just complex mathematical abstractions buried within tangled layers of circuitry, and our brains are messy biological neural networks, the language of thought of computers is obviously different in form from human thought. Machine brains and human brains are not the same, but they do solve the same problems by coming up with similar solutions. When a computer recognizes that a picture of a cat is a cat, it is doing something analogous to what our brains are doing: taking features from that image--whiskers, pointed ears, general cat-ness--and from those features determining that yes, the picture is, in fact, a cat. Given this, is it hard to accept that the act of reading and writing might be similar?


Like the recording studio changed albums and the editing booth changed film-making, AI will change how writers write. It will change the scope of human writing. And that's not necessarily a bad thing, once we get past the initial revulsion of the idea of machine-written text. It would be naive to assume that once we create an AI that can write, we will never create another. Our AI writing models will grow and learn through each other, in a process analogous to the natural selection that shaped human writers in Western literature.

And it's not like we'll stop writing just because we've built the first half-decent robot writers. These language models will help humans write a literature greater than any so far seen in human history. I don't think it's far-fetched that the next generation of writers, aided by these new technologies, will write a literature more varied and exuberant than any yet seen. And while this will force us to rethink our relationships with language and art, it will also enhance those relationships.

March 2, 2019


Further reading:

The original GPT-2 news release

Terrence Deacon: The Symbolic Species: The Co-evolution of Language and the Brain

Steven Pinker: The Language Instinct: How the Mind Creates Language