
Allison Parrish (NYU Tisch School of the Arts)


Fellow in Research Area 4: "Literary Currencies"

June 2021

Futures of language model poetics

A language model predicts which character, word or sentence will come next, given the surrounding context. Language models are pervasive in computing: they power virtual keyboards on mobile phones, search engine suggestions, word processor spell check, and more. Older language-modeling techniques, such as n-grams and LSTMs, have recently waned in popularity relative to large pre-trained language models like GPT-3. Such models show surprising capabilities on natural language understanding tasks, and have captured the public imagination with their ability to generate seemingly coherent text that can be mistaken for having been written "by hand".
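To make the core idea concrete, here is a minimal sketch of the kind of next-character prediction the paragraph describes: a character-level bigram model that counts which character tends to follow which, then samples text from those counts. The corpus, function names, and parameters are illustrative assumptions, not part of the project described here.

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each character, how often each other character follows it."""
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    return counts

def generate(model, start, length, seed=None):
    """Sample a continuation one character at a time, weighted by the counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last character never appeared mid-text
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat and the rat sat on the hat"
model = train_bigram_model(corpus)
print(generate(model, "t", 40, seed=1))
```

The same predict-the-next-unit principle scales from this toy counter all the way up to models like GPT-3; what changes is how the conditional distribution is estimated, not what it is for.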

Artists and poets have made use of language models in their work for decades. But the use of large pre-trained language models in the arts has some important drawbacks. The first is the staggering energy use of these models. One study estimates that training a single "transformer" emits many times more carbon than the average human does over a lifetime. State-of-the-art large language models can also cost thousands of dollars to train, requiring specialized hardware only available to large institutions. Moreover, large language models are trained on uncurated datasets that tend to encode hegemonic bias, and therefore perpetuate inequality.

The creative use of machine learning tends to closely follow the "bleeding edge" of the technology's development in industry. This project begins a series of practice-based experiments to establish a potential future for poetic uses of language models in which this coupling is broken. The experiments include modern generation algorithms retrofitted to Markov chains, paper language models, minimalist language models, and solar-powered language models, among others. The goal is to demonstrate that the language model's material basis is the actual locus of its aesthetic interest, not merely its ability to produce seemingly "real" text.
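As one hedged illustration of what "modern generation algorithms retrofitted to Markov chains" could look like, the sketch below applies top-k sampling, a truncation strategy popularized by neural text generation, to a plain word-level Markov chain. The choice of top-k (rather than, say, nucleus sampling), along with the corpus and parameter values, is an assumption for illustration only.

```python
import random
from collections import defaultdict, Counter

def train_markov(words, order=1):
    """Map each `order`-word context to a Counter of following words."""
    model = defaultdict(Counter)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context][words[i + order]] += 1
    return model

def generate_top_k(model, start, n_words, k=2, seed=None):
    """Generate text, restricting each step to the k most frequent
    continuations -- a truncated-sampling idea borrowed from neural LMs."""
    rng = random.Random(seed)
    out = list(start)
    order = len(start)
    for _ in range(n_words):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break  # context never seen: stop generating
        top = followers.most_common(k)
        choices, weights = zip(*top)
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the sun set and the moon rose and the stars came out and the night fell".split()
model = train_markov(corpus, order=1)
print(generate_top_k(model, ("the",), 10, k=2, seed=0))
```

The point of such a retrofit is that decoding strategies usually discussed only in the context of large neural models can be studied, and aesthetically exploited, on models small enough to run on paper or a solar-powered microcontroller.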

Allison Parrish is a computer programmer, poet, educator and game designer whose teaching and practice address the unusual phenomena that blossom when language and computers meet. She is an Assistant Arts Professor at NYU's Interactive Telecommunications Program, where she earned her master's degree in 2008.

Named "Best Maker of Poetry Bots" by the Village Voice in 2016, Allison's computer-generated poetry has recently been published in BOMB Magazine and Nioques. She is the author of @Everyword: The Book (Instar, 2015), which collects the output of her popular long-term automated writing project that tweeted every word in the English language. The word game Rewordable, designed by Allison in collaboration with Adam Simon and Tim Szetela, was published by Penguin Random House in August 2017 after a successful round of Kickstarter funding. Her first full-length book of computer-generated poetry, Articulations, was published by Counterpath in 2018.

Allison is originally from West Bountiful, Utah and currently lives in Brooklyn.