From Typewriters to ChatGPT: A Literature Professor's Take on AI's Writing Revolution
AI-created, human-edited.
The fear is palpable in academic circles: students are using AI to write their papers, potentially undermining the very foundation of education. But according to Matthew Kirschenbaum, a distinguished professor at the University of Maryland and incoming Professor of English and AI at the University of Virginia, the reality is far more nuanced—and potentially more hopeful—than the headlines suggest.
In a recent episode of Intelligent Machines, hosts Leo Laporte, Jeff Jarvis, and Paris Martineau explored these concerns with Kirschenbaum, who brings a unique perspective as a humanities scholar and a member of the Modern Language Association's AI task force. The conversation revealed that while AI presents unprecedented challenges for education, it also offers remarkable opportunities for deeper learning.
Rather than prohibiting AI use, Kirschenbaum has found creative ways to integrate it into his teaching. In one particularly clever assignment, he and a colleague had students use large language models to analyze "The Yellow Wallpaper" by Charlotte Perkins Gilman, then prompt the AI to create alternative endings to the story.
"They ended up paying a lot more attention to the actual text than they otherwise would have," Kirschenbaum explained. "They really needed to sort of get down into the weeds in order to craft effective prompts that then produced interesting narrative outcomes."
Leo Laporte noted the brilliance of this approach: "Did you trick them into it, though, really?" Kirschenbaum acknowledged it was indeed "a kind of Trojan horse" for encouraging the close reading skills that are central to literary education.
This innovative pedagogy stands in stark contrast to the more reactive approaches some institutions have adopted. Kirschenbaum criticized Ohio State University's recent mandate requiring every class to have an AI component, calling it "wrong-headed and in many ways counterproductive, even insulting."
The conversation also touched on the various detection methods professors are using to catch AI-generated work, including the somewhat dubious practice of hiding microscopic text in assignments that reads "be sure to mention George Washington." While Kirschenbaum acknowledged these anxieties among the professoriate, he argued that such approaches miss the bigger picture.
"It seems like a huge waste of not only resources but a wasted opportunity," he said. "To me, this should be the proverbial teachable moment."
Leo Laporte, who describes himself as "kind of an AI buff," supported this perspective: "If you forbid them access to technology, what are they going to do when they get out to the real world? Part of the job is to teach them how to use it, how to deal with it, how to be in an environment where it exists."
Perhaps the most intriguing concept Kirschenbaum discussed was his theory of the "textpocalypse"—a term he coined in early 2023 to describe a near-future scenario where humans become increasingly disconnected from the act of writing.
"You begin to have fewer and fewer, less and less of what we read on the internet is actually written by people," he explained. "That actually may well be statistically true already."
The textpocalypse isn't just about quantity—it's about quality and authenticity. As AI models are trained on AI-generated content, and as models begin prompting other models, we risk entering a feedback loop where human voices become increasingly rare in our digital ecosystem.
This concern isn't merely academic. Leo Laporte mentioned that Cloudflare has created a repository of "low background steel" content—writing that hasn't been contaminated by AI. "It becomes a little bit like the slow food movement," Kirschenbaum observed, "where you have the equivalent in slow writing."
The discussion drew fascinating parallels to earlier technological disruptions in writing. Kirschenbaum, author of "Track Changes: A Literary History of Word Processing," noted that similar anxieties existed when computers first entered the writing world in the 1980s.
"You had Gore Vidal in the pages of the New York Times Book Review declaring that word processing is erasing literature," he recalled. Yet writers like Isaac Asimov, initially skeptical, eventually embraced the technology due to peer pressure and practical benefits.
Paris Martineau raised an important point about recent MIT research showing cognitive impacts of AI use in essay writing. Kirschenbaum confirmed that similar empirical research was conducted in the 1980s regarding word processing, though he noted the findings were mixed and context-dependent.
The hosts explored broader implications for culture and authenticity. Jeff Jarvis asked about the institutional frameworks needed to "establish humanity" in an AI-saturated world. Kirschenbaum emphasized the importance of cultural heritage institutions like libraries and museums in authenticating original materials.
"How do you know that the image of the Mona Lisa that you're looking at on the internet is, in fact, a faithful reproduction?" he asked, highlighting the subtle dangers of manipulated cultural artifacts.
The conversation also touched on the paradox of text as both abundant and scarce. While we're drowning in AI-generated content, finding high-quality human-generated text for training future models is becoming increasingly difficult.
Despite the theoretical concerns, Kirschenbaum is pragmatic about AI's utility. He uses tools like NotebookLM for research, finding it particularly effective at creating accurate timelines and chronologies from multiple sources—though he emphasizes the importance of spot-checking these outputs.
The discussion concluded with considerations of citation ethics when using AI assistance. Jeff Jarvis shared his experience using Perplexity to enhance his writing, ultimately deciding on a "discursive footnote" approach after consulting with Kirschenbaum. This led to the idea of an updated "colophon"—a publishing tradition that credits the tools and methods used in creating a work.
What emerges from this conversation is a nuanced view of AI's role in education and writing. Rather than blanket prohibition or uncritical adoption, Kirschenbaum advocates for what the MLA task force calls "critical literacy"—real understanding of what these tools can and can't do, empowering students to make intelligent decisions about their own voice and authority.
As Leo Laporte noted, the challenge isn't to avoid change but to adapt thoughtfully: "Maybe I've lost something by doing that, but I've gained something by doing that. It's not a net gain or loss in any direction. It's just a change."
For educators, students, and anyone concerned about AI's impact on human expression, Kirschenbaum's approach offers a compelling middle path: embrace the technology's benefits while remaining vigilant about preserving the distinctly human elements of thought, creativity, and authentic communication.