The AI Con: Emily Bender and Alex Hanna Challenge Tech's Hype Machine on Intelligent Machines
AI-created, human-edited.
In a thought-provoking episode of Intelligent Machines, hosts Leo Laporte, Jeff Jarvis, and Paris Martineau welcomed Emily M. Bender and Alex Hanna, authors of "The AI Con: How to Fight Big Tech's Hype Machine and Create the Future We Want." The conversation dove deep into a critical examination of AI technology, its limitations, its environmental impact, and the language we use to discuss it.
A central theme throughout the discussion was the importance of language when talking about AI technologies. Bender, a professor of linguistics at the University of Washington, emphasized the need to "disaggregate" what we mean by AI, noting it's not one unified thing.
"I'm not going to say AI, but I think we can talk about good and bad uses of automation," Bender stated, highlighting her preference to avoid terms that anthropomorphize these systems. She explained that they use "AI" without scare quotes in their book only when naming the industry, the "con," or a purported research field—not when discussing the actual systems or tools.
Both authors argued that the term "AI" itself is problematic because it presents these technologies as unified entities rather than collections of specific methods and applications. Hanna noted they prefer to "unreify" or "unthingify" AI to have more precise conversations about specific technologies and their harms.
Laporte positioned himself as someone who finds value in AI tools, mentioning his use of Claude Code for coding assistance and Perplexity for search. This set up a productive tension in the conversation, with the authors acknowledging that certain machine learning applications can be useful when "well scoped, well tested, and involving appropriate training data."
However, they sharply criticized what they call "synthetic text extruding machines" and other media synthesis technologies. Bender expressed particular concern about the degradation of our information ecosystem through AI-generated content that "corresponds to nothing anybody said."
Hanna highlighted tangible harms occurring in workplaces, where AI systems are replacing creative jobs despite performing poorly. She cited examples from the US Digital Service and Duolingo, where content developers with institutional knowledge were replaced by automated systems.
One of the most striking criticisms raised concerned the environmental impact of AI development. Hanna pointed out that the construction of data centers for AI is "actively inhibiting the climate goals that the Paris Agreement set out," with major tech companies dramatically exceeding their carbon baselines.
When Laporte suggested this might be comparable to the environmental impact of other technologies, such as Zoom calls, Martineau interjected that it was like "comparing a forest fire to a match," especially given the concentrated investment in and focus on AI across industries.
Jeff Jarvis prompted a fascinating discussion about meaning and understanding by asking Bender to define these concepts from a linguistic perspective. Bender explained that meaning is not inherent in text but requires bringing in knowledge of linguistic systems and reasoning about communicative intent—something LLMs fundamentally lack.
She used a thought experiment about being trapped in a Thai library with only Thai books to illustrate how impossible it would be to learn Thai without external reference points or guidance. The meaning, she emphasized, is not in the text itself—it comes from our understanding of the linguistic system and our ability to reason about the speaker's intent.
When asked what message they would give to technology journalists, both authors critiqued the prevalence of "access journalism" that uncritically reprints press releases and fails to question who benefits from new technologies.
"Technology journalism has become so much access journalism... very credulous about what products do," Hanna noted, calling for a return to first principles: questioning who benefits, what they have to gain, and examining the political economy behind the technology.
Bender added that journalists should be "very, very skeptical about claims of functionality," particularly regarding evaluation methods in academic-looking papers produced by tech companies.
Despite Laporte's suggestion that they might be fighting a losing battle, both authors maintained the importance of their critique. Bender drew a parallel to automobiles and how the tire industry lobbied to tear up rail systems: "We took a wrong turn there. That doesn't mean we have to do it again."
Hanna pushed back against technological determinism, rejecting the notion that progress is inevitable or that protections emerge "from the beneficence of billionaires." She emphasized that labor rights and environmental protections came from people fighting back against harmful technologies.
The discussion highlighted the complex relationship between technological advancement, critical analysis, and social responsibility. While Laporte represented a more optimistic view of AI's potential benefits, Bender and Hanna's critique focused on the need for more precise language, ethical development practices, and honest evaluation of AI's limitations and harms.
Their book, "The AI Con," and their podcast, "Mystery AI Hype Theater 3000," continue this critical examination, inviting readers and listeners to help shape a technological future based on genuine needs rather than hyperbolic promises.