From ChatGPT to Woebot: The Rise and Risks of Digital Therapy
AI-created, human-edited.
The mental health crisis in America is undeniable. With two-thirds of U.S. counties lacking a practicing psychiatrist and therapy sessions costing upwards of $250 per hour, millions of people are turning to an unexpected alternative: artificial intelligence therapists. But according to author Daniel Oberhaus, we're rushing headfirst into a digital mental health revolution without understanding its consequences.
In an interview on Intelligent Machines, Oberhaus discussed his new book "The Silicon Shrink: How Artificial Intelligence Made the World an Asylum," offering a sobering look at an intersection of AI and mental healthcare that is already transforming how we think about therapy.
The scale of adoption is staggering. Apps like Woebot, designed specifically for therapeutic purposes, are gaining widespread use alongside general AI tools like ChatGPT, which users naturally gravitate toward for emotional support. Oberhaus shared a striking anecdote about his barber, who, within hours of discovering ChatGPT, began using it for daily therapy sessions.
"There is something so natural about doing this," Oberhaus observed, noting how people have been turning to AI for therapeutic conversations since the 1960s with programs like ELIZA. This natural inclination toward confiding in machines presents both opportunities and risks that we're only beginning to understand.
Host Leo Laporte acknowledged the personal side of this trend, sharing that his daughter, who has bipolar disorder, finds value in talking to chatbots alongside her traditional therapy. "If I thought it was dangerous for her... I would say you can't do that, but I think she gets a lot out of it," Laporte admitted, highlighting the complex real-world decisions families face.
The appeal of AI therapy stems from a genuine crisis in mental healthcare accessibility. Paris Martineau pointed out the economic realities: therapists face payment challenges, insurance complications, and the emotional toll of dealing with "heavy shit professionally" all day. The result is significant clinician burnout and a shortage that technology companies are eager to fill.
Jacob Ward raised a crucial concern about this market-driven solution: "If we start to normalize simulated therapists... will anyone ever be able to be a human therapist again?" The fear is that we might create a downward spiral where digital solutions become the default, not because they're better, but because they're cheaper and more scalable.
Perhaps the most troubling aspect of the AI therapy boom is the lack of evidence supporting its effectiveness. Oberhaus revealed that despite grandiose claims on company websites, the peer-reviewed research often shows modest results—like AI chatbots performing better than government pamphlets in small, short-term studies.
"When tech companies have data that shows their thing works, they want to tell you about that," Oberhaus noted. "My experience with tech companies is... there will be seven press releases and a web conference. So it's very curious that we're missing that."
The absence of robust clinical data is particularly concerning given that mental health outcomes have actually worsened over the past 20 years, making psychiatry "the only medical specialty where, as it progresses, the outcomes are getting worse for patients."
Beyond individual therapy apps lies a more troubling development: the emergence of what Oberhaus calls a "psychiatric surveillance economy." AI-powered mental health monitoring is being deployed across institutions—from K-12 schools to colleges to workplaces—often without users' explicit awareness.
These systems vacuum up vast amounts of digital exhaust—typing speed, scrolling patterns, social media activity—searching for patterns that might indicate mental health crises. Facebook runs suicide detection algorithms on U.S. users (though not in the EU due to privacy regulations), while schools deploy monitoring systems on student devices.
The problem, Oberhaus explained, is that "we don't understand mental disorders well enough to automate them at scale." Without knowing which data points actually correlate with mental health issues, these systems must monitor everything, creating what he describes as a "digital asylum" where people are warehoused in algorithmic oversight rather than receiving genuine care.
Unlike traditional therapy, AI mental health tools operate in a regulatory gray area. While apps like Woebot claim HIPAA compliance, they're not actually required to meet those standards. Users' most intimate thoughts and struggles become data points in systems with unclear privacy protections and unknown effectiveness.
Ward highlighted another concerning development: proposed federal legislation would prevent states from regulating AI for a decade, potentially cementing this unregulated landscape just as these technologies are becoming ubiquitous.
Despite his book's warnings, Oberhaus doesn't advocate abandoning AI therapy entirely. He recognizes that some people do benefit from these tools and acknowledges the genuine need they're attempting to address. His call is for a more measured approach: "All I'm asking is like, please just show me the data. Does this work as advertised?"
The hosts agreed that the solution isn't to throw out AI therapy entirely, but to approach it more judiciously. Laporte emphasized the need to "look at it and think about it and move forward judiciously," while Martineau noted that "most things exist somewhere in gray scale, rather than all black or all white."