The Illusion of Neutrality: Jacob Ward Reveals AI's Hidden Influence on Human Choice
AI-created, human-edited.
In a thought-provoking episode of Intelligent Machines, host Leo Laporte welcomed author and journalist Jacob Ward to discuss the ethical implications of artificial intelligence and how it's reshaping human decision-making. The conversation, which included co-hosts Jeff Jarvis and Paris Martineau, delved into Ward's book The Loop: How Technology Is Creating a World Without Choices and How to Fight Back and his concerns about AI's growing influence.
Ward, a former correspondent for PBS, CNN, Al Jazeera, and NBC News, shared insights from his book, which Laporte described as "enthralling." Ward explained how his work on a PBS documentary series called Hacking Your Mind revealed how humans are "captive to very ancient circuitry" that governs most of our decisions without our awareness. Simultaneously, his work as a tech correspondent exposed him to companies developing AI systems designed to "corral human behavior."
"As I saw these two things coming together, I thought, oh, this is bad," Ward said. "There's a scary thing coming in which I worried that companies were going to basically foist AI on us and wind up amplifying kind of the worst parts of being human, our most tribal, biased stuff."
Ward noted that his book was published about nine months before ChatGPT's release, making his concerns even more relevant today.
A central theme in the discussion was the dangerous illusion of neutrality in AI systems. Ward shared a compelling example from researchers John Patty and Elizabeth Penn about traffic cameras in Chicago:
The cameras, designed to capture speeding violations without human bias, ended up issuing disproportionately more tickets to Black and brown residents. While the cameras themselves were "neutral" and distributed randomly across the city, the underlying infrastructure was not: affluent neighborhoods had narrower streets, while working-class areas had wider roads with the same speed limits. Wider roads invite faster driving, which meant more violations, and more tickets, in those neighborhoods.
"That's the real threat – the impression of neutrality," Ward emphasized.
Paris Martineau reinforced this point, stating simply: "Nothing is neutral."
The conversation turned to the feasibility of AI safety measures, with the hosts taking somewhat different positions:
Jeff Jarvis argued that guardrails are "bound to fail" and that we shouldn't fool ourselves into thinking we can eliminate all risks. "If we fool ourselves into thinking that we can get rid of this stuff, the bad stuff, then we have false comfort," he said.
Martineau took a more proactive stance, suggesting that while we can't anticipate every harm, "it's not foolish to think that the creators of new technologies should be engaged in a process of trying to predict the potential harms of what they are creating and solving for that."
Ward agreed with Martineau, particularly given the profit motives in the AI industry: "Especially considering just how much money can be made in that industry... you can afford to put aside a little bit of that."
One of the most fascinating parts of the discussion centered on what Ward called "the heroin problem" – a thought experiment from AI companies about how a hypothetical AI assistant should respond to a user with a heroin addiction:
"Should it get you off heroin, try to get you into rehab? Or is it supposed to make your addiction easier to live with, facilitate it? Does it book the appointment with your dealer?" Ward asked.
This example highlighted the challenge of encoding values into AI systems. Ward pointed out that while companies like Anthropic are creating "constitutional AI" with supposedly universal human values, "you can't find two political scientists that agree on values."
He contrasted this with Elon Musk's Grok 3, which now features an "unhinged mode" in which "all the guardrails come off."
The discussion turned to regulation, with Ward challenging the common tech industry argument that regulation stifles innovation:
"The idea that we can't regulate it and that regulation stands in the way of our competing effectively with China, is ridiculous when you see how much China regulates this stuff," Ward noted.
Jeff Jarvis raised concerns about companies pushing responsibility onto regulators: "What he's doing in saying that is pushing off the responsibility of those ethical judgments on the government. 'You guys, come up with the rules, then we'll do whatever is left over.'"
Despite the challenges, Ward offered some hope for the future. He highlighted three areas where he sees progress:
- Human collaboration: Ward argued that the greatest breakthroughs in history came "when we got together and really dug in and did the hard work of trying to come together over something," citing the years-long negotiations behind the Good Friday Agreement.
- Legal accountability: Class action lawsuits against predatory tech companies are beginning to succeed, such as cases against "social casino games" that targeted vulnerable elderly users.
- Policy solutions: Ward shared the example of backup cameras in cars, which became mandatory despite the relatively small number of backup accidents (60-100 deaths annually). "We are great at putting the toothpaste back in the tube. We put the toothpaste in the tube to begin with," he said.
The conversation highlighted the ongoing tension between innovation and responsibility in AI development. While tech companies continue to push boundaries, Ward's work suggests we need more thoughtful approaches to ensure AI systems enhance rather than exploit human decision-making.
As Ward's new podcast The Rip Current continues to explore "the big hidden forces just beneath the surface that threaten to pull us all out to sea," his insights remain crucial for navigating the rapidly evolving AI landscape.