Mars, AI, and Tech Billionaires: A Scientist's Brutal Assessment

AI-created, human-edited.

What happens when an astrophysicist turns his analytical lens on Silicon Valley's most audacious claims? You get a reality check that's both fascinating and sobering. In a recent episode of Intelligent Machines, hosts Leo Laporte, Paris Martineau, and Jeff Jarvis sat down with Adam Becker, author of More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, for a conversation that systematically dismantles some of tech's most persistent myths.

Host Jeff Jarvis praised Becker's methodical approach to debunking tech mythology, noting how the author "fairly and meticulously presents a case before tearing it apart fairly and meticulously." This approach is particularly evident in Becker's analysis of Ray Kurzweil's singularity predictions.

Kurzweil's central thesis—that we're approaching a technological singularity where AI will become superintelligent and fundamentally transform civilization—relies heavily on the idea of exponential technological growth throughout history. But Becker argues this apparent trend is largely an illusion of perspective.

"If you ask me to make a list of the most important events in the history of humanity, that list is going to have more things on it that happened more recently than things that happened longer ago, because that's just the way that we think about history," Becker explained, using a memorable analogy about meal memories to illustrate how our perspective distorts historical patterns.

More critically, Becker points out that exponential trends in nature always end—either smoothing out or leading to crashes. Even Moore's Law, the poster child for exponential technological growth, is ending as predicted by its originator, Gordon Moore himself.

Perhaps nowhere is Becker's scientific expertise more valuable than in his analysis of Mars colonization plans. Elon Musk's promise of a million people on Mars by 2050 as humanity's "backup" gets a thorough reality check from the astrophysicist.

"Mars is terrible," Becker states bluntly. "The gravity is too low, the radiation levels are too high, there's no air and the dirt is made of poison."

His most compelling argument? Even the worst disaster in Earth's history—the asteroid impact that killed the dinosaurs 66 million years ago—left Earth more habitable than Mars has ever been. "Six hours after that thing hit, it was a nicer day for mammals on Earth than it has been on Mars ever," he notes, pointing out that mammals survived that catastrophe, while no mammal could survive on Mars' surface without a spacesuit.

The economic realities are equally daunting. A truly self-sustaining Martian civilization would require not one million people but an estimated half billion to one billion, the scale needed to maintain the industrial base that a high-tech civilization depends on.

The interview also tackled effective altruism and long-termism, movements that have gained significant influence in Silicon Valley. While Becker acknowledges that caring about future generations isn't inherently problematic, he argues that long-termists have taken this concern to illogical extremes.

The movement's focus on hypothetical future risks from speculative technologies often overshadows addressing real, present-day problems. This creates what Becker calls a "moral case for space settlement" that prioritizes maximizing the number of future humans over improving conditions for current ones.

One of the most interesting dynamics in the interview was the tension between the hosts' varying perspectives on AI. While Leo Laporte defended the technology's legitimate applications and warned against painting all AI development with the same brush, Becker advocated for more precise language around these tools.

"AI is a marketing tool," Becker argued, noting that when he was a kid, AI meant "Commander Data from Star Trek," not today's text generation engines. He prefers calling current systems exactly what they are: sophisticated text and image generation tools that excel at pattern matching but lack genuine understanding.

Paris Martineau suggested that social shame might be an effective tool for combating AI overhype, comparing it to how Google Glass was received. "It's kind of embarrassing and cringy to believe earnestly that the AI is real and wants to steal your wife," she noted.

Underlying all these technological promises, Becker argues, is a fundamental issue with wealth concentration. Drawing on Louis Brandeis' century-old observation that a society can have either extreme wealth inequality or democracy but not both, Becker advocates for wealth caps around $500 million.

His argument is particularly compelling when applied to tech billionaires: their wealth wouldn't exist without massive public investments in education, infrastructure, and foundational technologies like the internet (originally ARPANET, a government project). Yet the public sees little return on these investments while billionaires use their platforms to promote increasingly grandiose visions.

The conversation also addressed how media coverage amplifies these technological myths. Jeff Jarvis noted the strange phenomenon of an industry whose "PR is doom"—companies simultaneously claiming their products are incredibly powerful and potentially civilization-ending as a marketing strategy.

Becker advocates for better journalism and social norms around technology coverage, emphasizing the need to treat AI systems as the tools they are rather than anthropomorphizing them.

Throughout the interview, the three hosts brought different viewpoints to the discussion:

  • Jeff Jarvis emerged as perhaps the most aligned with Becker's critiques, particularly regarding long-termism and the problems with current AI hype
  • Leo Laporte maintained a more nuanced position, defending legitimate AI applications while acknowledging the problems with overhype and misrepresentation
  • Paris Martineau focused on the social and cultural implications, particularly how hype cycles and overcapitalization might harm genuine innovation

Perhaps the most refreshing aspect of Becker's analysis is its grounded perspective. Rather than treating these technologies as revolutionary forces that will reshape humanity, he advocates viewing them as "normal technology" that will be good at some things and bad at others—just like any other technological advancement.

This perspective doesn't diminish the real potential of AI or space technology, but it does provide a much-needed reality check for an industry that too often promises to solve fundamental problems while creating new ones.

As the interview concluded, it became clear that Becker's real target isn't technology itself, but the dangerous combination of unchecked wealth, uncritical media coverage, and the human tendency to mistake marketing for prophecy.