
Inside OpenAI: Karen Hao’s Deep Dive on “Empire of AI”

AI-generated, human-reviewed.

On this episode of Intelligent Machines, the panel sat down with journalist and author Karen Hao, whose bestselling book "Empire of AI" (Amazon Affiliate Link) provides an unprecedented inside look at the origins, ambitions, and conflicts shaping OpenAI. Hao's reporting reveals a tech company whose outward claims about transparency and progress for humanity have long been at odds with its private actions, internal disagreements, and industry-driving ambitions.

Gaining Rare Access to OpenAI

Karen Hao recounted to the Intelligent Machines panel how, in 2019, she became one of the few journalists to embed within OpenAI for a multi-day profile. At the time, OpenAI was pitching itself as an open, nonprofit organization dedicated to developing artificial general intelligence (AGI) for the benefit of all. Hao was invited to observe the company and interview key players such as Sam Altman, Greg Brockman, and Ilya Sutskever—founders whose worldviews would soon shape the future of AI.

However, as soon as Hao arrived, she sensed underlying tensions. She later learned her photo had been distributed to security as someone to be watched. Even at that early stage, despite the company's claims of openness, executives struggled to clearly articulate their goals or the reasoning behind their drive to develop AGI, especially when pressed to explain themselves in terms accessible to the general public.

OpenAI’s Shifting Mission and Growing Secrecy

As discussed on Intelligent Machines, Hao’s reporting revealed that OpenAI’s “public-serving” narrative masked a competitive, winner-takes-all mindset rooted in Silicon Valley ego and ambition. While OpenAI originally positioned itself as altruistic, wanting to prevent concentration of AI power at companies like Google, internal discussions and external pressures quickly changed the game.

The panel highlighted that Hao’s book exposes how once OpenAI accepted major funding from Microsoft and pivoted toward a more commercial agenda, internal contradictions grew. What began as a nonprofit was soon negotiating corporate partnerships and scaling resources at phenomenal cost, raising billions of dollars and ultimately prioritizing speed and dominance over the “good guy” label.

Defining AGI: A Vague and Shifting Target

One of the central themes the Intelligent Machines panel explored in the interview is the vagueness of the term AGI. Hao emphasized that even within OpenAI and the broader AI "cult," there is no agreement on what AGI actually means: it is generally taken to be "human-level intelligence in machines," but what constitutes human intelligence remains hotly debated.

The result, according to Hao, is that AGI serves as a vessel for personal agendas. Because OpenAI’s foundational concept is so loosely defined, it enables executive drift, internal conflict, and shifting narratives about what they’re building and why. These unresolved differences have fueled power struggles, ideological infighting, and ongoing leadership turmoil at the company.

The Impact of Scale: Empire-Building and Industry Influence

The conversation also spotlighted what Hao calls the “imperialist” approach to AI. As OpenAI and others have scaled up, consuming massive compute, energy, and data, their actions began to mirror those of old empires—acquiring land and resources, monopolizing research talent, and imposing their own worldview on the direction of both technology and broader society.

Intelligent Machines panelists noted that Hao critiques this scale-at-all-costs philosophy, arguing that OpenAI now prioritizes the expansion of its models and technological lead over truly addressing the public good. The history of internal friction, secrecy, and a lack of clarity about actual goals underlines just how far the company has drifted from its nonprofit, pro-transparency roots.

Key Points

  • OpenAI promoted transparency but flagged journalists like Karen Hao as a security risk
  • The company’s initial nonprofit story masked deep-seated ambitions to “win” the AI race
  • No coherent internal definition of AGI exists—fueling both hype and infighting
  • OpenAI’s pivot to corporate funding (notably via Microsoft) accelerated secrecy and competitiveness
  • Hao criticizes the industry-wide shift to scaling at all costs, with detrimental effects on research diversity and the environment
  • Internal leadership battles and uncertain direction have marked OpenAI’s evolution

What This Means

  • OpenAI’s noble origins are complicated by ego, competitive urgency, and financial realities
  • Lack of clarity about AGI’s definition creates a shifting narrative that serves leadership objectives but confuses the public
  • Empire-like behavior in AI is concentrating resources, labor, and power among a few influential entities
  • Skepticism is warranted about whether OpenAI and its industry peers are advancing AI in ways that truly benefit society

The Bottom Line

Karen Hao’s interview on Intelligent Machines confirms that OpenAI’s “open” and public-spirited branding has long been at odds with internal secrecy, shifting motives, and the pursuit of dominance in artificial intelligence. Her reporting underscores that the path to AGI is muddled by unclear definitions, ideological battles, and unchecked scale—and raises critical questions about who truly benefits from this new “Empire of AI.”

Subscribe to Intelligent Machines for more expert panels and insider tech discussions.
