Sora 2’s App, Viral Intentions, and Legal Backlash Explained
AI-generated, human-reviewed.
AI-generated video content is here and it’s moving fast. In the latest episode of This Week in Tech, host Leo Laporte and an international expert panel assess Sora, OpenAI’s viral new app, and what it means for the future of video, creativity, copyright, and our ability to trust what we see online.
What Is OpenAI Sora and Why Is It Making Headlines?
OpenAI’s Sora app debuted at number one on the iPhone App Store’s download charts, taking the internet by storm with its ability to generate short, TikTok-style videos using artificial intelligence. Users can create highly realistic clips simply by recording a short “cameo” of themselves and typing a prompt.
On This Week in Tech, Leo Laporte and guests explained that Sora is built on generative AI models: algorithms that create new content, such as images and video, from patterns learned during training. Recently, OpenAI and competitors such as Google, with its Veo 3 model, have made it possible to generate video from nothing more than a short text prompt.
Why the big reaction? According to the episode’s panel, Sora’s arrival represents a turning point where anybody can easily create convincing video content, blurring the line between real and AI-created media.
What Are the Benefits and Risks of AI Video Generation?
Creativity and Experimentation
On the show, Patrick Beja noted that while much of the current Sora content is experimental or humorous, generative AI video opens new creative possibilities. Users can quickly produce scenes that previously required production skills or big budgets. Panelists agreed that artists who adapt could use it as a tool for unique expression.
Copyright and Legal Controversies
A major concern, highlighted by Laporte and Iain Thomson, is the copyright chaos resulting from Sora’s open platform. Almost immediately, users started making videos featuring copyrighted characters and media, triggering legal warnings from rights holders. OpenAI responded by announcing new controls that let people and companies restrict how their likeness or intellectual property is used.
Trust, Deepfakes, and “AI Slop”
Georgia Dow, a psychotherapist and YouTuber, warned about the psychological and societal ramifications. As AI-generated videos become more realistic, it becomes much harder to distinguish real footage from fakes. This erodes trust in anything we see and accelerates the spread of misinformation. The episode discussed how, just as AI-generated images revolutionized "what you can fake online," video is now following suit.
There’s even a growing backlash against what’s being dubbed “AI slop”, or content churned out by algorithms that clogs feeds with low-effort, misleading, or copyright-infringing material. The show cited a new study on AI-generated “workslop” in business, pointing to reduced trust and satisfaction in professional communication as well.
How Much Do Sora and Generative AI Video Cost?
Panelists speculated that each AI-generated video could cost OpenAI several dollars in compute resources, which brings up critical questions about business models. While Sora is free for now (with an invite), long-term success will depend on monetization through ads, user payments, or deals to share revenue with rights holders.
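To see why compute cost dominates that question, here is a minimal back-of-the-envelope sketch in Python. Beyond the panel’s “several dollars per video” guess, every figure in it is an illustrative assumption, not a number from the episode.

```python
# Back-of-the-envelope economics for an AI video app.
# All figures below are illustrative assumptions, not reported numbers.

COST_PER_VIDEO_USD = 3.00            # panel's rough "several dollars" of compute per clip
VIDEOS_PER_USER_PER_DAY = 5          # assumed activity for an engaged free user
AD_REVENUE_PER_USER_PER_DAY = 0.10   # assumed ad income per daily active user

def daily_margin_per_user(cost_per_video: float,
                          videos_per_day: float,
                          ad_revenue_per_day: float) -> float:
    """Return the (likely negative) daily margin per active user."""
    return ad_revenue_per_day - cost_per_video * videos_per_day

if __name__ == "__main__":
    margin = daily_margin_per_user(COST_PER_VIDEO_USD,
                                   VIDEOS_PER_USER_PER_DAY,
                                   AD_REVENUE_PER_USER_PER_DAY)
    print(f"Daily margin per active user: ${margin:,.2f}")
    # With these assumptions, each active user loses roughly $14.90 per day,
    # which is why ads alone are unlikely to cover generation costs.
```

Even if the real per-video cost is a fraction of that estimate, the gap illustrates why the panel expects paid tiers or revenue-sharing deals with rights holders rather than a purely ad-supported model.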
What Can Be Done About the Downsides?
According to This Week in Tech, the panelists believe that while AI video technologies can’t be “put back in the bottle,” more guardrails are needed. They argue for:
- Stronger copyright protections: Rights holders and creators need fair compensation and controls.
- Technical solutions for deepfake detection: Tools are needed to verify authenticity and provenance (a minimal illustration follows this list).
- Media literacy and skepticism: As Laporte and Dow noted, individuals need to be more critical of what they see online, knowing that “truthiness” can be algorithmically manufactured.
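One widely discussed guardrail is provenance metadata such as C2PA “Content Credentials,” which can travel inside a media file. The sketch below is a deliberately naive Python check that only looks for the byte markers of an embedded C2PA manifest; it does not verify the manifest’s cryptographic signatures, which requires a dedicated verifier such as the open-source c2patool. Treat it as an illustration of the idea, not a detection method.

```python
# Naive illustration only: check whether a media file *appears* to carry a
# C2PA / Content Credentials manifest by scanning its raw bytes for the
# manifest label. Presence proves nothing about authenticity; real
# verification means validating the manifest's signatures with a proper tool.

from pathlib import Path
import sys

def looks_like_it_has_content_credentials(path: str) -> bool:
    """Heuristic: True if the raw bytes contain the 'c2pa' manifest label."""
    data = Path(path).read_bytes()
    return b"c2pa" in data

if __name__ == "__main__":
    for media_file in sys.argv[1:]:
        found = looks_like_it_has_content_credentials(media_file)
        print(f"{media_file}: {'manifest marker found' if found else 'no marker found'}")
```

The point is that provenance signals can be carried by the file itself; the hard part, as the panel noted, is getting platforms to check and surface them, and getting viewers to trust them more than their eyes.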
Key Takeaways
- Sora is making AI video generation accessible, viral, and potentially disruptive.
- The technology makes it easy to create realistic-looking content, raising the risk of misinformation and deepfakes.
- Major copyright challenges emerged immediately, forcing quick policy changes.
- AI-generated “slop” can overwhelm feeds and reduce trust, even in professional environments.
- The business model for AI video is unclear, with compute costs putting pressure on OpenAI and others.
- There are benefits for creatives and meme-makers, but real concerns for authenticity, privacy, and content overload remain.
Generative AI video apps like Sora are changing what’s possible online, for better and for worse. While the creative potential is vast, issues of copyright, digital trust, and content quality must be addressed before “AI slop” becomes the new normal. As This Week in Tech's panel of experts made clear, this is just the beginning of a much larger transformation in how we create, share, and believe digital media.
Want to stay ahead on tech’s biggest shifts? Listen and subscribe to This Week in Tech: https://twit.tv/shows/this-week-in-tech/episodes/1052