Post Summary
- Zelda Williams, daughter of the late Robin Williams, has publicly condemned a surge of AI-generated videos of her father, calling them “disgusting.”
- The viral clips, many made with OpenAI’s new Sora 2 app, have sparked widespread outrage and reignited debates over digital likeness and consent.
- Families of other deceased stars, like George Carlin, are also fighting a flood of unauthorized AI content, calling it “overwhelming, and depressing.”
- OpenAI initially defended the practice under “free speech interests” but has since promised to give families more control after significant public backlash.
- The controversy highlights a growing legal and ethical gray area as AI technology advances, leaving questions about who owns our digital identities after we die.
Viral Sora AI Clips of Robin Williams and Other Late Stars Spark Outrage. Is Tech Finally Crossing the Line?
When AI Resurrects the Dead Without Permission
Technology’s latest leap forward feels, for many, like a step too far. The launch of OpenAI’s powerful new video generator, Sora 2, on September 30, 2025, has unleashed a torrent of AI-generated clips featuring deceased celebrities. This digital resurrection has been met not with wonder, but with a wave of disgust and anger, particularly from the families of the stars being exploited. It all came to a head with a raw, emotional plea from Zelda Williams, daughter of the beloved comedian Robin Williams, that has since ignited a firestorm of debate over ethics, consent, and the soul of creative expression.

Zelda Williams Breaks Her Silence on AI Exploitation
On Tuesday, October 7, 2025, Zelda Williams took to her Instagram story with a simple but powerful demand for the creators and sharers of AI-generated videos of her late father: “Please, just stop sending me AI videos of Dad”. Her message was a direct response to the onslaught of synthetic clips she’d been receiving since Sora 2 made its debut.
Her frustration was palpable. “If you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop,” she wrote. “It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want”. In a scathing critique, Zelda compared the AI-generated content to something grotesque and manufactured. “You’re not making art, you’re making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else’s throat hoping they’ll give you a little thumbs up and like it. Gross”.
The Flood of Fake Robin Williams Content
The videos Zelda Williams is referring to have been circulating widely on platforms like TikTok. They range from fabricated ads to fake interactions at awards shows, all puppeteering a digital likeness of the actor, who died by suicide in 2014 at age 63 after struggling with depression and dementia. The phenomenon isn’t limited to Williams. Other deceased celebrities are also being digitally exploited, with viral clips showing Michael Jackson performing standup comedy, Stephen Hawking doing tricks in his wheelchair, and Rev. Martin Luther King Jr. stumbling through a speech. The rapid proliferation of these clips, which anyone with access to the Sora app and an everyday laptop like the Lenovo IdeaPad can now churn out, has amplified concerns about how technology is being used to manipulate legacies, something experts at The TechBull have explored in depth.
George Carlin’s Family Joins the Fight
The Williams family is not alone. Kelly Carlin-McCall, daughter of the legendary comedian George Carlin, has been facing a similar battle. She described receiving daily emails about AI videos using her father’s likeness and voice. “We are doing our best to combat it, but it’s overwhelming, and depressing,” she wrote on the social media platform Bluesky. Her experience underscores the immense challenge facing the families of deceased public figures in an era where their loved ones’ identities can be so easily copied and pasted into any scenario imaginable.

OpenAI’s Controversial Defense
Initially, OpenAI’s response did little to quell the outrage. A company spokesperson, in a statement to Axios, cited “strong free speech interests” in allowing users to depict historical figures. Their policy stated that for “public figures who are recently deceased, authorized representatives or owners of their estate can request that their likeness not be used in Sora cameos”. However, the company failed to clarify what “recently deceased” actually means, leaving a massive loophole. The policy essentially made the likenesses of the dead “fair game,” even as living people can only appear in Sora videos if they consent through the app’s cameo feature.
Sam Altman’s Weekend Reversal
The public outcry seems to have hit its mark. Over the weekend, OpenAI CEO Sam Altman acknowledged the company’s misstep, promising that rightsholders would be given “more granular control” over how their characters appear. In a blog post, Altman warned users to “please expect a very high rate of change from us; it reminds me of the early days of ChatGPT.” He admitted, “We will make some good decisions and some missteps, but we will take feedback and try to fix the missteps very quickly.” While this is a step in the right direction, it remains unclear how these new policies will be implemented for deceased celebrities and their estates.
This Isn’t Zelda’s First Warning
For Zelda Williams, this fight is nothing new. Back in 2023, during the Screen Actors Guild-AFTRA strike, she spoke out about the dangers of AI. “I’ve witnessed for YEARS how many people want to train these models to create/recreate actors who cannot consent, like Dad,” she stated. Her concerns then were focused on AI voice technology, an area that has seen rapid advancements with tools like Elevenlabs AI Voice Generator. “I’ve already heard AI used to get his ‘voice’ to say whatever people want and while I find it personally disturbing, the ramifications go far beyond my own feelings,” she wrote at the time. She described these AI recreations as, “at their worst, a horrendous Frankensteinian monster, cobbled together from the worst bits of everything this industry is”.
The Legal Gray Zone Nobody Asked For
The current situation has thrown us into a murky legal landscape. As Axios reporter Megan Morrone noted, “Who owns our AI likeness and that of our dead loved ones is shaping up to be the next big legal battle for Big Tech.” Tech companies have often adopted an “ask forgiveness, not permission” strategy, particularly when training AI on copyrighted material, a practice whose legality is still being tested in the courts. A legal expert observed that “there is nothing across the board that would prevent someone from using imagery and sharing it on social media if it’s not in an intentionally harmful context.” This legal vacuum could end up nudging families toward monetizing their deceased loved ones’ likenesses themselves, with OpenAI taking a cut, creating a deeply uncomfortable new reality.
Recommended Tech
In an age of rampant deepfakes and digital impersonation, protecting your online identity has never been more critical. The issues raised by the unauthorized use of celebrity likenesses are a stark reminder that our digital selves are vulnerable. The TechBull recommends exploring a comprehensive digital security service like Aura. It offers proactive protection against identity theft, financial fraud, and online scams, giving you peace of mind that your digital legacy—and that of your loved ones—is secure.
Hollywood’s Renewed AI Battle
The release of Sora 2 has reignited Hollywood’s deep-seated anxieties about AI. The controversy echoes the core concerns of the 2023 SAG-AFTRA strike, where protections against AI were a major point of contention. The Screen Actors Guild has long worried about AI “stealing performances, putting them out of work and devaluing human artistry.” The fact that AI content generation tools are becoming mainstream features on consumer devices, like the Google Pixel 9a with Gemini AI, only adds to the urgency of these concerns. Meta’s recent release of a similar video generation tool shows this is an industry-wide push, not an isolated incident.
The Human Centipede of Content
Zelda Williams didn’t mince words when describing the current state of AI-driven media. “Stop calling it ‘the future,’ AI is just badly recycling and regurgitating the past to be re-consumed,” she argued. She offered a visceral metaphor: “You are taking in the Human Centipede of content, and from the very very end of the line, all while the folks at the front laugh and laugh, consume and consume”. Her critique gets to the heart of the matter: these tools aren’t creating new art; they’re reprocessing what already exists. Instead of engaging with this cycle, one might be better off enjoying authentic, human-made films on a quality device like the Magcubic Home Cinema Projector. An expert on the matter put it this way: “It’s not about shutting down the technology. It’s about making sure that we consider how technology is used and who is being hurt.”
What Happens Next
The path forward is anything but clear. We are now in a tense standoff between the “strong free speech interests” touted by tech companies and the fundamental right of families to protect the legacies of their loved ones. OpenAI has promised rapid changes, but questions about enforcement and how estates can effectively fight back against a tidal wave of unauthorized content remain. This could lead to a new era of deepfake scams and digital impersonation that our laws are not yet equipped to handle. The crucial next steps will involve not just corporate policy changes but potentially new legislation aimed at making it harder to exploit digital likenesses without consent.
Where Do We Draw the Line?
The viral saga of Robin Williams’ digital ghost is more than just another tech controversy; it’s a profound ethical dilemma. It forces us to ask fundamental questions about memory, identity, and respect in the digital age. As AI tools become more powerful and accessible, we are collectively deciding how to handle the digital specters of those who have passed. The outrage sparked by these clips suggests that for many, a line has been crossed—a line that, once erased, may be impossible to redraw.

