AIToolHub

Scarlett Johansson vs OpenAI: The AI Voice Lawsuit Explained

7 min read
1,729 words

What Actually Happened Between Scarlett Johansson and OpenAI

In May 2024, OpenAI's GPT-4o demo put the spotlight on "Sky," one of ChatGPT's five voices. Almost immediately, people noticed it sounded eerily similar to Scarlett Johansson's portrayal of Samantha, the AI assistant in the 2013 film Her. That wasn't a coincidence, according to Johansson.

She went public with a statement saying that OpenAI CEO Sam Altman had approached her twice asking her to voice the ChatGPT system. She declined both times. OpenAI released Sky anyway. Johansson hired legal counsel and demanded that OpenAI explain how Sky was developed and pull the voice from public use. OpenAI did pause Sky's rollout, but denied any intentional mimicry.

Altman made it worse by posting a single word on X on the day of the GPT-4o announcement: "her." A reference so obvious it was almost impossible to defend as coincidence.

"I was shocked, angered and in disbelief," Johansson said in her public statement. "Mr. Altman even intimated that he had wanted my voice to be the voice of the new ChatGPT system."

OpenAI maintained that Sky was voiced by a different actress and was not designed to resemble Johansson. They said the resemblance was unintentional. The unnamed voice actress later confirmed publicly that the voice was genuinely hers, not a clone or derivation of Johansson's.

Did Johansson File a Formal Lawsuit?

This is where things get a bit nuanced. Johansson did not immediately file a formal lawsuit in the traditional sense. She sent legal letters demanding documentation and an explanation, then called for OpenAI to halt the voice's use. OpenAI complied with the latter.

By 2026, the matter had shifted into broader legal territory. Johansson's case became part of a larger wave of litigation around AI likeness rights, voice cloning, and right of publicity laws. Several states accelerated legislation specifically because of this incident. The question was no longer just about one actress and one chatbot. It became a template for how celebrities, creators, and ordinary people can protect their voices and likenesses from AI systems.

Why This Case Matters Beyond the Headlines

The Johansson controversy exposed something the AI industry had been quietly sidestepping: the legal and ethical gray zone around voice replication. Tools like AI deepfake detection platforms have grown in direct response to public anxiety about exactly this problem.

Voice cloning technology has become remarkably accessible. Platforms like ElevenLabs and Murf AI can generate synthetic voices that sound fully human with just a few minutes of audio input. HeyGen and Synthesia go further, syncing AI-generated voices with video avatars for realistic talking head content. These are legitimate, useful tools. But they also make it trivially easy to produce convincing audio of someone saying something they never said.

The Johansson situation wasn't even technically a voice clone. It was a trained voice that allegedly sounded too similar to a public figure. If that's enough to trigger legal action, what does that mean for AI voice products across the board?

Right of Publicity Laws: The Legal Framework

Right of publicity laws protect individuals from having their name, image, or likeness used commercially without consent. These laws vary wildly by state. California, where Johansson is based, has some of the strongest protections. New York strengthened its own laws shortly after the controversy broke.

The challenge with AI voice technology is that it tests the edges of these laws in ways courts haven't fully addressed. Is a voice that sounds similar to yours actually your voice? Does a company need explicit consent if it trains a model on publicly available audio? These questions don't have clean answers yet.

Several proposals in Congress picked up momentum after this incident, including versions of the NO FAKES Act, which would establish a federal right to control AI-generated likenesses and voices. Progress has been slow, but the Johansson case gave advocates a concrete, high-profile example to point to.

OpenAI's Position and What They Changed

OpenAI's response was a careful mix of partial concession and firm denial. They paused Sky's deployment. They published a blog post explaining their voice selection process and distanced the product from any intentional resemblance to Johansson. They emphasized that the voice actress behind Sky had signed appropriate agreements.

What they did not do: release the development documentation Johansson's lawyers requested, at least not publicly. The opacity frustrated critics who argued that AI companies need to be more transparent about how they build voice products, especially when they're clearly drawing cultural inspiration from films like Her.

Altman's "her" tweet was eventually framed by OpenAI as a reference to the film's themes of human-AI connection, not specifically to Johansson or her voice performance. Most people found that explanation thin.

The Broader Industry Response

The AI industry took notice. Several voice AI companies quietly updated their terms of service and consent frameworks. Others introduced more explicit disclosures about how training data is sourced. A few paused certain products pending internal review.

Content creation tools that incorporate voice features have also had to think more carefully about consent. Descript, which offers voice cloning as part of its audio and video editing suite, has maintained a consent-based model requiring users to train models only on their own voice. ElevenLabs introduced a voice verification system to prevent abuse, though critics argue enforcement remains inconsistent.

This connects directly to concerns about AI-generated media more broadly. If you're thinking about how AI tools are used to create content at scale, our review of Sora 2 covers how OpenAI's video generation has matured and what guardrails exist there.

What This Means for Creators and Performers

If you're a voice actor, musician, or anyone whose voice is your livelihood, this case is directly relevant to you. The labor protections that SAG-AFTRA negotiated around AI likenesses in 2023 were partly a response to exactly this kind of risk. Those agreements were just the beginning.

Performers now have more options than they did two years ago, but the protection landscape is still patchy. Here's what most legal experts recommend:

  • Register your voice where possible under right of publicity frameworks in your state.
  • Watermark your audio using services that embed inaudible identifiers, making it easier to trace unauthorized use.
  • Read contracts carefully. Many standard entertainment contracts now include AI-specific clauses. If yours doesn't, push for one.
  • Monitor for misuse. Tools designed for deepfake detection can help identify when your voice or likeness is being used without consent.
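To make the watermarking bullet concrete, here is a toy spread-spectrum sketch in Python. This is not how commercial watermarking services work internally (their schemes are perceptually shaped and robust to compression), and every function name and parameter below is illustrative. It only demonstrates the principle: mix a key-derived pseudorandom signature into the audio at inaudible amplitude, then later detect it by correlating against the same keyed signature.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Mix a low-amplitude pseudorandom signature, derived from a secret key,
    into the signal. At this strength the signature sits well below audibility
    for typical program material."""
    rng = np.random.default_rng(key)
    signature = rng.standard_normal(audio.shape[0])
    return audio + strength * signature

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 4.0) -> bool:
    """Regenerate the keyed signature and measure how strongly the audio
    projects onto it. Unmarked audio scores near zero; marked audio scores
    many standard deviations above the noise floor."""
    rng = np.random.default_rng(key)
    signature = rng.standard_normal(audio.shape[0])
    projection = (signature @ audio) / np.linalg.norm(signature)
    z_score = projection / (np.std(audio) + 1e-12)
    return bool(z_score > threshold)

# Demo: a 2-second 440 Hz tone standing in for real speech.
sr = 16_000
t = np.arange(sr * 2) / sr
clean = 0.1 * np.sin(2 * np.pi * 440 * t)
marked = embed_watermark(clean, key=42)
found_right = detect_watermark(marked, key=42)  # correct key
found_wrong = detect_watermark(marked, key=7)   # wrong key
```

A real deployment would also need to survive re-encoding, time-stretching, and filtering, which is exactly where the purpose-built services mentioned above earn their keep.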

Where the Case Stands in 2026

The Johansson matter never became a traditional courtroom trial with a verdict. It evolved into something more diffuse: a legal pressure campaign that influenced policy conversations, accelerated legislative efforts, and changed how several AI companies approach voice product development.

Johansson has remained publicly vocal about AI rights issues. She's become something of an unofficial spokesperson for performer rights in the AI era, whether she intended to or not. Her case is cited regularly in academic papers, congressional testimony, and legal briefs.

OpenAI, for its part, has continued developing voice products. GPT-4o's voice mode became a flagship feature. They've been more careful about public framing since the Sky incident, but the commercial ambitions haven't changed.

The Precedent Problem

Because this didn't end in a definitive ruling, the legal questions remain open. That's actually significant. Other performers and creators who feel their voices have been replicated without permission don't have a clear precedent to point to. The Johansson situation showed that public pressure and legal letters can get results, but it didn't establish firm legal ground for everyone else.

That uncertainty is precisely why Congress needs to act, and why state-level legislation has been moving faster than federal efforts. California passed its AI Transparency Act. Tennessee's ELVIS Act specifically protects musicians' voices. More states are following.

The Ethics Question That Won't Go Away

Even if everything OpenAI did was technically legal, the ethics are murky. Altman personally approached Johansson to use her voice. She said no. OpenAI then released a voice that sounded similar to hers and referenced the film that made her AI voice famous with a single-word tweet. Even if the resemblance was accidental, the sequence of events looks bad.

The AI industry has a pattern of moving fast and treating ethics as a PR problem rather than a design constraint. That's starting to cost companies real money in legal fees, settlements, and reputational damage. The Johansson incident was a highly visible example, but it's not isolated. Musicians have filed suit over training data. Authors have organized legal coalitions. Stock photo agencies have gone to court over image generation.

AI tools built for content creation, like those we cover across our site, are only as trustworthy as the consent frameworks behind them. That's worth remembering every time you reach for a voice synthesis tool.

What Creators Should Take Away From All of This

The Johansson case isn't just celebrity drama. It's a signal about where AI development is heading and what battles are worth fighting. A few practical takeaways:

  1. Consent matters more than capability. The fact that AI can replicate a voice doesn't mean it should. Choose tools that are explicit about their data sourcing and consent practices.
  2. Public pressure works. Johansson didn't need a jury verdict. The public outcry and legal letters were enough to get OpenAI to pull the Sky voice.
  3. Legislation is catching up. The window where AI companies can operate in total ambiguity is closing. Builders and users of these tools need to stay informed about the legal frameworks emerging in their jurisdictions.
  4. Your voice has value. Whether you're a professional performer or someone who's appeared in a podcast or YouTube video, your voice can be replicated. That's worth taking seriously.

For a broader look at how AI-generated content intersects with authenticity and regulation, our breakdown of the best deepfake detection tools in 2026 is a good companion read. And for a sense of how other AI labs approach product transparency, the Grok 3 review offers a useful contrast.

Final Thoughts

The Scarlett Johansson and OpenAI voice situation didn't end with a dramatic courtroom moment. It ended with a product pulled, laws accelerated, and an industry forced to reckon with the consequences of building products that treat famous voices as inspiration without asking permission.

That might not feel like justice. But it moved the needle. And in a space moving this fast, that counts for something.

ℹ️Disclosure: Some links in this article are affiliate links. We may earn a commission at no extra cost to you. This helps us keep creating free, unbiased content.
