AIAIToolHub

Scarlett Johansson OpenAI Lawsuit: What Really Happened


The Scarlett Johansson OpenAI Voice Controversy, Explained

In May 2024, OpenAI unveiled "Sky," one of five new voices for ChatGPT's GPT-4o model. Within hours, users noticed something unsettling: Sky sounded almost identical to Scarlett Johansson's performance as Samantha in the 2013 film Her, where she played an AI assistant. The comparisons spread fast across the internet. Then Johansson herself weighed in.

What followed was one of the most high-profile confrontations between a celebrity and an AI company in history. It forced a real conversation about consent, likeness rights, and how far AI companies can go when training and deploying voice models.

What Scarlett Johansson Actually Said

Johansson released a public statement saying she was "shocked, angered, and in disbelief" when she heard the Sky voice. She revealed that OpenAI CEO Sam Altman had personally contacted her in September 2023, asking her to voice the ChatGPT system. She declined. Then, just two days before the GPT-4o launch, Altman's team reportedly contacted her agent to ask her to reconsider. Before she could respond, the voice went live.

Her statement was pointed: she could not believe Altman "would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference."

She hired legal counsel and sent two letters to OpenAI demanding answers about how the Sky voice was created and whether her likeness was used.

OpenAI's Response

OpenAI moved quickly. Within 24 hours of Johansson's statement going public, the company pulled the Sky voice from the app. It insisted that Sky was voiced by a different professional voice actress and that any similarity to Johansson was coincidental, but said it would not reveal the actress's identity in order to protect her privacy.

Altman never directly denied that Sky was designed to evoke Johansson's voice. On the day of the GPT-4o launch, he posted a single word on X: "her." Many read that as a deliberate reference to the film. OpenAI maintained that the voice's resemblance was coincidental, but the timing of the post made that explanation difficult to swallow.

Did It Go to Court?

As of 2026, no formal lawsuit has been filed by Johansson against OpenAI. Her team reportedly continued negotiations with the company through late 2024, and the matter appears to have remained in the legal gray zone that so much of AI regulation occupies. No settlement has been publicly announced, and no litigation record has been made public.

That said, the incident had real legal consequences beyond the two parties involved. It accelerated legislative attention to AI voice and likeness rights at both the state and federal level in the US.

Why This Case Matters Beyond Celebrity Drama

The Johansson situation wasn't just about one famous person feeling wronged. It highlighted a gap that affects voice actors, musicians, podcasters, and everyday people. AI voice tools have become sophisticated enough that a model can replicate someone's voice from just a few minutes of recorded audio, and existing law often provides no clear remedy.

Tools like deepfake detection platforms have become more critical as this problem scales. But detection is reactive. The harder question is prevention and consent.

Professional voice cloning platforms like ElevenLabs and Murf AI have both updated their terms of service and consent verification processes since the controversy. ElevenLabs now requires explicit consent workflows before any voice can be cloned on their platform. That's progress, but it only applies to their platform. The underlying models used by companies like OpenAI operate under different rules entirely.

The "Her" Connection: Intentional or Not?

This is genuinely the most interesting part of the story. OpenAI has been open about the fact that Her inspired elements of their vision for conversational AI. Altman has cited the film in interviews. The product demo for GPT-4o even had a similar aesthetic feel to the film's interactions.

Whether or not Sky was deliberately designed to sound like Johansson, OpenAI clearly understood the cultural resonance of that reference. Creating a voice that evoked Her while knowing Johansson had declined to participate was, at minimum, a serious lapse in judgment. At worst, it was calculated and cynical.

The incident is a useful case study alongside the broader surge in AI-generated media. If you're following how AI tools are reshaping content creation, our review of Sora 2 and video generation gives more context on how quickly these capabilities are evolving.

What Changed After the Backlash

Several things shifted, some structural, some cosmetic.

  • Sky was pulled from ChatGPT and has not returned in the same form. OpenAI has expanded its voice options without reinstating it.
  • OpenAI published clearer guidelines about how it selects and compensates voice actors for its products.
  • US state legislatures gave AI likeness bills new momentum. Tennessee's ELVIS Act (Ensuring Likeness, Voice, and Image Security Act), signed in March 2024, specifically targets AI voice cloning of musicians, and other states followed with similar legislation.
  • The Screen Actors Guild (SAG-AFTRA) made AI likeness protections a central demand in subsequent contract negotiations with studios and tech companies.

None of this resolves the fundamental tension. But the Johansson incident was a catalyst that made it harder for companies to dismiss these concerns as fringe or speculative.

What This Means for AI Voice Tools in 2026

The market for AI voice generation has not slowed down. ElevenLabs, Murf AI, and HeyGen are all growing, and the quality of synthesized voices continues to improve. Synthesia and Descript have integrated voice cloning features that content creators use daily for legitimate purposes: dubbing videos, creating voiceovers, localizing content for different languages.

These tools are genuinely useful. A small business owner who wants a professional-sounding narrator for their product demo shouldn't be penalized because large AI companies made irresponsible decisions. But the industry needs guardrails.

The better platforms have started implementing real consent architecture. ElevenLabs verifies that you own the voice you're cloning. Descript has clear terms about not impersonating real people without consent. These aren't perfect solutions, but they represent a move toward accountability.

The problem is enforcement at scale. Anyone can currently use AI voice tools to generate content that sounds like a real person if they have enough source audio. The same technology that makes AI content creation accessible also makes misuse trivially easy.

Celebrity Likeness Rights: The Legal Framework (as of 2026)

US federal law still hasn't passed comprehensive AI likeness legislation. The NO FAKES Act, which would create a federal right against unauthorized AI-generated replicas of voices and likenesses, has been debated in Congress but not enacted as of mid-2026. Individual states have moved faster, with varying degrees of protection.

California, home to most of the entertainment industry, has strengthened its right of publicity laws to cover AI-generated voice and likeness. That gives celebrities and voice actors in California more legal recourse than most. But a voice actor in a state without those protections has significantly fewer options.

Internationally, the EU's AI Act includes provisions related to deepfakes and synthetic media, requiring transparency when AI-generated content depicts real people. That's a higher baseline than what most US residents currently have.

What This Means for Creators Using AI Voice Tools

If you're a content creator, marketer, or developer using voice AI tools, the practical takeaway is this: stick to platforms that have clear consent frameworks, don't attempt to clone recognizable voices without explicit permission, and keep documentation of the voice assets you're using.

This isn't just legal protection. It's also about not building your content strategy on foundations that could collapse if regulations tighten quickly, which they likely will.

AI content tools like Jasper and Copy.ai are already adding disclaimers and usage guidelines around AI-generated content to help users stay compliant. Descript and Murf AI have updated their documentation to reflect evolving legal requirements. Pay attention to those updates. They often signal where regulation is heading before it arrives.

For broader context on how AI is reshaping creative industries, it's worth reading about AI tools for brand identity and the ethical questions those tools raise around originality and copyright.

Sam Altman's Role and OpenAI's Accountability

What made the Johansson incident particularly damaging for OpenAI wasn't just the similarity of the voice. It was the sequence of events. Altman personally reached out to her. She declined. His team reached out again while she was still considering. Then the voice launched anyway, before she could respond.

That's not a process failure. That's a decision. And posting "her" on the day of the launch suggests someone at OpenAI understood exactly what they were doing.

OpenAI has since emphasized its ethics review processes and responsible deployment commitments. But trust is earned through behavior over time, not through press releases. The company has faced criticism on multiple fronts since then, from governance disputes to questions about its nonprofit structure. The Johansson episode fits a pattern that many observers find concerning.

Looking Ahead: Where Does This Go?

By 2026, AI voice technology is woven into products most people use daily. Navigation apps, customer service systems, accessibility tools, entertainment platforms. Most of that is genuinely useful and largely uncontroversial when done with proper consent and transparency.

The hard cases are the ones the Johansson situation surfaced. What happens when a company with enormous resources decides that the cultural resonance of a famous person's voice is worth the legal risk? What happens when the voice actress whose voice was actually used can't speak publicly because her contract prohibits it? What happens when the law hasn't caught up to the technology?

Those questions don't have clean answers yet. But they're being asked more seriously than they were two years ago, and that's partly because Scarlett Johansson had the platform and the legal resources to force the conversation.

The actors, musicians, and ordinary people who don't have that platform are watching to see whether the legal system will extend similar protections to them. The answer will define how AI voice technology develops over the next decade. For now, understanding how AI detection and verification tools are evolving is part of staying informed, and our breakdown of the best AI deepfake detection tools in 2026 is a good place to continue that research.

ℹ️Disclosure: Some links in this article are affiliate links. We may earn a commission at no extra cost to you. This helps us keep creating free, unbiased content.


