The recent dispute between Scarlett Johansson and OpenAI over an AI voice assistant that mimicked the actress’s voice has brought to light the complex issue of identity rights in the age of artificial intelligence.
The incident serves as a reminder of the potential for AI technologies to infringe on personal identity and the urgent need for legal frameworks to address such harms.
TL;DR
- Scarlett Johansson objected to an AI voice assistant that sounded strikingly similar to her own, launched after she had twice declined OpenAI's requests to use her voice.
- This incident highlights the potential for AI to infringe on personal identity and raises concerns about the lack of legal protections against such harms.
- Australia lacks robust laws to protect individuals from AI-enabled identity misappropriation, unlike the US and the EU.
- There is a need for clear regulations and oversight to ensure responsible development and use of AI technologies by companies like OpenAI.
- Policymaking is lagging behind the rapid pace of AI innovation, and there are concerns about the lack of enforcement mechanisms for existing voluntary agreements.
According to reports, OpenAI had approached Johansson on two separate occasions, seeking permission to use her voice for its ChatGPT voice assistant.
After the actress declined both requests, OpenAI unveiled a new voice option called “Sky,” which Johansson claimed sounded eerily similar to her own.
OpenAI denied deliberately mimicking the actress’s voice, stating that Sky’s voice belonged to a different professional actress using her natural speaking voice.
Regardless of OpenAI’s intentions, the controversy highlights the slippery nature of identity in the digital age.
As artificial intelligence advances, the ability to manipulate or create content that misrepresents an individual’s image, voice, or likeness becomes increasingly accessible.
This raises concerns about potential harms to reputation, privacy, and self-determination.
In the United States, Johansson could potentially pursue legal action for misappropriation of likeness, as demonstrated by Midler v. Ford Motor Co. (1988), in which singer Bette Midler successfully sued after Ford used a sound-alike singer in a commercial once she had declined to participate.
In Australia and the United Kingdom, by contrast, the law intervenes only where there is consumer deception or financial loss, providing limited protection for public figures in such situations.
Australia, in particular, lacks robust legal frameworks to address AI-enabled identity harms. Unlike the European Union, which has specific personality rights aimed at protecting an individual's dignity, privacy, and self-determination, Australia's legal system offers piecemeal remedies that are often ill-suited and costly to pursue.
The incident with Johansson is a wake-up call: clear regulations and oversight are needed to ensure the responsible development and use of AI technologies by companies like OpenAI.
Tech firms have made voluntary agreements and pledges, but concerns remain about the absence of enforcement mechanisms and independent oversight.
As Professor Dame Wendy Hall, a leading computer scientist in the UK, pointed out,
“We have no guarantee that these companies are sticking to their pledges. How do we hold them to account on what they’re saying, like we do with drugs companies or in other sectors where there is high risk?”
The rapid pace of AI innovation has outstripped the slower process of policymaking, creating a regulatory vacuum that leaves individuals vulnerable to potential identity harms.
While efforts are underway to establish global governance principles and legal frameworks, such as the EU’s AI Act, the process is complex and time-consuming.