
What is Deepfake Technology, and Why is California Trying to Regulate It?

We analyzed proposed legislation and news reports to better understand California's attempts to regulate deepfake technology.

by Ethan Ward

March 31, 2025
Contents

  • California's Regulation Efforts
  • Protecting Consumers

The rapid evolution of AI has made deepfakes increasingly convincing, and increasingly concerning, prompting large states like California to take a stand. We analyzed proposed legislation and news reports to better understand the state's attempts to regulate deepfake technology.

The World Intellectual Property Organization's magazine defines deepfakes as AI-based techniques that synthesize media, superimposing human features or manipulating sounds to create realistic, human-seeming content. While the technology has legitimate uses, such as helping actor Val Kilmer, who lost his voice to throat cancer, speak again through AI, it is often misused. Recent examples include the viral March 2023 deepfake of Pope Francis in a white puffer coat and the sexually explicit deepfake images of Taylor Swift that circulated in January 2024.

California's Regulation Efforts

California emerged as a leader in deepfake regulation in 2024, passing eight new laws aimed at curbing AI-generated deception, according to Morrison Foerster's legal analysis. SB 942, which takes effect in January 2026, requires AI systems with over a million monthly visitors to provide free detection tools and implement digital watermarking to make AI-generated content identifiable. The law also mandates clear disclosure options for users, according to CalMatters.

The state's protections extend further. SB 926 criminalizes nonconsensual sexually explicit deepfakes, while AB 1836 protects personalities' digital replicas, a measure celebrated by the actors' union SAG-AFTRA, the Recording Academy, and other creator rights organizations.

It hasn't all been smooth sailing, though: the state's regulatory efforts have faced legal challenges. In October 2024, a federal judge blocked one of California's new deepfake laws, AB 2839, ruling that it likely violated First Amendment protections. The court found that "counter speech," rather than censorship, was the proper response to deepfakes, even offensive ones, according to Daniel Ortner, opinion contributor at The Hill.

Major tech companies warned that strict rules could stifle innovation and make California less competitive in the AI space. According to MIT Technology Review, there are also concerns about open-source AI systems, where watermarking requirements might be easily removed, and content can spread through encrypted platforms.

Protecting Consumers

Some platforms are already taking steps to address these concerns. MIT Technology Review reported that marketplace platforms like Hugging Face and GitHub have added extra security steps to make it harder for people to create harmful content with their tools. Hugging Face also requires users to agree to specific rules about how they'll use the technology before accessing it.

The stakes are high. Beyond high-profile cases of election interference and privacy violations, deepfakes pose risks at the local government level, where false content is harder to detect and debunk than in national politics. This growing threat has prompted increased state-level AI regulation in states like New York, Illinois, and California; tracking by multistate.ai shows over 600 AI-related bills introduced in 2024, up from fewer than 200 in 2023.

NPR reported several key ways to identify potential deepfakes, including checking whether multiple sources have covered the event or image in question, using Google's Reverse Image Search tool to verify a photo's origin, and being cautious of content designed to trigger strong emotional reactions. For potential scam calls using AI-generated voices, St. Louis Bank advises verifying the caller's identity by calling back at a known, verified number, and CBS suggests developing a family "safe word" that lets each party confirm who is really on the other end of the line.
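Reverse image search tools like the one NPR recommends typically rely on perceptual hashing to match near-duplicate images even after resizing or recompression. The sketch below is purely illustrative: the function names are our own, the tiny 2x2 "images" stand in for real decoded photos, and production tools decode and downscale actual image files before hashing.

```python
# Illustrative sketch of an "average hash," one simple perceptual-hashing
# technique behind reverse image search. Assumes images are already decoded
# to small grayscale grids (lists of pixel rows, values 0-255).

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
reposted = [[12, 198], [33, 219]]    # lightly recompressed copy
unrelated = [[200, 10], [220, 30]]   # different picture entirely

print(hamming_distance(average_hash(original), average_hash(reposted)))   # → 0
print(hamming_distance(average_hash(original), average_hash(unrelated)))  # → 4
```

The key property is robustness: small pixel-level changes (compression, brightness shifts) rarely flip a pixel across the mean, so copies of the same image hash alike while unrelated images do not.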
