US Senators call out X, Meta, Alphabet, and others on deepfake content

Nonconsensual, sexualized deepfakes are no longer a problem confined to a single platform. Now, U.S. senators are demanding answers from some of the largest tech companies in the world.

In a formal letter sent to executives at X, Meta, Alphabet, Snap, Reddit, and TikTok, a group of lawmakers asked the companies to demonstrate that they have strong safeguards and enforcement policies in place to combat the spread of sexualized AI-generated imagery. The senators also want detailed explanations of how each platform plans to slow the rapid growth of this content.

Lawmakers Demand Records and Transparency

Beyond policy explanations, the letter directs the companies to preserve all internal documents and data related to the creation, detection, moderation, and monetization of sexualized deepfakes. That includes information on how AI-generated images are identified, handled, and, in some cases, financially exploited.

The request signals growing concern that platforms may not be fully prepared—or willing—to address the scale of the problem.

X’s Grok Changes Spark Broader Scrutiny

The senators’ letter arrived just hours after X announced new restrictions on Grok, its AI chatbot developed by xAI. The update prevents Grok from generating or editing images of real people in revealing clothing and limits image generation to paying users.

Lawmakers cited media reports showing how easily Grok had produced sexualized and nude images of women and children, arguing that existing guardrails across platforms are failing in practice, even when policies claim otherwise.

“Users are finding ways around these guardrails—or the guardrails are simply not working,” the letter states.

A Platform-Wide Problem, Not an Isolated One

While Grok and X have faced intense backlash, senators emphasized that no major platform is immune.

Sexualized deepfakes first gained attention in 2018, when synthetic porn videos of celebrities spread widely on Reddit before being removed. Since then, similar content has surfaced across TikTok, YouTube, Snapchat, and Meta’s platforms, often originating elsewhere but spreading quickly.

Meta’s Oversight Board has previously reviewed cases involving explicit AI-generated images of female public figures. Snapchat has seen reports of students sharing deepfakes of classmates. Telegram—though not named in the letter—has become notorious for hosting bots that digitally “undress” photos of women.

How Companies Are Responding

In response to the senators’ inquiry, X pointed to its recent Grok update.

Reddit issued a detailed statement emphasizing that it prohibits non-consensual intimate media, including AI-generated or manipulated imagery. The company said it actively removes such content and bans the promotion of “nudify” tools or instructions for creating deepfakes.

Alphabet, Meta, Snap, and TikTok did not immediately comment.

What Senators Want Answered

The letter outlines an extensive list of demands, including:

  • How each company defines “deepfakes” and “non-consensual intimate imagery”

  • Policies governing AI-generated edits, virtual undressing, and altered clothing

  • Internal moderation guidelines and enforcement processes

  • Technical safeguards that prevent deepfake creation and re-uploads

  • Measures to stop users—and platforms—from profiting off this content

  • Procedures for banning repeat offenders

  • How victims are notified and supported

The letter was signed by Senators Lisa Blunt Rochester, Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff.

Legal Pressure Builds on xAI and Grok

The inquiry follows comments from Elon Musk, who said he was unaware of underage explicit images generated by Grok. Shortly after, California’s attorney general opened an investigation into xAI, citing concerns over insufficient safeguards.

Although xAI says it removes illegal content, critics argue the bigger issue is that Grok was allowed to generate such material at all.

The Bigger AI Moderation Challenge

Sexualized deepfakes are only part of a broader issue. Multiple AI tools have reportedly enabled harmful or misleading content, including violent imagery, racist videos, and explicit material involving minors.

The challenge becomes even more complex with Chinese AI tools, many of which make it easy to manipulate faces, voices, and videos. While China enforces stricter labeling rules for synthetic media, the U.S. lacks federal standards, leaving enforcement largely to platforms themselves.

Laws Exist—but Gaps Remain

Congress has passed legislation like the Take It Down Act, which criminalizes the creation and distribution of nonconsensual sexual imagery. However, the law primarily targets individual users, making it harder to hold AI platforms accountable.

States are now stepping in. This week, New York Governor Kathy Hochul proposed new rules that would require AI-generated content to be labeled and ban nonconsensual deepfakes during sensitive election periods.

As AI tools become more powerful and accessible, lawmakers are signaling that platforms—not just users—may soon face far greater scrutiny.

Written by Hajra Naz
