UK Cracks Down on AI-Generated Child Abuse Content With Stricter Rules

The UK government is taking a stronger stance on AI and child safety. It will now allow tech companies and child protection charities to proactively test artificial intelligence tools. The goal: ensure AI cannot generate child sexual abuse material (CSAM).

An amendment to the Crime and Policing Bill, announced on Wednesday, gives “authorised testers” the power to assess AI models before release. These tests check whether the AI could create illegal CSAM.

Technology Secretary Liz Kendall said the move will “ensure AI systems can be made safe at the source.” However, some campaigners argue that more action is still needed.

AI-Related CSAM Rising

The Internet Watch Foundation (IWF) reported a surge in AI-related CSAM. The number of reports doubled over the past year. Between January and October 2025, the charity removed 426 pieces of reported material, up from 199 in the same period in 2024.

IWF CEO Kerry Smith welcomed the proposals. She noted that AI tools make it easier for criminals to re-victimise survivors. “With just a few clicks, offenders can create limitless amounts of photorealistic CSAM,” she said. “Today’s announcement is a vital step to ensure AI products are safe before release.”

Support from Child Safety Experts

Rani Govender, policy manager for child safety online at the NSPCC, also praised the measures. She said the rules encourage companies to take accountability and add scrutiny to AI models.

“But to truly protect children, this cannot be optional,” Govender added. “The government must make it mandatory for AI developers to use this provision. Safeguarding against child sexual abuse must be part of product design.”

Broader Safeguards for AI

The government’s proposed changes also cover extreme pornography and non-consensual intimate images. Experts warn that AI models, trained on large online datasets, can create highly realistic abuse imagery of children and non-consenting adults.

Charities like the IWF and Thorn have stressed that AI-generated content makes abuse material harder to police, as distinguishing real from AI imagery is increasingly difficult. Researchers note growing demand for such imagery, particularly on the dark web, and that some of this material is being created by children themselves.

UK Leads the Way in AI Regulation

Earlier this year, the Home Office announced the UK would become the first country to make it illegal to possess, create, or distribute AI tools designed to produce CSAM. Offenders can face up to five years in prison.

Liz Kendall said, “By empowering trusted organisations to scrutinise AI models, we ensure child safety is built into AI systems, not added later.” She added, “We will not let technological progress outpace our ability to protect children.”

Safeguarding Minister Jess Phillips emphasized that the measures will prevent legitimate AI tools from being manipulated to create vile content. “More children will be protected from predators as a result,” she said.

FAQs:

1. What does the UK’s new AI testing law do?

It allows authorised testers, including tech firms and child safety charities, to check AI tools for the ability to create CSAM before public release.

2. What penalties exist for breaking AI CSAM laws?

Individuals who create, possess, or distribute AI tools for CSAM can face up to five years in prison.

3. How does this affect AI companies?

Companies must allow authorised testing and may need to design safety measures into AI products before release to comply with UK law.

4. Why is AI CSAM a growing concern?

AI tools can create realistic abuse imagery quickly, making it harder to detect and increasing risks for survivors and children online.

Written by Hajra Naz