UK Law Targets Child Sexual Deepfakes with AI Testing

People who create sexual deepfakes of children, including infants under the age of two, will be targeted by stringent new measures. A change in the law will allow AI developers and child protection organisations to test AI models to ensure they cannot produce such material.

Existing UK law prohibits the possession and creation of child sexual abuse material but does not permit safety testing of AI models. As a result, illegal images can only be addressed after they have circulated online. Reports indicate a significant increase in AI-generated child sexual abuse material, with instances more than doubling from 199 in 2024 to 426 in 2025. There has also been a disturbing surge in depictions of infants: images of 0–2-year-olds rose from five in 2024 to 92 in 2025.

The severity of the content has also increased. Category A material, which includes images of penetrative sexual acts, sexual activity with animals, or sadism, rose from 2,621 to 3,086 items. Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025.

In a world first, the amendment will enable robust testing of AI systems from the outset. According to the Department for Science, Innovation and Technology (DSIT), the change aims to verify that AI models have safeguards against generating extreme pornography and non-consensual intimate images.

The changes will be introduced today as part of the Crime and Policing Bill. The government intends to convene a panel of AI and child safety experts to oversee the safe implementation of the testing regime.

The NSPCC said testing of AI models should be mandatory if child sexual abuse is to be combated effectively. Policy manager Rani Govender stressed that safeguarding against child abuse must be a fundamental aspect of product design for developers.

Kerry Smith, the IWF’s chief executive, acknowledged the harm caused by AI tools that facilitate the production of illegal material and emphasised the need to embed safety measures into new technologies. She described the announcement as a critical step towards ensuring AI products are safe before deployment.

Technology Secretary Liz Kendall underscored the government’s commitment to prioritizing child safety in technological advancements. The new legislation aims to proactively address vulnerabilities in AI systems to protect children, ensuring that safety is an integral design element rather than an afterthought.
