British Technology Companies and Child Safety Agencies to Examine AI's Capability to Create Abuse Content

Tech firms and child protection agencies will receive permission to evaluate whether artificial intelligence systems can generate child abuse images under recently introduced UK laws.

Significant Rise in AI-Generated Harmful Material

The announcement coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the government will allow approved AI companies and child safety organizations to inspect AI models – the foundational technology for conversational AI and image generators – and ensure they have adequate protective measures to prevent them from producing images of child exploitation.

The measures are "ultimately about stopping abuse before it occurs," declared Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the risk in AI systems early."

Tackling Legal Obstacles

The changes have been introduced because it is illegal to create and possess child sexual abuse material (CSAM), meaning that AI creators and others could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.

This legislation is designed to avert that issue by allowing approved testers to halt the production of those images at their origin.

Legislative Framework

The changes are being added by the government as revisions to the crime and policing bill, which is also establishing a ban on possessing, creating or sharing AI systems designed to create child sexual abuse material.

Real-World Consequences

This week, the official visited the London base of Childline and listened to a mock-up call to counsellors involving a report of AI-based exploitation. The interaction depicted an adolescent requesting help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I learn about children facing blackmail online, it is a source of extreme anger for me and justified concern amongst families," he stated.

Concerning Data

A leading online safety organization reported that instances of AI-generated exploitation content – such as online pages that may contain numerous images – had more than doubled so far this year.

Cases of category A material – the most serious form of abuse – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a crucial step to guarantee AI tools are safe before they are launched," stated the head of the internet monitoring foundation.

"AI tools have made it possible for survivors to be victimised all over again with just a few clicks, giving criminals the ability to produce potentially limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies survivors' suffering, and makes young people, particularly girls, less safe both online and offline."

Counseling Interaction Information

The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Using AI to assess body size, shape and appearance
  • AI assistants discouraging young people from talking to trusted adults about harm
  • Being bullied online with AI-generated material
  • Online extortion using AI-faked pictures

Between April and September this year, Childline conducted 367 support interactions where AI, chatbots and associated topics were mentioned, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.

Cheryl White