UK Technology Companies and Child Protection Agencies to Examine AI's Ability to Generate Abuse Content
Tech firms and child safety agencies will receive authority to evaluate whether artificial intelligence tools can produce child abuse material under recently introduced UK legislation.
Significant Rise in AI-Generated Illegal Material
The announcement coincided with figures from a safety watchdog showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the changes, the authorities will allow approved AI developers and child safety organizations to examine AI models – the foundational technology behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from creating images of child exploitation.
The measure is "ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the risk in AI systems early."
Tackling Regulatory Challenges
The changes have been introduced because producing and possessing CSAM is against the law, which meant AI developers and other parties could not generate such images as part of an evaluation regime. Until now, officials could act only after AI-generated CSAM had been uploaded online.
This legislation is designed to avert that problem by helping to halt the production of such material at its source.
Legal Structure
The changes are being introduced by the authorities as modifications to the crime and policing bill, which is also establishing a prohibition on owning, producing or distributing AI models designed to generate child sexual abuse material.
Real-World Consequences
This week, the minister toured the London base of a children's helpline and listened to a mock-up call to advisors involving an account of AI-based exploitation. The call portrayed a teenager seeking help after being extorted with an explicit deepfake of himself, created using AI.
"When I learn about children experiencing extortion online, it is a source of extreme frustration for me and of rightful anger among families," he said.
Concerning Statistics
A leading online safety foundation reported that cases of AI-generated abuse material – including webpages that may each contain numerous images – had risen significantly so far this year.
Cases of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a vital step to ensure AI tools are safe before they are released," commented the chief executive of the internet monitoring organization.
"AI tools have made it so victims can be targeted all over again with just a few clicks, giving criminals the capability to create possibly limitless amounts of advanced, photorealistic child sexual abuse material," she added. "Material which additionally exploits survivors' suffering, and makes children, especially female children, more vulnerable both online and offline."
Support Session Information
Childline also released details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions included:
- Using AI to evaluate body size, physique and appearance
- Chatbots dissuading young people from consulting safe guardians about harm
- Being bullied online with AI-generated material
- Online extortion using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.