UK Tech Companies and Child Protection Officials to Examine AI's Ability to Create Abuse Content
Technology companies and child protection agencies will receive authority to assess whether artificial intelligence tools can generate child exploitation material under new British legislation.
Substantial Increase in AI-Generated Illegal Material
The declaration coincided with findings from a protection watchdog showing that cases of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the authorities will allow designated AI companies and child protection groups to inspect AI models – the underlying systems for chatbots and image generators – and verify they have adequate safeguards to prevent them from producing depictions of child sexual abuse.
The changes are "ultimately about stopping exploitation before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous protocols, can now detect the danger in AI systems promptly."
Tackling Legal Challenges
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such images as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.
This legislation is designed to prevent that problem by helping to halt the creation of such material at source.
Legal Structure
The amendments are being introduced by the authorities as revisions to the criminal justice legislation, which also establishes a ban on possessing, creating or distributing AI systems designed to produce child sexual abuse material.
Real-World Consequences
Recently, the official toured the London headquarters of Childline and listened to a mock-up call to advisers involving an account of AI-based exploitation. The interaction depicted a teenager seeking help after facing extortion using a sexualised deepfake of themselves, created with AI.
"When I hear about young people experiencing extortion online, it causes intense frustration in me and justified concern amongst families," he stated.
Alarming Data
A prominent online safety organization reported that cases of AI-generated exploitation material – such as online pages that may include numerous images – had more than doubled so far this year.
Instances of the most severe material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.
- Female children were predominantly victimized, making up 94% of illegal AI depictions in 2025
- Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025
Industry Response
The law change could "represent a vital step to ensure AI tools are secure before they are launched," stated the chief executive of the online safety organization.
"Artificial intelligence systems have made it possible for survivors to be targeted repeatedly with just a few clicks, giving criminals the capability to produce potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which further commodifies survivors' suffering, and makes young people, particularly girls, more vulnerable both on and offline."
Counseling Interaction Data
The children's helpline also published data on support sessions in which AI was mentioned. AI-related harms raised in these sessions include:
- Employing AI to rate body size, physique and appearance
- AI assistants discouraging young people from consulting trusted adults about abuse
- Being bullied online with AI-generated content
- Digital extortion using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 support sessions where AI, chatbots and related topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.