Technology companies and child safety organizations will receive permission to evaluate whether artificial intelligence systems can produce child abuse images under new UK legislation.
The announcement came alongside figures from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, the authorities will allow approved AI developers and child protection organizations to inspect AI models – the systems underlying conversational AI and image generators – and check that they have sufficient protective measures to prevent them from producing depictions of child exploitation.
"Fundamentally about stopping exploitation before it happens," declared Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now detect the risk in AI models early."
The changes have been introduced because it is illegal to produce and own CSAM, meaning that AI creators and others cannot generate such content as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.
This law is designed to avert that issue by allowing the production of such material to be stopped at its source.
The authorities are introducing the changes as amendments to the criminal justice legislation, which also brings in a ban on possessing, producing or distributing AI systems designed to create child sexual abuse material.
Recently, the official visited the London base of a children's helpline and heard a mock-up call to advisers featuring a report of AI-based abuse. The interaction depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I learn about children facing extortion online, it is a cause of intense frustration in me and rightful concern amongst families," he said.
A prominent online safety foundation stated that cases of AI-generated exploitation material – such as online pages that may include multiple images – had more than doubled so far this year.
Instances of the most severe content – the gravest category of abuse – increased from 2,621 images or videos to 3,086.
The legislative amendment could "represent a crucial step to guarantee AI tools are secure before they are released," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have enabled so victims can be targeted all over again with just a simple actions, giving offenders the ability to make potentially endless amounts of advanced, lifelike child sexual abuse material," she added. "Material which additionally commodifies survivors' suffering, and makes children, especially girls, more vulnerable on and off line."
The children's helpline also released details of support sessions in which AI was referenced and the risks raised in them.
Between April and September this year, Childline conducted 367 support interactions where AI, chatbots and related terms were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.