"Women and girls deserve to be safe online as well as offline," said Technology Secretary Liz Kendall, stressing the government's commitment to protecting people from technology-facilitated abuse. "We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes." Creating such explicit deepfake imagery without consent is already a criminal offence under the Online Safety Act. Ms. Kendall added that the new legislation, which targets the creation and distribution of nudifying apps, will ensure that "those who profit from them or enable their use will feel the full force of the law."

Nudification, sometimes called "de-clothing" technology, uses generative AI to create realistic depictions of people appearing stripped of their clothing in images or videos. Experts have issued increasingly urgent warnings about the spread of these apps, pointing to the profound harm inflicted on victims when fake nude imagery is created and shared. The potential for these tools to be used to create child sexual abuse material (CSAM) is a particularly grave concern and has prompted widespread calls for action. In April, Dame Rachel de Souza, the Children's Commissioner for England, called for a complete ban on nudification apps, writing in a report that "The act of making such an image is rightly illegal – the technology enabling it should also be."
In parallel with the legislation, the government said it would "join forces with tech companies" to develop new ways of combating intimate image abuse. This includes continuing existing partnerships, such as one with UK safety tech firm SafeToNet, which has developed AI software it says can identify and block sexual content, and even disable cameras while such content is being captured. These efforts build on the content moderation filters already used by major social media platforms such as Meta, which are designed to detect and flag potential nudity, primarily to prevent children from capturing or sharing intimate images of themselves.

The proposed ban on nudifying apps follows persistent lobbying by child protection charities, which have urged the government to regulate the technology more strictly. The Internet Watch Foundation (IWF), which runs the Report Remove helpline enabling under-18s to confidentially report explicit images of themselves online, said that 19% of confirmed reporters indicated some or all of their imagery had been digitally manipulated. Kerry Smith, the IWF's chief executive, backed the government's proposals. "We are also glad to see concrete steps to ban these so-called nudification apps which have no reason to exist as a product," she said. "Apps like this put real children at even greater risk of harm, and we see the imagery produced being harvested in some of the darkest corners of the internet."
While children's charity the NSPCC welcomed news of the ban, its director of strategy, Dr. Maria Neophytou, said she was disappointed the same "ambition" was not evident in the proposed mandatory device-level protections. The NSPCC and other organisations have campaigned for tech firms to do more to identify and prevent the spread of CSAM across their services, including in private messaging features. In a related announcement on Thursday, the government said it was committed to making it "impossible" for children to capture, share or view nude images on their mobile phones. It is also seeking to outlaw AI tools designed specifically to create or distribute CSAM. Taken together, the measures aim not only to punish offenders but to disrupt the tools and platforms that enable such abuse, part of a wider effort to keep laws and enforcement in step with a rapidly evolving technological landscape.
