UK to ban deepfake AI ‘nudification’ apps

The United Kingdom government has announced a significant legislative crackdown on "nudification" applications, a type of software that uses artificial intelligence to create non-consensual sexually explicit deepfake images by digitally removing clothing. This move, revealed on Thursday as part of a broader strategy aimed at halving violence against women and girls, will criminalise the creation and distribution of AI tools that enable users to generate such explicit imagery. The new offences will complement existing legislation already in place concerning sexually explicit deepfakes and intimate image abuse.

Technology Secretary Liz Kendall emphasised the government’s commitment to ensuring the safety of women and girls both online and offline, stating, "We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes." The act of creating deepfake explicit images of an individual without their consent is already a criminal offence under the UK’s Online Safety Act. However, the proposed new offence specifically targets the creation and distribution of the nudifying apps themselves, ensuring that "those who profit from them or enable their use will feel the full force of the law," according to Ms. Kendall.

Nudification or "de-clothing" apps employ generative AI to create highly realistic images or videos that falsely depict individuals as having been stripped of their clothing. Experts have consistently warned about the escalating prevalence of these applications and the profound harm they can inflict on victims, particularly when used to generate child sexual abuse material (CSAM). In response to these concerns, Dame Rachel de Souza, the Children’s Commissioner for England, called in April for an outright ban on nudification apps. She articulated her stance in a report, stating, "The act of making such an image is rightly illegal – the technology enabling it should also be."

In conjunction with these legislative measures, the government has pledged to collaborate with technology companies to develop innovative methods for combating intimate image abuse. This initiative includes continuing its partnership with SafeToNet, a UK-based safety technology firm. SafeToNet has developed AI software designed to identify and block sexual content, and in certain instances, can disable device cameras when it detects the capture of inappropriate material. These advancements build upon existing content moderation filters already implemented by major platforms like Meta, which aim to detect and flag potential nudity, often with the primary objective of preventing children from creating or sharing intimate images of themselves.

The proposed ban on nudifying apps follows a series of urgent appeals from child protection charities advocating for a robust governmental response to this burgeoning technological threat. The Internet Watch Foundation (IWF), which operates the Report Remove helpline allowing individuals under 18 to confidentially report explicit images of themselves online, has observed a disturbing trend. Their data indicates that 19% of confirmed reporters have stated that some or all of their imagery has been manipulated using AI. Kerry Smith, the chief executive of the IWF, expressed strong support for the government’s proposed measures. "We are also glad to see concrete steps to ban these so-called nudification apps which have no reason to exist as a product," she commented. "Apps like this put real children at even greater risk of harm, and we see the imagery produced being harvested in some of the darkest corners of the internet."

While children’s charity the NSPCC welcomed the government’s announcement, its response was tempered. Dr. Maria Neophytou, the charity’s director of strategy, said the NSPCC was "disappointed" not to see a similar level of "ambition" in introducing mandatory device-level protections. The NSPCC, along with other organisations, has been urging the government to compel tech firms to implement more effective measures for identifying and preventing the spread of CSAM across their services, including within private messaging channels.

Underscoring its commitment to safeguarding children, the government also announced on Thursday its intention to make it "impossible" for children to take, share, or view nude images on their mobile phones. Furthermore, the government is actively seeking to outlaw AI tools specifically designed for the creation or distribution of child sexual abuse material. This comprehensive approach aims to address the issue from multiple angles, targeting both the tools that facilitate abuse and the distribution of the resulting harmful content.

The broader strategy to halve violence against women and girls, within which these measures sit, encompasses a range of initiatives tackling abuse and exploitation. The government has acknowledged the pervasive nature of online harm and is seeking to create a safer digital environment, with a particular focus on protecting vulnerable groups. The ban on nudification apps is a tangible step towards that goal, addressing a specific and harmful misuse of AI that has drawn significant public and expert concern. The stated intention to "join forces with tech companies" signals a recognition that effective solutions will require collaboration between regulators and the industry, which the government expects to drive innovation in safety technology and a more proactive approach to content moderation and user protection.

AI technologies have brought unprecedented advances, but also new challenges and ethical dilemmas. Their misuse for malicious purposes, such as the creation of non-consensual deepfakes, poses a serious threat to individual privacy, reputation, and psychological well-being. The UK government’s move to ban nudification apps reflects a growing global awareness of these risks and a determination to set clear legal boundaries against the weaponisation of artificial intelligence. The forthcoming legislative framework is intended to hold those who develop, distribute, and profit from such tools accountable, deterring future misuse and encouraging a more responsible approach to AI development. Its effectiveness will ultimately depend on robust enforcement and continued collaboration between government, law enforcement, and the technology sector.
