In a landmark move to protect individuals from the devastating consequences of non-consensual intimate image sharing, the UK government has introduced a stringent new law requiring technology platforms to remove such material within 48 hours. The legislation, currently being debated in the House of Lords as an amendment to the broader Crime and Policing Bill, marks a significant shift in accountability, placing the onus on digital service providers to act swiftly and decisively against harmful content. The government has stated its intention to treat intimate image abuse with the same gravity and urgency as child sexual abuse material (CSAM) and terrorist content, acknowledging the profound psychological and social damage it inflicts. Companies that fail to comply face severe penalties, including fines of up to 10% of their global annual turnover or, in extreme cases, the blocking of their services within the United Kingdom.
Prime Minister Sir Keir Starmer, speaking on BBC Breakfast, described the initiative as an "ongoing battle" waged on behalf of victims against the power of platform providers. He emphasised the need for a proactive and robust response, drawing parallels to existing mechanisms for tackling other forms of egregious online harm. Janaya Walker, interim director of the End Violence Against Women Coalition, welcomed the proposed legislation, saying it "rightly places the responsibility on tech companies to act," a sentiment echoed by advocacy groups that have long campaigned for stronger protections for those targeted by such abuse.

A key tenet of the new law is the simplification of the reporting process for victims. Under the proposed regulations, individuals will only need to flag an abusive image once, rather than undertaking the arduous and often re-traumatising task of contacting multiple platforms individually. This streamlined approach aims to reduce the burden on victims and expedite the removal process. Furthermore, tech companies will be legally obliged not only to remove the offending content but also to implement measures to prevent its re-uploading, a crucial step in combating the persistent nature of online abuse. This proactive blocking mechanism is designed to disrupt the cycle of dissemination and minimise further harm.
The legislation also extends to internet service providers, giving them the guidance and authority to block access to rogue websites that host illegal content. This expansion is intended to close a significant loophole, targeting online spaces that have historically evaded the scrutiny and regulation imposed by measures such as the Online Safety Act. By extending oversight to these less conventional platforms, the government aims to leave fewer avenues for perpetrators to exploit.
The disproportionate impact of intimate image abuse (IIA) on women, girls, and LGBT individuals has been a central concern driving this legislative push. However, recent findings highlight a disturbing trend affecting younger demographics as well. A government report published in July 2025 shed light on the increasing prevalence of financial sexual extortion, commonly referred to as "sextortion," targeting young men and boys. In these cases, victims are coerced into paying money under duress, with perpetrators threatening to share intimate images if demands are not met. This evolving landscape of online abuse underscores the need for legal frameworks that can adapt to new forms of harm and protect all vulnerable populations.

Further underscoring the urgency of the situation, a parliamentary report released in May 2025 revealed a significant increase in reported incidents of intimate image abuse, with figures showing a 20.9% rise in 2024 alone. This alarming statistic points to a growing crisis that demands immediate and decisive action from both the government and the technology industry. The proposed 48-hour removal window is a direct response to this escalating problem, aiming to stem the tide of abuse before it can inflict irreparable damage.
Prime Minister Starmer elaborated on the practical implications of the new law during his interview, stating that it would liberate victims from the exhausting and often futile task of "whack-a-mole chasing wherever this image is next going up." He drew a direct comparison to the existing duty of care that tech companies already have regarding terrorist material, arguing that the framework for rapid content removal is established and effective. "It can be done. It’s a known mechanism," he asserted, underscoring his belief that the same level of commitment and vigour should be applied to combating intimate image abuse.
Regarding enforcement, Sir Keir indicated that the new law would be bolstered by a combination of oversight bodies responsible for monitoring online content and the prospect of criminal proceedings. While he acknowledged the severity of the penalties, he clarified that he did not anticipate prison sentences for technology executives directly, suggesting a focus on substantial financial penalties and other regulatory measures to ensure compliance. The government’s intention is to create a deterrent that is both impactful and proportionate, driving behavioural change within the tech sector.

Technology Secretary Liz Kendall reinforced the government’s firm stance, declaring that "the days of tech firms having a free pass are over." She expressed empathy for victims, stating, "no woman should have to chase platform after platform, waiting days for an image to come down." This sentiment highlights the government’s commitment to a victim-centric approach, prioritising the immediate safety and well-being of those who have been subjected to online abuse.
The announcement follows a period of heightened tension between the government and social media platforms, most notably January's standoff with X (formerly Twitter), after the platform's AI tool Grok generated explicit images of real women, prompting widespread condemnation. X's eventual removal of the function served as a stark reminder of how advanced AI technologies can be weaponised for malicious purposes, and underscored the need for more stringent regulatory oversight of emerging technologies. The new law therefore addresses not only existing forms of intimate image abuse but also the risks posed by rapidly evolving digital capabilities, with the aim of creating a safer online environment in which technological advances serve society rather than perpetuate harm and exploitation.