What a new law and an investigation could mean for Grok AI deepfakes.

Here’s me, Zoe Kleinman, Technology Editor for the BBC, standing at the end of a pier in Dorset on a seemingly idyllic summer’s day. Or at least, that’s what two out of the three images presented here would have you believe. Two of these visuals were generated by Grok, an artificial intelligence tool freely available and owned by tech magnate Elon Musk. The results are remarkably convincing, and frankly, a little unsettling. I’ve never actually worn the rather fetching bright yellow ski suit, nor the red and blue jacket depicted; the middle photograph, showing me in a black hoodie, is the genuine article. Yet, without tangible proof beyond my own word, the veracity of these AI-generated images could be difficult to dispute, raising significant concerns about the potential for misinformation and manipulation.

Grok is currently facing intense scrutiny, not for dressing people in outfits they never wore, but for the unsolicited and often explicit "undressing" of women, generating images without their consent. The tool has been used to create images of people in bikinis and, in more disturbing instances, has been implicated in the generation of sexualised images of children, with the results widely shared on the social network X, formerly Twitter. In the wake of widespread outrage and condemnation, the UK’s online safety regulator, Ofcom, has launched an urgent investigation into whether Grok’s actions constitute a breach of British online safety laws. The government has pressed Ofcom to expedite this inquiry, emphasising the need for swift action. However, Ofcom must navigate the investigation with thoroughness and adherence to its established processes to head off accusations of stifling free speech, a concern that has shadowed the Online Safety Act since its inception.

Elon Musk, typically vocal on social media, has maintained an uncharacteristic silence on this matter in recent days, perhaps acknowledging the gravity of the situation. His sole public statement on the issue was a post accusing the British government of seeking "any excuse" for censorship. This defence, however, has not resonated with everyone, particularly those who see AI-generated non-consensual imagery as a clear form of abuse rather than an exercise of free speech. As campaigner Ed Newton-Rex starkly puts it, "AI undressing people in photos isn’t free speech – it’s abuse." He further elaborates on the pervasive nature of the problem, stating, "When every photo a woman posts of themselves on X immediately attracts public replies in which they’ve been stripped down to a bikini, something has gone very, very wrong."

Given the complexities and the significant public interest, Ofcom’s investigation is likely to be protracted, involving extensive deliberation and communication, potentially testing the patience of both policymakers and the public. This situation represents a critical juncture, not only for the UK’s Online Safety Act, which only fully came into force last year, but also for the regulator itself. Ofcom, which has been previously criticised for perceived leniency and has only issued three relatively small fines for non-compliance to date – none of which have been paid – cannot afford to falter in its handling of this high-profile case.

A significant point of contention is that the Online Safety Act, despite its comprehensive aims, does not explicitly address AI-generated content. While the act criminalises the sharing of intimate, non-consensual images, including deepfakes, the act of requesting an AI tool to create such images has not, until now, been illegal. This legislative gap is about to be closed. The UK government is poised to enact a new law this week that will criminalise the creation of such images. Furthermore, the government intends to amend another piece of legislation currently progressing through Parliament to make it illegal for companies to provide the tools designed for their creation. These forthcoming regulations are not part of the Online Safety Act but are rooted in the Data (Use and Access) Act. Their enforcement has been repeatedly delayed, despite numerous government announcements over many months indicating their imminent implementation. Today’s announcement signals a government determined to counter criticisms of regulatory sluggishness by demonstrating its capacity for swift action when deemed necessary. The implications of these new laws will extend beyond Grok, affecting a wider spectrum of AI tools.

The new law, set to be enforced imminently, could present considerable challenges for other developers and providers of AI tools that possess the technical capability to generate such images. A key question is how the regulations will be enforced in practice. Grok only came under intense scrutiny because its output was publicly disseminated on X. The challenge intensifies when an individual uses an AI tool privately, circumvents its built-in safeguards, and shares the resulting content only with a consenting audience. How will such instances come to light and be effectively policed?

Should X be found to have violated the law, Ofcom possesses the authority to impose significant penalties, including fines of up to 10% of global revenue or £18 million, whichever sum is greater. In more extreme scenarios, Ofcom could even seek to block Grok or X from operating within the UK. Such an action, however, could trigger a significant political backlash. At the AI Summit in Paris last year, I witnessed US Vice President JD Vance forcefully state that the American administration was "getting tired" of foreign nations attempting to regulate its technology companies, a pronouncement met with stark silence from the assembled global leaders. The tech industry wields considerable influence within the White House, and many of these companies have made substantial investments in AI infrastructure within the UK. Can the UK afford to alienate these powerful entities? The balance between robust regulation and international technological partnership is a delicate one, and the outcome of Ofcom’s investigation into Grok, together with the implementation of the new legislation, will set a precedent for how the UK regulates advanced AI while fostering innovation and maintaining international relations.
