EU investigates Elon Musk’s X over Grok AI sexual deepfakes

This move by the EU regulator mirrors a similar inquiry announced in January by the UK’s communications watchdog, Ofcom, highlighting a growing international consensus on the urgent need to address the proliferation of AI-generated harmful content. The investigations center on the alarming potential for Grok, X’s proprietary generative AI, to create "manipulated sexually explicit images" and the platform’s alleged failure to adequately prevent their distribution to users within the EU and UK.

Regina Doherty, a Member of the European Parliament representing Ireland, articulated the gravity of the situation, stating that the Commission would rigorously assess whether such illicit content has indeed reached users in the EU. Her comments reflect a broader legislative intent behind the Digital Services Act (DSA), which aims to establish a safer, more predictable, and trustworthy online environment for all European citizens. Doherty emphasized the "serious questions" surrounding platforms like X and their adherence to legal obligations "to assess risks properly and to prevent illegal and harmful content from spreading." She underscored that "The European Union has clear rules to protect people online," and these "rules must mean something in practice, especially when powerful technologies are deployed at scale. No company operating in the EU is above the law."

In response to initial criticisms and reports, X’s official Safety account had previously issued a statement indicating that the social media platform had taken steps to prevent Grok from digitally altering pictures of people to remove their clothing in "jurisdictions where such content is illegal." However, this reactive measure has been met with skepticism and strong condemnation from campaigners and victims of deepfake abuse, who argue that the capability to generate such sexually explicit pictures using the AI tool "should have never happened" in the first place. Their sentiment reflects a demand for proactive prevention rather than reactive mitigation, asserting that platforms bear a fundamental responsibility to design their AI tools with robust ethical safeguards from inception. Ofcom, the UK regulator, confirmed that its investigation into X’s practices remains ongoing, signaling sustained scrutiny of the platform’s content governance.

The EU regulator has indicated its authority to "impose interim measures" should X fail to implement meaningful adjustments to its systems and content moderation policies. These interim measures could range from requiring specific changes to Grok’s functionality or X’s recommender systems to more stringent operational restrictions, all aimed at mitigating immediate risks. Ultimately, non-compliance with the DSA can lead to substantial financial penalties, potentially reaching up to 6% of a company’s global annual turnover, a deterrent designed to ensure adherence among tech giants.

Beyond the specific concerns surrounding Grok, the European Commission has also extended its ongoing investigation, initially launched in December 2023, into risks associated with X’s recommender systems. This broader inquiry examines the algorithms that recommend specific posts to users, assessing their potential role in amplifying illegal content, disinformation, and harmful narratives. The DSA mandates that very large online platforms (VLOPs) conduct rigorous risk assessments of their algorithmic systems and implement effective mitigation strategies to protect users from systemic risks. The extension of this investigation suggests that the Commission perceives X’s algorithmic architecture as potentially contributing to the spread of problematic content, including the alleged deepfakes.

Elon Musk, X’s owner, has adopted a defiant posture against these regulatory pressures. Prior to the Commission’s formal announcement, he posted a picture on X appearing to make light of the new restrictions being placed on Grok. This follows a pattern where Musk has previously criticized those scrutinizing the app’s image-editing functions, particularly the UK government, dismissing such oversight as "any excuse for censorship." His comments reflect a fundamental divergence in philosophy between his vision of uninhibited online expression and the regulatory frameworks that prioritize user safety and the prevention of harm.

The sheer scale of AI-generated content further amplifies the regulatory challenge. On Sunday, the official Grok account on X claimed that over 5.5 billion images were generated by the tool in just 30 days. This staggering volume underscores the immense difficulty—and critical importance—of effectively moderating and preventing the misuse of such powerful generative AI technologies. The rapid creation and potential dissemination of harmful content at this scale necessitate robust, scalable, and proactive safety mechanisms, which regulators like the European Commission believe X has yet to adequately implement.

The Irish media regulator, Coimisiún na Meán, voiced its strong support for the EU’s action, with a spokesperson stating, "There is no place in our society for non-consensual intimate imagery abuse or child sexual abuse material." This statement aligns with the core principles of the DSA, which explicitly prohibits the dissemination of such content and places a legal obligation on platforms to combat it effectively.

This current investigation is not an isolated incident but rather the latest in a series of confrontations between X and EU regulators. Just a month prior, the EU had fined X €120 million (£105 million) over its "blue tick" verification badges, asserting that they "deceive users" because the firm was not "meaningfully verifying" the identities behind the accounts. That fine, too, sparked a heated reaction from the US, with figures like Secretary of State Marco Rubio and the Federal Communications Commission (FCC) accusing the EU regulator of attacking and censoring US firms. Rubio stated, "The European Commission’s fine isn’t just an attack on X, it’s an attack on all American tech platforms and the American people by foreign governments." Musk publicly endorsed these remarks, reposting them and adding "absolutely."

The escalating regulatory scrutiny of X, particularly under the comprehensive framework of the Digital Services Act, highlights the EU’s determination to enforce its digital rules. The DSA, which became fully applicable to VLOPs like X in August 2023, imposes wide-ranging obligations including robust content moderation, transparency in algorithms, diligent risk assessment and mitigation, and enhanced user protection. The investigation into Grok’s alleged misuse for deepfakes directly tests X’s compliance with these critical provisions, especially those designed to prevent the spread of illegal and harmful content. The outcome of this investigation will not only determine X’s immediate future in the EU market but also set a crucial precedent for how AI-driven platforms are held accountable for the ethical and societal impacts of their technologies worldwide.
