The Paris prosecutor’s cyber-crime unit has raided the French offices of Elon Musk’s social media platform X, formerly Twitter, as part of a widening investigation into suspected offenses including unlawful data extraction and complicity in the possession of child sexual abuse material (CSAM). Separately, the UK’s Information Commissioner’s Office (ICO) has announced a new probe into Grok, Musk’s artificial intelligence (AI) tool, examining its potential to generate and disseminate harmful, sexualized images and video.
The French prosecutor’s office has confirmed that both Elon Musk and former X chief executive Linda Yaccarino have been formally summoned to hearings scheduled for April, a significant escalation in the proceedings. The French investigation, opened in January 2025, initially focused on content recommended by X’s algorithm; it was broadened in July of that year to cover Grok, Musk’s AI chatbot, which has already attracted considerable controversy.
Elon Musk, in a post on X, denounced the raid as a "political attack." In a formal statement, the company said it was "disappointed" but "not surprised," calling the Paris Public Prosecutor’s move an "abusive act." X has denied any wrongdoing and asserted that the raid "endangers free speech." Linda Yaccarino, who left the company last year, also condemned the action on X, accusing French prosecutors of waging "a political vendetta against Americans" and adding, "To be clear: they are lying."
Following Tuesday’s coordinated raids, French prosecutors said the investigation will now examine whether X has broken the law across several potential offenses, including complicity in the possession or organized distribution of child sexual abuse material (CSAM), infringement of individuals’ image rights through the creation and dissemination of sexual deepfakes, and fraudulent data extraction by an organized group.

UK authorities, meanwhile, have provided an update on their investigations into sexual deepfakes generated by Grok and shared on X. The images, often created from real photographs of women without their consent, triggered a widespread outcry in January from victims, online safety advocates, and politicians. Under mounting pressure and investigations by regulators including Ofcom, X eventually implemented measures to curb the practice.
However, while Ofcom confirmed on Tuesday that it continues to investigate the platform and is treating the case as "a matter of urgency," it acknowledged a limitation: it currently lacks the legal powers over chatbots needed to directly investigate the creation of illegal images by Grok in this case.
That regulatory gap was quickly addressed by the ICO. Shortly after Ofcom’s announcement, the ICO said it was launching its own probe, in conjunction with Ofcom, into the processing of personal data in relation to Grok. William Malcolm, the ICO’s executive director for regulatory risk and innovation, said: "The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this." A European Commission spokesperson also confirmed it is in contact with French authorities over the search of X’s Paris office, signaling potential coordination at the EU level.
Pavel Durov, founder of the messaging app Telegram, also criticized the French authorities on X, asserting that France is "the only country in the world that is criminally persecuting all social networks that give people some degree of freedom" and adding, "Don’t be mistaken: this is not a free country." Durov has had his own run-ins with French authorities: he was arrested and detained in France in August 2024, with the Paris prosecutor’s office at the time citing alleged moderation lapses on his messaging app, which it said had failed to adequately curb criminal activity. He was permitted to leave the country in March of the following year after Telegram made operational changes in the wake of his arrest.
The raids in France and the new investigation in the UK highlight a growing international effort to hold social media platforms and their associated AI technologies accountable for the content they host and generate. The allegations of data extraction, complicity in child abuse material, and non-consensual sexual imagery strike at the heart of digital safety and privacy. The defiant responses from X and Musk, framed as accusations of political motivation, suggest these legal and regulatory battles will be protracted and contentious, with potential consequences for the future of content moderation and AI development.

The involvement of regulators across multiple jurisdictions underscores the transnational nature of these challenges and the need for international cooperation to address them. The ICO’s focus on Grok’s processing of personal data points to a central question: how AI models are trained, and what safeguards exist to prevent individuals’ information from being misused to create harmful and exploitative content. Together, these investigations mark a pivotal moment in the debate over the responsibilities of technology giants in the digital age.