The artificial intelligence company OpenAI has confirmed that an account belonging to Jesse Van Rootselaar, the suspect in the mass shooting that killed eight people in Tumbler Ridge, British Columbia, was banned more than seven months before the attack. The company said its abuse detection and enforcement systems, which are designed to identify AI being used to further violence, flagged Van Rootselaar's account in June 2025. Although OpenAI identified the account as problematic, it did not alert authorities at the time, explaining that the usage did not meet its internal threshold for a credible or imminent threat of serious physical harm to others. That decision has come under intense scrutiny following the events of 12 February, one of the deadliest attacks in Canadian history.
The Wall Street Journal was first to report on internal discussions at OpenAI, revealing that roughly a dozen staff members debated how to respond to Van Rootselaar's online activity. Some employees reportedly recognized the suspect's engagement with the AI tool as a potential precursor to real-world violence and advocated alerting law enforcement; according to the report, company leadership ultimately decided against a notification. In a subsequent statement, an OpenAI spokesperson said: "In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities." The company has pledged to continue cooperating with the ongoing police investigation into the massacre.
The Royal Canadian Mounted Police have been contacted by the BBC for comment. OpenAI maintains a policy of alerting authorities only when there is an imminent risk, arguing that broader notifications could lead to unintended consequences and harm. The company also says that its AI, ChatGPT, is trained to actively discourage imminent real-world harm when it detects a dangerous situation and to refuse requests from individuals attempting to use the service for illegal purposes. OpenAI has said it is continuously reviewing its referral criteria in consultation with experts and is conducting a thorough review of this case to identify improvements to its detection and reporting protocols.
The attack at Tumbler Ridge Secondary School killed eight people and injured 27 others. Jesse Van Rootselaar was found dead at the school of a self-inflicted gunshot wound. Police have stated that Van Rootselaar was born biologically male but identified as a woman. Among the victims were Van Rootselaar's mother and half-brother, both of whom were found dead at a local residence. The motive behind the attack remains unknown and is a central focus of the ongoing investigation.

The revelation that OpenAI had flagged Van Rootselaar's account months before the shooting raises significant questions about the efficacy and ethics of AI companies' policies for detecting and reporting potential threats. OpenAI presents its policy of not alerting authorities unless a threat is imminent as a safeguard against undue alarm, but the Tumbler Ridge tragedy shows the potentially devastating consequences of that policy when a threat, however it was perceived at the time, ultimately materializes with catastrophic results. The internal debate reported by The Wall Street Journal suggests a divergence of opinion among staff over the appropriate level of intervention, with some employees clearly recognizing the gravity of the situation and advocating more proactive measures.
Identifying and responding to potential threats originating from AI platforms is a growing concern for law enforcement and technology companies alike. AI can be a powerful tool for good, but its potential for misuse, especially by individuals with malicious intent, presents a formidable challenge. The case of Jesse Van Rootselaar underscores the need for robust, transparent protocols that balance privacy protection against the imperative to prevent mass violence. Training ChatGPT to refuse harmful requests and discourage real-world violence is a positive step, but the question remains whether such internal safeguards are sufficient, or whether a more direct line to law enforcement is warranted in escalated circumstances even when a threat does not meet the highest bar of imminence.
That OpenAI's abuse detection systems flagged Van Rootselaar's account for "misuses of our models in furtherance of violent activities" is a significant detail: it suggests the company had detected patterns of usage indicative of more than casual or benign interaction with the technology. Leadership's subsequent decision not to escalate the flag to law enforcement, based on its threshold of a credible or imminent plan for serious physical harm, will be subject to intense scrutiny and debate. Critics may argue that where extreme violence is possible, a lower reporting threshold would be more appropriate, particularly for a technology that could be used to plan or facilitate such acts.
The inclusion of Van Rootselaar’s mother and half-brother among the victims adds another layer of profound tragedy to an already incomprehensible event. The personal devastation experienced by the families and the community of Tumbler Ridge is immense. As the investigation progresses, the focus will likely remain on understanding the full scope of Van Rootselaar’s activities, both online and offline, and how the AI technology may have played a role in the events leading up to the shooting.
OpenAI's commitment to reviewing its referral criteria, and this case in particular, indicates a recognition of the complexities involved, and its engagement with outside experts will matter as its policies evolve. But the Tumbler Ridge tragedy is a stark reminder that powerful AI technologies carry profound ethical responsibilities, and that mechanisms for ensuring public safety in the digital age remain very much a work in progress. The full implications for AI regulation and corporate accountability remain to be seen, but the conversation around preventing AI-enabled violence has been irrevocably shaped by this incident. The community of Tumbler Ridge, and Canada as a whole, is left to grapple with the immense loss and the complex questions surrounding the events that led to this tragedy.