OpenAI, the artificial intelligence company behind the widely used ChatGPT, has confirmed that an account belonging to Jesse Van Rootselaar, the alleged perpetrator of a mass shooting in Tumbler Ridge, British Columbia, was banned more than seven months before the attack. The ban, implemented in June 2025, stemmed from the company’s "abuse and enforcement detection" protocols, a system designed to identify accounts being used to promote or facilitate violence. This revelation adds a troubling layer to the investigation of one of Canada’s deadliest attacks, raising questions about AI safety, corporate responsibility, and the boundaries of information sharing with law enforcement.
According to OpenAI’s statement, the company’s internal systems flagged Van Rootselaar’s account for misuse. While the specifics of the flagged activity were not disclosed, the company emphasized that such detections can include identifying patterns of behavior or content generation that indicate an intent to further violent activities. Despite this identification, OpenAI stated that it did not alert authorities at the time of the ban. The company’s policy, as articulated, is to report to law enforcement only when usage meets a specific threshold indicating a "credible or imminent plan for serious physical harm to others." In Van Rootselaar’s case, OpenAI determined that his account’s activity, while a violation of their terms of service, did not reach this critical benchmark for immediate intervention by external agencies.
The decision not to alert authorities has drawn significant scrutiny in light of the events of February 12th, 2026. Van Rootselaar is accused of fatally shooting eight people at Tumbler Ridge Secondary School in the remote community of Tumbler Ridge, an attack that also left 27 others injured and sent shockwaves across Canada. Van Rootselaar was found dead at the school from a self-inflicted gunshot wound, according to official reports.
The Wall Street Journal, in an exclusive report, revealed that a significant internal debate occurred within OpenAI regarding Van Rootselaar’s account. The report indicated that approximately a dozen staffers were involved in discussions about whether to take further action. Some employees, recognizing the potential real-world implications of the suspect’s use of the AI tool, reportedly voiced concerns and advocated for alerting law enforcement. These staffers perceived the nature of Van Rootselaar’s interactions with ChatGPT as indicative of a dangerous mindset that could translate into violent actions. However, these internal pleas to escalate the matter to authorities were ultimately not heeded at the leadership level, with the company opting to maintain its established reporting threshold.

In their official statement, an OpenAI spokesperson reiterated the company’s stance: "In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities." The company expressed its condolences to those affected by the tragedy and affirmed its commitment to cooperating with the ongoing investigation. Following the attack, OpenAI stated it had "proactively" contacted Canadian police to share information regarding the suspect, indicating a shift in their approach after the event occurred.
The company’s rationale for its stringent reporting policy centers on the potential for unintended consequences. OpenAI has argued that broadly alerting authorities to every potential misuse of its technology could lead to an overwhelming volume of information, potentially diluting genuine threats and causing unnecessary alarm or disruption. Their policy aims to strike a balance between identifying and mitigating harm while respecting privacy and avoiding overreach. Furthermore, OpenAI asserts that its AI models are trained to discourage imminent real-world harm. When a dangerous situation is identified, ChatGPT is designed to refuse assistance to individuals attempting to use the service for illegal or harmful purposes.
The case of Jesse Van Rootselaar is prompting a broader conversation about artificial intelligence and its intersection with public safety. As AI becomes more capable and more deeply embedded in daily life, the ethical questions surrounding its development and deployment grow more pressing. OpenAI acknowledges the need for continuous evaluation, stating that it is "constantly reviewing its referral criteria with experts and is reviewing the case for improvements," suggesting a willingness to adapt its policies in light of the incident.
Police reports indicate that Jesse Van Rootselaar, identified as male at birth, identified as a woman. The motive for the attack remains unclear, leaving investigators and the community searching for answers. Adding to the tragedy, Van Rootselaar’s mother and step-brother were among the first victims, found dead at a local residence, extending the violence into the perpetrator’s own family.
The BBC has contacted the Royal Canadian Mounted Police (RCMP) for comment, but as of this report no further details about the investigation or the force’s interaction with OpenAI have been released. The incident underscores the challenges posed by emerging technologies and the need for clear ethical frameworks, transparent communication, and effective collaboration between technology companies and law enforcement. The ban on Van Rootselaar’s ChatGPT account, though a proactive step by OpenAI, has become a focal point in reconstructing the timeline and identifying warning signs that may have preceded the loss of life. The ongoing investigation, together with OpenAI’s own review, will help determine what lessons can be drawn and what measures might guard against the misuse of powerful AI tools in the future. The community of Tumbler Ridge, still reeling from the shock, now faces the long task of healing, with the role of artificial intelligence in the lead-up to the shooting a significant area of inquiry.