The community of Tumbler Ridge is reeling from a horrific mass shooting on February 12, which left eight people dead and 27 injured at Tumbler Ridge Secondary School. The suspect, Jesse Van Rootselaar, died from a self-inflicted gunshot wound; the suspect's mother and step-brother were among the victims. The motive behind the devastating attack remains unclear, though authorities noted that the suspect, who was born male, identified as a woman.
In the wake of this tragedy, a concerning detail has emerged involving OpenAI, the company behind ChatGPT. OpenAI confirmed that Van Rootselaar's ChatGPT account had been banned more than six months before the attack. The account was flagged in June 2025 by the company's abuse-detection systems, which are designed to identify violent activity, and was banned for violating OpenAI's terms of service.
However, despite identifying this misuse, OpenAI did not notify law enforcement at the time. The company stated that the account's activity did not meet its threshold of a "credible or imminent plan to cause serious physical harm," the standard its policy requires before alerting authorities. Internal debates reportedly took place among OpenAI staff about whether to contact law enforcement, with leadership ultimately deciding against it.
Following the attack, OpenAI proactively contacted Canadian authorities and provided information on the suspect. The company has emphasized its commitment to preventing real-world harm and says ChatGPT is trained to refuse assistance with illegal activities. It is now reviewing the case to improve its criteria for referrals to law enforcement, aiming to protect public safety without causing unintended harm through overly broad reporting. The situation highlights the difficult ethical line tech companies must walk in managing potential threats.
Source: https://www.express.co.uk/news/world/2173921/tumbler-ridge-shooting-suspects-chatgpt