
OpenAI is facing a new lawsuit alleging the company failed to warn police after ChatGPT was linked to one of Canada’s deadliest school shootings. The lawsuit adds to growing scrutiny of how AI companies respond to signs of distress and real-world violence.
According to a report by Ars Technica, the lawsuit was filed on Wednesday in federal court in Northern California by a 12-year-old minor identified only as M.G. and her mother, Cia Edmonds, against OpenAI CEO Sam Altman and several OpenAI entities.
The suit brings claims of negligence, failure to warn authorities, and product liability, alleging the company helped enable the mass shooting.
“Sam Altman and his leadership team knew what silence meant for the citizens of Tumbler Ridge,” the complaint states. “They were focused on what disclosure meant for themselves. Warning the RCMP would set a precedent: OpenAI would be compelled to notify authorities every time its safety team identified a user planning real-world violence.”
The case stems from a mass shooting in Tumbler Ridge, British Columbia, in February. Authorities say 18-year-old Jesse Van Rootselaar killed her mother and 11-year-old stepbrother at home before going to Tumbler Ridge Secondary School and opening fire. Five children and one educator were killed at the school before Van Rootselaar died by suicide.
Among the injured was M.G., who was shot three times and remains hospitalized with catastrophic brain injuries. The complaint says she is awake and aware, but cannot move or speak.
Jay Edelson, founder and CEO of Edelson PC, the attorneys representing several of the families suing OpenAI, said the company’s own internal systems identified the risk, and multiple employees pushed for intervention.
“OpenAI’s own system flagged that the shooter was engaged in communications about planned violence,” Edelson told Decrypt. “Twelve people on their safety team were jumping up and down, saying that OpenAI needed to alert authorities. And, although Sam Altman’s response has been weak, even he was forced to admit last week that they should have called the authorities.”
Edelson said the families and the Tumbler Ridge community are demanding more transparency and accountability from the company.
“OpenAI should stop hiding critical information from the families, and they should not keep a dangerous product on the market, which is bound to lead to more deaths,” Edelson said. “Finally, they need to think long and hard about how they can maintain a leadership team that cares more about sprinting to an IPO than human lives.”
According to the lawsuit, OpenAI’s automated systems flagged Van Rootselaar’s ChatGPT account in June 2025 for conversations involving gun violence and planning. Members of OpenAI’s specialized safety team reviewed the chats and determined the user posed a credible and specific threat, recommending that the Royal Canadian Mounted Police be notified.
The lawsuit alleges OpenAI leaders overruled internal recommendations to alert authorities, deactivated Van Rootselaar’s account without notifying police, and allowed her to return by creating a new account with a different email address.
Plaintiffs claim ChatGPT deepened the shooter’s violent fixation through features like memory, conversational continuity, and its willingness to engage in discussions about violence, while OpenAI weakened safeguards in 2024 by moving away from outright refusals in conversations involving imminent harm.
Last week, Altman publicly apologized to the Tumbler Ridge community for the company’s failure to alert police. In a letter first reported by Canadian outlet Tumbler Ridgelines, Altman acknowledged OpenAI should have reported the account after banning it in June 2025 for activity related to violent conduct.
"The events in Tumbler Ridge are a tragedy. We have a zero-tolerance policy for using our tools to assist in committing violence,” an OpenAI spokesperson told Decrypt. “As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators."
OpenAI is already facing other lawsuits tied to ChatGPT’s alleged role in real-world harm, including a wrongful death case filed in December accusing OpenAI and Microsoft of “designing and distributing a defective product” in the form of the now-deprecated GPT-4o model. The lawsuit alleges that ChatGPT reinforced the paranoid beliefs of Stein-Erik Soelberg before he killed his mother, Suzanne Adams, and then himself at their home in Greenwich, Connecticut—marking the first lawsuit to link an AI chatbot to a homicide.
“This is the first case seeking to hold OpenAI accountable for causing violence to a third party,” J. Eli Wade-Scott, managing partner of Edelson PC, told Decrypt at the time. “We're urging law enforcement to start thinking about when tragedies like this occur, what that user was saying to ChatGPT, and what ChatGPT was telling them to do.”