    Tumbler Ridge suspect's ChatGPT account banned before shooting

    By David Adams · 5 min read

    Last updated: February 24, 2026

    • OpenAI flagged Jesse Van Rootselaar's ChatGPT account in June 2025 for usage patterns suggesting "violent activities" but did not report to police
    • Van Rootselaar killed 8 people, including her mother and step-brother, at Tumbler Ridge Secondary School in February, six months after the account was banned
    • The company's reporting threshold requires evidence of a "credible or imminent plan for serious physical harm" before notifying law enforcement
    • The attack left 27 others injured, ranking among Canada's deadliest mass shootings, behind the 1989 École Polytechnique massacre (14 dead) and the 2020 Nova Scotia attacks (22 dead)

    A dozen employees at OpenAI spent last June arguing over what to do about Jesse Van Rootselaar's ChatGPT account. Some wanted to alert police. Others worried about overreach. Company leadership sided with caution, banned the account, and moved on.

    Six months later, Van Rootselaar walked into Tumbler Ridge Secondary School in British Columbia and killed eight people, including her own mother and step-brother, before taking her own life.

    The AI giant confirmed this week that it had flagged Van Rootselaar's account through its abuse detection system as early as June 2025, identifying usage patterns that suggested the account was being used "in furtherance of violent activities". But the company's internal threshold for notifying law enforcement requires evidence of a "credible or imminent plan for serious physical harm to others". Whatever Van Rootselaar had written or discussed with ChatGPT—OpenAI has not disclosed the specific content—apparently didn't meet that standard.

    The decision sits at the uncomfortable intersection of corporate liability, user privacy, and public safety. OpenAI maintains that its restrictive reporting policy exists precisely to prevent "unintended harm" from alerting authorities too broadly. The logic follows that of other tech platforms: cast the net too wide, and you risk turning AI providers into de facto surveillance arms of the state, potentially triggering investigations into users whose dark thoughts never translate to action.

    But that calculus shifted violently on 12 February, when the attack in Tumbler Ridge killed eight people and injured 27 others, making it one of Canada's deadliest mass shootings. The death toll sits below those of the 1989 École Polytechnique massacre in Montreal, which killed 14, and the 2020 Nova Scotia attacks, which killed 22. That OpenAI "proactively" contacted Canadian police after the shooting offers little comfort to those wondering whether an earlier conversation might have changed the outcome.

    When algorithms spot danger

    The case exposes how automated content moderation—even when it works exactly as designed—can still fail catastrophically. OpenAI's systems did flag Van Rootselaar. Human investigators reviewed the account. Internal debate followed, suggesting staff took the matter seriously. The process functioned. The outcome was still eight dead.

    This creates a particularly thorny problem for AI companies operating at scale. ChatGPT serves hundreds of millions of users globally. How many accounts get flagged monthly for concerning content? How many involve genuine planning versus ideation, venting, or even creative writing? One person's cry for help looks remarkably similar to another's screenplay draft when filtered through an algorithm. Reporting by the Wall Street Journal suggests OpenAI staff were divided on precisely this question.
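
    To see why scale turns even an accurate detector into a needle-in-a-haystack problem, consider a rough back-of-envelope calculation. Every figure in the sketch below is a hypothetical assumption, not an OpenAI statistic; the point is the shape of the arithmetic, not the numbers.

        # Base-rate sketch in Python. Every figure is a hypothetical
        # assumption for illustration, not an OpenAI statistic.
        monthly_users = 300_000_000   # assumed monthly ChatGPT users
        true_threat_rate = 1e-7       # assumed share genuinely planning violence
        flag_sensitivity = 0.95       # assumed: detector catches 95% of real threats
        false_positive_rate = 0.001   # assumed: 0.1% of benign users get flagged

        real_threats = monthly_users * true_threat_rate
        true_flags = real_threats * flag_sensitivity
        false_flags = (monthly_users - real_threats) * false_positive_rate

        precision = true_flags / (true_flags + false_flags)
        print(f"Genuine threats:     {real_threats:.0f}")               # ~30
        print(f"Accounts flagged:    {true_flags + false_flags:,.0f}")  # ~300,000
        print(f"Odds a flag is real: {precision:.4%}")                  # ~0.0095%

    Under those assumptions, roughly 300,000 accounts would be flagged to catch about 30 genuine threats: fewer than one flag in ten thousand points at a real plan. Whatever OpenAI's actual figures are, that trade-off is why "report everything to police" is far harder than it sounds.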

    What we don't know matters enormously here. OpenAI hasn't revealed what Van Rootselaar actually said or asked that triggered the June ban. Was it explicit planning? Research into methods? Expressions of violent intent without specific details? The gap between "I want to hurt people" and "I plan to shoot up my school on this date" represents the difference between thought crime and conspiracy. Where Van Rootselaar's usage fell on that spectrum remains undisclosed, making it impossible to judge whether OpenAI's threshold was reasonable or dangerously high.

    The liability problem no one has solved

    Tech platforms have wrestled with this dilemma for years, but AI tools sharpen it considerably. Social media posts are public or semi-public by nature. ChatGPT conversations are private interactions with a machine. Users reasonably expect that privacy, even when company terms of service allow monitoring for abuse. Breaking that expectation by reporting to police based on automated flags and human hunches transforms AI assistants into informants.

    Yet the alternative—maintaining strict privacy thresholds whilst users plan violence—looks unconscionable when bodies pile up. Several US states have introduced legislation requiring social media platforms to report credible threats, but enforcement remains patchy and definitions of "credible" vary wildly. No equivalent framework exists yet for AI providers, despite their increasingly intimate role in users' thinking processes.

    OpenAI says it "constantly" reviews its referral criteria with experts. That review will now include a case study in how its current standards performed under real-world testing. The company faces growing regulatory pressure across jurisdictions: the EU's AI Act imposes transparency requirements, the UK's Online Safety Act creates duties of care for platforms, and Canadian authorities will likely scrutinise OpenAI's decision not to report as part of the Tumbler Ridge investigation.

    What's interesting here is how this incident will almost certainly push the pendulum towards broader reporting requirements, regardless of whether that actually improves outcomes.

    Politicians and regulators operate under different incentive structures than tech companies. The political cost of appearing soft on preventable violence vastly outweighs concerns about over-reporting or false positives. When the next AI safety bill lands in Parliament or Congress, expect mandatory reporting thresholds considerably lower than OpenAI's current standard.

    What happens next

    Whether that makes anyone safer remains genuinely unclear. The motive for the Tumbler Ridge attack isn't yet known. Would police intervention six months earlier have prevented it, or merely delayed it? Could intervention have connected Van Rootselaar with mental health resources that might have changed the trajectory? Or would it have simply meant surveillance of someone who hadn't yet committed a crime, with no legal basis for detention or treatment?

    Those questions will drive policy debates as AI providers, law enforcement, and legislators try to establish where the line sits between privacy and prevention. OpenAI's dozen staffers spent last June trying to answer them. Their decision will be scrutinised in courtrooms, parliamentary inquiries, and corporate risk committees for years to come. The victims of Tumbler Ridge won't be the last to suffer whilst the tech industry and society work out what AI companies owe the public beyond their terms of service.

    • Expect stricter mandatory reporting requirements for AI providers as regulators respond to political pressure, regardless of whether lower thresholds actually prevent violence or simply create more surveillance
    • The gap between automated detection and meaningful intervention remains unsolved: as this case shows, a moderation system can work exactly as designed and still end in catastrophe

    David Adams

    Co-Founder

    Former COO at Venntro Media Group with 13+ years scaling SaaS and dating platforms. Now founding partner at Lucennio Consultancy, focused on GTM automation and AI-powered revenue systems. Co-founder of Business Fortitude, dedicated to giving entrepreneurs the news and insight they need.
