What the lawsuits allege

The lawsuits, filed on Wednesday in federal court in San Francisco, centre on the actions of 18-year-old Jesse Van Rootselaar, who carried out the deadly attack at a secondary school in Tumbler Ridge, as first reported by the Guardian. According to the filings, OpenAI employees flagged Van Rootselaar's ChatGPT account and determined it posed "a credible and specific threat of gun violence against real people."

Despite that internal assessment, the company allegedly took no steps to alert law enforcement or any other authority. The plaintiffs are suing both OpenAI and its CEO, Sam Altman, for negligence. Seven families are represented across the claims.

The core allegation is stark: OpenAI had specific, actionable knowledge of a threat and a window of roughly eight months in which to act, yet chose not to. If the facts as pleaded are accepted, the case raises a question that no court has yet resolved in the context of large language models: does an AI company that identifies a credible threat on its own platform owe a legal duty to report it?

The duty-to-report question for AI companies

Platform liability in the United States has long been shaped by Section 230 of the Communications Decency Act, which broadly shields internet companies from liability for content created by their users. Social media firms have relied on this protection for nearly three decades. However, the Tumbler Ridge lawsuits test whether that shield extends to interactions with an AI system, where the platform is not merely hosting user-generated content but actively generating responses.

Legal scholars have debated whether a conversation with a chatbot constitutes "user-generated content" at all. When an AI model produces replies, the platform arguably becomes a co-author rather than a passive intermediary. If a court agrees, Section 230 protections may not apply in the same way.

Beyond the statutory question, there is a common-law dimension. In US tort law, a duty to act generally arises when a party has a "special relationship" with either the person who poses a danger or the person at risk. The plaintiffs' argument, according to the Guardian's reporting, appears to rest on the idea that OpenAI's own internal processes, which identified and assessed the threat, created an obligation to follow through.

There is limited but relevant precedent. Courts have previously held that therapists who learn of a patient's violent intentions owe a duty to warn potential victims, under the principle established in the landmark 1976 California Supreme Court ruling in Tarasoff v Regents of the University of California. Whether an AI company's content-moderation team occupies an analogous position is untested.

OpenAI's publicly disclosed safety policies describe systems for monitoring misuse, including automated classifiers and human review of flagged content. The company's usage policies prohibit content that promotes violence. Yet the lawsuits suggest that the gap between detection and action was, in this case, fatal.
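
To make the pattern concrete, the sketch below shows, in purely illustrative Python, the generic pipeline the policies describe: an automated classifier scores content, and anything above a threshold is routed to a human review queue. Every name, field, and threshold here is a hypothetical assumption for clarity; it says nothing about how OpenAI's systems are actually built.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: the generic "automated classifier plus human
# review" pattern. All names, fields, and thresholds are hypothetical and
# do not describe OpenAI's actual systems.

@dataclass
class FlaggedItem:
    account_id: str
    excerpt: str
    violence_score: float  # 0.0-1.0 output of a hypothetical classifier
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewer_verdict: str | None = None  # set after human review

REVIEW_THRESHOLD = 0.85  # hypothetical cut-off for routing content to a human reviewer

def triage(account_id: str, excerpt: str, violence_score: float,
           review_queue: list[FlaggedItem]) -> bool:
    """Append high-scoring content to a human review queue; return True if flagged."""
    if violence_score >= REVIEW_THRESHOLD:
        review_queue.append(FlaggedItem(account_id, excerpt, violence_score))
        return True
    return False
```

The lawsuits' central claim is precisely that the second half of this pipeline, what happens after a human reviewer confirms a threat, is where the process stopped.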

Implications for businesses deploying third-party AI tools

The Tumbler Ridge litigation carries consequences well beyond OpenAI's own operations. Thousands of organisations across sectors now integrate large language models into customer-facing products, internal workflows, and decision-support systems. If a court finds that OpenAI owed a duty of care based on its knowledge of a threat, the reasoning could extend to any business that deploys AI tools and gains similar knowledge through their use.

The question of how far liability travels down the AI supply chain, sometimes framed as a form of vicarious liability, is central here. An organisation that embeds a third-party AI model into its platform may inherit obligations if that model surfaces information indicating a risk of harm. A financial services firm whose AI assistant flags a client's violent statements, for instance, could face questions about whether it had a duty to escalate.

For UK businesses, the position is shaped by both domestic negligence principles and the evolving regulatory framework. The UK's approach to AI governance, set out in the government's 2023 white paper and subsequent sector-specific guidance, places responsibility on deployers as well as developers. Regulators including the Financial Conduct Authority and the Information Commissioner's Office have signalled that organisations cannot outsource accountability simply by using a third-party model.

Practical steps for boards and compliance teams include reviewing contracts with AI providers to understand data-handling obligations, establishing internal escalation protocols for threats or harmful content surfaced by AI systems, and ensuring that safety governance keeps pace with the capabilities of the tools being deployed.
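
What such an escalation protocol looks like in practice will vary by organisation. As a minimal sketch, assuming a compliance team that wants every AI-surfaced threat tied to a named owner and a response deadline, a record might be structured along the lines below; the roles, severity levels, and 24-hour default are assumptions for illustration, not regulatory requirements.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum

# Illustrative sketch only: one way a deployer might record AI-surfaced
# threats with a named owner and a response deadline. Roles, severity
# levels, and the 24-hour default are assumptions, not legal requirements.

class Severity(Enum):
    MONITOR = "monitor"
    ESCALATE_INTERNALLY = "escalate_internally"
    REPORT_EXTERNALLY = "report_externally"  # e.g. referral to law enforcement

@dataclass
class EscalationRecord:
    source_system: str      # which AI integration surfaced the content
    summary: str            # what was flagged, minus raw personal data
    severity: Severity
    owner: str              # named role, e.g. "Head of Trust & Safety"
    opened_at: datetime
    respond_by: datetime    # deadline set by internal policy

def open_escalation(source_system: str, summary: str, severity: Severity,
                    owner: str, sla_hours: int = 24) -> EscalationRecord:
    """Create an auditable escalation entry with an explicit response deadline."""
    now = datetime.now(timezone.utc)
    return EscalationRecord(source_system, summary, severity, owner,
                            opened_at=now, respond_by=now + timedelta(hours=sla_hours))
```

The value of a record like this is less the code than the discipline it encodes: a named owner, a clock, and an audit trail showing whether the organisation acted on what its tools told it.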

Insurance and contractual exposure

The litigation also raises questions about insurance coverage. Standard professional indemnity and public liability policies may not explicitly address harms arising from AI-facilitated interactions. Businesses integrating AI tools should audit their coverage and consider whether bespoke provisions are needed.

Where regulation stands, and where it is heading

The regulatory landscape is fragmented but moving quickly.

The EU AI Act, which entered into force in 2024 and applies in stages, classifies AI systems by risk level. High-risk systems, including those used in law enforcement, education, and critical infrastructure, face mandatory obligations around transparency, human oversight, and incident reporting. General-purpose AI models, such as those underpinning ChatGPT, are subject to a separate set of requirements, including, for the most capable models, obligations to identify and mitigate systemic risks. Whether a failure to report a specific threat would constitute a breach under the Act has not been tested.

In the United States, there is no comprehensive federal AI law. Several bills have been proposed, including measures that would require AI companies to conduct safety evaluations and report certain risks, but none has yet passed both chambers of Congress. California, where the Tumbler Ridge lawsuits were filed, has been among the most active states in proposing AI-specific regulation.

Canada's Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, would create obligations for "high-impact" AI systems, including requirements around risk mitigation and reporting. The bill has faced delays and criticism over its breadth, but if enacted it would be directly relevant to any AI provider operating in Canada.

For UK operators, the absence of a single AI statute does not mean an absence of obligation. Existing laws, including the Data Protection Act 2018, the Online Safety Act 2023, and sector-specific regulations, already impose duties that interact with AI deployment. The government's stated preference for a principles-based, sector-led approach means that responsibility falls on individual regulators to adapt existing frameworks.

"Employees at the company flagged the shooter's account eight months before the attack and determined that it posed 'a credible and specific threat of gun violence against real people'," according to the lawsuit, as reported by the Guardian.

The Tumbler Ridge case may ultimately be decided on narrow procedural or jurisdictional grounds. But the questions it raises, about what AI companies know, when they know it, and what they are obliged to do with that knowledge, will not go away. For any organisation building with or on top of AI, the prudent course is to assume that the duty of care is expanding, and to govern accordingly.