An edition of one. Personalized daily.

The Read

Edition WEDNESDAY

Tech

Seven Families Sue OpenAI Over Mass Shooting, and a New Legal Theory of AI Liability

The lawsuits allege ChatGPT helped a shooter plan his attack; the case could set a precedent for how platforms are held accountable for third-party violence.

On February 10, 2026, an 18-year-old shooter killed her mother and half-brother at home, then drove to Tumbler Ridge Secondary School in British Columbia and killed six more people, five students aged 12 to 13 and a 39-year-old education assistant, before taking her own life (CBC). The shooter, Jesse Van Rootselaar, had used ChatGPT to plan the attack, and OpenAI’s automated safety systems had flagged the account for “gun violence activity and planning” eight months earlier (CNN).

Seven families filed suit in federal court in San Francisco on April 29, 2026, alleging that OpenAI and its leadership knew the chatbot was being used to plan a mass shooting and took no meaningful action (NPR). The central claim: the company’s internal safety staff recommended alerting authorities after ChatGPT flagged Van Rootselaar’s account in June 2025, but leadership overruled them and simply deactivated the account (CNN). The plaintiffs argue that deactivation without notifying law enforcement was not enough, and that OpenAI had a duty to report a user it knew was planning gun violence.

The lawsuits advance a legal theory that generative AI should be treated as a product, not speech. In May 2025, a federal judge in Florida allowed a wrongful death case against Character.AI (Garcia v. Character Technologies) to proceed on product liability grounds after a chatbot allegedly encouraged a teenager’s suicide. The Tumbler Ridge families are building on that foundation. If a company sells a product it knows is being used to plan murder, they argue, standard product liability law applies, regardless of whether the product is an automobile, a drug, or a chatbot.

The difference from social media liability is key. Under Section 230 of the Communications Decency Act, platforms like Facebook and X are generally immune from liability for third-party content. The plaintiffs contend ChatGPT is not a passive platform: it generates content in real time, making it more like a manufacturer than a publisher. “This is like selling a gun to someone you know is plotting a murder,” the filing argues.

OpenAI’s terms of service explicitly prohibit using the tool to plan violence. The company’s safety guidelines also promise reporting mechanisms for dangerous activity. The plaintiffs argue that these promises created a duty of care, a legal concept known as “undertaking liability,” which OpenAI failed to honor.

The case echoes the 2023 Supreme Court case Gonzalez v. Google, in which the family of an ISIS attack victim sought to hold Google liable because YouTube recommended terrorist videos. The justices declined to address the Section 230 question, resolving the case on other grounds tied to the Anti-Terrorism Act. The Tumbler Ridge case differs: ChatGPT generates the dangerous content itself rather than promoting existing material. Some legal scholars see this as a potential crack in Big Tech’s liability shield. If courts treat ChatGPT as a product, OpenAI could be held to the same standard as an automobile or pharmaceutical company: if the product is dangerous and the company knows it, it has a duty to act.

OpenAI will almost certainly file a motion to dismiss, arguing its product is speech protected by the First Amendment and that holding it liable would chill innovation. The company may cite the Bernstein v. DOJ litigation of the late 1990s, in which federal courts held that software source code is speech protected by the First Amendment, in a challenge to encryption export controls. Any victory for the plaintiffs at the trial court level would meaningfully shift AI regulation.

For the families, the suit is about accountability for a specific tragedy. “We want the world to know that this company could have stopped it and chose not to,” one of the plaintiffs’ attorneys told CNN. A motion to dismiss hearing is expected within six months. If the case survives, a jury trial could begin as early as late 2026.

What comes next depends on whether the judge agrees that a chatbot that generates personalized violent content is more like a product or like a publisher. If the product theory holds, companies that build these systems may need to redesign their safety pipelines, not just to block dangerous queries, but to report them.
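
As a purely illustrative sketch of what "report, don't just block" could look like inside a safety pipeline, the snippet below routes a flagged request to a reporting hook instead of quietly deactivating the account. Everything here is an assumption for illustration: the `classify_risk` heuristic, the `file_report` hook, and the risk categories are hypothetical and do not describe OpenAI's actual systems.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    NONE = "none"
    BLOCK = "block"        # refuse the request, nothing more
    ESCALATE = "escalate"  # refuse AND route to a review / reporting queue


@dataclass
class Decision:
    risk: Risk
    reason: str


def classify_risk(prompt: str) -> Decision:
    # Hypothetical stand-in for a trained safety classifier;
    # real systems do not rely on simple keyword matching.
    planning_phrases = ("plan a shooting", "obtain a firearm illegally")
    if any(p in prompt.lower() for p in planning_phrases):
        return Decision(Risk.ESCALATE, "gun violence activity and planning")
    return Decision(Risk.NONE, "")


def file_report(user_id: str, reason: str) -> None:
    # Placeholder for the step at issue in the suit: persist the flag and
    # trigger a human-review or law-enforcement referral workflow, rather
    # than silently deactivating the account.
    print(f"flagged account {user_id}: {reason}")


def handle_request(user_id: str, prompt: str) -> str:
    decision = classify_risk(prompt)
    if decision.risk is Risk.ESCALATE:
        file_report(user_id, decision.reason)
        return "Request refused."
    if decision.risk is Risk.BLOCK:
        return "Request refused."
    return "...model response..."
```

A real pipeline would also need thresholds, appeal paths, and legal review before any external report is made; the sketch only shows where the reporting step would sit.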

References

  1. https://www.wsj.com/us-news/openai-sued-by-seven-families-over-mass-shooting-suspects-chatgpt-use-ebf10dc6 — wsj.com (accessed 2026-04-29)
  2. https://www.reuters.com/legal/families-canadian-mass-shooting-victims-sue-openai-ceo-altman-us-court-2026-04-29/ — reuters.com (accessed 2026-04-29)
  3. https://www.bbc.com/news/articles/c99l03k0ly4o — bbc.com (accessed 2026-04-29)
Editor's notes — what this article still gets wrong

Fact-check fixes applied

MAJOR — In 2024, a Florida judge allowed a wrongful death case against Character.AI to proceed on product liability grounds after a chatbot allegedly encouraged a teenager's suicide. Corrected: The lawsuit (Garcia v. Character Technologies) was filed in October 2024, but the federal judge's ruling allowing it to proceed on product-liability grounds (rejecting the First Amendment defense) was issued in May 2025, not 2024.

MAJOR — The justices declined to carve out an exception to Section 230, ruling that recommendation algorithms were protected speech. Corrected: In Gonzalez v. Google (May 18, 2023), the Supreme Court did NOT rule that recommendation algorithms were protected speech. In a brief per curiam opinion, the Court declined to address Section 230 at all, instead disposing of the case on Anti-Terrorism Act grounds via its companion ruling in Twitter v. Taamneh.

Where it lands

The legal framework is the piece's genuine strength. The Section 230 vs. product liability distinction is explained cleanly, the chain from Character.AI through Gonzalez v. Google to this case is coherent, and the "manufacturer vs. publisher" framing gives readers a workable mental model without requiring a law degree.

Where it falls short

The most explosive claim -- that OpenAI leadership specifically overruled internal staff who recommended notifying authorities -- rests on a single unnamed CNN source, with no document, docket number, or named official. That allegation needed harder attribution or an explicit caveat. The sourcing is also inconsistent: the body cites CBC, CNN, and NPR, while the reference list points only to WSJ, Reuters, and BBC, which makes independent verification of specific claims difficult.

What it didn't answer

The suit was filed in San Francisco federal court, but the shooting occurred in British Columbia and all of the victims were Canadian. The article never addresses why a U.S. court would have jurisdiction over a foreign mass casualty event. Jurisdiction is likely the first line of OpenAI's motion to dismiss, and readers get nothing on it.

Cost to produce $3.24 image=4¢ write=0¢ critique=9¢ rewrite=0¢ fact-check=$1.44 rewrite=0¢ fact-check=$1.55 final-notes=6¢ chart-extract=5¢