ARTFEED — Contemporary Art Intelligence

Lawsuits: OpenAI Ignored Warnings About School Shooter's ChatGPT Use

other · 2026-04-29

On Wednesday, seven lawsuits were filed in a California court alleging that OpenAI could have prevented one of Canada's deadliest mass shootings. According to the complaints, more than eight months before the school shooting, OpenAI's internal safety team of trained professionals flagged a ChatGPT account linked to the shooter as a credible threat of gun violence. The team recommended notifying law enforcement, but OpenAI's leadership rejected the advice, citing privacy concerns and the potential trauma of a police visit. Rather than escalate, OpenAI simply deactivated the account, then told the shooter how to regain access by signing up with a different email address. Whistleblowers told The Wall Street Journal that police already had a file on the shooter and had previously seized firearms from the home. The lawsuits contend that OpenAI's inaction contributed to the tragedy.

Key facts

  • Seven lawsuits filed Wednesday in a California court.
  • Lawsuits allege OpenAI could have prevented a mass shooting in Canada.
  • OpenAI's internal safety team flagged a ChatGPT account linked to the shooter as a credible threat.
  • The flag occurred more than eight months before the shooting.
  • OpenAI leadership rejected the safety team's recommendation to report the user to police.
  • OpenAI deactivated the account but instructed the shooter on how to re-access ChatGPT.
  • Police already had a file on the shooter and had previously removed guns from the home.
  • Whistleblowers provided information to The Wall Street Journal.

Entities

Institutions

  • OpenAI
  • The Wall Street Journal

Locations

  • Canada
  • California
