Families of school shooting victims sue ChatGPT maker OpenAI and founder Sam Altman
The families of victims of a school shooting in a Canadian Rockies town are suing artificial intelligence company OpenAI in US federal court, seeking to hold the ChatGPT maker responsible for failing to alert police to the attacker’s alarming interactions with the chatbot.

A lawsuit filed on Wednesday on behalf of 12-year-old Maya Gebala, who was critically injured in the February shooting, is among the first of dozens of cases that families in Tumbler Ridge, British Columbia, are planning, with claims alleging wrongful death, negligence, and product liability.

The plaintiffs’ lawyer, Jay Edelson, said that decisions made by OpenAI and its chief executive Sam Altman “have destroyed the town. The people are really resilient, but what happened is unimaginable”.

Mr Altman sent a letter last week formally apologising to the community because his company did not notify law enforcement about the attacker’s online behaviour.

Authorities have said the offender killed their mother and 11-year-old stepbrother in their home on February 10 before opening fire at the nearby Tumbler Ridge Secondary School, killing five children and an educator before killing themselves. Twenty-five people were also injured in the attack, Canada’s deadliest mass shooting in years.

Police block an area near Tumbler Ridge Secondary School. File picture: Jesse Boily/AP

The case highlights concerns about the harms posed by overly agreeable AI chatbots and what obligations the tech industry has to control them or to notify authorities about planned violence by chatbot users.

This month, prosecutors investigating the deaths of two University of South Florida doctoral students said that the suspect had asked ChatGPT about body disposal in the lead-up to the students’ disappearance.

In response to the lawsuit, OpenAI said in a written statement that the “events in Tumbler Ridge are a tragedy.
We have a zero-tolerance policy for using our tools to assist in committing violence”.

“As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators,” the company said.

Cases against OpenAI

Mr Edelson, a Chicago-based lawyer known for taking on the tech industry, is already juggling a number of high-profile cases against OpenAI, including one from the family of a California teenager who killed himself after conversations with ChatGPT and another from the heirs of an 83-year-old Connecticut woman killed by her son after ChatGPT allegedly amplified the man’s “paranoid delusions”.

“This is not a passive technology,” Mr Edelson said, contrasting the chatbot interactions with a more conventional online search for information.
“What we’ve seen in the past is that (for) people who are mentally ill, the chatbot will validate what they’re saying and then amplify what they’re saying.”
Mr Edelson recently visited Maya at a children’s hospital in Vancouver, where she remains hospitalised and was reportedly alert but unable to speak. “It was so heartbreaking,” he said.

The lawsuits filed on Wednesday represent the families of the five children killed in the school shooting: Zoey Benoit, Abel Mwansa Jr, Ticaria “Tiki” Lampert and Kylie Smith, all 12, and Ezekiel Schofield, 13, and the education assistant, Shannda Aviugana-Durand.

OpenAI came forward after the shootings to say that the company had flagged last June that the attacker’s account had been used to discuss violence against other people. The company said it had considered whether to refer the account to the Royal Canadian Mounted Police but determined at the time that the account activity did not meet the threshold for referral to law enforcement. OpenAI banned the account in June for violating its usage policy.

The lawsuits filed on Wednesday allege that “the victims didn’t learn this because OpenAI was forthcoming, but because its own employees leaked it to The Wall Street Journal after they could no longer stomach the company’s silence”.

The Gebala lawsuit accuses OpenAI of negligence involving a failure to warn law enforcement and of “aiding and abetting a mass shooting”. Along with damages, it seeks a court order that would require OpenAI to ban users from ChatGPT if their accounts were deactivated for violent misuse, and to alert law enforcement when its systems identify someone who poses a “real-world risk of violence”.