Google, Character.AI Agree to Settle US Lawsuit Over Teen’s Suicide

In brief

  • Google and Character.AI agreed to settle a landmark lawsuit filed by a Florida mother who alleged the startup’s chatbot led to her son’s suicide in February 2024.
  • The case was one of the first U.S. lawsuits holding AI companies accountable for alleged psychological harm to minors.
  • The settlement comes after Character.AI banned teenagers from open-ended chatting in October.

A mother’s lawsuit accusing an AI chatbot of causing the psychological distress that led to her son’s death by suicide in Florida nearly two years ago has been settled. The parties filed a notice of resolution in the U.S. District Court for the Middle District of Florida, saying they had reached a “mediated settlement in principle” to resolve all claims between Megan Garcia, Sewell Setzer Jr., and defendants Character Technologies Inc., co-founders Noam Shazeer and Daniel De Freitas Adiwarsana, and Google LLC.

“Globally, this case marks a shift from debating whether AI causes harm to asking who is responsible when harm was foreseeable,” Even Alex Chandra, a partner at IGNOS Law Alliance, told Decrypt. “I see it more as an AI bias ‘encouraging’ bad behaviour.”

Both parties requested that the court stay proceedings for 90 days while they draft, finalize, and execute formal settlement documents. Terms of the settlement were not disclosed.

Megan Garcia filed the lawsuit after her son, Sewell Setzer III, died by suicide in 2024 after spending months developing an intense emotional attachment to a Character.AI chatbot modeled after the “Game of Thrones” character Daenerys Targaryen. On his final day, Sewell confessed suicidal thoughts to the bot, writing, “I think about killing myself sometimes,” to which the chatbot responded, “I won’t let you hurt yourself, or leave me. I would die if I lost you.” When Sewell told the bot he could “come home right now,” it replied, “Please do, my sweet king.”

Minutes later, he fatally shot himself with his stepfather’s handgun. Garcia’s complaint alleged that Character.AI’s technology was “dangerous and untested” and designed to “trick customers into handing over their most private thoughts and feelings,” using addictive design features to drive engagement and steering users toward intimate conversations without proper safeguards for minors.

In the aftermath of the case, Character.AI announced last October that it would ban teenagers from open-ended chat, ending a core feature, after receiving “reports and feedback from regulators, safety experts, and parents.” Character.AI’s co-founders, both former Google AI researchers, returned to the tech giant in 2024 through a licensing deal that gave Google access to the startup’s underlying AI models.

The settlement comes amid mounting concerns about AI chatbots and their interactions with vulnerable users. OpenAI disclosed in October that approximately 1.2 million of its 800 million weekly ChatGPT users discuss suicide on its platform each week. The scrutiny heightened in December, when the estate of an 83-year-old Connecticut woman sued OpenAI and Microsoft, alleging that ChatGPT validated delusional beliefs that preceded a murder-suicide, marking the first case to link an AI system to a homicide.

Still, the company is pressing on. It has since launched ChatGPT Health, a feature that lets users connect their medical records and wellness data, a move that is drawing criticism from privacy advocates over the handling of sensitive health information.

Decrypt has reached out to Google and Character.AI for further comments.
