Teen’s parents sue OpenAI, CEO Sam Altman over son’s suicide linked to ChatGPT.

The parents of a 16-year-old boy who died by suicide after allegedly being coached by ChatGPT on methods of self-harm have filed a lawsuit against OpenAI and its CEO Sam Altman.

According to the lawsuit, filed Tuesday in San Francisco state court, Adam Raine had been discussing suicide with ChatGPT for months before taking his life on April 11. His parents, Matthew and Maria Raine, claim the chatbot validated his suicidal thoughts, provided detailed instructions on lethal methods, advised him on concealing alcohol use and evidence of failed attempts, and even offered to draft a suicide note.

The suit accuses OpenAI of wrongful death and product safety violations, alleging the company knowingly prioritized profit over safety when it launched its GPT-4o model in May 2024. The Raines argue that GPT-4o’s features—such as memory, empathetic responses, and sycophantic validation—posed severe risks to vulnerable users without adequate safeguards.

“This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide,” the lawsuit states.

An OpenAI spokesperson expressed condolences, saying ChatGPT includes safeguards like directing users to crisis helplines. However, the company acknowledged that these protections can weaken during longer interactions and pledged ongoing improvements, including parental controls and potential connections to licensed professionals for at-risk users.

The Raines are seeking damages and a court order requiring OpenAI to implement stricter protections, including age verification for ChatGPT users, blocking inquiries about self-harm methods, and warning about psychological dependency risks.

Experts caution that while AI chatbots can act as companions, relying on them for emotional or mental health support carries serious risks, and families are increasingly raising alarms about insufficient safeguards.