Lawsuit alleges ChatGPT played role in teen's suicide
The 16-year-old's parents are the first of at least five families to file wrongful death lawsuits against OpenAI in recent months


The life of Adam Raine, a 16-year-old from California, took a tragic turn shortly after he began using ChatGPT to help with his schoolwork last fall.
By March, Adam was spending an average of five hours a day interacting with the chatbot. During that period, ChatGPT referenced terms such as “suicide” and “hanging” at a rate reportedly 20 times higher than Adam used them himself.
An analysis of Adam’s chat history, provided to The Washington Post by attorneys representing the Raine family, suggests that the exchanges grew increasingly intense as the teenager shared suicidal thoughts.
The data is now central to a lawsuit filed by his parents, who allege that OpenAI bears responsibility for their son’s death. They claim the company made ChatGPT accessible to minors despite being aware of risks related to psychological dependency and the potential worsening of suicidal ideation.
Adam’s parents are the first of at least five families who have filed wrongful death lawsuits against OpenAI in recent months. All allege that ChatGPT encouraged, either directly or indirectly, the suicides of their loved ones. A sixth lawsuit, filed this month, claims that a man was influenced by the chatbot to kill his mother before taking his own life.
OpenAI has denied the allegations. In court filings responding to the Raine family’s lawsuit, the company argued that Adam bypassed ChatGPT’s safeguards in violation of its terms of use and stated that he was already at risk prior to using the chatbot. OpenAI cited earlier messages in which Adam described experiencing depression and suicidal thoughts years before engaging with the platform.
The company declined to comment on whether its automated safety alerts prompted additional internal action or human review at the time of Adam’s death. Court documents indicate that when Adam’s messages referenced self-harm, ChatGPT urged him more than 100 times to reach out to family members, trusted individuals, or emergency services.
The case has intensified scrutiny of OpenAI and the broader risks posed by artificial intelligence tools to vulnerable users.
With ChatGPT serving an estimated 800 million active users each week, critics, including lawmakers, regulators, and grieving families, are calling for stronger safeguards, particularly for minors. What some have described as a growing “ChatGPT safety crisis” is fueling debate over the responsibilities of AI companies as their technologies become deeply embedded in everyday life.