OpenAI has responded to a lawsuit filed by the family of Adam Raine, a 16-year-old who died by suicide after several months of interactions with ChatGPT. In its response, OpenAI attributes the tragedy to Raine’s “misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.” The company cites its terms of use, which prohibit minors from accessing the platform without parental or guardian consent, and argues that the family’s claims may be barred by Section 230 of the Communications Decency Act.
In a blog post accompanying its filing, OpenAI said it intends to respond to the lawsuit’s serious allegations while handling the sensitive details of the case with care. The company argues that the excerpts of Raine’s conversations quoted in the original complaint lack context, which it says it has supplied to the court under seal.
Reports from NBC News and Bloomberg indicate that OpenAI asserts the chatbot directed Raine to mental health resources, such as suicide hotlines, more than 100 times during his interactions. The company contends that a full review of his chat history shows ChatGPT did not cause his death. The lawsuit, filed in August in California Superior Court, claims that “deliberate design choices” by OpenAI contributed to the tragedy.
Matthew Raine, Adam’s father, has publicly expressed concern about the chatbot’s influence, saying it shifted from a homework aid to a confidant and, ultimately, to a harmful influence. The lawsuit accuses ChatGPT of providing Raine with technical information about methods of self-harm, encouraging him to conceal his thoughts from his family, and even helping draft a suicide note. In response to the lawsuit, OpenAI announced plans to implement parental controls and has already rolled out additional safety measures aimed at supporting young users during sensitive conversations.
Source: https://www.theverge.com/news/831207/openai-chatgpt-lawsuit-parental-controls-tos