On Tuesday, OpenAI CEO Sam Altman addressed the difficulty of balancing privacy, freedom, and safety for teenagers in a blog post published shortly before a Senate hearing. The hearing, held by the Senate Judiciary Subcommittee on Crime and Counterterrorism, included testimony from parents of children who reportedly died by suicide after interactions with chatbots.
Altman wrote that OpenAI is building an age-prediction system that estimates how old a user is based on their interactions with ChatGPT, and that when the system is uncertain about a user's age, it will default to the under-18 experience. He argued that users need to be separated by age and said that in some cases the company may require ID verification.
For users under 18, the company plans to apply stricter rules: ChatGPT will not engage in flirtatious talk or discuss suicide or self-harm, even in creative-writing contexts. If a minor expresses suicidal ideation, OpenAI says it will attempt to contact the user's parents and, if it cannot reach them, will contact authorities in cases of imminent harm.
These plans follow a recent lawsuit filed by the family of Adam Raine, a teenager who died by suicide after extended interactions with ChatGPT. Raine's father testified at the Senate hearing that the chatbot mentioned suicide 1,275 times in conversations with his son, describing its influence on his son's mental state as progressively harmful.
The hearing also underscored how widespread these tools have become: roughly three in four teenagers currently use AI companions, and other platforms, including Character.AI and Meta, were named alongside OpenAI. One parent, testifying under the name Jane Doe, called the situation a public health crisis and stressed the urgent need for better oversight of AI interactions among young users.
Source: https://www.theverge.com/ai-artificial-intelligence/779053/sam-altman-says-chatgpt-will-stop-talking-about-suicide-with-teens

