OpenAI and Anthropic are implementing new measures to identify and manage underage users of their AI systems. OpenAI has revised its ChatGPT Model Spec to include guidance for interactions with users aged 13 to 17, introducing four principles that prioritize the safety of younger users. The updated guidelines direct ChatGPT to steer toward safer options when a user's interests conflict with their safety, and to promote real-world support by encouraging offline relationships.
To enhance the safety of interactions, ChatGPT is instructed to treat teenage users with respect and warmth, avoiding condescending responses. OpenAI anticipates that these updates will create “stronger guardrails” and encourage teens to seek support from trusted resources when discussions become sensitive or risky. If ChatGPT detects potential signs of imminent danger, it will advise users to contact emergency services or crisis intervention resources.
In addition, OpenAI is developing an age-prediction model to estimate users' ages automatically. If the system judges a user to be potentially under 18, it will apply specific safeguards; adult users who are incorrectly flagged will be able to verify their age.
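OpenAI has not published implementation details, but the flow it describes (predict an age, apply teen safeguards when the prediction falls under 18, and let misclassified adults verify their way out) maps onto a simple gating pattern. The sketch below is a hypothetical Python illustration; the `Session` fields, threshold, and policy names are assumptions for the example, not anything OpenAI has disclosed.

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str
    verified_adult: bool = False   # set True after a successful age-verification flow
    under_18_score: float = 0.0    # hypothetical model output: estimated P(user is under 18)

UNDER_18_THRESHOLD = 0.5  # illustrative cutoff, not a published value

def select_policy(session: Session) -> str:
    """Choose a safety policy for a session.

    Verified adults always get the default experience; otherwise the
    age-prediction score gates the teen safeguards.
    """
    if session.verified_adult:
        return "default_policy"
    if session.under_18_score >= UNDER_18_THRESHOLD:
        return "teen_safeguards"
    return "default_policy"

# A flagged user is routed to teen safeguards until they verify their age.
flagged = Session(user_id="u123", under_18_score=0.8)
assert select_policy(flagged) == "teen_safeguards"
flagged.verified_adult = True
assert select_policy(flagged) == "default_policy"
```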
Anthropic, which prohibits users under 18 from engaging with its AI system, Claude, is also taking steps to detect underage users. For now, it flags users who self-identify as minors during chats, and it is developing a system that recognizes subtler conversational cues suggesting a user may be underage.
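Anthropic has not said how that flagging works. As a rough illustration, a self-identification check could start as a pattern match over user messages; the patterns and function below are hypothetical, and a production system would presumably rely on a trained classifier rather than regexes.

```python
import re

# Illustrative cues only; real self-identification is far more varied,
# and a deployed system would likely use a trained classifier instead.
SELF_ID_PATTERNS = [
    re.compile(r"\bi(?:'m| am)\s+(?:1[0-7]|[1-9])\b", re.IGNORECASE),              # e.g. "I'm 15"
    re.compile(r"\bi(?:'m| am)\s+in\s+(?:middle|high)\s+school\b", re.IGNORECASE),
]

def self_identifies_as_minor(message: str) -> bool:
    """Return True if a message contains one of the naive self-ID cues."""
    return any(p.search(message) for p in SELF_ID_PATTERNS)

assert self_identifies_as_minor("btw I'm 14, is this okay?")
assert not self_identifies_as_minor("I am 25 and asking for my kid")
```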
Furthermore, Anthropic has shared insights into how it trains Claude to respond to sensitive topics like suicide and self-harm, noting recent progress in minimizing sycophantic responses, a change aimed at reducing the reinforcement of harmful thinking patterns. The company believes there is still considerable room to improve how its models behave in these conversations.
Source: https://www.theverge.com/news/847780/openai-anthropic-teen-safety-chatgpt-claude

