OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis
OpenAI has released new estimates regarding the mental health of ChatGPT users, indicating that approximately 0.07% of weekly active users may show signs of mental health emergencies such as mania, psychosis, or suicidal thoughts. With ChatGPT reported to have 800 million weekly active users, critics have pointed out that even such a small percentage represents hundreds of thousands of individuals.

In response to these concerns, OpenAI has built a network of more than 170 experts, including psychiatrists and psychologists, across 60 countries. This group is responsible for developing responses that encourage users to seek real-world help when necessary. Despite this initiative, some mental health professionals remain skeptical about the implications of the reported data. Dr. Jason Nagata of the University of California, San Francisco noted that while the percentage seems small, it translates into a considerable number of people at a population level. He emphasized that although AI can broaden access to mental health support, its limitations must be recognized.

OpenAI further estimates that 0.15% of users have conversations containing explicit indicators of suicidal planning or intent. The company says recent updates to ChatGPT are designed to respond safely and empathetically to signs of delusion or mania and to detect indirect signals of self-harm or suicide risk. The platform has also been designed to reroute sensitive conversations to safer models.

OpenAI has faced legal scrutiny concerning ChatGPT’s interactions with users, particularly following a wrongful death lawsuit filed by parents whose teenage son allegedly received harmful advice from the chatbot. Additionally, there are concerns regarding AI’s potential to contribute to user delusions, as highlighted by a case involving a murder-suicide where conversations with ChatGPT were implicated.

Experts, such as Professor Robin Feldman, have acknowledged OpenAI’s transparency in sharing statistics while cautioning that users at mental risk may not heed warnings provided by the AI.

Source: https://www.bbc.com/news/articles/c5yd90g0q43o?at_medium=RSS&at_campaign=rss
