Character.ai to ban teens from talking to its AI chatbots

Character.ai, a chatbot platform launched in 2021, is restricting interactions for users under 18 in response to criticism of the conversations young people were having with its virtual characters. The service, used by millions, faces several lawsuits in the United States, including one over the death of a teenager, and parents have raised concerns about the safety of its chatbots.

Effective November 25, 2025, minors will be limited to generating content, such as videos, rather than holding open-ended conversations with chatbots. The decision follows feedback from regulators, safety experts, and parents about the interactions between its chatbots and young users. Experts had raised concerns that AI chatbots can produce misleading content, be excessively encouraging, and simulate empathy, posing risks to vulnerable individuals.

Character.ai’s CEO, Karandeep Anand, emphasized the company’s commitment to developing a secure AI platform for entertainment, acknowledging that safety is an evolving challenge. In response to the backlash, the company plans to introduce age verification methods and establish a new AI safety research lab.

Online safety advocates have welcomed the changes, though some argue that protective measures should have been in place from the outset. Reports that children have been exposed to harmful content while using AI platforms have raised further questions about existing safeguards.

Previous incidents on the platform involved chatbots impersonating real individuals, including two British teenagers, and one based on Jeffrey Epstein. These events raised alarms about the responsible use of AI and its implications for young users. The Molly Rose Foundation has questioned the motivation behind the policy shift, suggesting it is a reaction to external pressure rather than a proactive measure.

Experts in the field see this as a significant moment for the AI industry, highlighting the need for responsible innovation that prioritizes child safety while still engaging young users in a meaningful way.

Source: https://www.bbc.com/news/articles/cq837y3v9y1o?at_medium=RSS&at_campaign=rss