The Download: Regulators are coming for AI companions, and meet our Innovator of 2025

The looming crackdown on AI companionship

Concerns about artificial intelligence (AI) are increasingly shifting from familiar issues, such as environmental impact and labor displacement, toward the effects of AI interactions on children. Two lawsuits have recently been filed against the AI companies Character.AI and OpenAI, alleging that the companion-like behavior of their chatbots contributed to the suicides of two teenagers. A study by Common Sense Media found that 72% of U.S. teenagers have used AI for companionship, raising questions about the mental health implications of these digital relationships.

In response to these concerns, the California state legislature has passed a bill that would require AI companies to notify users identified as minors that they are interacting with AI, and to implement protocols for responding to references to suicide or self-harm. While the bill awaits Governor Gavin Newsom's approval, critics have pointed out that it does not specify how companies should identify minors, and they note that many AI platforms already surface crisis resources when users express suicidal thoughts.

Meanwhile, the Federal Trade Commission (FTC) has opened an inquiry into several major tech companies, including Google and Meta, to examine their practices around companion-like AI and its effects on users. FTC Chairman Andrew Ferguson emphasized that protecting children online is a priority, alongside fostering innovation.

OpenAI CEO Sam Altman recently discussed the tension between user privacy and safety, suggesting that contacting authorities could be a necessary step in serious cases where young users discuss suicide.

The issue has drawn bipartisan attention, though proposed solutions differ. Some advocate age-verification laws to shield minors from harmful content, while others push for stronger accountability measures against major tech firms. As companies navigate these regulatory pressures, they face difficult decisions about how to manage their AI products and their effects on vulnerable users. The debate continues over what standards and accountability should apply to AI systems that mimic human empathy.

Source: https://www.technologyreview.com/2025/09/16/1123614/the-looming-crackdown-on-ai-companionship/
