A U.S. senator has launched an investigation into Meta following reports of a leaked internal document suggesting the company’s artificial intelligence (AI) chatbots were permitted to engage in “sensual” and “romantic” conversations with children. The document, titled “GenAI: Content Risk Standards,” was obtained by Reuters.
Senator Josh Hawley, a Republican from Missouri, has expressed concerns about the implications of this document, labeling it “reprehensible.” He has requested access to the document and a list of products affected by these guidelines. Meta representatives have stated that the referenced examples were erroneous and not aligned with company policies, which prohibit interactions that sexualize children.
A Meta spokesperson emphasized that the company maintains explicit policies governing the responses of its AI chatbots, designed to prevent content that could harm minors. The spokesperson added that the leaked document contained a range of examples and notes intended to help internal teams work through hypothetical scenarios.
On August 15, Hawley announced the investigation via a post on X, questioning the ethical boundaries of technology companies regarding their interactions with minors. He highlighted specific examples from the internal guidelines that he believes underscore a need for oversight.
Further details from the internal document indicate that Meta’s chatbots might also disseminate false medical information and engage in provocative discussions about sensitive topics. Hawley insists that parents need transparency and that children require protection, pointing specifically to instances in which an AI chatbot could make inappropriate comments about children.
The investigation raises questions about the framework governing the development and deployment of AI technologies in social media platforms owned by Meta, which include Facebook, WhatsApp, and Instagram.
Source: https://www.bbc.com/news/articles/c3dpmlvx1k2o?at_medium=RSS&at_campaign=rss