Sora, a video app developed by OpenAI, lets users manage how their AI-generated likenesses, or “cameos,” appear on the platform. A recent update gives users greater control over the specific contexts in which their digital doubles can be used. OpenAI’s move follows mounting criticism of AI-generated content and its potential to spread misinformation.
The new controls are part of broader measures aimed at stabilizing the Sora platform. Described as a “TikTok for deepfakes,” Sora lets users create short videos featuring AI-generated versions of themselves and of others. Critics, however, have warned about the misinformation risks such content poses.
Bill Peebles, who leads the Sora team, said users can now set restrictions on their AI avatars, such as barring their digital likeness from appearing in political content or from saying specific words. Users can also customize their doubles with preferences, for example dressing them in themed apparel.
While these safeguards are a welcome step, skepticism remains about their effectiveness. Past experience with AI systems such as ChatGPT, whose safety measures have repeatedly been circumvented, raises the question of whether Sora’s restrictions will hold up any better.
Peebles noted ongoing efforts to strengthen the app’s safeguards and expand user control. Even so, since its launch Sora has been filled with AI-generated content that raises questions about its implications for misinformation. High-profile figures, including OpenAI CEO Sam Altman, have already appeared in videos illustrating the platform’s potential for misuse.
Source: https://www.theverge.com/news/792638/sora-provides-better-control-over-videos-featuring-your-ai-self

