On Monday, more than 200 prominent individuals, including former heads of state, diplomats, Nobel laureates, and experts in artificial intelligence, jointly called for an international agreement delineating “red lines” in AI development. The proposed red lines would prohibit, for example, AI systems impersonating human beings or replicating themselves.
This initiative, known as the Global Call for AI Red Lines, has garnered support from more than 70 organizations focused on artificial intelligence. The goal is for governments to establish an international political agreement by the end of 2026. Notable signatories include Geoffrey Hinton, Wojciech Zaremba, Jason Clinton, and Ian Goodfellow.
Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), emphasized during a briefing that the aim is proactive risk mitigation, stating, “If nations cannot yet agree on what they want to do with AI, they must at least agree on what AI must never do.”
This announcement precedes the 80th United Nations General Assembly’s high-level week in New York and is associated with initiatives led by CeSIA, the Future Society, and UC Berkeley’s Center for Human-Compatible Artificial Intelligence. Maria Ressa, a Nobel Peace Prize laureate, highlighted the initiative in her opening remarks, calling for global accountability in technology.
While some regional regulations exist, such as the European Union’s AI Act and a U.S.-China agreement that nuclear weapons must remain under human control, a global consensus is still lacking. Niki Iliadis of The Future Society argued that voluntary pledges are inadequate without an independent body to enforce these red lines.
Stuart Russell, a professor at UC Berkeley, remarked that the AI industry should prioritize safety in its technology development, drawing a parallel to the cautious approach taken with nuclear power. He contended that regulation does not stifle innovation and called for responsible AI development strategies.
Source: https://www.theverge.com/ai-artificial-intelligence/782752/ai-global-red-lines-extreme-risk-united-nations