The global competition to exert leadership and control over artificial intelligence technologies is dizzying, almost as complicated as the race to develop the technologies themselves — and it risks sidestepping the central dangers AI poses to human rights.
The United States has been especially busy setting out rules, starting with its Blueprint for an AI Bill of Rights in October 2022, a set of principles and practices meant to guide AI development and protections, followed one year later by the Biden Administration's executive order on the same topic.
China adopted rules for the management of AI systems, effective in August 2023, which require that such systems "uphold the core socialist values." The European Union, previously thought to be poised to adopt the Artificial Intelligence Act as early as this year, now faces serious internal debate over the act's applicability, binding force, and impact. The Council of Europe's working draft of the world's first treaty on AI promises to put human rights at the center of its framework approach to AI governance. The United Kingdom hosted an AI Safety Summit in November, resulting in the Bletchley Declaration, which calls for international cooperation to address "frontier AI risk." And in October, United Nations Secretary-General António Guterres launched a high-level advisory body on artificial intelligence, comprising over 30 individuals from corporate, academic, and governmental sectors.