
Artificial General Intelligence Control & Non-Proliferation Treaty: A Blueprint for the Global Governance of Advanced Machine Intellect


If you have a question about this talk, please contact D. A. Floudas.

Discussion on a Draft Submission to the EU AI Office Consultation 'Trustworthy General-Purpose AI'

Overview:

We propose the following legal framework: machine intellect agents of a significantly higher capability than current models should be treated similarly to Weapons of Mass Destruction. A new international agency (along the lines of the IAEA) must be invested with inspection powers and a UN Security Council-backed mandate to guarantee safe governance and curtail infringements.

Abstract:

The urgent need for an AI control and non-proliferation treaty, along with an international agency to enforce it, is not merely a matter of prudent governance—it is an imperative for human survival. The rapid, unpredictable, and dual-use nature of AI, coupled with the global dynamics of its development, presents a unique challenge that our existing international frameworks are ill-equipped to handle. The catastrophic potential of AI misuse, together with the possibility that AI eventually acquires agency and escapes human control, poses an existential risk to mankind that we cannot afford to ignore or underestimate.

The international community’s current efforts, while well-intentioned, fall woefully short of addressing the magnitude of this challenge. The AI Convention, set to be signed in September 2024 by 57 countries including major players like the EU, USA, and Britain, has been diluted into a set of general principles that lack real teeth. Similarly, the EU AI Act, despite its laudable intentions, fails to adequately address the rapidly evolving hazards posed by advanced AI systems. These initiatives, focused primarily on regulating everyday AI applications, demonstrate a dangerous lack of foresight in forestalling the truly catastrophic risks on the horizon.

Given the unparalleled perils, the world must implement unprecedented mitigation measures. Every day that passes without comprehensive global controls increases the risk of an eventual catastrophic event. The window for effective action is closing, and gradual, incremental measures are a luxury we can no longer afford.

The proposal is clear and unequivocal: non-biological brains of significantly higher capability than current models must be treated similarly to Weapons of Mass Destruction. This necessitates a global AI Control & Non-Proliferation Treaty that would prohibit any further development of advanced AI systems on a for-profit basis and place AI control under an international agency with sweeping powers.

This agency, modelled on the International Atomic Energy Agency (IAEA), would be invested with unlimited inspection powers over any potentially relevant facilities worldwide. Crucially, it would have a UN Security Council-backed mandate to curtail infringements, including the authorisation to use military force against violators. Such a regime would effectively remove commercial firms, criminals, and private entities from the equation of advanced AI development.

The IAEA’s approach to nuclear non-proliferation and safety offers a blueprint for this new AI governance body. Its rigorous safeguards system, which has consistently verified states’ compliance with the Non-Proliferation Treaty, could be adapted and enhanced for AI oversight.

This suggestion will face fierce resistance from the tech companies currently at the forefront of AI development. These entities, having invested billions in research and development, would likely view such a move as anathema to their business models and future prospects. However, the potential pushback from the tech industry pales in comparison to the risks of failing to implement such a system. Without robust global controls, the planet faces a future where AI development becomes an uncontrolled arms race, with nations and corporations competing to create ever more powerful systems without adequate safety precautions.

This is not a call for the cessation of AI research and development, but rather a proposal to create a system for its careful, controlled, and deliberate advancement under strict international oversight. The proposed AI Control & Non-Proliferation Treaty and its enforcing agency may represent a fighting chance to harness the immense potential of AI while safeguarding against its existential risks.

About the speaker:

Demetrius A. Floudas is a transnational lawyer, a legal adviser specializing in technology, and an AI regulatory & policy theorist. He has counseled governments, corporations, and start-ups on the regulatory aspects of policy and technology. He serves as an Adjunct Professor at the Law Faculty of Immanuel Kant Baltic Federal University, where he lectures on Artificial Intelligence Regulation, and is a Fellow of the Hellenic Institute of International & Foreign Law and a Senior Adviser at the Cambridge Existential Risks Initiative. Floudas has contributed policy and political commentary, including on Foreign Affairs & International Relations, to numerous international think-tanks, and his views frequently appear in media worldwide (BBC TV & Radio, Voice of America, Financial Times, Daily Telegraph, Washington Post, Politico and others).

He is currently involved in the European AI Office’s Plenary drafting the Code of Practice for General-Purpose Artificial Intelligence and is a member of the EU AI Working Group for AI Systemic Risks. He also participates in the Department for Science, Innovation & Technology Focus Group on an independent UK AI Safety Office and is a Reviewer of the Draft UNESCO Guidelines for the Use of AI Systems in Courts and Tribunals.

The lecture will be followed by refreshments.

This talk is open to all members of the University, upon prior registration.

This talk is part of the Centre for Research in Contemporary Problems series.


© 2006-2025 Talks.cam, University of Cambridge.