AI Governance and Regulation: Comparing the EU, US, and China's Models and Global Initiatives
- 👤 Speaker: Dr. Nicola Palladino, University of Salerno
- 📅 Date & Time: Wednesday 09 July 2025, 09:00 - 10:30
- 📍 Venue: Computer Laboratory, William Gates Building, Room FW26
Abstract
AI governance is increasingly shaped by a complex interplay of normative approaches. While high-level principles such as fairness, transparency, accountability, and safety are widely recognized across governance frameworks, their implementation varies significantly. The growing geopolitical significance of AI has driven governments to develop distinct strategies and policies, giving rise to three main models of AI governance. The Neoliberal Model, championed by the United States, prioritizes market-driven innovation, industry self-regulation, and minimal government intervention. Digital Sovereignty, exemplified by China, reflects a state-controlled and security-driven approach that emphasizes data localization and algorithmic transparency tailored to government priorities, particularly in information control and social stability. The European Union's Digital Constitutionalism model embeds fundamental rights and democratic oversight into AI regulation, aiming for human-centric, trustworthy, and accountable AI governance. However, the boundaries between these governance paradigms are increasingly blurring. Under the Biden administration, the U.S. briefly moved closer to the EU model before reverting to a neoliberal stance, leveraging Big Tech firms as proxies of power and security actors. The EU struggles to balance its ambition to lead in Trustworthy AI with competitiveness and security concerns. China, while maintaining strict state control, has introduced selective innovation incentives and consumer rights protections with distinct "Chinese characteristics." Rather than fostering a cross-fertilization of these models, these shifting boundaries appear to reflect escalating geopolitical tensions, making international consensus on AI governance increasingly difficult to achieve.
Series: This talk is part of the Foundation AI series.