Reflection on IASEAI 2026 at UNESCO House in Paris
- Maria Schulz

- March 1
- 3 min read
What a week! IASEAI 2026 in Paris was one of those experiences that lingers. With 1,200 attendees, from pioneers like Stuart Russell, Yoshua Bengio, and Geoffrey Hinton to researchers from Oxford, Cambridge, Harvard, and my former university in Edinburgh, the conference was a hub of intense discussion. As someone who spent years in AI regulation and governance and served on the review committee for this conference, I knew the topics under discussion would not always be on the bright side of affairs. But being there in person brought a different level of energy. While all the papers are worth reading, here I would like to suggest a reading list that stuck in my head after the discussions.
What was discussed and what to read
General discussion about safety and AI
Key takeaways: Currently, no AI can be considered safe in a broader sense due to its complexity, dependencies, and the multitude of unresolved issues. The debates around AI safety as a separate field of study began to take shape in the 2010s. During this time, initial guardrails were identified to mitigate risks. However, recent developments in AI are increasingly bypassing these guardrails, rendering many of them ineffective. As a result, it was repeatedly emphasized throughout the conference that AI safety, as a concept and practice, has largely failed to keep pace with technological advancement.
What to read: International AI Safety Report 2026
Specific discussions
Sovereignty, geopolitics, and governance: The debates on sovereignty, geopolitics, and governance were particularly compelling. A key takeaway was the obvious message that Europe lacks sovereignty in AI and its underlying infrastructure. Instead, the continent remains heavily dependent on the USA and, to a lesser extent, China. Private actors have also emerged as quasi-sovereign entities, wielding enough influence to shape the international geopolitical stage. While these debates are not entirely new, the data presented in one study painted a particularly sobering picture of the current state of affairs.
What to read: Corporate Quasi-Sovereignty: Big Tech and the Politics of Sovereign Authority in the Digital Age; How Sovereign Is Sovereign Compute?; AI and the Social Contract
Regulation struggles to keep up: Regulation simply does not happen fast enough to address the rapid evolution of AI. The race toward superintelligence is widening the gaps in safety and regulatory frameworks, leaving critical vulnerabilities unaddressed. There have also been discussions around computational legal theory and regulation by design.
What to read: AI & Human Rights Index; An International Agreement to Prevent the Premature Creation of Artificial Superintelligence
Major discussions around alignment: Alignment remains one of the most contentious and urgent topics in AI safety. The conference highlighted ongoing challenges in ensuring that AI systems behave in ways that align with human values, intentions, and societal norms. Many sessions explored whether current approaches to alignment are sufficient, or whether entirely new approaches are needed to address the risks posed by increasingly capable systems.
What to read: AI Alignment Strategies from a Risk Perspective: Independent Alignment Mechanisms or Shared Failures?
Then there were the sessions on applied, actionable ethics and guardrails, which took things in a more philosophical direction. How do we define trust in AI? What guardrails do we need to implement, and how do we embed intangible ethics into something tangible?
What to read: Why Automate This? Exploring Correlations between Desire for Robotic Automation, Invested Time and Well-Being; Guarding the Guardrails: A Taxonomy-Driven Approach to Jailbreak Detection; Democratic Or Authoritarian? Probing A New Dimension Of Political Biases In Large Language Models; Human Amplification Should Replace Intelligent Agents as the Primary Goal of AI Research
But, of course, there were more panels and more conversations: about human-machine interaction, certifications, and audits. Leaving Paris, I felt a real sense of momentum. The questions we’re facing now aren’t abstract. They’re immediate, and they demand our attention and immediate action.