
Safeguarded AI
Backed by £59m, this programme sits within the Mathematics for Safe AI opportunity space and aims to develop the safety standards we need for transformational AI.
Important Update: TA2 Funding Call
As a Programme team, our responsibility is to ensure our funding has the highest possible impact, which means continually re-evaluating our strategy against a rapidly changing technological landscape. In that spirit, we're announcing a significant pivot for our programme: we will not pursue Technical Area 2 (TA2) as originally planned, and will instead double down on expanding the ambition and scope of Technical Area 1 (TA1). Our conviction in Safeguarded AI's vision is unchanged. We continue to believe it's both critically important and possible.
When we designed this programme, the world looked different. Today, the pace of progress in frontier AI models has fundamentally altered the path to our goals. We now expect that the intended technical objectives of TA2 will be attainable as a side effect of this progress, without requiring a dedicated R&D organisation. Instead of investing in creating specialised AI systems that can use our tools, it will be more catalytic to broaden the scope and power of the TA1 toolkit itself, making it a foundational component for the next generation of AI.
Making a pivot like this is a difficult decision, especially given the hard work many have invested in positioning for TA2. However, our job is to steer the programme towards the greatest possible long-term impact, and we believe this change allows us to do exactly that.
Thank you for your interest in our work as we embark on this exciting new chapter.
The Safeguarded AI Programme team
Our goal
To usher in a new era for AI safety, allowing us to unlock the full economic and social benefits of advanced AI systems while minimising risks.
As AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters. But only if it's deployed wisely. Current techniques for mitigating the risks of advanced AI systems have serious limitations: their effectiveness is established only empirically, so they can't be relied upon to ensure safety. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because such guarantees are widely considered impossible or impractical.
By combining scientific world models with mathematical proofs, we aim to construct a 'gatekeeper': an AI system tasked with understanding and reducing the risks of other AI agents. In doing so, we'll develop quantitative safety guarantees for AI of the kind we have come to expect for nuclear power and passenger aviation.
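To make "quantitative safety guarantee" concrete, one shape such a guarantee could take (an illustrative sketch in our own notation, not the programme's) is a provable bound on the probability of unsafe behaviour. Given a formal world model M, an AI policy π, a safety property φ, and a risk budget ε:

```latex
% Illustrative sketch only: this notation is an assumption of the example,
% not taken from the Safeguarded AI programme's documents.
\Pr_{\tau \sim M^{\pi}}\bigl[\tau \not\models \varphi\bigr] \le \varepsilon
% The probability that a trajectory tau, drawn from world model M evolving
% under policy pi, violates the safety property phi is at most epsilon.
```

A proof certificate for a bound of this kind could then be checked mechanically, which is the role the 'gatekeeper' plays.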
Technical areas
This programme is split into three technical areas (TAs), each with its own distinct objectives; a sketch of how they fit together follows below.
TA1: Scaffolding
We can build an extendable, interoperable language and platform to maintain formal world models and specifications, and check proof certificates.
TA2: Machine Learning
We can use frontier AI to help domain experts build best-in-class mathematical models of real-world complex dynamics, and to train verifiable autonomous systems.
TA3: Real-World Applications
A safeguarded autonomous AI system with quantitative safety guarantees can unlock significant economic value when deployed in a critical cyber-physical operating context.
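As a minimal sketch of how the three areas could compose, consider the following Python outline. Every name and interface here is a hypothetical stand-in of ours, not the programme's actual tooling: TA1 supplies the formal world model, safety specification, and proof checker; TA2 produces policies accompanied by proof certificates; TA3 is the deployment decision, which fails closed.

```python
# A minimal, illustrative sketch (not the programme's actual tooling) of how
# the three technical areas could compose. Every name below is a hypothetical
# stand-in.
from dataclasses import dataclass


@dataclass(frozen=True)
class WorldModel:
    """Formal mathematical model of the operating environment (TA1)."""
    description: str


@dataclass(frozen=True)
class SafetySpec:
    """Machine-checkable safety property with a quantitative risk bound (TA1)."""
    property_name: str
    max_failure_probability: float  # e.g. 1e-9 per hour of operation


@dataclass(frozen=True)
class ProofCertificate:
    """Evidence that a policy satisfies the spec under the world model (TA2)."""
    payload: bytes


def check_certificate(model: WorldModel, spec: SafetySpec,
                      cert: ProofCertificate) -> bool:
    """Stand-in for a mechanical proof checker, the trusted core of TA1.

    A real checker would verify the certificate independently of the AI
    system that produced it; this placeholder conservatively rejects all
    certificates rather than pretending to verify anything.
    """
    return False


def deploy_if_safe(model: WorldModel, spec: SafetySpec,
                   cert: ProofCertificate) -> bool:
    """The 'gatekeeper' decision (TA3): deploy only with a verified proof."""
    return check_certificate(model, spec, cert)
```

The point of this shape of system is that only the small, mechanical certificate check needs to be trusted; the AI systems that propose policies and proofs do not.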
Meet the programme team
Our Programme Directors are supported by a core team that provides a blend of operational coordination and highly specialised technical expertise.

David 'davidad' Dalrymple
Programme Director
davidad is a software engineer with a multidisciplinary scientific background. He’s spent five years formulating a vision for how mathematical approaches could guarantee reliable and trustworthy AI. Before joining ARIA, davidad co-invented the top-40 cryptocurrency Filecoin and worked as a Senior Software Engineer at Twitter.

Yasir Bakki
Programme Specialist
Yasir is an experienced programme manager whose background spans the aviation, tech, emergency services, and defence sectors. Before joining ARIA, he led transformation efforts at Babcock for the London Fire Brigade’s fleet and a global implementation programme at a tech start-up. He supports ARIA as an Operating Partner from Pace.

Nora Ammann
Technical Specialist
Nora is an interdisciplinary researcher with expertise in complex systems, philosophy of science, political theory and AI. She focuses on the development of transformative AI and understanding intelligent behaviour in natural, social, or artificial systems. Before joining ARIA, she co-founded and led PIBBSS, a research initiative exploring interdisciplinary approaches to AI risk, governance and safety.
Featured insights

Can AI be used to control safety critical systems?
Fortune
The U.K.'s Advanced Research and Invention Agency (ARIA) is funding a project to use frontier AI models to design and test new control algorithms for safety-critical systems.