
Safeguarded AI

Backed by £59m, this programme sits within the Mathematics for Safe AI opportunity space and is building a mathematical assurance toolkit that lets fleets of AI agents produce formally verified artefacts at unprecedented speed and scale.

Leadership update

After launching the Safeguarded AI programme and establishing its technical foundations, David ‘davidad’ Dalrymple has decided to transition to a new role as Technical Advisor. We are delighted to announce that Nora Ammann – who has helped run the programme as Technical Specialist since before its launch – is stepping into the role of Programme Director. Having effectively run Safeguarded AI alongside davidad, Nora has deep technical context, established relationships with our Creators, and a clear vision for the programme’s next phase.

There are no changes to TA1 Creator contracts, funding, or objectives. Work on the toolsuite continues as planned. The core mission – building a mathematical assurance toolkit for AI – remains the same. An updated thesis will be published to reflect the programme’s shift toward application, with initial efforts likely focusing on cybersecurity and microelectronics.

Alongside this shift, the programme will mature the toolsuite into open, usable infrastructure and expand the team, including hiring a new Technical Specialist. We will also continue to explore opportunities to extend the programme’s scope over time.

Kathleen Fisher, Nora Ammann and David ‘davidad’ Dalrymple

Programme progress and ambition

Nora and davidad sat down with our CEO, Kathleen Fisher, to reflect on the programme’s progress under davidad’s leadership and its ambitions as it transitions to the next phase.

Watch now


Our goal

This programme aims to usher in a new era for AI safety, allowing us to unlock the full economic and social benefits of advanced AI systems while minimising risks.

As AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters – but only if it’s deployed wisely. Current techniques for mitigating the risks of advanced AI systems have serious limitations: they rest on empirical evaluation and cannot be relied upon to ensure safety. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because they’re considered impossible or impractical.

By combining scientific world models and mathematical proofs, we aim to construct a ‘gatekeeper’: an AI system tasked with understanding and reducing the risks posed by other AI agents. In doing so, we’ll develop quantitative safety guarantees for AI of the kind we have come to expect for nuclear power and passenger aviation.

Read the programme thesis

Read the accessible version of the programme thesis

Technical areas

This programme is split into three technical areas (TAs), each with its own distinct objectives.

TA1

Scaffolding

We can build an extendable, interoperable language and platform to maintain formal world models and specifications, and check proof certificates.

TA2

Machine Learning

We can use frontier AI to help domain experts build best-in-class mathematical models of real-world complex dynamics, and train verifiable autonomous systems.

TA3

Real-World Applications

A safeguarded autonomous AI system with quantitative safety guarantees can unlock significant economic value when deployed in a critical cyber-physical operating context.

Meet the programme team

Our Programme Directors are supported by a core team that provides a blend of operational coordination and highly specialised technical expertise.

A photo of Nora Ammann smiling at the camera against a white background.

Nora Ammann

Programme Director

Nora has spent nearly a decade working on making transformative AI go well. Her work spans AI assurance – including hardware guarantees, formal verification, and the coordination needed to establish safety claims between untrusting parties. She previously founded Principles of Intelligence, bringing together AI researchers, neuroscientists, and physicists to study the foundations of intelligent behaviour.

A photo of Yasir Bakki smiling against a grey background.

Yasir Bakki

Programme Specialist

Yasir is an experienced programme manager whose background spans the aviation, tech, emergency services, and defence sectors. Before joining ARIA, he led transformation efforts at Babcock for the London Fire Brigade’s fleet and a global implementation programme at a tech start-up. He supports ARIA as an Operating Partner from Pace.

A photo of David 'davidad' Dalrymple.

David 'davidad' Dalrymple

Technical Advisor

davidad is a software engineer with a multidisciplinary scientific background. He’s spent five years formulating a vision for how mathematical approaches could guarantee reliable and trustworthy AI. Before joining ARIA, davidad co-invented the top-40 cryptocurrency Filecoin and worked as a Senior Software Engineer at Twitter.

