Programme Update
Late last year, we provided an update that we were redirecting our efforts away from Technical Area 2 (TA2) as originally planned and doubling down on expanding the ambition and scope of Technical Area 1 (TA1). Since then, we have been reviewing how the rest of the programme can best serve this new objective. After careful consideration, we have decided not to proceed with the TA3 Phase 2 solicitation as planned, and will instead pivot our application focus to cybersecurity.
Why we are pivoting
When we launched TA3, our objective was to deploy AI solutions in real-world cyber-physical use cases with quantitative safety guarantees, demonstrating a deployment API that would greatly reduce AI risks relative to a general-purpose chat API. Since then, however, frontier AI capabilities have advanced faster than we originally projected, falsifying our working hypothesis that high-risk dual-use AI would not be made openly available.
Given this acceleration, we believe we can deliver the greatest impact by:
- Broadening the TA1 toolkit – Expanding and strengthening the core technical safety tools currently under development.
- Prioritising cybersecurity – Concentrating our demonstration efforts where the need for robust, guaranteed AI tooling is both most acute and most time-sensitive.
This shift ensures that TA3 remains aligned with the evolving risk landscape and focused on the areas where ARIA can add the most value.
What this means for current projects
- TA3 Phase 2 Solicitation – We will not be proceeding with the planned Phase 2 call for cyber-physical applications.
- Current TA3 Phase 1 Projects – All existing TA3 Phase 1 projects will continue through to their scheduled completion.
- Potential Extensions – We are considering extensions for a small number of TA3 Phase 1 teams that have demonstrated exceptional progress. We will contact those teams directly in due course.
Our goal
To usher in a new era for AI safety, allowing us to unlock the full economic and social benefits of advanced AI systems while minimising risks.
As AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters. But only if it's deployed wisely. Current techniques for mitigating the risks of advanced AI systems have serious limitations and cannot, on empirical grounds alone, be relied upon to ensure safety. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because such approaches are widely considered impossible or impractical.
By combining scientific world models and mathematical proofs, we aim to construct a ‘gatekeeper’: an AI system tasked with understanding and reducing the risks posed by other AI agents. In doing so, we’ll develop quantitative safety guarantees for AI of the kind we have come to expect in nuclear power and passenger aviation.
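To make the gatekeeper idea concrete, here is a minimal illustrative sketch. Everything in it is hypothetical and invented for illustration — the names, the toy thermostat dynamics, and the "certificate" format are not part of the programme or any ARIA API. It shows only the general shape: an action is admitted when a checkable certificate about its worst-case behaviour is re-verified against a formal world model and a safety specification.

```python
# Illustrative sketch only — all names, dynamics, and the certificate
# format are hypothetical, not ARIA's design or API.
from dataclasses import dataclass


@dataclass(frozen=True)
class Certificate:
    """A claimed bound on the worst-case temperature reached (hypothetical)."""
    claimed_max_temp: float


def world_model(temp: float, heater_power: float) -> float:
    """Toy formal world model: next temperature under simple linear dynamics."""
    return 0.9 * temp + 2.0 * heater_power


def check_certificate(temp: float, action: float, cert: Certificate,
                      horizon: int = 10) -> bool:
    """Re-derive the worst-case bound from the world model; accept the
    certificate only if its claim actually holds over the horizon."""
    worst = temp
    for _ in range(horizon):
        temp = world_model(temp, action)
        worst = max(worst, temp)
    return worst <= cert.claimed_max_temp


def gatekeeper(temp: float, action: float, cert: Certificate,
               safe_limit: float = 100.0) -> bool:
    """Admit the proposed action only if the certified bound respects the
    safety specification AND the certificate checks against the model."""
    return cert.claimed_max_temp <= safe_limit and check_certificate(temp, action, cert)


# A gentle heater setting is admitted; an aggressive one is rejected
# because the model shows its claimed bound is violated.
print(gatekeeper(20.0, 1.0, Certificate(claimed_max_temp=25.0)))   # True
print(gatekeeper(20.0, 60.0, Certificate(claimed_max_temp=25.0)))  # False
```

In a real system the certificate would be a machine-checkable mathematical proof rather than a numeric claim replayed by simulation, but the division of labour is the same: the untrusted agent proposes, and only the small, trusted checker decides.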
Technical areas
This programme is split into three technical areas (TAs), each with its own distinct objectives.
Scaffolding
We can build an extensible, interoperable language and platform to maintain formal world models and specifications, and to check proof certificates.
Machine Learning
We can use frontier AI to help domain experts build best-in-class mathematical models of real-world complex dynamics, and to train verifiable autonomous systems.
Real-World Applications
A safeguarded autonomous AI system with quantitative safety guarantees can unlock significant economic value when deployed in a critical cyber-physical operating context.
Meet the programme team
Our Programme Directors are supported by a core team that provides a blend of operational coordination and highly specialised technical expertise.

David 'davidad' Dalrymple
Programme Director
davidad is a software engineer with a multidisciplinary scientific background. He’s spent five years formulating a vision for how mathematical approaches could guarantee reliable and trustworthy AI. Before joining ARIA, davidad co-invented the top-40 cryptocurrency Filecoin and worked as a Senior Software Engineer at Twitter.

Yasir Bakki
Programme Specialist
Yasir is an experienced programme manager whose background spans the aviation, tech, emergency services, and defence sectors. Before joining ARIA, he led transformation efforts at Babcock for the London Fire Brigade’s fleet and a global implementation programme at a tech start-up. He supports ARIA as an Operating Partner from Pace.

Nora Ammann
Technical Specialist
Nora is an interdisciplinary researcher with expertise in complex systems, philosophy of science, political theory and AI. She focuses on the development of transformative AI and understanding intelligent behaviour in natural, social, or artificial systems. Before ARIA, she co-founded and led PIBBSS, a research initiative exploring interdisciplinary approaches to AI risk, governance and safety.
Featured insights

Can AI be used to control safety critical systems?
Fortune
The U.K.'s Advanced Research and Invention Agency (ARIA) is funding a project to use frontier AI models to design and test new control algorithms for safety-critical systems.