Backed by £59m, this programme aims to develop the safety standards we need for transformational AI
As AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters. But only if it’s deployed wisely.
Current techniques for mitigating the risks of advanced AI systems have serious limitations and cannot, on empirical evidence alone, be relied upon to ensure safety. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because such approaches are considered impossible or impractical.
By combining scientific world models and mathematical proofs, we aim to construct a ‘gatekeeper’: an AI system tasked with understanding and reducing the risks posed by other AI agents.
In doing so, we’ll develop quantitative safety guarantees for AI of the kind we have come to expect for nuclear power and passenger aviation.
Our goal: to usher in a new era for AI safety, allowing us to unlock the full economic and social benefits of advanced AI systems while minimising risks.
Additional context for this programme
Applicant resources
About ARIA funding
If you require accessible documents, please contact clarifications@aria.org.uk
Building an extendable, interoperable language and platform to maintain real-world models/specifications + check proof certificates
Using frontier AI to help domain experts build best-in-class mathematical models of real-world complex dynamics + leverage frontier AI to train autonomous systems
Unlocking significant economic value with quantitative safety guarantees by deploying a gatekeeper-safeguarded autonomous AI system in a critical cyber-physical operating context
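To make the ‘gatekeeper’ idea and the proof-certificate check above concrete, here is a minimal illustrative sketch in Python. All of the names and the toy dynamics are hypothetical, invented purely for exposition rather than drawn from the programme’s actual design: an autonomous agent proposes an action, and the gatekeeper admits it only if a certificate can be checked showing that the resulting state trajectory, under the world model, satisfies the safety specification.

```python
# Illustrative sketch only: hypothetical names and toy dynamics,
# not the programme's actual design or API.
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class WorldModel:
    """Stand-in for a scientific model of real-world dynamics."""
    step: Callable[[float, float], float]  # next_state = step(state, action)


@dataclass
class Specification:
    """A safety property over state trajectories."""
    holds: Callable[[Sequence[float]], bool]


@dataclass
class ProofCertificate:
    """In the programme this would be a formally checkable proof; here it is
    simply a claimed trajectory that the gatekeeper re-checks."""
    trajectory: List[float]


def simulate(model: WorldModel, state: float, action: float, horizon: int) -> List[float]:
    """Roll the world model forward to produce a state trajectory."""
    trajectory = [state]
    for _ in range(horizon):
        state = model.step(state, action)
        trajectory.append(state)
    return trajectory


def gatekeeper(model: WorldModel, spec: Specification, certificate: ProofCertificate,
               state: float, action: float, horizon: int = 10) -> bool:
    """Admit the proposed action only if the certificate checks out against
    the world model and the safety specification."""
    trajectory = simulate(model, state, action, horizon)
    matches = all(abs(a - b) < 1e-9 for a, b in zip(trajectory, certificate.trajectory))
    return matches and spec.holds(trajectory)


if __name__ == "__main__":
    model = WorldModel(step=lambda s, a: 0.9 * s + a)                   # toy dynamics
    spec = Specification(holds=lambda traj: all(abs(s) <= 1.0 for s in traj))
    state, proposed_action = 0.5, 0.05
    cert = ProofCertificate(trajectory=simulate(model, state, proposed_action, 10))
    print("action admitted:", gatekeeper(model, spec, cert, state, proposed_action))
```

In the programme itself the certificate would be a formal mathematical proof checked by a verifier, not a re-simulated trajectory; the sketch only conveys the shape of the workflow.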
The first solicitation for this programme focuses on TA1.1 Theory. We are looking for R&D Creators – individuals and teams that ARIA will fund and support – to research and construct computationally practicable mathematical representations and formal semantics to support world-models, specifications about state-trajectories, neural systems, proofs that neural outputs validate specifications, and “version control” (incremental updates or “patches”) thereof.
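As a purely illustrative aid, the sketch below shows one hypothetical way the “version control” aspect might fit together: world models carry explicit versions, proof certificates are pinned to the version they were checked against, and an incremental patch to the model flags which certificates need re-verification. Every name here is invented for exposition and is not the formalism TA1.1 is asked to produce.

```python
# Illustrative sketch only: a hypothetical notion of "version control" for
# world-models and proof certificates, not the programme's intended formalism.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class VersionedWorldModel:
    """A world model whose revisions are tracked explicitly."""
    version: int
    step: Callable[[float, float], float]


@dataclass
class Certificate:
    """A proof artefact pinned to the model version it was checked against."""
    model_version: int
    claim: str


@dataclass
class ModelStore:
    """Toy store that applies incremental patches and flags stale certificates."""
    model: VersionedWorldModel
    certificates: List[Certificate] = field(default_factory=list)

    def patch(self, new_step: Callable[[float, float], float]) -> None:
        # An incremental update ("patch") produces a new model version;
        # certificates proved against an older version must be re-checked.
        self.model = VersionedWorldModel(self.model.version + 1, new_step)

    def stale_certificates(self) -> List[Certificate]:
        return [c for c in self.certificates if c.model_version != self.model.version]


if __name__ == "__main__":
    store = ModelStore(model=VersionedWorldModel(version=1, step=lambda s, a: 0.9 * s + a))
    store.certificates.append(Certificate(model_version=1, claim="|state| <= 1 for 10 steps"))
    store.patch(new_step=lambda s, a: 0.95 * s + a)   # refine the dynamics model
    print("certificates needing re-verification:", store.stale_certificates())
```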
Applicants who are shortlisted following full proposal review will be invited to meet with the Programme Director to discuss any critical questions or concerns prior to final selection.
Successful/unsuccessful applicants for TA1.1 will be notified on 10 July 2024.
Safeguarded AI has been designed and overseen by Programme Director David ‘davidad’ Dalrymple with feedback from the R&D community, as part of the opportunity space Mathematics for Safe AI.
davidad is a software engineer with a multidisciplinary scientific background. He’s spent five years formulating a vision for how mathematical approaches could guarantee reliable and trustworthy AI. Before joining ARIA, davidad co-invented the top-40 cryptocurrency Filecoin and worked as a Senior Software Engineer at Twitter.
Stay tuned for our upcoming funding call for TA1.4 Sociotechnical integration.
Applications open soon. For now, learn more about TA1.4 in the programme thesis.
The first solicitation for this programme focused on TA1.1 Theory, where we sought R&D Creators – individuals and teams that ARIA will fund and support – to research and construct computationally practicable mathematical representations and formal semantics to support world-models, specifications about state-trajectories, neural systems, proofs that neural outputs validate specifications, and “version control” (incremental updates or “patches”) thereof.
The second funding call sought individuals or organisations interested in using our gatekeeper AI to build safeguarded products for domain-specific applications, such as optimising energy networks, clinical trials, or telecommunications networks. Safeguarded AI’s success will depend on showing that our gatekeeper AI actually works in a safety-critical domain. The research teams selected for TA3 will work with other programme teams, global AI experts, academics, and entrepreneurs to lay the groundwork for deploying Safeguarded AI in one or more areas.
In this first phase of TA3 funding, we intend to allocate an initial £5.4m to elicit requirements, source datasets, and establish evaluation benchmarks for relevant cyber-physical domains.
Applications are now closed. Successful/unsuccessful applicants will be notified on 18 November 2024.