
Safeguarded AI

Backed by £59m, this programme aims to develop the safety standards we need for transformational AI.

'Late' TA2 Phase 1 applications

ARIA is launching a multi-phased solicitation for Technical Area 2 (TA2) to support the development of a general-purpose Safeguarded AI workflow. The programme aims to demonstrate that frontier AI techniques can be harnessed to create AI systems with verifiable safety guarantees. In TA2, we will award £18M to a non-profit entity to develop critical machine learning capabilities, requiring strong organisational governance and security standards.

Phase 1, backed by £1M, will fund up to 5 teams to spend 3.5 months developing full Phase 2 proposals. Phase 2 — which will open on 25 June 2025 — will fund a single group, with £18M, to deliver the research agenda. 

TA2 will explore leveraging securely-boxed AI to train autonomous control systems that can be verified against mathematical models, improving performance and robustness. The workflow will involve forking and fine-tuning mainstream pre-trained frontier AI models to create verifiably safeguarded AI solutions. Key objectives of TA2 include: 

  • World-modelling ML (TA2(a)): Develop formal representations of human knowledge, enabling explicit reasoning and uncertainty accounting, to create auditable and predictive mathematical models. 

  • Coherent reasoning ML (TA2(b)): Implement efficient reasoning methods, such as amortised inference or neural network-guided algorithms, to derive reliable conclusions from world models.

  • Safety verification ML (TA2(c)): Create mechanisms to verify the safety of actions and plans against safety specifications, using techniques like proof certificates or probabilistic bounds.

  • Policy training (TA2(d)): Train agent policies that balance task performance with finite-horizon safety guarantees, including backup policies for safety failure scenarios (a toy illustration of these ideas follows this list).
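The bullet points above describe objectives rather than methods, but a toy example may help convey what a "finite-horizon safety guarantee with a probabilistic bound" could look like in practice. The sketch below is our own illustration, not drawn from the call: the toy dynamics, the Hoeffding-style bound, and all names are assumptions. It rolls out a candidate control policy in a trivial world model and converts the observed failure rate into a high-confidence upper bound on the probability of violating the safety specification within the horizon:

    import math
    import random

    # Toy "world model" (illustrative assumption): a 1-D system whose state
    # drifts under noise. The safety specification: the state must stay
    # inside [-limit, limit] at every step of the finite horizon.

    def rollout_is_safe(policy, horizon=50, limit=5.0, seed=None):
        rng = random.Random(seed)
        x = 0.0
        for _ in range(horizon):
            x += policy(x) + rng.gauss(0.0, 0.5)  # control action + disturbance
            if abs(x) > limit:
                return False  # specification violated
        return True

    def failure_upper_bound(policy, n=10_000, delta=1e-6):
        """Monte Carlo failure estimate plus a Hoeffding-style slack term,
        giving an upper bound on P(unsafe) that holds with probability
        at least 1 - delta."""
        failures = sum(not rollout_is_safe(policy, seed=i) for i in range(n))
        p_hat = failures / n
        slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
        return p_hat + slack

    # Hypothetical candidate policies: a proportional controller as the
    # primary policy, and a harder-braking backup for when verification fails.
    primary = lambda x: -0.5 * x
    backup = lambda x: -0.9 * x

    bound = failure_upper_bound(primary)
    print(f"P(unsafe within horizon) <= {bound:.4f} (confidence 1 - 1e-6)")

A verifier in this style would accept the primary policy only if the certified bound falls below the specification's threshold, otherwise switching to the backup policy; this mirrors, in miniature, the interplay between TA2(c) and TA2(d) described above.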

To make TA2 funding as accessible as possible to strong applicant teams, we will accept shortened Phase 1 proposals from applicants who missed the Phase 1 application deadline (30 April 2025) until 17 August 2025. These proposals will not be eligible for Phase 1 funding, but will be reviewed against the same Phase 1 evaluation criteria. If successful, these teams will be invited to meet with the Safeguarded AI Programme team, including the Scientific Director, to discuss their thinking.


Apply by 17 August 2025:

To apply, follow the same instructions as for Phase 1 (see the call for proposals), but limit the submission to 3 pages instead of 4. Email clarifications@aria.org.uk to receive an individual application link.


Read the call for proposals

Watch the solicitation presentation

Previous funding calls in this programme

The Creator experience

What you can expect as an ARIA R&D creator.

Learn more

Applicant guidance

Discover the process of applying for ARIA funding and find key resources.

Learn more