
Mathematics for Safe AI

We don’t yet have proven technical solutions for ensuring that powerful AI systems interact as intended with real-world systems and populations. A combination of scientific world models and mathematical proofs may be the answer to ensuring AI provides transformational benefit without harm.

What if we could use advanced AI to drastically improve our ability to model and control everything from the electricity grid to our immune systems?

Defined by our Programme Directors (PDs), opportunity spaces are areas we believe are likely to yield breakthroughs.

In Mathematics for Safe AI, we are exploring how to leverage mathematics and scientific modelling to advance transformative AI and provide a basis for provable safety.

Core beliefs

The core beliefs that underpin this opportunity space:

1. Future AI systems will be powerful enough to transformatively enhance or threaten human civilisation at a global scale → we need as-yet-unproven technologies to certify that cyber-physical AI systems will deliver intended benefits while avoiding harms.

2. Given the potential of AI systems to anticipate and exploit world-states beyond human experience or comprehension, traditional methods of empirical testing will be insufficiently reliable for certification → mathematical proof offers a critical but underexplored foundation for robust verification of AI (see the illustrative sketch after this list).

3. It will eventually be possible to build mathematically robust, human-auditable models that comprehensively capture the physical phenomena and social affordances that underpin human flourishing → we should begin developing such world models today to advance transformative AI and provide a basis for provable safety.
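To make belief 2 concrete, here is a toy sketch of what proof-based verification looks like in practice, written in Lean 4. Everything in it (the names `safeBound` and `clamp`, and the property itself) is our own illustration rather than anything from ARIA's programmes; the point is the shape of the guarantee, which holds for every possible input rather than only the inputs a test suite happened to sample.

```lean
-- Illustrative toy example only: a controller output clamped into a safe
-- range, with a machine-checked theorem that the bound cannot be violated.

def safeBound : Int := 100

-- Clamp a raw controller output into the range [-safeBound, safeBound].
def clamp (raw : Int) : Int := max (-safeBound) (min raw safeBound)

-- The safety property, checked by Lean's proof kernel rather than by
-- empirical testing: no input whatsoever can drive the output out of range.
theorem clamp_safe (raw : Int) :
    -safeBound ≤ clamp raw ∧ clamp raw ≤ safeBound := by
  unfold clamp safeBound
  constructor <;> omega
```

A test suite checks some inputs; the theorem quantifies over all of them. That difference is exactly the reliability gap belief 2 points at.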

Observations

Some signposts as to why we see this area as important, underserved, and ripe.

The observations themselves are presented as a diagram, available to download as a PDF or in an accessible version.


Programme: Safeguarded AI

To build a programme within an opportunity space, our Programme Directors lead the review, selection, and funding of a portfolio of projects.

Backed by £59m, this programme looks to combine scientific world models and mathematical proofs: ARIA is looking to construct a ‘gatekeeper’, an AI system designed to understand and reduce the risks of other AI agents. If successful, we’ll unlock the full economic and social benefits of advanced AI systems while minimising risks.
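As a rough sketch of the pattern being described (a hypothetical illustration in Python; none of these names are ARIA's), the essential move is that an agent's proposed action is executed only when an independent verifier accepts a safety certificate for it, and the system fails closed otherwise:

```python
# Hypothetical sketch of a "gatekeeper" decision cycle. All names here
# (Action, Certificate, gatekeep, ...) are illustrative, not ARIA's design.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Action:
    name: str
    payload: dict


@dataclass
class Certificate:
    """Evidence that an action keeps the modelled system within its safety spec."""
    action: Action
    proof_obligations: list[str]  # in a real system: machine-checkable proofs


def gatekeep(
    propose: Callable[[], tuple[Action, Optional[Certificate]]],
    verify: Callable[[Certificate], bool],
    execute: Callable[[Action], None],
    fallback: Callable[[], None],
) -> None:
    """Run one decision cycle: execute only actions with a verified certificate."""
    action, cert = propose()
    if cert is not None and cert.action == action and verify(cert):
        execute(action)  # certified safe relative to the world model
    else:
        fallback()       # refuse, failing closed rather than open

```

The load-bearing asymmetry in this pattern is that the proposer can be an arbitrarily capable, untrusted system, while trust is concentrated in the far simpler verification step, mirroring how checking a mathematical proof is much easier than finding one.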


Opportunity seeds

Outside the scope of programmes, with budgets of up to £500k, opportunity seeds support ambitious research aligned to our opportunity spaces. 

Earlier this year we launched a call for bold ideas within the Mathematics for Safe AI opportunity space. This seed funding call is now closed. Selected R&D Creators will be announced in the coming weeks.
