Safeguarded AI

Backed by £59m, this programme sits within the Mathematics for Safe AI opportunity space and aims to develop the safety standards we need for transformational AI.

 

TA1 Scaffolding

We can build an extendable, interoperable language and platform to maintain formal world models and specifications, and check proof certificates.

 

Meet the TA1.1 (Theory) Creators

Our TA1.1 Creators will work to unlock novel frontiers for representing different sources of uncertainty, with the ultimate goal of using these semantics to formally model and soundly over-approximate a wide range of application domains and safety specifications, against which Safeguarded AI systems will later be verified. The intended output is a single artefact: a dissertation-length definition of these languages. With these mathematical representations and formal semantics in place, the programme's other TAs will produce world models, safety specifications, and proof certificates.

Our TA1.1 Creators have been tasked with defining the “syntax” (algebraic construction operators and version-controllable, serialisable data structures) and formal semantics of language(s) that can be used by teams of humans – and, later, AI systems – to define “world models”, probabilistic specifications, neural network controllers, and proof certificates. These proof certificates present efficiently checkable arguments verifying that a controller, composed with a world model, satisfies its specification.
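The verification relationship described above – a controller composed with a world model satisfying a specification – can be sketched as a Lean-style formalisation. This is an illustrative sketch only: all names are hypothetical, the world model is deterministic rather than probabilistic, and the specification is simplified to a state invariant; the programme's actual languages are what TA1.1 will define.

```lean
-- Illustrative sketch only: names are hypothetical, not TA1.1's languages.
-- A world model assigns successor states to state–action pairs
-- (a deterministic placeholder for a probabilistic kernel);
-- a controller chooses an action in each state.
structure WorldModel (S A : Type) where
  step : S → A → S

structure Controller (S A : Type) where
  act : S → A

-- Composing a controller with a world model closes the loop.
def closedLoop {S A : Type} (w : WorldModel S A) (c : Controller S A) : S → S :=
  fun s => w.step s (c.act s)

-- A simplified safety specification: an invariant predicate on states.
-- A proof of this Prop plays the role of a proof certificate.
def Satisfies {S A : Type} (w : WorldModel S A) (c : Controller S A)
    (inv : S → Prop) : Prop :=
  ∀ s, inv s → inv (closedLoop w c s)
```

In this toy framing, a proof certificate is a term inhabiting `Satisfies w c inv`; the programme's languages aim to make such certificates efficiently checkable for far richer probabilistic models and temporal specifications.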

In convening this group, we’ve brought together a wide range of relevant expertise in mathematical modelling (across dynamical systems, imprecise probability, category theory, type theory, proof theory, and more), as well as expertise in domains that will ensure these approaches are commensurable and computationally practical.

Safety: Core representation underlying safeguarded AI

Team Lead: Ohad Kammar, University of Edinburgh

Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes

Fabio Zanasi, University College London

SAINT: Safe AI ageNTs – Formal certification of AI agents in the world

Alessandro Abate, University of Oxford

ULTIMATE: Universal Stochastic Modelling, Verification and Synthesis Framework

Radu Calinescu, University of York

Quantitative Predicate Logic as a Foundation for Verified ML

Ekaterina Komendantskaya (University of Southampton / Heriot-Watt University) + Robert Atkey (University of Strathclyde)

Learning-Theoretic AI Safety

Vanessa Kosoy, Association of Long Term Existence and Resilience (ALTER)

Probabilistic Protocol Specification for Distributed Autonomous Processes

Nobuko Yoshida, University of Oxford

Philosophical Applied Category Theory

David Corfield, Independent Researcher

Double Categorical Systems Theory for Safeguarded AI

David Jaz Myers, Topos Research UK

Monoidal Coalgebraic Metrics

Filippo Bonchi, University of Pisa

Doubly Categorical Systems Logic: A theory of specification languages

Matteo Capucci, Independent Researcher

True Categorical Programming for Composable Systems

Jade Master + Zans Mihejevs, Glasgow Lab for AI Verification (GLAIVE)

Unified Automated Reasoning for Randomised Distributed Systems

Alexandra Silva, University College London + Cornell University

Employing Categorical Probability Towards Safe AI

Sam Staton, University of Oxford

Syntax and Semantics for Multimodal Petri Nets

Amar Hadzihasanovic, Tallinn University of Technology

Two-dimensional Kripke Semantics and World Models

Alex Kavvos, University of Bristol

Supermartingale Certificates for Temporal Logic

Mirco Giacobbe, University of Birmingham

Hyper-optimised Tensor Contraction for Neural Networks Verification

Stefano Gogioso, Hashberg Ltd

A Computational Mechanics Approach to World Models via Multi-Scale Compositional Abstractions

Fernando Rosas, University of Sussex

String Diagrammatic Probabilistic Logic: Specification and modelling languages

Pawel Sobocinski, Tallinn University of Technology

Profunctors: A unified semantics for safeguarded AI

Nicola Gambino, University of Manchester

Graded Modal Types for Quantitative Analysis of Higher-Order Probabilistic Programs

Vineet Rajani + Dominic Orchard, University of Kent

Previous funding calls in this programme

The projects we are funding have been selected from teams and individuals who applied to our previous funding calls for this programme. You can read more about these calls below.

The Creator experience

What you can expect as an ARIA R&D creator.

Learn more

Applicant guidance

Discover the process of applying for ARIA funding and find key resources.

Find out more