Safeguarded AI

Clarification questions

Answering your questions

Ahead of submitting your application, we encourage you to read our funding FAQs.

If you still have questions related to Safeguarded AI, please reach out to clarifications@aria.org.uk.

We’ll update this page twice a week with answers.

NB: clarification questions should be submitted no later than 4 days prior to the relevant deadline date. Clarification questions received after this date will not be reviewed.

 

What is the threshold for funding?

There is no upper budget limit; however, when estimating your proposed project, you should bear in mind that we are looking to fund Technical Area 1.1 with up to £3.5M in total (including VAT where applicable) for the first year, and we expect to make 10 to 16 awards.

 

What is the expected duration for projects?

We expect to commit funding for TA1.1 projects until September 2025; as such, applicant proposals should not exceed this timeframe. Based upon the outputs of the initial funding, we intend to make a decision in March 2025 about a second phase of funding, to be delivered from September 2025 until September 2026 (Phase 2).

If we decide to release funding for Phase 2, existing Creators will be required to submit proposals for Phase 2 funding. At the same time, we will release a solicitation for TA1.1 Phase 2, inviting new applicants to submit proposals.

 

Are we permitted to budget for staff (e.g., postdocs) who are not nominated?

Yes, you should include the costs required to deliver the proposed project. If you know who the postdoc is at the time of submission, you should include their details. You should also indicate what proportion of their time will be dedicated to the project (this includes the time a PI will dedicate to the project).

 

Will you organise an event where applicants can meet potential collaborators, or do you know of individuals who are looking to partner with others for this call?

We are not planning to hold an event for the TA1.1 solicitation. If you believe your proposal would benefit from collaboration with another organisation but you have not yet connected with potential partners, you should identify in your proposal any gaps or areas where you may need a partner.

Where you identify potential gaps or areas you may need to partner on, we may offer an introduction to any suitable matches we identify (there will be no obligation to meet with these matches).

That said, we do not encourage joint proposals across more than one recipient institution, and/or across groups that are unlikely to work together on a regular (~weekly) basis.

 

Before preparing a full proposal, we wanted to inquire about the requirement and timeline for submitting a concept paper. The general ARIA project review and selection process (ARIA-project-review-and-selection-process.pdf) outlines a concept paper stage, but the TA1.1 Theory call document does not explicitly mention this requirement or provide a deadline.

Could you please clarify whether a concept paper is expected for this specific call, and if so, what the submission deadline would be? We want to ensure we follow the correct process and timeline.

The review process you refer to describes our standard two-stage call (concept papers followed by full proposals). However, for TA1.1 of the Safeguarded AI programme, in order to optimise the process for the programme, we have chosen a single-stage process with a longer submission timeline.

As such, we have removed the concept paper stage and request full proposals only; award decisions will therefore be made based on the full proposal you submit.

 

Can the costs for research associate work on the projects be included?

Yes, you can include research associate costs within your cost estimates for a proposal.

 

Is there a maximum budget for a single proposal?

There is no upper budget limit; however, when estimating your proposed project, you should bear in mind that we are looking to fund Technical Area 1.1 with up to £3.5M in total (including VAT where applicable) for the first year, and we expect to make 10 to 16 awards.

 

Can we budget for small hardware development related to the proposal?

Yes, you can budget for compute, but given the length of the contracts we prefer to fund cloud compute. Implementation on edge devices is out of scope for TA1.1.

 

Do we budget for TA1, TA2 and TA3 together or individually?

For clarity, we are currently only accepting proposals for TA1.1. TA2 and TA3 will be launched in the near future.

 

If we apply for TA1.1 and our application is accepted, are there any explicit or implicit rules that would prevent us from applying to the other stages (e.g. TA1.2, TA2, etc.)?

No, there are no explicit or implicit rules that would prevent you from applying.

 

On p. 9 of the TA1.1 call for proposals, it says that “World models should be able to combine the following kinds of uncertainty: … partiality.” Can you clarify what you mean by “partiality”?

Partiality is in the sense of “partial function” and is sometimes called “nontermination” (I don’t use this term, because nontermination is only one source of a partiality effect; contradiction or inconsistency is another). To elaborate further: partiality is often modelled by the “Maybe monad” (–⨿1). Nondeterminism without partiality is modelled by the nonempty powerset monad; nondeterminism combined with partiality is modelled by the powerset monad. Stochasticity without partiality is modelled by probability; stochasticity combined with partiality is modelled by sub-probability. For more on this, see citation [35] of the programme thesis.
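
To make this concrete, here is a minimal Lean 4 sketch of our own; it is not taken from the call documents, and the names “safeDiv” and “SubDist” are purely hypothetical. The Maybe/Option monad adds a single “no result” value to a type, and a sub-probability distribution can then be read as a distribution that may assign some of its mass to that missing result.

```lean
-- Illustrative sketch only (not part of the call documents).
-- Partiality as the Maybe/Option monad: `safeDiv` is a hypothetical partial function,
-- since division by zero yields no result.
def safeDiv (n m : Nat) : Option Nat :=
  if m = 0 then none else some (n / m)

#eval safeDiv 10 2  -- some 5
#eval safeDiv 10 0  -- none

-- Combining effects: a sub-probability distribution over α can be sketched as a
-- finitely supported distribution over `Option α`, where weight on `none` is the
-- "missing" probability mass (stochasticity combined with partiality).
def SubDist (α : Type) : Type := List (Option α × Float)
```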

 

On p. 5 of the TA1.1 call for proposals, it says that “the aspirational aim of TA1.1 as a whole is to define “syntax” (algebraic construction operators, and version-controllable serialisable data structures), and formal semantics, for language(s) that can be used by teams of humans (and, later, AI systems) to define “world models”, probabilistic specifications, neural network controllers, and proof certificates.” Does each project for TA1.1 need to develop a language that can be used to define all of these things? 

Each project need not develop a language that can be used to define all these things. Rather, that is the goal of the entire technical subarea TA1.1, and each project needs to be plausibly critical toward achieving that overall goal. Focusing on a narrower scope is encouraged, but you will be required to coordinate with other projects at quarterly workshops to steer towards technical interoperability and integration into an eventual unified solution.

 

How much detail would you like to see in the proposal about the architecture/structure of the language that will be developed, versus focusing on the criteria that the language will satisfy?

At this stage we expect specific solutions to be tentative and speculative, but fleshing out one or two key concepts that you believe are novel and important will help demonstrate your team’s ability to contribute constructively. Defining the criteria for the problem you hope to solve demonstrates ambition and the ability to ask the right questions. And discussion of alternative approaches and their shortcomings demonstrates rigour and critical thinking. As a rough guideline, a good proposal would spend a comparable amount of ink in each of these three modes.

 

On p. 7 of the TA1.1 call for proposals, it says that “creators in TA1.1 should, in their proposal: *Define their problem with multiple formal criteria (of the kind that could in principle be encoded in a proof assistant such as Lean 4), likely in addition to informal criteria.” Could you clarify what you mean by “formal criteria” in the context of language/representation design?

It means stating hoped-for theorems that would be the “main result” of a paper if your proposed project is carried through to success. See, for example, Theorem 2 of “Sound and Complete Certificates for Quantitative Termination Analysis of Probabilistic Programs” [PDF] or Section 4 of “Verified Density Compilation for a Probabilistic Programming Language” [PDF].
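
As a purely illustrative sketch (not taken from the call documents; the names “Source”, “Target”, “Spec”, “compile” and the satisfaction relations are hypothetical placeholders for a project’s own definitions), a formal criterion of this kind might be written in Lean 4 as a hoped-for theorem whose proof is deliberately left open at proposal stage:

```lean
-- Illustrative sketch only: a "formal criterion" phrased as a hoped-for theorem.
-- All names below are hypothetical placeholders, standing in for a project's own
-- definitions of models, specifications and translations.
axiom Source : Type
axiom Target : Type
axiom Spec : Type
axiom compile : Source → Target
axiom satisfiesSource : Source → Spec → Prop
axiom satisfiesTarget : Target → Spec → Prop

/-- Hoped-for "main result": compilation preserves every specification satisfied
    by the source program. The proof is left open (`sorry`) at proposal stage. -/
theorem compile_preserves_spec (p : Source) (s : Spec) :
    satisfiesSource p s → satisfiesTarget (compile p) s := by
  sorry
```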

 

On p. 7 of the TA1.1 call for proposals, it says that “creators in TA1.1 should, in their proposal: … *Identify and contrast two or three potential approaches to the problem.” Does this mean two or three approaches that the proposer is considering using/pursuing for the project, or two or three approaches that are different from what the proposer is considering using (perhaps with an explanation of why the proposer thinks these approaches are less likely to be successful than their own approach)?

Success in this part of the proposal means demonstrating your ability to critically evaluate the prospects of alternative approaches. This does not mean claiming the superiority of your proposal over others’. Rather, you are encouraged to give some insight into how you are actually beginning to think about the large design space of potential solutions to your chosen problem. An ideal response might identify three variations on a novel idea, each of which seems equally good at first, then show why one is inferior to the other two, then list the tradeoffs between the remaining two, leaving both options open with specific questions to be investigated in future analysis if your project goes forward.

 

Given that there is no consensus about the definition of AI, does AI refer to “learning-based approaches” in this programme?

Part of the programme’s thesis is that recent advancements in AI systems, particularly large language models, have enabled the acceleration of translating informal knowledge into formal representations. In TA2, we intend to leverage this capability. While we remain agnostic to the particular AI technology, it’s likely that TA2 will involve gradient-based optimisation of neural networks in a probabilistic setting (noting that this is a much broader class than LLMs, but acknowledging that it is a much narrower class than AI algorithms in general). All that said, TA1 does not require new work in this area. You can think of the overall approach as “neurosymbolic,” with TA2 building the neural aspects and TA1 constructing the symbolic aspects.

 

Since safety and cybersecurity overlap, does the programme also address cybersecurity issues, considering the fact that many security solutions employ AI nowadays?

TA3, rather than TA1, will bring in the application areas. Additionally, we would like to note that synthesis of formally verified software and network intrusion detection are potential application areas, but this will be determined later.

A formal verification problem consists of three elements:

* (A) a “System Model” that describes what a system, for instance an AI system, currently does (this is called the World Model in the call description, I think),

* (B) a “Specification/Property” that describes what the system is expected to do, and

* (C) an algorithm/procedure/proof checker that checks whether the System Model satisfies the given Specification, e.g., it satisfies the specification with a probability of at least 0.4.

 

Which elements of the verification problem of an AI system — A, B or C — are covered in the first stage of this programme? 

All three, A-C, with the exception of a proof procedure for C. A proof checker is in scope, but we are willing to give up decidability of the verification problem in order to gain expressiveness. Our thesis is that proofs will be synthesised by an AI system constructed in TA2. But these proofs must be checked by a reliable, rigorous symbolic system, and that is the part of C which is in scope for TA1.

 

Is this programme interested in theorem provers/proof checkers? 

Any form of proof certificate is welcome, including certifying model checkers, checkers for inductive invariants, or other approaches such as hybrid reachability methods more associated with model checking than theorem proving. We encourage exploration and synthesis of ideas from model checking, cyber-physical systems, interactive theorem proving, automated theorem proving, etc. However, the key differentiator is achieving automation via interaction with AI tools, not through algorithmic decision procedures or primarily human proof construction. This fundamentally changes the ergonomic considerations, e.g. opening up new proof primitives such as neural barrier certificates, which would be impractical for humans or efficient decision procedures to use effectively.

 

Does this programme seek ML-based approaches to design an algorithm/procedure/proof checker (knowing that ML approaches are not sound)? 

We seek ML-based approaches in TA2, and symbolic approaches in TA1, to be composed in due course into a neurosymbolic approach. The unsoundness of ML must be addressed by requiring the ML system to output a certificate, and/or to engage in an interactive proof system, so that the acceptance of the verification condition by the symbolic part of the system is sound, while freeing the symbolic part of the system from the requirement to synthesise complex proof derivations. An inevitable consequence of this is that the combined approach will not be complete, i.e. there will be no guarantee that the ML system will successfully discover an argument for a verification condition even if it is provably true. We accept this, and hope that rapidly increasing AI capabilities will still make it useful in practical applications.
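
As a rough, purely illustrative Lean 4 sketch of this division of labour (it is not a design we are prescribing, and all the names, such as “propose” and “check”, are hypothetical placeholders): the untrusted ML side proposes a certificate, the trusted symbolic checker accepts or rejects it, soundness of the combined system rests only on the checker, and completeness is not guaranteed.

```lean
-- Illustrative sketch only; all names are hypothetical placeholders.
axiom VerificationCondition : Type
axiom Certificate : Type
axiom holds : VerificationCondition → Prop

-- Trusted symbolic side: a certificate checker, together with the soundness fact
-- that acceptance implies truth of the verification condition.
axiom check : VerificationCondition → Certificate → Bool
axiom check_sound : ∀ vc c, check vc c = true → holds vc

-- Untrusted ML side: may fail to propose a certificate even when `holds vc` is true,
-- which is exactly the incompleteness accepted in the answer above.
axiom propose : VerificationCondition → Option Certificate

-- The combined, neurosymbolic pipeline: accept only what the symbolic checker accepts.
def verify (vc : VerificationCondition) : Bool :=
  match propose vc with
  | some c => check vc c
  | none   => false

-- Soundness of the combined system depends only on `check_sound`, never on `propose`.
theorem verify_sound (vc : VerificationCondition) (h : verify vc = true) : holds vc := by
  unfold verify at h
  split at h
  · exact check_sound vc _ h
  · contradiction
```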

 

Is each institution encouraged to submit only one proposal, or is it possible to submit multiple proposals from the same institution?

It is possible to submit more than one proposal from the same institution. The ideas in each application must be different and form unique projects, but they could be complementary.

 

I noticed that there will be a second phase for TA1 submissions next year. Will the scope of the 2nd call be the same?

In Phase 2 our focus will primarily be on transitioning the exploratory groundwork laid in Phase 1 towards a more cohesive solution.

Therefore, whilst Phase 2 will be open to new applicants, creators from Phase 1 will be encouraged to apply for funding in Phase 2 given their involvement in the earlier part of the programme.

We intend to make a decision about whether to release a second phase of funding in March 2025.

 

Are you looking for an independent person to objectively expose the various methods and approaches of the Creators as part of Phase 1?

We don’t intend to select this role until Phase 2, under the assumption that the ideal candidate would likely emerge as a standout from Phase 1, as we think such a person needs to be deeply involved from the outset.

If you believe you’re well-suited for this role, we would encourage you to apply to Phase 1 and, instead of (or alongside) proposing your own research direction, explain why you believe you are the best fit (e.g. in terms of coordinating other creators, facilitating technical conversations, and writing reports that take the form of interim drafts), as well as what approach you would take to deliverable timelines, collaboration with other creators, and integration at programme events such as workshops.