
RFPs: Scaling Trust Research and Development Grant Programme


Deadline: 24-Mar-2026

The Advanced Research and Invention Agency (ARIA) is funding ambitious R&D projects to strengthen secure agent-to-agent interactions in untrusted environments. Through its Scaling Trust Programme, ARIA supports open-source tooling and fundamental research that enables provable AI security, protocol verification, and cyber-physical trust primitives. Applications close on 24 March 2026 at 14:00 GMT.

Programme Overview

The Advanced Research and Invention Agency (ARIA) is accepting applications under its Scaling Trust Research and Development Grant Programme. The initiative aims to build secure, trustworthy multi-agent systems capable of operating in adversarial or untrusted digital and cyber-physical environments.
The programme focuses on two main pillars:
• Open-source tooling for agent coordination and security
• Foundational research that converts empirical AI security into provable guarantees
The objective is to deliver real-world demonstrations, measurable trustworthiness, and high-impact adoption across academia and industry.

What Problem Is ARIA Addressing?

As AI agents increasingly negotiate, coordinate, and act autonomously, secure agent-to-agent interaction becomes critical. In untrusted environments, agents must:
• Translate vague human intent into formal security requirements
• Negotiate collective policies
• Generate and verify secure protocols
• Provide auditable execution records
ARIA’s Scaling Trust Programme aims to establish theoretical and practical foundations that ensure these processes are verifiable, secure, and interoperable.

Programme Structure: Three Interlinked Tracks

Although the programme includes three tracks, this funding call targets Track 2 and Track 3.

Track 1: Arena (Testing Environment)

A live adversarial environment where tools and research outputs are stress-tested.

Track 2: Tooling (Open-Source Infrastructure)

Supports development of reusable open-source components for secure agent coordination.

Track 3: Fundamental Research

Develops theoretical foundations and new security primitives for AI agent systems.

Track 2: Tooling (Open-Source Agent Infrastructure)

Track 2 funds the development of open-source agents and reusable components that act as baseline participants in the Arena.

Eligible Technical Areas

• Requirement-gathering tools that convert ambiguous user input into formal security policies
• Negotiation engines that derive shared collective policies
• Security reasoning systems that generate and implement secure protocols
• Reporting and auditing tools that convert execution traces into concise compliance statements

Expected Project Outcomes

Projects must demonstrate:
• Competitiveness within the Arena
• Generality across multiple tasks
• Computational efficiency
• Evidence of community adoption

Funding and Duration

• £200,000 to £2 million per project
• Project duration: 3 to 12 months
• 4 to 6 teams expected to be funded

Track 3: Fundamental Research

Track 3 advances scientific confidence in AI agent security and coordination through theoretical and experimental research.

Core Research Areas

• Formal AI Security with provable guarantees
• Cyber-Physical Trust Primitives using physical or biological trust anchors
• Foundations of Generative Security for automated protocol creation and verification
• Blue-sky research exploring novel or unforeseen theoretical challenges

Funding and Duration

• £100,000 to £3 million per project
• Project duration: 6 to 18 months
• 3 large research centres plus smaller exploratory teams expected to be funded
All outputs must be openly published to seed research communities and inform future Arena challenges.

Open-Source and Ecosystem Requirements

All funded software and research outputs must:
• Be released under permissive open-source licences
• Ensure transparency and interoperability
• Support long-term ecosystem growth
The programme operates through iterative phases including bootstrap, testing, improvement, and scaling, with quarterly milestones, check-ins, build weeks, and hackathons.

Who Is Eligible?

ARIA encourages applications from:
• Research institutions
• Universities
• Startups and technology companies
• Open-source developers
• Interdisciplinary AI security teams
Non-UK teams may also be funded where their work delivers significant benefit to the UK and its innovation ecosystem.

How to Apply

1. Identify the appropriate track (Track 2: Tooling or Track 3: Fundamental Research).
2. Define the technical contribution and its transformative potential.
3. Demonstrate differentiation from existing approaches.
4. Provide a clear roadmap with milestones and measurable outputs.
5. Outline responsible innovation practices and risk mitigation.
6. Highlight UK benefit and ecosystem contribution.
7. Submit the proposal before the deadline.

Key Deadline

Applications close on 24 March 2026 at 14:00 GMT.
Awards are expected to be confirmed by June 2026.

Evaluation Criteria

Proposals are assessed based on:
• Transformative potential
• Technical differentiation
• Clarity and feasibility
• Responsible and secure approach
• Team motivation and expertise
• Benefit to the UK innovation ecosystem

Common Mistakes to Avoid

• Failing to demonstrate open-source commitment
• Lack of measurable milestones
• Weak differentiation from existing research
• Insufficient explanation of security guarantees
• Ignoring UK ecosystem impact

Why This Programme Matters

Secure multi-agent systems are foundational to future AI infrastructure, autonomous coordination, cyber-physical systems, and digital governance. By funding both practical tooling and theoretical breakthroughs, ARIA aims to create globally leading, provably secure agent ecosystems rooted in open research and real-world validation.

FAQs

1. What is the primary goal of the Scaling Trust Programme?
To strengthen secure agent-to-agent interactions in untrusted environments through open-source tooling and foundational research.
2. Which tracks are open in this call?
Track 2 (Tooling) and Track 3 (Fundamental Research).
3. What is the maximum funding available?
Up to £2 million for Track 2 and up to £3 million for Track 3.
4. Are international applicants eligible?
Yes, exceptional non-UK teams may be funded if they significantly benefit the UK.
5. Are outputs required to be open source?
Yes. All funded outputs must be released under permissive open-source licences.
6. What is the project duration?
3–12 months for Track 2 and 6–18 months for Track 3.
7. When is the deadline?
24 March 2026 at 14:00 GMT.

Conclusion

The ARIA Scaling Trust R&D Grant Programme represents a major investment in provable AI security, open-source coordination infrastructure, and next-generation trust primitives. By combining tooling, theoretical research, and adversarial testing, the programme aims to build trustworthy multi-agent ecosystems with global impact. Applicants with ambitious, high-impact ideas in AI security and coordination should prepare proposals before the March 2026 deadline.

For more information, visit ARIA.
