Deadline: 20-Jan-25
The AI Safety Fund (AISF) has launched the Biosecurity AI Research Fund Program to support technical research that evaluates potential risks and develops safety measures for AI systems operating in biological contexts.
This funding aims to promote responsible development of frontier AI models while establishing robust evaluation frameworks for bio-related capabilities and safety measures.
This program will support technical research on frontier AI systems to reduce biosecurity risks.
Funding Information
- The AISF plans to fund research projects by academic labs, non-profits, independent researchers, and for-profit mission-driven entities across both Biosecurity and Cybersecurity topics, in the range of $350K to $600K.
Duration
- The research duration must be one year or less.
Eligibility Criteria
- The AISF makes grants to independent researchers affiliated with academic institutions, research institutions, NGOs, and social enterprises across the globe that aim to promote the safe and responsible development of frontier models by testing, evaluating, and/or addressing safety and security risks. The AISF seeks to fund research that accelerates the identification of threats posed by the development and use of AI, in order to prevent widespread harm.
- The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) imposes restrictions on services and transactions with individuals or entities located in countries subject to comprehensive U.S. sanctions. As a result of these applicable U.S. sanctions, the AISF is unable to award grants to the following countries:
For more information, visit AISF.