Research into Agentic AI Systems
This program has now closed. Please keep an eye on our website for future opportunities.
We are launching a program to award grants of between $10,000 and $100,000 to fund research into the impacts of agentic AI systems and practices for making them safe.
We are excited to announce a program to fund research proposals that explore the impacts of agentic AI systems and practices for making them safe. We define an AI system’s agenticness as the degree to which it can adaptably achieve complex goals in complex environments with limited direct supervision. There is a growing trend towards AI systems being made increasingly agentic, with systems like GPTs and the Assistants API able to take actions more autonomously than previous modes of interaction with language models, such as question-answering. We are interested in work that explores both direct and indirect impacts of the adoption of agentic AI systems, across both technical and socioeconomic issues.
Building agentic AI systems carries unique risks. Existing harms such as bias and inequitable access could be amplified, while new risks such as critical failures or loss of human control could emerge. We are looking to foster research that not only gauges these impacts but also proposes solutions to potential challenges. We encourage applicants to explore methods and frameworks that prioritize safety, transparency, and accountability in agents.
How to participate
Grants will range from $10,000 to $100,000, depending on the scope of the proposal. We may also grant API credits, ChatGPT Plus subscriptions, or other services if beneficial to the research. Research projects should run between 3 and 8 months. See “Areas of interest” below for guidance on the kinds of questions we hope research proposals will explore.
Anyone may apply, but we particularly welcome applications from academic, non-profit, or independent researchers. Note: this is not a program for funding commercial products.
Applications will be judged on their potential to produce actionable research that addresses the impacts and risks of agentic systems. Evaluation criteria include relevance to our areas of interest, the proposed methodology, the background and skills of the applicant(s), and alignment with our mission.
Timeline
January 20, 2024, 9:00 pm Pacific Time: Deadline to submit grant applications
February 9, 2024: Applicants with selected proposals will be notified (pending compliance checks)
Areas of interest
We are excited to fund research that addresses the issues raised in our paper. We have summarized some of the key areas below, but we are also open to proposals that explore other questions related to the governance of increasingly agentic AI systems. As discussed in the paper, there are many challenges to consider as this technology evolves; for this program, we are particularly interested in research that provides actionable solutions that can influence the immediate development of agentic AI systems (e.g., within the next 12 months).
- Evaluating Suitability for the Task: How can we evaluate whether an agentic AI system is appropriate for a given use case? By what practices can we determine whether agents meet a given threshold of reliability, particularly in critical domains?
- Constraining the Action-Space and Requiring Approval: Under what circumstances should actions require explicit human approval? What design practices could ensure users have sufficient context before approving an action?
- Setting Agents’ Default Behaviors: What behaviors and assumptions should be instilled in agentic AI systems by default for increased safety?
- Legibility of Agent Activity: How can we design agentic systems to create visibility into the internal reasoning and communication of agents, and ensure that this presented reasoning is faithful to the logic the agent truly employed? How can users identify mistakes or ill-informed actions?
- Automatic Monitoring: How can we solve the practical and technical challenges of automatically monitoring AI agents, such as the reliability of monitoring systems and their vulnerability to adversarial attacks? What are the key agent failure modes to monitor for? When is human oversight still necessary?
- Attributability: How can we practically enable robust AI agent identity verification, or other attribution methods?
- Interruptibility and Maintaining Control: How can we ensure a graceful fallback when an agent is interrupted, including mid-way through an action sequence? (A brief illustrative sketch follows this list.)
- In addition to the above, we are also interested in the indirect effects of agentic AI systems and research on how to prepare for them. This includes labor displacement and economic impacts, adoption races, over-reliance (particularly in high-stakes domains), shifting the offense-defense balance of dual-use technology, correlated failures, and other effects not discussed in the paper.
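To make the interruptibility question above more concrete, here is a minimal, hypothetical sketch of one way an agent's action loop could check for a stop signal between steps and fall back gracefully by undoing partially applied work. The class, names, and plan structure are illustrative assumptions only, not a prescribed design or a method from our paper; a real deployment would also need durable checkpoints, a definition of "safe state," and human review of the rollback itself.

```python
import threading


class InterruptibleAgent:
    """Toy agent that executes a plan of named (action, undo) steps and can be
    interrupted between steps, rolling back partial work as a graceful fallback.
    All names here are hypothetical and purely illustrative."""

    def __init__(self):
        self._stop_event = threading.Event()  # set by a human overseer or monitor
        self.completed_steps = []             # audit trail of actions that ran

    def interrupt(self):
        """Request that the agent stop at the next safe point."""
        self._stop_event.set()

    def run(self, plan):
        """Execute the plan, checking for interruption before each step."""
        applied = []
        for name, action, undo in plan:
            if self._stop_event.is_set():
                self._fall_back(applied)
                return "interrupted"
            action()
            applied.append(undo)
            self.completed_steps.append(name)
        return "completed"

    def _fall_back(self, applied):
        """Graceful fallback: undo partially applied steps in reverse order,
        leaving the environment in a known state for human review."""
        for undo in reversed(applied):
            undo()


if __name__ == "__main__":
    agent = InterruptibleAgent()
    plan = [
        ("draft_email", lambda: print("drafting email"), lambda: print("discarding draft")),
        ("send_email", lambda: print("sending email"), lambda: print("recalling email")),
    ]
    agent.interrupt()       # e.g., a user presses "stop" before the run begins
    print(agent.run(plan))  # -> "interrupted"; no steps were applied
```

The same loop structure could also host an approval gate of the kind described in the “Constraining the Action-Space and Requiring Approval” item: instead of only checking a stop signal, the agent could pause before high-impact steps and wait for explicit human confirmation.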