
Collaborative Research: RI:Small: Use-Inspired RL for Many-Agent Contexts: Co-opetition, Partial Observability, and Dynamic Types

Agency

NSF

Status

Open

About This Grant

Artificial intelligence (AI) is evolving from answering our questions to acting on our behalf. However, we live in a complex world where individuals and groups must both cooperate and compete to achieve their goals, often without clear insight into others' behavior or intentions. For example, in business, teams work toward shared objectives while balancing their own priorities and competing for resources, frequently uncertain about how others will act. Similarly, AI agents acting on our behalf must navigate this uncertainty, learning when to collaborate and when to pursue their own goals. As another example, defender agents on a cybernetwork collaborate with other defenders while acting adversarially toward attackers. These scenarios raise important questions about how AI agents can learn to both cooperate and compete with one another, and how large multi-agent systems can be guided toward desirable outcomes. This project explores these challenges by studying how agents learn from experience, anticipate others' actions, and determine the amount of data needed to learn effectively. The project reaches across disciplinary boundaries to bring concepts from statistical mechanics, control theory, and management science to bear on these challenges. It will also educate students in the theory and practice of AI relevant to learning and will produce program libraries for public use.

This project studies reinforcement learning (RL) for an agent that shares its environment with a large collection of other learning agents whose features may change. The approach seeks concurrency and Bayesian optimality of many-agent RL via full decentralization, and spans three research thrusts. The first thrust investigates techniques from statistical mechanics that let an RL agent effectively model a collective of other learning agents organized in various topologies, and studies the emergent behavior of the system. The second thrust investigates computational representations for mixed-motive settings and the stability of decentralized learning in such settings, specifically exploiting Lyapunov techniques from control theory. The third thrust investigates RL under agent-type dynamism due to unknown events. The research results will be validated on existing benchmarks and in two use-inspired domains: one that models a business organization and another that simulates a cybersecurity environment.

The broader impact of this project is to create a foundation for the science of autonomous decentralized learning in many-agent systems, with an emphasis on data efficiency. This will inform the management science of future businesses and agentic organizations, as well as the science of successful human-AI teaming.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Focus Areas

Research

Eligibility

University, nonprofit, small business

How to Apply

Funding Range

Up to $300K

Deadline

2028-12-31

Complexity
Medium