
CAREER: Foundations of Scalable and Resilient Distributed Real-Time Decision Making in Open Multi-Agent Systems

NSF

Status: Open

About This Grant

Advances in artificial intelligence and machine learning offer the opportunity to use autonomous multi-agent systems to solve important social and economic problems, such as deploying teams of robots for wildfire monitoring, search and rescue, and manufacturing. In these systems, agents autonomously cooperate to make decisions in real time to perform complex tasks. Reinforcement learning, a data-driven control method that enables agents to autonomously learn desired tasks by interacting directly with the environment, has emerged as one of the predominant frameworks for this kind of real-time decision making. While reinforcement learning provides a powerful and flexible framework, it suffers from fundamental challenges in scalability and resilience: existing methods require vast amounts of data and computational power, and can be unstable in the presence of various types of errors and adversaries. These challenges are the main barriers to the wide applicability of reinforcement learning to real-world problems. This CAREER project will develop new foundations of scalable and resilient distributed reinforcement learning for real-time autonomous cooperation in open multi-agent systems. The overarching goal is to design new learning and control methods that enable agents to interact effectively in open systems, adapt gracefully to time-varying environments, and remain resilient to unexpected failures and adversaries. The project will also contribute to education and workforce development by integrating the research findings with rigorous educational and outreach activities, course development, student training, and public partnerships. The central idea of this project is to establish new fundamentals of two-time-scale stochastic approximation for non-monotone systems.
The key approach is to leverage extrapolation techniques from optimization and singular perturbation theory from control to address the instability of stochastic approximation in non-monotone settings. New theoretical principles will be developed to characterize the finite-time complexity of the proposed methods. Building on these new results on two-time-scale stochastic approximation, the project will advance several foundational aspects of distributed learning and control in open multi-agent systems. The focus is to develop scalable and resilient distributed multi-time-scale reinforcement learning methods that allow agents to cooperate efficiently in real time under diverse practical considerations, including time-varying numbers of agents, unexpected failures, communication constraints, and adversaries. Over the course of the project, the proposed research activities will be evaluated systematically through a series of simulations and field experiments in multi-robot navigation. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
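The abstract's central technique, two-time-scale stochastic approximation, can be illustrated with a minimal sketch: a "fast" variable tracks a noisy quantity with larger step sizes, while a "slow" variable follows the fast one with step sizes that decay more quickly, so the fast iterate looks quasi-stationary from the slow one's perspective. The toy problem below (tracking an unknown mean, with hypothetical step-size exponents 0.6 and 0.9) is an illustrative assumption, not the project's actual algorithm.

```python
import random

def two_timescale_sa(mu_true=3.0, iters=20000, seed=0):
    """Minimal two-time-scale stochastic approximation sketch.

    The fast iterate w tracks a noisy observation stream; the slow
    iterate theta follows w with a smaller step size, satisfying the
    classic condition beta_k / alpha_k -> 0.
    """
    rng = random.Random(seed)
    w, theta = 0.0, 0.0
    for k in range(1, iters + 1):
        sample = mu_true + rng.gauss(0.0, 1.0)  # noisy observation
        alpha = 1.0 / k ** 0.6                  # fast (larger) step size
        beta = 1.0 / k ** 0.9                   # slow (smaller) step size
        w += alpha * (sample - w)               # fast update: track the target
        theta += beta * (w - theta)             # slow update: follow the fast iterate
    return w, theta
```

Under these standard step-size conditions both iterates converge to the underlying mean; the project's contribution concerns extending such guarantees to non-monotone settings, where this naive coupling can become unstable.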

Focus Areas

machine learning, education, social science

Eligibility

university, nonprofit, small business

How to Apply

Funding Range

Up to $518K

Deadline

2029-02-28

Complexity
Medium
