ERI: Modeling and Control of Disruptions in Complex Autonomous Systems
NSF
About This Grant
This Engineering Research Initiation (ERI) grant will fund research that aims to enable efficient, on-the-fly learning of optimal control strategies for complex engineering systems operating in uncertain environments, with application to connected and autonomous vehicles, thereby promoting the progress of science and advancing national prosperity and welfare. The research seeks to enable real-time adaptation of autonomous systems to unexpected disruptions, with potential applications to power grids and robotics. Autonomous systems have become increasingly complex, integrating advanced decision-making algorithms to operate in dynamic and uncertain environments, while the sources of disruption have grown even more diverse and unpredictable. Current control algorithms, however, cannot correctly categorize and respond to different types of unexpected disruptions (such as sensor noise, mechanical malfunctions, environmental disturbances, or adversarial cyber-attacks) without human intervention, leading to performance degradation and potential failures. This project addresses that critical gap by developing a unified framework that enables autonomous systems to detect, classify, and mitigate disruptions in real time through game-theoretic modeling and reinforcement-learning control strategies. By equipping autonomous systems to respond dynamically without human intervention, the research seeks to enhance their resilience, efficiency, and safety in high-stakes applications such as intelligent transportation systems, space exploration, and disaster response. The project also contributes to workforce development by integrating research with education, providing interdisciplinary training in artificial intelligence and control systems, and engaging students through outreach and STEM initiatives aimed at expanding the engineering and technology workforce.
This project models complex autonomous systems as multi-agent systems in which components collaborate toward a common objective. When a disruption occurs, the affected component is treated as an irrational agent within the system: passive irrational agents represent faults caused by external disturbances, while active irrational agents exhibit adversarial behavior, such as cyber-attacks. Whereas traditional fault-mitigation methods treat all disturbances as adversarial, this research introduces a cooperative index to measure deviations from expected behavior and employs inverse reinforcement learning to distinguish passive from active irrational agents, improving the joint reward. To ensure optimal system adaptation, the project will develop a novel Actor-Critic-Friend-or-Foe (ACFoF) algorithm that dynamically updates control policies while maintaining system performance. The resulting framework could offer a groundbreaking approach to autonomous disruption detection and mitigation, enhancing the reliability of autonomous systems in critical applications. The research also supports educational initiatives by providing interdisciplinary training for students in artificial intelligence, control systems, and cybersecurity, preparing the next generation of researchers and engineers to tackle challenges in autonomous decision-making. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
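The abstract does not specify the cooperative index's functional form. As a minimal illustrative sketch only (the function names, cosine-similarity formulation, and thresholds below are assumptions for exposition, not the project's actual method), one could score each agent by the similarity between its observed actions and the actions a shared-reward policy would prescribe, then flag strongly anti-aligned behavior as adversarial:

```python
import numpy as np

def cooperative_index(expected, observed):
    """Illustrative cooperative index in [-1, 1]: cosine similarity between
    the actions a cooperative (shared-reward) policy would take and the
    agent's observed actions. 1 = fully aligned, -1 = fully opposed."""
    e = np.asarray(expected, dtype=float).ravel()
    o = np.asarray(observed, dtype=float).ravel()
    denom = np.linalg.norm(e) * np.linalg.norm(o)
    if denom == 0.0:
        return 0.0  # no informative signal; treat as neutral
    return float(np.dot(e, o) / denom)

def classify_agent(ci, cooperative_threshold=0.5):
    """Label an agent from its cooperative index (thresholds are illustrative).
    Mild deviation suggests a fault (passive); anti-alignment suggests an
    adversary (active)."""
    if ci >= cooperative_threshold:
        return "cooperative"
    elif ci >= 0.0:
        return "passive-irrational"   # fault-like deviation
    else:
        return "active-irrational"    # adversarial deviation
```

Under this sketch, a sensor fault would typically produce a noisy but non-negative index (passive), while an attacker steering the system away from the joint objective would produce a negative index (active); the actual project pairs such a measure with inverse reinforcement learning rather than fixed thresholds.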
Up to $199K
2027-07-31