NSF
Artificial intelligence (AI) technologies are revolutionizing fields such as healthcare, robotics, agriculture, and public safety. However, growing reliance on AI also raises concerns about trust, reliability, and robustness under real-world conditions, posing significant risks to the long-term value of AI. This project tackles these challenges by developing a human-inspired AI framework that processes data in ways similar to human perception: drawing information from multiple sensory inputs, providing transparent and explainable decisions, performing reliably even with noisy or missing inputs, and operating efficiently with reduced energy use. These innovations will help ensure that AI technologies are safe and accountable, aligning with national priorities for secure, ethical, and responsible AI development. The project will also offer hands-on experiences to K–12 students, mentor undergraduate and graduate students, and develop new college courses on trustworthy machine learning, thereby cultivating the next generation of scientific leaders.

Despite significant advances in AI, current systems remain limited by key challenges: they are often task-specific, brittle to real-world disruptions, computationally intensive, constrained to single modalities, and difficult to interpret. To address these limitations, this project develops a unified multimodal analytics framework centered on three core research goals: interpretability, robustness, and efficiency. First, it introduces a trustworthy spatiotemporal perception model that leverages dynamic semantic graph composition to represent scene entities, their attributes, and interactions across multiple levels of granularity, thereby enhancing transparency and interpretability in decision-making.
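The idea of representing a scene as entities, attributes, and interactions can be sketched as a small data structure. This is a hypothetical illustration only; the class and method names (`Entity`, `SceneGraph`, `relate`, `explain`) are invented for this sketch and do not reflect the project's actual model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a semantic scene graph: entities as attributed
# nodes, interactions as labeled (subject, predicate, object) edges.
# Such a graph can be rendered as readable statements, which is one way
# a graph-based representation supports explainable decisions.

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    entities: dict = field(default_factory=dict)   # name -> Entity
    relations: list = field(default_factory=list)  # (subject, predicate, object)

    def add_entity(self, name, **attributes):
        self.entities[name] = Entity(name, attributes)

    def relate(self, subject, predicate, obj):
        self.relations.append((subject, predicate, obj))

    def explain(self):
        # Render each relation as a human-readable statement.
        return [f"{s} {p} {o}" for s, p, o in self.relations]

g = SceneGraph()
g.add_entity("person", pose="walking")
g.add_entity("dog", color="brown")
g.relate("person", "holds leash of", "dog")
print(g.explain())  # ['person holds leash of dog']
```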
Second, it proposes a robust multimodal learning architecture that fuses diverse sensory inputs, including visual modalities (RGB, depth, infrared, motion) and non-visual modalities (audio, text), through a collaborative expert-agent architecture combining Sparse Multimodal-aware Experts (SMaE) and a Unified Multi-Agent (UMA) system. This design is specifically engineered to maintain performance under real-world conditions involving noisy or missing data.

Third, to reduce the computational and energy demands of AI deployment, the project presents a spectrum-preserving energy minimization approach for token merging, inspired by spectral graph theory, which compresses models while preserving critical information.

The effectiveness of these innovations will be demonstrated through applications in unified multimodal understanding tasks such as robotic perception, video analytics (including recognition, captioning, and retrieval), and animal behavior analysis, using a wide range of benchmark datasets and real-world deployment scenarios. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
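The general mechanism of token merging can be sketched as follows. The project's spectrum-preserving energy-minimization criterion is not described in detail here, so this sketch substitutes a common greedy rule: repeatedly merge the two most similar tokens (by cosine similarity) into their mean until the sequence reaches a target length. Function names are illustrative.

```python
import math

# Generic token-merging illustration: shrink a sequence of token
# vectors while approximately preserving its information. The merge
# criterion here (greedy cosine similarity) stands in for the project's
# spectrum-preserving energy-minimization approach.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def merge_tokens(tokens, target_len):
    tokens = [list(t) for t in tokens]
    while len(tokens) > target_len:
        # Find the most similar pair of tokens.
        best, bi, bj = -2.0, 0, 1
        for i in range(len(tokens)):
            for j in range(i + 1, len(tokens)):
                s = cosine(tokens[i], tokens[j])
                if s > best:
                    best, bi, bj = s, i, j
        # Replace the pair with its elementwise mean.
        merged = [(a + b) / 2 for a, b in zip(tokens[bi], tokens[bj])]
        tokens = [t for k, t in enumerate(tokens) if k not in (bi, bj)]
        tokens.append(merged)
    return tokens

toks = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
print(len(merge_tokens(toks, 2)))  # 2
```

Here the two nearly parallel tokens are merged while the distinct third token survives, which is the intuition behind compression that preserves critical information.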
Up to $500K
2030-08-31
Research Infrastructure: National Geophysical Facility (NGF): Advancing Earth Science Capabilities through Innovation - EAR Scope
NSF — up to $26.6M
AmLight: The Next Frontier Towards Discovery in the Americas and Africa
NSF — up to $9M
CREST Phase II Center for Complex Materials Design
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Energy Technologies
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Post-Transcriptional Regulation
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Semiconductors Research
NSF — up to $7.5M