CAREER: Advancing Learning-based 3D Vision Systems for Unstructured Environment Exploration
NSF
About This Grant
Understanding and interpreting complex environments is crucial for autonomous systems to operate safely and efficiently. A self-driving vehicle must navigate uneven terrain, a search-and-rescue drone must identify obstacles in disaster-stricken areas, and an environmental monitoring system must accurately reconstruct large-scale off-road scenes. However, existing computer vision algorithms, primarily designed for structured indoor or urban environments, often fail in these scenarios due to the unpredictable nature of off-road terrain, dynamic environmental conditions, and the scarcity of reliable visual features. This project will develop a 3D computer vision framework that fuses multiple sensing modalities, including RGB cameras, depth sensors, LiDAR, and event cameras, to enhance feature extraction, tracking, and large-scale scene reconstruction, thereby improving perception accuracy and adaptability in unstructured environments. The research will provide a foundation for next-generation autonomous perception systems, enabling significant advancements in autonomous navigation, environmental monitoring, and search-and-rescue operations. Additionally, the project will provide valuable educational opportunities by engaging students in hands-on research and promoting interdisciplinary learning in STEM.

This project will introduce a framework for learning robust 3D visual representations in unstructured environments by integrating multi-modal sensing, feature extraction, and dynamic scene reconstruction.
The project will address four fundamental research challenges: (1) multi-modal imaging systems -- developing and deploying a multi-camera and multi-sensor setup to capture diverse environmental data; (2) multi-modal feature extraction and matching -- integrating short- and long-range sensor data using transformer-based architectures to improve feature robustness and spatial-temporal consistency; (3) robust 3D reconstruction -- developing adaptive reconstruction algorithms capable of handling varying lighting, weather conditions, and large-scale terrain variations, incorporating hierarchical Gaussian Splatting for scalable scene modeling; and (4) adaptive tracking -- designing real-time tracking algorithms to manage occlusions, clutter, and dynamic elements, enabling accurate motion estimation and scene understanding. These research aims will be complemented by an extensive evaluation plan, including systematic benchmarking against state-of-the-art methods and real-world validation across diverse unstructured environments. The findings will contribute to advancements in computer vision, robotics, and artificial intelligence while providing publicly available datasets and educational resources. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Up to $600K
2030-05-31