NSF
Artificial Intelligence (AI) has enabled a plethora of applications, ranging from recent chatbots that offer a human-like question-answering experience to autonomous vehicles. However, these feats come at a huge cost in energy, memory, and power consumption. Over the past decade, Spiking Neural Networks (SNNs) have emerged as a low-power alternative to conventional AI. SNNs' main attraction is that they admit low-power architectural implementations, especially for arithmetic operations. Furthermore, unlike traditional neural networks, SNNs process information over time, and this temporal dimension, if leveraged suitably, can help enable the next generation of AI applications at lower cost with better performance and robustness. However, training SNNs for realistic tasks has been a long-standing challenge. This project innovates on fundamental optimization strategies, using the temporal features of SNNs to yield new architectures with diverse connectivity and sparsity that deliver significant energy-efficiency benefits for distributed low-power edge-computing applications. The research will also support the interdisciplinary development of Ph.D. and undergraduate students and provide a unique educational infrastructure to train the next generation of electrical and computer engineering researchers and practitioners.

Today, deploying large-scale SNNs for realistic computer vision and related tasks is a non-trivial challenge. This project targets two directions for building large-scale SNNs: 1) innovating on Neural Architecture Search (NAS) to yield new SNN architectures with temporal feedback connections (in stark contrast to conventional feedforward deep learning networks), and 2) using the SNN-specific NAS optimization to perform distributed learning across multiple agents for vision tasks, demonstrating the benefits of SNNs for low-power edge computing.
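The temporal, spike-based processing the abstract contrasts with conventional networks can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron sketch. This is a generic textbook model, not the project's actual architecture, and all parameter values below are illustrative:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential leaks each step, integrates the input, and emits
    a binary spike (then resets) whenever it crosses the threshold.
    Parameter values are illustrative only.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)        # spike: a cheap binary event
            v = v_reset             # hard reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input accumulates until periodic spikes occur.
print(lif_neuron([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because communication and compute happen only at sparse binary spikes rather than dense multiply-accumulates, this style of processing underlies the low-power claims made above.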
Specifically, the team develops a zero-shot approach that searches for an optimal network architecture without requiring training, while leveraging temporal and spatial sparsity through pruning and related techniques. This strategy is expected to shorten the SNN architecture-search design cycle by one to two orders of magnitude over existing work. The proposed NAS method will be integrated into a federated learning framework in which multiple devices with heterogeneous resources and data learn together. Ultimately, this project's framework for discovering new SNN architectures can yield powerful solutions for learning on multiple devices under extreme resource limitations, enabling numerous distributed AI applications. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
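The federated learning setting described above, where multiple devices learn together, can be sketched with generic FedAvg-style weight aggregation. This is a simplified illustration of the general technique, not the project's specific method; the function names and data are hypothetical:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client model weights into a global model.

    Each client trains locally on its own (possibly heterogeneous) data;
    the server combines the resulting weight vectors, weighted by local
    dataset size. Generic FedAvg-style sketch, not the project's method.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients with different amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 10]
print(federated_average(clients, sizes))  # → [3. 4.]
```

Weighting by dataset size keeps the global model from being skewed by clients with little data, which matters under the data heterogeneity the abstract highlights.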
Up to $369K
2027-12-31
Category I: CloudBank 2: Accelerating Science and Engineering Research in the Commercial Cloud
NSF — up to $24.0M
Category I: Nexus: A Confluence of High-Performance AI and Scientific Computing with Seamless Scaling from Local to National Resources
NSF — up to $24.0M
Research Infrastructure: Mid-scale RI-1 (MI:IP): Dual-Doppler 3D Mobile Ka-band Rapid-Scanning Volume Imaging Radar for Earth System Science
NSF — up to $20.0M
A Scientific Ocean Drilling Coordinating Office for the US Community
NSF — up to $17.6M
Category I: AMA27: Sustainable Cyber-infrastructure for Expanding Participation
NSF — up to $13.8M
Graduate Research Fellowship Program (GRFP)
NSF — up to $9.0M