STTR Phase I: A Programmable Processor-in-Memory Accelerator for Data-Intensive and Deep Learning Applications
NSF
About This Grant
The broader/commercial impact of this Small Business Technology Transfer (STTR) Phase I project will be to improve the efficiency of real-time data processing in safety-critical applications, such as autonomous driving. The company proposes to develop a Processor-in-Memory (PIM) that is flexible and capable of few-shot learning using AI methods, which require only a few training samples instead of large datasets. The initial target application domain is autonomous vehicles in both indoor and outdoor environments. Revenue from machine-learning-enabled autonomous and connected vehicles in the US market is expected to reach $78.63 billion by 2030, growing at a compound annual rate of 19.56% during 2023-2032. This domain has a very broad base, encompassing the automobile industry as well as the material handling and manufacturing industries that use automation. The research outcomes are therefore expected to influence this multi-billion-dollar AI-driven automation sector, potentially benefiting millions of people.

This STTR Phase I project will develop the PIM as a hardware accelerator that embeds Processing Elements (PEs) inside the dynamic random access memory (DRAM) subarrays used in the large majority of computing devices and processing platforms. By eliminating the interconnect bottleneck between the memory subsystem and the PEs that exists in traditional CPU- and GPU-based computers, the PIM is expected to improve energy efficiency by one to two orders of magnitude. The proposed accelerator hardware is based on modular LookUp Table (LUT) based PEs, which enable both functional flexibility and energy efficiency.
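The LUT-based PE idea can be illustrated with a minimal sketch (the names `build_lut` and `LutPE` are hypothetical, and the 4-bit operand width is an assumption, not a detail from the abstract): the PE's behavior is defined entirely by the table it loads, so the same datapath can be reprogrammed to perform different operations.

```python
# Hypothetical sketch of a LUT-based processing element (PE).
# A 4-bit operand width is assumed; the PE's function is defined
# entirely by the contents of its lookup table, so loading a new
# table changes the operation without changing the "hardware".

def build_lut(op, bits=4):
    """Precompute a (2^bits x 2^bits) result table for a two-operand op."""
    n = 1 << bits
    return [[op(a, b) & (n - 1) for b in range(n)] for a in range(n)]

class LutPE:
    def __init__(self, lut):
        self.lut = lut

    def compute(self, a, b):
        # A single table read stands in for an arithmetic circuit.
        return self.lut[a][b]

# The same PE performs multiplication or XOR depending only on
# which table is loaded -- functional flexibility via reprogramming.
mul_pe = LutPE(build_lut(lambda a, b: a * b))
xor_pe = LutPE(build_lut(lambda a, b: a ^ b))
```

Results wrap to the table width, mirroring a fixed-width hardware datapath; a real LUT-based PE would of course differ in organization and precision.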
The proposed device can therefore support a variety of applications, encompassing AI algorithms as well as cybersecurity workloads such as data encryption/decryption, at unprecedentedly low energy cost. Unlike most deep learning AI accelerators, which rely on large datasets for training, this system will be capable of fast learning and will enable automation with minimal downtime. By combining energy-efficient hardware with less reliance on training data than the state of the art, the company anticipates achieving high automation accuracy at higher energy efficiency. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
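As an illustration of the few-shot idea mentioned above, a nearest-class-prototype classifier labels a new input using the mean of only a handful of examples per class. This is a generic sketch, not the company's disclosed method, and all names and data below are hypothetical.

```python
# Hypothetical few-shot classification sketch: each class is summarized
# by the mean ("prototype") of a few support samples, and a query is
# assigned to the nearest prototype by Euclidean distance.
import math

def prototype(samples):
    """Mean vector of a few support samples for one class."""
    dim = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(dim)]

def classify(x, prototypes):
    """Return the label of the prototype nearest to query x."""
    def dist(p):
        return math.sqrt(sum((xi - pi) ** 2 for xi, pi in zip(x, p)))
    return min(prototypes, key=lambda label: dist(prototypes[label]))

# Two labeled examples per class stand in for a full training set.
support = {
    "stop": [[1.0, 0.0], [0.9, 0.1]],
    "go":   [[0.0, 1.0], [0.1, 0.9]],
}
protos = {label: prototype(xs) for label, xs in support.items()}
```

Because the only "training" is computing a per-class mean, adapting to a new class needs just a few samples, which is the property the abstract highlights.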
Up to $305K
2026-09-30