I-Corps: Translation Potential of Spatial Context for Next-Generation Wearable Technologies

NSF

About This Grant

This I-Corps project focuses on the commercial potential of a new acoustic sensing solution that equips wearable audio devices with spatial hearing capabilities. The technology determines both the content of surrounding sounds and the direction they come from, enabling features such as distinguishing speakers in meetings, issuing directional safety alerts, and enhancing voice-based interactions in complex environments. The problem addressed is the absence of spatial awareness in current wearable technologies, which limits their effectiveness in dynamic, real-world settings. This limitation stems from the inability to integrate traditional multi-microphone systems into small form factors like earbuds. The solution responds to the growing societal demand for intelligent and intuitive human-technology interfaces across healthcare, mobility, workplace, and accessibility domains. By enhancing situational awareness in everyday wearables, the technology supports public safety and improves the functionality of voice-first systems for users. The technology advances the national health, prosperity, and welfare through access to intelligent tools, next-generation computing platforms, and the integration of advances in acoustics, materials, and machine learning.

This I-Corps project utilizes experiential learning coupled with a first-hand investigation of the industry ecosystem to assess the translation potential of the technology. The solution is based on the development of a microstructure-assisted acoustic front-end that captures directional information using a single microphone. The system encodes spatial cues into compact signal representations that are processed by a low-power neural network optimized for wearable devices. These spatial features are then aligned with speech embeddings and used as input to a language understanding model that operates in a cloud-supported architecture. The use of directional encoding with real-time, on-device processing minimizes latency and power consumption, making the system suitable for continuous use in daily environments. By enabling wearable devices to interpret both linguistic content and physical sound location, the technology improves user interaction, safety, and privacy. The project explores user requirements, market fit, and technical constraints through direct engagements with stakeholders, ultimately guiding the transition of this innovation from research to practical deployment. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
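The processing pipeline described above can be sketched in miniature. Everything below is an illustrative assumption, not the project's actual design: the angle-to-feature encoding, the one-layer network, and the concatenation step are hypothetical stand-ins for, respectively, the microstructure-assisted front-end, the low-power on-device neural network, and the alignment of spatial features with speech embeddings.

```python
import math

def encode_direction(angle_deg):
    """Map a source angle to a compact spatial feature vector.

    Hypothetical stand-in for the microstructure-assisted front-end,
    which imprints direction-dependent cues onto a single-microphone signal.
    """
    theta = math.radians(angle_deg)
    # Two-dimensional embedding of the arrival angle.
    return [math.cos(theta), math.sin(theta)]

def tiny_net(features, weights):
    """One ReLU layer as a stand-in for the low-power on-device network."""
    return [max(0.0, sum(w * f for w, f in zip(row, features)))
            for row in weights]

def fuse(spatial, speech_embedding):
    """Align spatial features with a speech embedding by concatenation,
    forming the input passed to the cloud-side language model."""
    return spatial + speech_embedding

# Hypothetical usage: a sound arriving from 90 degrees (the wearer's left),
# with identity weights and a placeholder 3-dimensional speech embedding.
spatial = tiny_net(encode_direction(90.0), [[1.0, 0.0], [0.0, 1.0]])
fused = fuse(spatial, [0.1, 0.2, 0.3])
```

The fused vector carries both where the sound came from and what was said, which is the core idea the abstract describes; a real system would learn both the encoding and the network rather than hand-specify them.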

Focus Areas

machine learning

Eligibility

university, nonprofit, small business

How to Apply

Funding Range

Up to $50K

Deadline

2026-08-31

Complexity
Medium