NSF
Artificial intelligence (AI) systems, especially advanced machine learning models, increasingly support critical decisions in areas such as healthcare. However, many of these AI systems operate as "black boxes," producing outcomes without clear explanations of how decisions were made. This lack of transparency can hinder trust and accountability, particularly when AI decisions significantly affect human lives. This project addresses a critical limitation of existing explainable AI techniques: their inefficiency in producing explanations quickly and reliably. By improving the efficiency of these methods, the research aims to broaden the practical use of explainable AI in real-world settings such as medical diagnosis and personalized treatment. This advancement in AI interpretability will enhance the national health, prosperity, and welfare by enabling safer and more reliable deployment of AI in critical applications.

The project tackles the computational inefficiencies of current explainable AI methods through three interconnected research objectives. First, it will accelerate computationally demanding interpretation algorithms, focusing on two widely used but computationally intensive explanation scenarios built on a game-theoretic solution concept for fairly distributing gains or costs among contributors. This acceleration will be achieved through novel randomized approximation techniques that substantially lower computational complexity. Second, the project will construct unified explainer models using manifold-based modeling methods, allowing explanations to be generated efficiently in a single inference computation and thus enabling explanations for large volumes of data simultaneously. Third, the developed methods will be validated and embedded in critical medical applications (i.e., histopathology imaging and single-cell RNA sequencing for cancer prognosis), leveraging domain-specific prior knowledge and expert feedback.
Rigorous evaluation using large-scale datasets will be conducted to demonstrate the practical value of these methods. Collectively, this work will establish an innovative theoretical and practical framework, enabling rapid and reliable explanations essential for deploying trustworthy AI systems in health informatics and beyond. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
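The abstract gives no algorithmic details, but the "solution approach for distributing gains or costs fairly" it references is the standard description of Shapley-value attribution, and the "randomized approximation techniques" it proposes are typically Monte Carlo estimators over feature orderings. The sketch below is an illustration of that general idea only, not the project's method; the names `monte_carlo_shapley` and `value_fn` are hypothetical.

```python
import random

def monte_carlo_shapley(value_fn, n_features, n_samples=2000, seed=0):
    """Estimate Shapley-style attributions by sampling random feature orderings.

    value_fn maps a frozenset of feature indices to the payoff of that
    coalition (for model explanations, e.g. the model's output when only
    those features are "switched on"). Each sampled ordering contributes
    one marginal-contribution estimate per feature; averaging over many
    orderings approximates the exact (exponential-cost) Shapley value.
    """
    rng = random.Random(seed)
    shapley = [0.0] * n_features
    features = list(range(n_features))
    for _ in range(n_samples):
        rng.shuffle(features)                     # random ordering of features
        coalition = set()
        prev_value = value_fn(frozenset(coalition))
        for f in features:
            coalition.add(f)
            cur_value = value_fn(frozenset(coalition))
            shapley[f] += cur_value - prev_value  # marginal contribution of f
            prev_value = cur_value
    return [s / n_samples for s in shapley]
```

For an additive toy game whose payoff is the sum of per-feature weights, every marginal contribution of a feature equals its weight, so the estimator recovers the weights regardless of how many orderings are sampled; for real models the sample count trades accuracy against cost, which is the efficiency gap this project targets.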
Up to $175K
2027-05-31