NSF AI Disclosure Required
NSF requires disclosure of AI tool usage in proposal preparation. Ensure you disclose the use of FindGrants' AI drafting in your application.
CAREER: Opening the Black Box: Advancing Interpretable Machine Learning for Computer Vision
NSF
About This Grant
Machine learning-based artificial intelligence (AI) is widely used in computer vision, but there are growing concerns regarding the responsible use of AI. One particular concern relates to the "black-box" nature of state-of-the-art AI models: these models are incredibly powerful, but they cannot be easily interpreted by humans. This lack of interpretability challenges the responsible use of AI, because without it human users cannot understand model decisions or correct model mistakes, which can have dire consequences in critical, high-stakes settings. The goal of this project is to improve the interpretability of machine learning models in the context of computer vision. Specifically, the project will develop innovative technologies that allow machine learning-based computer vision models to explain their reasoning processes to human users, and that allow human users to interact with those models to correct their mistakes. By fostering a two-way dialogue between AI and human users, the developed technologies will not only yield computer vision models that are more interpretable to human users but will also empower users to make more informed decisions. At the same time, these technologies will facilitate continuous improvement of computer vision models based on user feedback, leading to more accurate models with refined reasoning processes. This project will advance interpretable machine learning by creating new interpretable models and techniques for computer vision.
It will lead to:
1) multimodal interpretable models that unify various forms of interpretability, such as prototype-based interpretability and natural language explanations;
2) interpretable generative models that can explain the image generation process to human users;
3) interpretable reinforcement learning techniques for training interpretable policies from their interactions with the environment; and
4) human-AI interaction techniques that allow human users to interact with interpretable models to correct mistakes in the models' reasoning and to improve the quality of model predictions and explanations.
The research effort will push the boundaries of interpretable machine learning by unifying various forms of interpretability, by extending interpretable machine learning to generative modeling and reinforcement learning, and by bringing human-AI interaction to a new level. The project also includes an integrated education component that will produce lesson plans bringing interpretable machine learning, as well as the ethical and responsible use of AI, into high school classrooms. This education effort will bridge the gap between cutting-edge AI research and K-12 classrooms and inspire the next generation of AI scientists. This project is jointly funded by the Robust Intelligence Program and the Established Program to Stimulate Competitive Research (EPSCoR) Program. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
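To make the "prototype-based interpretability" named above concrete, the sketch below illustrates the general idea behind "this looks like that" models (e.g., ProtoPNet-style classifiers): an input's features are compared against learned prototype vectors, and the resulting per-prototype similarities both drive the class scores and serve as the explanation. All names, shapes, and the log-similarity activation here are illustrative assumptions, not this project's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for learned quantities (random here; in a real
# model these are trained): prototype vectors and prototype->class weights.
n_prototypes, n_classes, dim = 6, 3, 8
prototypes = rng.normal(size=(n_prototypes, dim))
class_weights = rng.normal(size=(n_classes, n_prototypes))

def explainable_predict(feature):
    """Score classes via similarity to prototypes; the per-prototype
    similarities double as the model's explanation ("this looks like
    prototype k")."""
    dists = np.linalg.norm(prototypes - feature, axis=1)  # distance to each prototype
    sims = np.log((dists**2 + 1.0) / (dists**2 + 1e-4))   # higher = closer to prototype
    logits = class_weights @ sims                          # classes scored from similarities
    return int(np.argmax(logits)), sims

feature = rng.normal(size=dim)  # stand-in for an image's feature embedding
pred, sims = explainable_predict(feature)
print("predicted class:", pred)
print("most similar prototype:", int(np.argmax(sims)))
```

Because the class score is a transparent weighted sum of prototype similarities, a user can inspect which prototypes drove a prediction, and (as in aim 4 above) correcting a mistake can amount to down-weighting or removing a misleading prototype.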
Focus Areas
Eligibility
How to Apply
Up to $584K
2030-06-30
One-time $749 fee · Includes AI drafting + templates + PDF export