Collaborative Research: III: Small: An Information-Theoretic Framework for Explainable and Explanation-Assisted Graph Learning

NSF

About This Grant

Graphs are powerful tools for representing relationships in complex systems, from social networks to weather monitoring stations. Graph Neural Networks (GNNs) have emerged as effective methods for analyzing these interconnected systems, but their "black box" nature poses significant challenges in critical applications such as environmental monitoring, healthcare, and finance. This project develops a comprehensive framework for making GNN predictions explainable and trustworthy. The research addresses the urgent need for artificial intelligence systems that can not only make accurate predictions but also explain their reasoning in ways that domain experts can understand and verify. For instance, in South Florida's water management network, where monitoring stations form a graph connected by hydrological pathways, emergency managers need to understand which stations and interconnections are most influential in flood predictions. This capability is essential for building trust in these systems and ensuring their responsible deployment in applications that affect public safety and welfare. The project will train students in interdisciplinary research combining machine learning, information theory, and practical applications, while developing educational materials that bridge theoretical foundations with real-world implementations.

This project establishes a unified framework based on information theory for explainable GNNs through two complementary research thrusts. The first thrust develops rigorous mathematical foundations for quantifying explainability in graph learning, including necessary and sufficient conditions for classifier explainability, methods to address out-of-distribution challenges, and ways to demonstrate how accurate the findings are. The second thrust translates these theoretical insights into practical architectures and algorithms, including computationally efficient explainers, generative models for robust explainers, and co-design frameworks that balance prediction accuracy with explainability. The research introduces novel concepts such as nonverbal signatures for characterizing explanation patterns and explanation-assisted learning mechanisms that leverage extracted explanations to improve model performance. Extensive evaluations will be conducted on benchmark datasets and on specially curated weather forecasting datasets from South Florida's water management systems. The project advances the state of the art by providing both theoretical rigor in quantifying explainability and practical solutions for deploying trustworthy GNN systems in critical applications. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Focus Areas

machine learning · education · social science

Eligibility

university · nonprofit · small business

How to Apply

Funding Range

Up to $333K

Deadline

2028-09-30

Complexity
Medium
