
CRII: OAC: An Evidence-Aided Symbolic Reasoning Framework for Trustworthy Interdependent Multi-Modal Multi-Agent Machine Learning Tasks

NSF


About This Grant

The rapid expansion of artificial intelligence (AI) services is leading to a new era of agent-based machine learning (ML), in which decisions are made by integrating multiple distinct ML tasks. This raises critical trust issues in AI outcomes, since each ML agent operates independently and behaves uniquely due to multimodality and uncertainty. Establishing trust among distinct ML agents is especially challenging because there are no established metrics or methods to justify the activities of each independent ML agent during training and execution. This research project aims to build a trustworthy AI research infrastructure for solving interdisciplinary multi-modal, multi-agent ML tasks by developing an evidence-based trust metrics framework. The framework establishes foundational engineering principles essential for securing AI systems in domains such as medicine, transportation, the military, and critical infrastructure, and it supports trustworthy AI development compliance so that AI systems can be built safely and securely in accordance with National Institute of Standards and Technology (NIST) guidance. The research provides hands-on learning and research opportunities for undergraduate and graduate students in AI risk, machine learning, trustworthy multi-modal and multi-agent AI system design, and data science. The project's broader impacts include innovation and research in trustworthy AI, experiential learning curriculum development, and diverse workforce development for safe and secure AI systems, cybersecurity, the AI workforce, and interdisciplinary AI applications. The research outcomes also yield a trustworthy AI educational platform for mentoring and training high school and K-12 students in multi-modal and multi-agent AI system design.
This project develops a novel AI cyberinfrastructure testbed and an evidence-aided symbolic reasoning framework to facilitate trustworthy ML development readiness for solving interdisciplinary AI challenges. A fundamental goal is to investigate and develop trust metrics as a measure of symbolic reasoning for interdependent ML agents. These metrics must be capable of combining AI decisions, capturing uncertainty across all available evidence, combining rules, and supporting multi-criteria decision-making. To that end, this research 1) investigates evidence theory to develop a symbolic reasoning framework for defining trust metrics over the ML pipeline; 2) develops a novel reasoning algorithm for monitoring and incident response, risk severity analysis, and reliable ML agents designed for interdisciplinary AI applications; 3) creates a novel AI cyberinfrastructure testbed to develop, realize, and control more than 100,000 heterogeneous and distributed ML agents; and 4) implements two multi-modal multi-agent ML applications, in the domains of cyber readiness and medical use, to validate the effectiveness of the framework. The successful completion of the evidence-aided symbolic reasoning AI development framework establishes trust among ML agents while providing cyberinfrastructure for solving interdisciplinary multi-modal multi-agent ML tasks. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
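The evidence theory named in aim (1) is commonly instantiated as Dempster-Shafer theory, where each agent assigns belief mass to sets of hypotheses (including the full frame, representing uncertainty) and independent sources are fused with Dempster's rule of combination. The abstract does not specify the fusion rule, so the following is only a minimal illustrative sketch; the agent names and mass values are hypothetical, not taken from the project.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two mass functions (dicts mapping frozenset hypotheses to
    belief mass) using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set is conflict
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize remaining mass by the non-conflicting fraction
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two hypothetical ML agents assessing whether an input is benign or malicious;
# mass on the full frame {"benign", "malicious"} encodes each agent's uncertainty.
m_agent1 = {frozenset({"benign"}): 0.6,
            frozenset({"benign", "malicious"}): 0.4}
m_agent2 = {frozenset({"benign"}): 0.7,
            frozenset({"malicious"}): 0.1,
            frozenset({"benign", "malicious"}): 0.2}

fused = dempster_combine(m_agent1, m_agent2)
# fused[{"benign"}] ≈ 0.872 — agreement between agents sharpens the belief
```

A trust metric over an ML pipeline could then be read off the fused masses, e.g. the belief committed to a specific hypothesis versus the residual mass left on the full frame.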

Focus Areas

machine learning · engineering · education

Eligibility

university · nonprofit · small business

How to Apply

Funding Range

Up to $175K

Deadline

2026-12-31

Complexity
Medium
