NSF
Pretrained generative language models (LMs) have become popular labor-saving tools in domains like social media, education, business, and medicine. However, while these models have proven useful for generative tasks like programming and writing, it has been difficult to use them effectively as decision-making assistants for tasks like application review or grading. The key problem is verification: these models are opaque and prone to "hallucination" (i.e., making up content), logical mistakes, and other errors, making it hard to verify their advice. Existing research in the field of explainable artificial intelligence (XAI) has sought to solve the problem by exposing the underlying model logic for human scrutiny, but has struggled to demonstrate improvements in decision-making performance. This project will develop a new approach to the verification problem in LM-supported decision-making by focusing on settings where existing guidelines (such as grading rubrics, job listings, and medical guidelines) describe how decisions should be made. The team will develop methods for using LMs to apply existing written guidelines to decision-making tasks, and to present explanations of recommended decisions that can be verified in terms of those guidelines. The idea is that this approach will lead to more useful explanations and better decisions than existing XAI approaches in practical domains, advancing the science of explainable AI while having beneficial impacts on many societal problems.

The project will first develop datasets and annotation tools for guideline-driven decision-making, collecting information on how humans draw links between guideline and input task documents and then compose those links into document-level decisions. The dataset will focus on tasks such as short essay grading (with respect to a rubric) and resume screening (with respect to a job listing).
The project team will then develop prompting and fine-tuning methods for LMs to identify, calibrate, and compose task-guideline links into verifiable decision recommendations. In parallel, the team will conduct experiments with people to study both human-human and human-model teaming in these tasks. Ultimately, this project will work toward safe, reliable, and well-aligned LM decision support by constraining the models to be "connective tissue" between humans and authoritative sources in the form of existing guidelines, rather than authoritative sources in themselves. This project is jointly funded by Human-Centered Computing and the Established Program to Stimulate Competitive Research (EPSCoR). This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Up to $405K
2030-05-31
New York Systems Change and Inclusive Opportunities Network (NY SCION)
Labor — up to $310000020251M
Trade Adjustment Assistance (TAA)
Labor — up to $2779372424.6M
Occupational Safety & Health - Training & Education (OSH T&E)
Labor — up to $590000020.3M
The Charter School Revolving Loan Fund Program
State Treasurer's Office — up to $100000.3M
CEFA Bond Financing Program
State Treasurer's Office — up to $15000M