FAI: Using Explainable AI to Increase Transparency in the Juvenile Justice System’s Use of Risk Scores
NSF
About This Grant
Throughout the United States, juvenile justice systems use juvenile risk and need-assessment (JRNA) scores to estimate the likelihood that a youth will commit another offense in the future. Juvenile justice practitioners then use this risk assessment score to inform how to intervene with a youth to prevent reoffending (e.g., referring the youth to a community-based program vs. placing the youth in a juvenile correctional center). Unfortunately, most risk assessment systems lack transparency, and the reasons why a youth received a particular score are often unclear. Moreover, how these scores factor into the decision-making process is sometimes not well understood by the families and youth affected by those decisions. This opacity is problematic because it can hinder individuals' buy-in to the intervention recommended by the risk assessment and can mask potential errors in the scores (e.g., when a youth's risk score is driven by a single item on the assessment). To address this issue, project researchers will develop automated, computer-generated explanations that describe how these risk scores were produced. Investigators will then test whether these better-explained risk scores help youth and juvenile justice decision makers understand the score a youth is given. In addition, the research team will investigate whether the risk scores work equally well for different groups of youth (for example, equally well for boys and for girls) and will identify potential errors in how they are used, in an effort to understand how equitable the decision-making process is for the range of youth involved in juvenile justice. The project is embedded within the juvenile justice system and aims to evaluate, using actual juvenile justice system data, how real stakeholders understand the way risk scores are generated and used within that system.
More specifically, this project aims to understand how risk assessment scores are currently used in the juvenile justice system and how interpretable machine learning methods can make black-box risk assessment algorithms more transparent (without reverse engineering them, given that most assessments are proprietary). The research team will study how juvenile justice risk scores are used through the analysis of quantitative data from the juvenile justice system (which details the risk scores and justice system decisions) and through qualitative data collected via key informant interviews. In the second phase of the work, the team will train various interpretable machine learning algorithms to predict youths' risk scores (which are currently generated by a proprietary, black-box algorithm). The team will also predict sentencing dispositions for youth based on these risk scores and other pertinent data collected by the juvenile justice system. The project team will then test and measure how understandable the automated explanations derived from these machine learning methods are to youth, families, judges, and probation officers. The goal of this step is to identify algorithms that are highly predictive of the risk scores and dispositions, respectively, and then to identify methods that provide clear, human-interpretable explanations of the risk and dispositions to key stakeholders throughout the process. This step will also allow researchers to optimize methods for explaining outcomes, for example by finding that one method is more understandable for explaining risk scores to youth while another is more understandable for their families or probation officers.
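The second phase described above, training an interpretable model to predict scores produced by a proprietary, black-box algorithm, is often done by fitting a surrogate model and reading explanations off its structure. The sketch below illustrates the idea under stated assumptions: the assessment items, weights, and data are entirely synthetic (not the actual JRNA instrument), and the linear surrogate is just one of several interpretable model families the project might use.

```python
import numpy as np

# Illustrative sketch: fit an interpretable linear surrogate to scores
# from a (simulated) black-box risk assessment, then generate a simple
# per-youth explanation. Item names and weights are invented examples.

rng = np.random.default_rng(0)
ITEMS = ["prior_referrals", "school_engagement", "peer_influence", "family_support"]

# Synthetic assessment responses: 200 youths x 4 items, each scored 0-3.
X = rng.integers(0, 4, size=(200, len(ITEMS))).astype(float)

# Stand-in for the proprietary score (unknown in practice; simulated here).
true_w = np.array([2.0, -1.5, 1.0, -0.5])
y = X @ true_w + 10 + rng.normal(0, 0.1, size=200)

# Fit the surrogate by ordinary least squares (intercept via a ones column).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
weights, intercept = coef[:-1], coef[-1]

def explain(x):
    """Rank items by the size of their contribution to this youth's score."""
    contrib = weights * x
    order = np.argsort(-np.abs(contrib))
    return [(ITEMS[i], round(float(contrib[i]), 2)) for i in order]

youth = X[0]
print("surrogate score:", round(float(weights @ youth + intercept), 2))
for item, c in explain(youth):
    print(f"  {item}: {c:+.2f}")
```

Because the surrogate is linear, each item's contribution (weight times response) is directly inspectable, which is the property that makes a plain-language explanation for youth, families, or probation officers possible; tree-based or rule-based surrogates would offer a different but analogous readout.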
Finally, the project team will also explore the potential for error throughout the process (from risk scoring to the use of the scores) and ways in which these interpretable algorithms can be used to help identify, quantify and mitigate challenges. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
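One concrete way to explore error of the kind described above is to check whether a surrogate reproduces the black-box scores equally faithfully across subgroups, since a larger surrogate error for one group means that group's scores are harder to explain. This is a minimal sketch with entirely synthetic data; the group labels, noise levels, and flagging threshold are illustrative assumptions, not project results.

```python
import numpy as np

# Illustrative sketch: audit whether a surrogate model reconstructs the
# black-box scores equally well for two subgroups of youth. All data
# here are synthetic; the 0.5 threshold is an arbitrary example.

rng = np.random.default_rng(1)
n = 300
group = rng.choice(["boys", "girls"], size=n)
blackbox = rng.uniform(0, 30, size=n)              # scores from the proprietary tool
surrogate = blackbox + rng.normal(0, 1.0, size=n)  # surrogate's reconstruction
# Simulate a surrogate that fits one group's scores less faithfully.
surrogate[group == "girls"] += rng.normal(0, 2.0, size=(group == "girls").sum())

def fidelity_by_group(scores, preds, groups):
    """Mean absolute surrogate error, computed separately per subgroup."""
    return {g: float(np.mean(np.abs(scores[groups == g] - preds[groups == g])))
            for g in np.unique(groups)}

mae = fidelity_by_group(blackbox, surrogate, group)
gap = abs(mae["boys"] - mae["girls"])
print(mae, "gap:", round(gap, 2))
if gap > 0.5:  # illustrative threshold for follow-up review
    print("flag: surrogate explains one group's scores less faithfully")
```

In practice such a fidelity gap would prompt qualitative follow-up (e.g., key informant interviews) rather than an automated judgment, consistent with the project's mixed-methods design.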
Up to $272K
2027-03-31