NSF
This project examines the effect of content generated by large language models (LLMs) on people whose jobs involve assessing information online. Though LLMs have many potential uses, there are no guarantees that their outputs are correct: they sometimes "hallucinate" false text and can also be tricked into doing so by cyber-attackers. This poses risks to online information exchange. The project's goal is to model the risks that LLMs pose to effective online discourse and to develop tools that help information professionals assess LLM-generated online content. By studying a variety of professional roles that interact with many different kinds of content, the research will create generalizable models of the information risks posed by LLMs, along with methods and tools for creating community-specific guides for assessing and managing those risks. These tools, combined with planned educational and outreach activities, will help information professionals do more informed, effective work and benefit the roles and communities they serve.

The research plan begins with activities aimed at understanding the challenges that arise for information professionals as LLM-generated content becomes more common. Through interviews, co-design activities, and surveys across a variety of information-focused professions, the research team will develop an epistemological framework that characterizes the information risks posed by the use of LLMs. This framework will then ground both proactive and reactive approaches to assessing and detecting risky LLM use. To support proactive planning, the researchers will develop threat-modeling and red-teaming techniques that let individual information professionals assess the risks arising in their own jobs and communities. To support reactive detection, the team will create customizable, mixed-initiative intelligent tools for identifying potentially risky LLM-generated content, tuned to each community's particular context and epistemic practices.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Up to $371K
2030-05-31