CAREER: Calibrating User Trust in GenAI Chatbots: Investigating the Effects of Competing Cues and Interactivity Strategies to Mitigate Unfounded Cognitive Heuristics
NSF
About This Grant
Artificial intelligence (AI) that produces new content - text, images, or videos - in response to user input is known as generative AI. The output of these systems is designed to look and feel like human communication. However, generative AI systems can lead users to trust them too much, causing users to share information that is inappropriate for the circumstances. For example, conversational AI systems, or chatbots, talk like humans and may encourage users to trust them without enough objective information to support that trust. This affects decision-making and can lead to unsafe disclosures of personal information. The goal of this research is to keep quick, intuitive user judgments from unduly shaping decisions and to help design AI systems that are safer and more responsible. The investigators will design, build, and test strategies that counter these quick intuitive judgments, drawing on theories of communication and Human-AI interaction. The results of this study will be shared in a toolbox to help others design AI chatbots more ethically and responsibly. The toolbox will also help teach people about AI and chatbots, equipping them for the workforce of the future.

This CAREER project investigates strategies to ensure that user trust in Generative AI (GenAI) chatbots is warranted. Warranted trust involves assessments based on the actual capacities of the AI chatbot, rather than reliance on unfounded heuristics (cognitive rules of thumb). This will be achieved in two phases. In Phase 1, focus groups and experiments will be conducted to identify (Objective 1) and empirically test (Objective 2) the cognitive heuristics, and the cues that invoke them, that guide users' interactions with GenAI chatbots. In Phase 2, a series of experiments will be conducted to test strategies to mitigate unfounded heuristics via competing cues (Objective 3) and interactivity strategies (Objective 4).
The project will advance knowledge in multiple domains, including secure and trustworthy computing and Human-AI interaction. It is innovative in bringing communication and media theory to research on Human-AI interaction and warranted trust, also known as trust calibration. Findings will be incorporated into a chatbot toolbox to help AI creators develop strategies for ethical and responsible designs that accurately communicate AI attributes to users. The toolbox will also be distributed to non-STEM students, advancing AI literacy and contributing to an informed workforce. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Up to $335K
2030-05-31