CRII: Towards Real-World Robotic Manipulation: Learning Abstract State and Action Representations from Visual and Execution Data

NSF

About This Grant

Robotic systems have already found widespread adoption in controlled environments such as manufacturing and logistics, where tasks follow well-defined rules. However, they struggle in unstructured, dynamic, real-world settings that demand adaptability and autonomy, challenges that humans handle with ease. Humans excel at abstract reasoning, which allows them to perform complex tasks without constant attention to low-level details. This research project aims to equip robots with similar capabilities, enabling them to learn abstract representations of their environment and actions through experience. By enhancing the ability of robotic agents to plan and execute tasks in real-world settings, this research could drive advancements in automation, assistive care, and disaster response. Additionally, the project intends to contribute to STEM education through outreach programs for high school students and research opportunities for undergraduate students. The findings will be disseminated through leading robotics conferences and peer-reviewed journals, ensuring broad visibility within the research community. Moreover, the results will inform and enhance robotics courses, and all software and datasets produced will be openly shared, fostering collaboration and further advancements in the field.

This project aims to advance robotic manipulation in unstructured environments by enabling robots to autonomously learn abstract representations of states and actions from sensory and execution data. Existing planning methods have succeeded in controlled settings by using task planners to reason abstractly about complex tasks, providing reliability, explainability, and transparency. However, they rely on human-specified representations, which are impractical in real-world scenarios with unknown objects and noisy sensor data. In contrast, purely data-driven approaches can learn directly from raw sensor data, reducing the need for manual specification, but they struggle with generalization and lack interpretability, limiting their deployment in dynamic environments. Research funded by this award seeks to address these issues by developing a framework that learns abstract state and action representations from experience and seamlessly incorporates them into existing manipulation planners. Research activities will focus on designing methods for autonomous real-world data collection, designing algorithms to extract structured representations, and adapting decision-making strategies to leverage learned abstractions. Experimental validation will be conducted on real-world robotic platforms to assess the effectiveness of the learned abstractions. If successful, this research will enhance robotic adaptability, transparency, and efficiency, significantly expanding the applicability of autonomous manipulation in open-world environments such as household tasks, assistive care, and construction. Robots capable of abstract reasoning will be better equipped to handle long-horizon tasks like rearrangement, packaging, and sorting. The results will be disseminated through leading robotics conferences and peer-reviewed journals, with software and datasets openly shared to support further research and educational initiatives. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Focus Areas

Education

Eligibility

University, nonprofit, small business

How to Apply

Funding Range

Up to $175K

Deadline

2027-07-31

Complexity
Medium
