
ERI: Autonomous Digital Twinning of Manufacturing Systems Through Deep Learning-Enabled Computer Vision and Unmanned Aerial Vehicles

NSF


About This Grant

As manufacturing becomes increasingly digitized through cyber-physical systems and widespread industrial sensor deployment, digital twins (DTs) have emerged as critical tools for improving productivity, decision-making, and system optimization. These virtual representations of manufacturing assets enable real-time monitoring, predictive maintenance, and process planning. However, constructing a DT of a manufacturing system remains a labor-intensive challenge, often requiring manual identification and characterization of production assets. This inefficiency can lead to missing or incomplete representations of key components of the manufacturing system, limiting interoperability and reducing the effectiveness of DT-driven insights. To address this challenge, this Engineering Research Initiation (ERI) project supports research that aims to automate the creation of shop-floor DTs using computer vision and deep learning, significantly reducing the required human effort while improving DT accuracy. The research aims to contribute to advances in smart manufacturing and autonomous systems, with potential applications beyond manufacturing in city planning, agriculture, and aerospace. The project also aims to provide students with hands-on experience in AI-driven manufacturing research and to support workforce development in digital twin technologies through new educational programs and outreach initiatives.

This research intends to develop and evaluate a novel framework for fully automated DT generation through three key innovations. First, a dataset of manufacturing shop-floor videos will be created and labeled with ISO 23247-compliant annotations of observable manufacturing elements (OMEs), serving as the foundation for training autonomous DT instantiation methods. Next, a neural radiance field (NeRF) model will be developed and trained on this dataset to reconstruct the shop environment as a high-fidelity 3D model.
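As a rough illustration of what a per-element annotation record in such a dataset might look like, the sketch below defines a hypothetical Python schema for one observable manufacturing element. The field names and structure are illustrative assumptions for demonstration only; ISO 23247 describes OME categories (e.g. equipment, material, personnel) but this is not its normative data model.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class OMEAnnotation:
    """Hypothetical annotation for one observable manufacturing element (OME).

    Field names are illustrative assumptions, not the ISO 23247 schema.
    """
    element_id: str    # unique ID within the shop-floor video dataset
    category: str      # e.g. "equipment", "material", "personnel"
    label: str         # human-readable class, e.g. "CNC mill"
    frame_range: tuple # (first_frame, last_frame) where the OME is visible
    bbox_3d: tuple     # axis-aligned box (x, y, z, dx, dy, dz) in shop coords

def to_json(ann: OMEAnnotation) -> str:
    """Serialize one annotation for storage alongside the video dataset."""
    return json.dumps(asdict(ann))

ann = OMEAnnotation("ome-0042", "equipment", "CNC mill",
                    (120, 960), (3.5, 1.0, 0.0, 2.0, 2.0, 1.8))
print(to_json(ann))
```

A flat, serializable record like this is convenient because the same labels can supervise both the 2D video frames and, after reconstruction, the 3D classification stage.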
A three-dimensional convolutional neural network will then be trained to automatically identify and classify OMEs within the NeRF-generated space, eliminating the need for manual asset labeling. Finally, to further automate data collection, commercially available unmanned aerial vehicles (UAVs) will be integrated with confidence-aware pathing algorithms to optimize video capture and enhance DT fidelity. The project also intends to contribute a first-of-its-kind dataset of manufacturing shop-floor imagery, enabling future research in computer vision and digital twin development. By combining NeRF, deep learning, and UAV automation, this work aims to significantly advance the scalability, accuracy, and interoperability of manufacturing DTs while laying the groundwork for broader adoption in smart factories and beyond.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
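The "confidence-aware pathing" idea can be caricatured as a greedy next-best-view loop: the UAV repeatedly flies to the candidate viewpoint where the reconstruction is least confident, trading informativeness against travel cost. The sketch below is a minimal stand-in under stated assumptions (a precomputed per-viewpoint confidence map, a hand-picked scoring rule, no obstacles, dynamics, or battery model); it is not the project's actual algorithm.

```python
import math

def next_best_view(position, candidates, confidence, travel_weight=0.1):
    """Pick the viewpoint that best trades off low reconstruction confidence
    (more informative to capture) against travel distance from `position`.

    `confidence` maps viewpoint -> value in [0, 1]; lower means the digital
    twin is less certain there. The scoring rule is an illustrative assumption.
    """
    def score(v):
        return confidence[v] + travel_weight * math.dist(position, v)  # lower is better
    return min(candidates, key=score)

def plan_path(start, candidates, confidence, n_views=3):
    """Greedy confidence-aware tour visiting `n_views` viewpoints."""
    path, pos, remaining = [], start, set(candidates)
    for _ in range(min(n_views, len(remaining))):
        v = next_best_view(pos, remaining, confidence)
        path.append(v)
        remaining.discard(v)
        pos = v
    return path

# Toy 2D example: four candidate viewpoints with reconstruction confidences.
conf = {(0.0, 0.0): 0.9, (1.0, 0.0): 0.2, (0.0, 1.0): 0.5, (1.0, 1.0): 0.1}
print(plan_path((0.0, 0.0), list(conf), conf))  # visits lowest-confidence views first
```

In a full system the confidence map would itself come from the NeRF/classifier stage (e.g. per-region prediction uncertainty), closing the loop between reconstruction quality and where the UAV flies next.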

Focus Areas

engineering, education

Eligibility

university, nonprofit, small business


Funding Range

Up to $199K

Deadline

2027-06-30

Complexity

Medium
