UMIACS Team Part of $2.8M DARPA Award to Advance Autonomous Robotics
As autonomous robotic systems become ever more integrated into our daily lives, whether flipping burgers, navigating vehicles, or sorting parcels, the need to equip these AI-driven platforms with human-like cognitive reasoning is growing as well.
Researchers in the University of Maryland Institute for Advanced Computer Studies (UMIACS) are exploring this topic, leading a multi-institutional effort aimed at enhancing how robots reason, plan and engage in complex, real-world scenarios.
The UMIACS team, made up of Furong Huang, an associate professor of computer science, and Tom Goldstein, a professor of computer science, is collaborating on the project with researchers at the University of Chicago and the University of Texas at Austin.
Ultimately, the researchers say, their work will lead to robust autonomous robotic systems based on symbolic reasoning, in which symbols and abstract concepts are used to reason about and solve problems, allowing the machines' AI neural networks to mimic human thought processes.
The research is supported by a $2.8 million grant from the Defense Advanced Research Projects Agency (DARPA) as part of the agency’s Transfer from Imprecise and Abstract Models to Autonomous Technologies program, designed to accelerate the transition of robotic systems from virtual environments to real-world applications.
“Our project aims to transcend the limitations of current robotic systems, which are often siloed and task-specific,” Huang says. “Instead, we envision developing a versatile robotics foundation model—a scalable brain capable of powering a diverse range of robots—from household assistants and industrial machines to next-generation autonomous vehicles and medical devices.”
Goldstein says the DARPA-funded project emphasizes the need for autonomous robotic systems that not only understand their surroundings in theory but can interact with them effectively and safely.
“Currently, while foundation models can interpret static scenes and images well, they struggle with the dynamic aspects of real-world environments,” he says.
The researchers plan to develop robust perceptual systems that can transform sensor data into clear representations of the environment, such as scene graphs and knowledge graphs.
These computational methods are intended to mimic humans’ ability to separate the tasks of sensing and reasoning; people sense objects in their environment and extract their properties, and then plan actions using logical reasoning over the discrete objects they have identified.
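To make that split concrete, here is a minimal sketch in Python. It is purely illustrative and not the team's code; every class and function name (SceneObject, SceneGraph, perceive, plan_pickup) is hypothetical. It shows a perception step emitting a discrete scene graph, and a planner that reasons symbolically over the objects and relations in that graph without touching raw sensor data.

```python
# Illustrative sketch of the sense-then-reason split described above.
# All names are hypothetical, not the project's actual code.

from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    properties: set[str] = field(default_factory=set)

@dataclass
class SceneGraph:
    # Discrete objects plus (subject, relation, object) triples,
    # in the spirit of a scene graph or knowledge graph.
    objects: dict[str, SceneObject] = field(default_factory=dict)
    relations: set[tuple[str, str, str]] = field(default_factory=set)

def perceive() -> SceneGraph:
    """Stand-in for a perception model: turns raw sensor data into symbols."""
    g = SceneGraph()
    g.objects["cup"] = SceneObject("cup", {"graspable"})
    g.objects["table"] = SceneObject("table", {"flat"})
    g.relations.add(("cup", "on", "table"))
    return g

def plan_pickup(graph: SceneGraph, target: str) -> list[str]:
    """Logical reasoning over the discrete objects, decoupled from sensing."""
    obj = graph.objects.get(target)
    if obj is None or "graspable" not in obj.properties:
        return []  # no valid plan for this target
    steps = [f"move_to({target})"]
    # If the target rests on a surface, approach from above before grasping.
    if any(rel == "on" and subj == target for subj, rel, _ in graph.relations):
        steps.append(f"approach_from_above({target})")
    steps.append(f"grasp({target})")
    return steps

if __name__ == "__main__":
    scene = perceive()                 # sensing: raw data -> symbols
    print(plan_pickup(scene, "cup"))   # reasoning: symbols -> action plan
```

The point of the separation is that the planner above never sees pixels or point clouds; once perception has produced discrete objects and relations, the same reasoning logic can be reused across sensors and environments.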
Gathering diverse datasets is essential for training and testing these advanced autonomous agents, the researchers say, ensuring that the robotic systems can operate effectively across a variety of scenarios, including unfamiliar terrain and adverse weather conditions.
Robust computational resources and dedicated lab space are essential for training these complex symbolic AI models and for testing the robotic systems being developed, Goldstein says. UMIACS' resources will be instrumental to the project's success, he says, providing the infrastructure needed to support the advanced computing required for large, complex neural networks.
Zikui Cai, a UMD postdoctoral researcher with expertise in computer vision, computational photography, and foundation model training, will assist Huang and Goldstein on the project.
The team looks forward to sharing its work when it is complete, and expects to publish its research results and release the software it develops as open source, allowing others to contribute to advancing the technology.
“We’re developing foundation models that enable robots to learn, adapt, and perform diverse tasks, transforming the way robotics integrates with everyday technology,” says Huang.
—Story by Melissa Brachfeld, UMIACS communications group