How to Make Self-Driving Vehicles Smarter, Bolder

$7.5M DOD grant for Project NeuroAutonomy will develop bioinspired control systems to enable vehicles to self-navigate land, air, and sea


Today’s self-driving cars are built for only a narrow slice of the world. “These vehicles are designed for very structured environments, within roads and lanes,” says Yannis Paschalidis, a Boston University engineer who uses data science and machine learning to develop new software algorithms and control systems. “They are only programmed to recognize a small number of different types of objects.”

Paschalidis, a BU professor of biomedical, systems, and electrical and computer engineering, has a vision for self-driving vehicles that would launch them from the mundane world of suburban commuting to the most dynamic (and sometimes harsh) places around the globe. “We are interested in developing fundamental principles that can be applied to autonomous vehicles capable of navigating themselves on the ground, underwater, and in the air,” he says.

To make that possible, the Department of Defense has awarded $7.5 million in Multidisciplinary University Research Initiative (MURI) funding for Paschalidis to team up with other scientists from Boston University, the Massachusetts Institute of Technology, and Australian research universities.

“Our team spans two continents and brings together some of the preeminent experts in neuroscience—with emphasis on localization, mapping, and navigation functions—with experts in robotics, computer vision, control systems, and algorithms,” says Paschalidis, the team’s principal investigator. “We’re essentially going to use insights from neuroscience to better organize and control engineered systems.”

Their goal? To investigate how the brains of living organisms—namely ants, rodents, and humans—process their spatial environments to derive meaningful navigation information. The international research team calls their efforts Project NeuroAutonomy.

“The research that we’ll be doing under this MURI is focused on the most interesting control system out there—the brain and its coordination of the neurosensory and neuromuscular systems in the body,” says co–principal investigator John Baillieul, a BU Distinguished Professor of mechanical, systems, and electrical and computer engineering.

The Australian collaborators, particularly insect navigation expert Ken Cheng of Macquarie University, will draw insight from the way that ants use visual cues to move around. In the United States, BU collaborators will lead teams that examine animal and human spatial navigation.

“This project offers the potential for some major theoretical breakthroughs for understanding cognition,” says co–principal investigator Michael Hasselmo, director of BU’s Center for Systems Neuroscience and a professor of psychological and brain sciences.

Hasselmo will lead the team’s investigation of how rodents navigate their environment. He says although this project is focusing on navigation, elements of the algorithm the team plans to develop could eventually be applied “to a broad range of different types of intelligent behavior.”

To develop the algorithm, the team will zero in on three big gaps between the navigational prowess of biological organisms and that of current autonomous vehicle technology, says co–principal investigator Chantal Stern, director of BU’s Cognitive Neuroimaging Center. Stern will lead team members in using functional MRI to investigate how humans develop a map of their environment and detect changing elements of their surroundings.

“An example that comes directly from robotics is known as the loop closure problem,” Stern says. “When you wander around in a circle through your house and come back to the kitchen, you know you are back in the kitchen; you have mapped your environment and recognize that you have returned to a location you were in before. In robotics, that’s a difficult problem for an autonomous system. An autonomous system will keep mapping a location it returns to, in the same way a Roomba vacuum keeps cleaning the same spot when it comes back around to the same location.”

Having a fully autonomous vehicle accurately map an area of land or water could be useful for military operations, allowing foreign landscapes to be charted without human assistance and without putting any lives at risk.

“If you want to [use autonomous vehicles to] develop an accurate map of an area, then you don’t want [the vehicle] to overwrite the map every time [it returns] to the same place,” says Stern, a BU professor of psychological and brain sciences.
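To make the Roomba analogy concrete, here is a minimal Python sketch of place recognition for loop closure. It is an illustration only, not the team’s algorithm: the ToyMapper class, the stand-in descriptors, and the similarity threshold are all invented for the example. The point is simply that a revisited place should match an existing map node rather than spawn a duplicate.

```python
import numpy as np

class ToyMapper:
    """Toy place-recognition mapper: reuse an existing map node when a
    revisited place matches a stored keyframe, instead of duplicating it."""

    def __init__(self, match_threshold=0.95):
        self.keyframes = []              # list of (node_id, unit descriptor)
        self.match_threshold = match_threshold

    def observe(self, descriptor):
        """Return (node_id, loop_closed) for one observation descriptor."""
        d = np.asarray(descriptor, dtype=float)
        d = d / np.linalg.norm(d)
        for node_id, stored in self.keyframes:
            # Cosine similarity between unit vectors is just a dot product.
            if float(stored @ d) > self.match_threshold:
                return node_id, True     # loop closure: a known place
        node_id = len(self.keyframes)
        self.keyframes.append((node_id, d))
        return node_id, False            # genuinely new place: add a node

mapper = ToyMapper()
kitchen = [0.9, 0.1, 0.3]                    # stand-in visual descriptors
hallway = [0.1, 0.8, 0.2]
print(mapper.observe(kitchen))               # (0, False) -- new node
print(mapper.observe(hallway))               # (1, False) -- new node
print(mapper.observe([0.91, 0.11, 0.31]))    # (0, True)  -- back in the kitchen
```

In a real SLAM system, a matched node would also trigger a pose-graph correction, pulling the drift accumulated along the whole trajectory back into alignment rather than merely reusing the node.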

Her team will also investigate how humans decide what’s valuable information and what’s visual clutter as they navigate their environment.

“How do you determine what is a landmark? For example, we can use the Citgo sign as a landmark, but we don’t tell people to turn right at the UPS truck,” Stern says. “The UPS truck is not a useful navigational landmark.” In other words, because it’s not a stable part of the environment, the UPS truck is just visual clutter.
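One hypothetical way to encode that stability criterion, sketched here purely for illustration (the LandmarkFilter class, grid size, and sighting threshold are assumptions, not anything from the project), is to promote a detection to landmark status only if it recurs at roughly the same location across repeated passes:

```python
from collections import defaultdict

class LandmarkFilter:
    """Keep objects re-observed at roughly the same spot across many passes
    (the Citgo sign); drop one-off detections (the UPS truck)."""

    def __init__(self, min_sightings=3, cell_size=10.0):
        self.counts = defaultdict(int)   # (label, grid cell) -> pass count
        self.min_sightings = min_sightings
        self.cell_size = cell_size

    def record_pass(self, detections):
        """detections: iterable of (label, x, y) from one pass through the area."""
        for label, x, y in detections:
            cell = (round(x / self.cell_size), round(y / self.cell_size))
            self.counts[(label, cell)] += 1

    def landmarks(self):
        return [key for key, n in self.counts.items() if n >= self.min_sightings]

f = LandmarkFilter()
for _ in range(3):                           # the sign is there on every pass
    f.record_pass([("citgo_sign", 120.0, 40.0)])
f.record_pass([("ups_truck", 60.0, 10.0)])   # seen once, then gone
print(f.landmarks())                         # [('citgo_sign', (12, 4))]
```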

The last problem the team will investigate is how we predict the changing dynamics of an environment.

“When you are driving down the street and see a child or a dog or bicyclist on the side of the road, you are already thinking that they might cross the street and you’ll be prepared for [them] to move,” Stern says. “But you know the mailbox isn’t going to move. How does the brain do that? How do you understand that prediction?”
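A toy way to represent that kind of expectation, again a hypothetical sketch rather than the team’s model, is to attach a “might move” prior to each object class and let it widen the clearance a motion planner keeps around the object:

```python
# Assumed illustrative priors -- invented values, not measured data.
MOTION_PRIOR = {
    "child": 0.9,       # very likely to move into the road
    "dog": 0.8,
    "bicyclist": 0.7,
    "mailbox": 0.0,     # fixed to the ground; it isn't going anywhere
}

def safety_margin(label: str, base_m: float = 0.5, extra_m: float = 2.0) -> float:
    """Clearance (meters) to keep from an object, grown by its chance of moving."""
    p_move = MOTION_PRIOR.get(label, 0.5)   # unknown object: stay cautious
    return base_m + p_move * extra_m

for obj in ("child", "mailbox", "shopping_cart"):
    print(obj, safety_margin(obj))          # child 2.3, mailbox 0.5, cart 1.5
```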

John Leonard, a co–principal investigator and the head of MIT’s Marine Robotics group, says he’s looking forward to making use of recent advances in deep learning and object detection.

“Taking the best from biologically inspired models and biologically derived models and combining those with real robot experiments is very exciting,” Leonard says. “The potential impact of the research is awe-inspiring. The fact that memory formation is coupled to how an animal or human knows their position…perhaps could one day lead to better insights that ultimately might lead to better therapies for memory.”

Paschalidis, Project NeuroAutonomy’s team leader, predicts the biggest challenge for the team is that it will be impossible to read the “code” that animals and humans use to navigate.

“We have to infer that code from observations that we make,” he says. “The second challenge will be to translate those observations into specific, detailed control policies [for autonomous vehicles].”

The US-based team also includes Margrit Betke, a BU professor of computer science; Roberto Tron, a BU assistant professor of mechanical engineering; and, from MIT, Nicholas Roy, a professor of aeronautics and astronautics.


Originally published in BU Research News (April 16, 2019).
