Project Overview

Summary

The need to reduce the risk to human lives and to ensure mission success while operating in hazardous or hostile environments has driven the development of unmanned, autonomous, and semi-autonomous vehicles for many military applications. At the same time, most robotic ground vehicles currently deployed in the field operate at low to moderate speeds with low to moderate maneuverability, making them vulnerable on the battlefield.

Our research program aims to achieve a quantum leap in the agility and speed of these vehicles, with significant benefits both in the field and off the field. In direct battlefield engagements, increased speed and agility improve the chances of evading detection by the enemy or escaping an ambush; as several Army studies have confirmed, the difficulty of successfully engaging and hitting a target increases disproportionately with target speed. Support logistics will also become safer and more effective, since even moderate increases in speed can greatly increase the capacity of convoys and the throughput of materiel supply lines. Finally, the results of this research will contribute to the development of realistic off-road, high-speed simulators for training special forces and other military and government personnel.

We will realize this vision through a radical departure from the current standard practice of "perception/sensing, then control," promoting instead a new paradigm of "perception/sensing-for-control." We will achieve this objective by leveraging attention-focused, adaptive perception algorithms that operate on actionable data in a timely manner. Our inspiration comes from human cognitive (decision) and execution (control) models, especially those of expert race drivers. By encapsulating the cognitive and reflexive planning layers of expert human drivers, we will make extensive use of prior, context-dependent information at both the sensing and execution levels. We will use attention as a mediator to develop attention-driven action strategies, including learning where to look from expert drivers. By analyzing the saliency characteristics of a scene, we will locate the important "hot-spots" that serve as anchors for events. At the same time, we will fuse exteroceptive and proprioceptive sensing to infer terrain properties and friction characteristics for use in predictive/proactive control strategies. By studying and mimicking the visual search patterns and specialized driving techniques of expert human drivers, we will develop perception and control algorithms that remedy the computational bottleneck plaguing the current state of the art.
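To make the attention-driven perception idea concrete, the following minimal Python sketch computes a bottom-up saliency map with the spectral-residual method and extracts salient local maxima as candidate "hot-spots." It is only an illustration under assumed inputs (a grayscale NumPy image); the choice of saliency model, the function names, and the thresholds are assumptions made for this example, not the project's actual algorithms.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def spectral_residual_saliency(gray):
        """Bottom-up saliency map in [0, 1] for a 2-D grayscale image (spectral residual)."""
        spectrum = np.fft.fft2(gray.astype(float))
        log_amplitude = np.log1p(np.abs(spectrum))
        phase = np.angle(spectrum)
        # Spectral residual: log amplitude minus its locally averaged version.
        residual = log_amplitude - gaussian_filter(log_amplitude, sigma=3)
        saliency = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
        saliency = gaussian_filter(saliency, sigma=5)
        return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)

    def hot_spots(saliency, threshold=0.7, window=32):
        """Return (row, col) locations of salient local maxima to anchor attention."""
        peaks = (saliency == maximum_filter(saliency, size=window)) & (saliency > threshold)
        return np.argwhere(peaks)

In an attention-driven pipeline, only the image regions around such hot-spots would be processed at full resolution and rate, which is where the computational savings over uniform "perception, then control" processing would come from.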

Our research efforts are described under the following topics:

Controlling Sliding Turn Maneuvers

Research description, members, and link to main content forthcoming.

Multi-Foveal Tracking

Research description, members, and link to main content forthcoming.

Perception and Control with Optical Flow Templates

We are developing high-speed navigation and egomotion estimation methods that exploit learned patterns and correlations in optical flow. These methods work with cameras having arbitrary optics, distortion, and multiple viewpoints, while running at very high frame rates.

[ Project page ]
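As a hedged illustration of the flow-template idea (not the project's actual implementation), the sketch below treats egomotion estimation as a linear least-squares fit of an observed flow field onto a small set of basis ("template") flow fields, one per egomotion degree of freedom. The array shapes and names are assumptions made for this example.

    import numpy as np

    def estimate_egomotion(observed_flow, templates):
        """
        Fit an observed optical-flow field to a linear combination of flow templates.

        observed_flow : (N, 2) array of per-pixel flow vectors (u, v).
        templates     : (K, N, 2) array of K basis flow fields sampled at the same
                        pixels, e.g. one per egomotion degree of freedom.
        Returns the K template coefficients that best explain the observed flow;
        these coefficients serve as the egomotion estimate.
        """
        y = observed_flow.reshape(-1)                    # flatten to (2N,)
        A = templates.reshape(templates.shape[0], -1).T  # (2N, K) design matrix
        coefficients, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coefficients

Because the fit reduces to a single small linear solve rather than a full structure-from-motion pipeline, a formulation of this kind is consistent with the stated goals of very high frame rates and arbitrary optics, provided the templates are learned or computed for the same sensor.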

 

Miniature High-Performance Car Platform for Perception and Control Development

Research description, members, and link to main content forthcoming.

 

Participating Laboratories

Dynamics and Control Systems Lab

Robotic Mobility Group

Aerospace Robotics and Embedded Systems Laboratory

About us

We are three research groups, from Georgia Tech, the Massachusetts Institute of Technology, and the University of Southern California, collaborating on basic research in high-speed autonomous driving. Our primary interest is in biologically inspired methods for both perception and control.

Acknowledgment

This work was supported by the Army Research Office under MURI Award W911NF-11-1-0046.