Perception and Control with Optical Flow Templates

Egomotion Estimation and Motion Anomaly Detection With General Optical Flow Subspaces

The first phase of this project addresses egomotion estimation and motion anomaly detection in a generalized imaging system by exploiting probabilistic subspace constraints on the optical flow. We consider the extended motion of the imaging system through an environment that we assume has some degree of statistical regularity. For example, for autonomous ground vehicles the structure of the environment around the vehicle is far from arbitrary, and we exploit this regularity to predict the perceived optical flow due to platform motion. The subspace constraints hold not only for perspective cameras but for a very general class of imaging systems, including catadioptric and multiple-view systems. Using minimal assumptions about the imaging geometry, we derive a probabilistic subspace constraint that captures the statistical regularity of the scene geometry relative to the imaging system. We propose an extension of probabilistic PCA (Tipping and Bishop, 1999) to learn this subspace from a large body of recorded imagery, and demonstrate its use in conjunction with a sparse optical flow algorithm. To deal with the sparseness of the input flow, we use a generative model to estimate the full-dimensional subspace from only the observed flow measurements. Additionally, to identify and cope with image regions that violate the subspace constraints, such as moving objects or gross flow estimation errors, we employ a per-pixel Gaussian mixture outlier process. We demonstrate results of learning optical flow subspaces and employing them to estimate full-frame flow and to recover camera motion, for a variety of imaging systems in several different environments.
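To illustrate the core machinery, the sketch below fits a probabilistic PCA model (Tipping and Bishop, 1999) to a stack of flow fields and then uses the generative model to reconstruct a full flow field from a sparse observation, via the posterior mean of the latent subspace coefficients. This is a minimal, generic PPCA sketch, not the extended model developed in this work; the function names, the synthetic data, and the chosen subspace dimension are illustrative assumptions.

```python
import numpy as np

def fit_ppca(X, q):
    """Closed-form probabilistic PCA (Tipping & Bishop, 1999).
    X: (n_samples, d) rows of vectorized flow fields; q: subspace dimension."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigen-structure of the sample covariance via SVD of the centered data
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    lam = s**2 / X.shape[0]            # covariance eigenvalues
    sigma2 = lam[q:].mean()            # noise variance: mean of discarded eigenvalues
    # Principal-axis matrix W = V_q (Lambda_q - sigma^2 I)^(1/2)
    W = Vt[:q].T * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))
    return mu, W, sigma2

def infer_full_flow(x_obs, obs_idx, mu, W, sigma2):
    """Reconstruct the full flow field from sparse measurements: posterior-mean
    latent coefficients given only the observed entries, then W z + mu."""
    Wo = W[obs_idx]                    # rows of W at the observed flow locations
    z = np.linalg.solve(sigma2 * np.eye(W.shape[1]) + Wo.T @ Wo,
                        Wo.T @ (x_obs - mu[obs_idx]))
    return W @ z + mu
```

With a learned subspace, the same posterior inference that fills in missing flow also yields the low-dimensional coefficients from which camera motion can be read off, which is what makes the subspace constraint useful beyond dense flow estimation.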

Motion Saliency from Optical Flow Subspaces Paired with Model-Based Tracking

Towards the goal of fast, vision-based autonomous flight, localization, and map building to support local planning and control in unstructured outdoor environments, we present a method for incrementally building a map of salient tree trunks while simultaneously estimating the trajectory of a quadrotor flying through a forest. We make significant progress on a class of visual perception methods that produce low-dimensional geometric information, which is ideal for planning and navigation on aerial robots, while directing computational resources using motion saliency, which selects the objects most important to navigation and planning. By low-dimensional geometric information, we mean coarse geometric primitives, which for the purposes of motion planning and navigation are suitable proxies for real-world objects. Additionally, we develop a method for summarizing past image measurements that avoids expensive computations on a history of images while maintaining the key non-linearities that make full map and trajectory smoothing possible. We demonstrate results with data from a small, commercially available quadrotor flying in a challenging, forested environment.
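To make concrete how coarse geometric primitives can serve as proxies for planning and localization, here is a hedged, hypothetical sketch: given bearing measurements to a few already-mapped tree trunks (reduced to 2D points), a Gauss-Newton least-squares solve recovers the camera position. This is not the smoothing algorithm developed in this work; the landmark representation, function names, and problem setup are illustrative assumptions.

```python
import numpy as np

def bearing(p, lm):
    """Bearing (radians) from position p to landmark lm, both 2D points."""
    d = lm - p
    return np.arctan2(d[1], d[0])

def localize(landmarks, bearings, p0, iters=20):
    """Gauss-Newton on bearing residuals.
    landmarks: (m, 2) mapped trunk positions; bearings: (m,) measurements."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        residuals, J = [], []
        for lm, b in zip(landmarks, bearings):
            d = lm - p
            r2 = d @ d
            pred = np.arctan2(d[1], d[0])
            # Wrap the angular error into [-pi, pi]
            e = np.arctan2(np.sin(pred - b), np.cos(pred - b))
            residuals.append(e)
            # Jacobian of the predicted bearing w.r.t. p (note d = lm - p)
            J.append([d[1] / r2, -d[0] / r2])
        J, r = np.array(J), np.array(residuals)
        p = p + np.linalg.lstsq(J, -r, rcond=None)[0]
    return p
```

The nonlinearity kept here, the arctangent in the bearing model, is exactly the kind of measurement nonlinearity that a summarization scheme must preserve for full trajectory smoothing to remain possible.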



Jack Riderhof
Dynamics and Control Systems Laboratory
Georgia Tech
Richard J W Roberts
The BORG Lab
Georgia Institute of Technology

Dynamics and Control Systems Laboratory
Robotic Mobility Group
Aerospace Robotics and Embedded Systems Laboratory

About us

We are three research groups from Georgia Tech, the Massachusetts Institute of Technology, and the University of Southern California, collaborating on basic research in high-speed autonomous driving. We are most interested in biologically inspired methods for both perception and control.


This work was supported by the Army Research Office under MURI Award W911NF-11-1-0046.