Optical Flow Templates for Mobile Robot Environment Understanding

Title: Optical Flow Templates for Mobile Robot Environment Understanding
Publication Type: Thesis
Year of Publication: 2013
Authors: Roberts, R.
Academic Department: School of Computer Science
Degree: PhD
Date Published: 05/2013
University: Georgia Institute of Technology
Thesis Type: Doctoral
Abstract

In this work we develop optical flow templates, a practical tool for inferring robot egomotion and semantic superpixel labeling from optical flow in imaging systems with arbitrary optics. In doing so, we contribute to the robotics and computer vision communities a valuable understanding of the geometric relationships and mathematical methods that are useful in interpreting optical flow.

This work is motivated by what we perceive as directions for advancing the current state of the art in obstacle detection and scene understanding for mobile robots. Specifically, many existing methods build 3D point clouds, which are not directly useful for autonomous navigation and require further processing. Both the step of building the point clouds and the later processing steps are challenging and computationally intensive. Additionally, many current methods require a calibrated camera, which introduces calibration challenges and places limitations on the types of camera optics that may be used. Wide-angle lenses, systems with mirrors, and multiple cameras all require different calibration models and can be difficult or even impossible to calibrate. Finally, current pixel and superpixel obstacle labeling algorithms typically rely on image appearance. While image appearance is informative, image motion is a direct effect of the scene structure that determines whether a region of the environment is an obstacle.

The egomotion estimation and obstacle labeling methods based on optical flow templates that we develop here require very little computation per frame and do not require building point clouds. Additionally, they do not require any specific type of camera optics, nor a calibrated camera. Finally, they label obstacles using optical flow alone, without relying on image appearance.

In this thesis we start with optical flow subspaces for egomotion estimation and detection of “motion anomalies”. We then extend this to multiple subspaces and develop mathematical reasoning to select between them, comprising optical flow templates. Using these we classify environment shapes and label superpixels. Finally, we show how performing all learning and inference directly from image spatio-temporal gradients greatly improves computation time and accuracy.
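The subspace-and-selection idea summarized above can be illustrated with a minimal sketch. This is not the thesis's actual pipeline, only an assumed setup: synthetic flow fields are stacked as vectors, each "template" is a low-rank subspace learned by SVD from flows recorded in one environment class, and for a new flow field the projection residual selects the best template while the subspace coordinates stand in for the egomotion estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # length of a flattened flow field (u and v components stacked)

def make_template(seed, rank=3, n_train=50):
    """Learn an orthonormal basis for one hypothetical flow subspace
    from synthetic training flows (stand-in for one environment shape)."""
    r = np.random.default_rng(seed)
    F = r.standard_normal((n, rank)) @ r.standard_normal((rank, n_train))
    U, _, _ = np.linalg.svd(F, full_matrices=False)
    return U[:, :rank]

# Two templates, e.g., two different environment shapes.
templates = [make_template(1), make_template(2)]

# A new flow field generated from template 0, plus small measurement noise.
coeffs = rng.standard_normal(3)
y = templates[0] @ coeffs + 0.01 * rng.standard_normal(n)

# Project y onto each subspace; the smallest residual selects the template
# (environment shape), and large residuals would flag motion anomalies.
residuals = [np.linalg.norm(y - B @ (B.T @ y)) for B in templates]
best = int(np.argmin(residuals))
z = templates[best].T @ y  # subspace coordinates: the egomotion estimate
```

Because each basis is orthonormal, the projection `B @ (B.T @ y)` is a least-squares fit, so template selection and egomotion estimation cost only a few matrix-vector products per frame, consistent with the low per-frame computation claimed above.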


About us

We are three research groups from Georgia Tech, the Massachusetts Institute of Technology, and the University of Southern California, collaborating to perform basic research on high-speed autonomous driving:

- Dynamics and Control Systems Lab
- Robotic Mobility Group
- Aerospace Robotics and Embedded Systems Laboratory

We are most interested in researching biologically inspired methods in the realms of both perception and control.

Acknowledgment

This work was supported by the Army Research Office under MURI Award W911NF-11-1-0046.