Vision-based FastSLAM without Odometry


[Update June 13, 2006]: This work received the "Best Robotics Paper" award at Computer and Robot Vision 2006. The paper is here:

[Update Feb 9, 2006]: Our latest ICRA paper describing this work:

[Update July 27, 2005]: Other papers describing this work:

Some map construction movies (click to view; the DivX codec is required for Windows users: www.divx.com). Older results appear further below.

[Update November 11, 2005]: New results: a movie of SLAM using a mixture proposal.
UBC Collaborative Robotics Lab (6 DOF, no odometry, mixture proposal distribution)
[input sequence**]
UBC Laboratory for Computational Intelligence (West) (3 DOF, using odometry)
[input sequence**]

** For full stereo input sequences and calibration information, please send me an email: simra @ cs . ubc . ca

[Update July 21, 2005]: Much of the text below is outdated (of course). The key advances in the last few months are that we can now handle six degrees of freedom, we've implemented a mixture proposal distribution to detect loop closure, and we have a much more robust system in general. More details to come. :-)

This is joint work with Pantelis Elinas, Matt Griffin, Alex Shyr, and Jim Little. The map above was constructed using a Rao-Blackwellised particle filter (RBPF) with 400 particles. The maps consist of 3D landmarks built from SIFT features. The key contribution of this work is that, where most RBPF implementations for SLAM rely on odometric observations for a motion model, we rely on motion estimates computed directly from sequential pairs of stereo images (similar to FastSLAM 2.0, but without taking into account *any* odometric information).

This map consists of roughly 11,000 3D landmarks, associated with a subset of 38,000 SIFT features (SIFT features observed more than three times are promoted to landmarks). It should be evident that the robot traversed between two rooms. The computation finished after 4000 frames. We haven't tried smaller numbers of particles, and it is quite possible that the map quality would be just as good. The total trajectory length is 65.7m, and the room is approximately 18m by 9m in size. The average compute time was 11.9s/frame; however, due to memory-related issues we are not using fast methods for SIFT correspondence (such as a kd-tree), nor have we spent much effort on optimization.
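To make the landmark-management step concrete, here is a minimal sketch of the promotion rule described above: a SIFT feature is tracked across frames and becomes a 3D landmark once it has been observed more than three times. The names (`Feature`, `update_map`) and the simple nearest-neighbour matcher are illustrative, not taken from our implementation.

```python
import numpy as np

class Feature:
    """A SIFT feature triangulated from a stereo pair (illustrative)."""
    def __init__(self, descriptor, position_3d):
        self.descriptor = np.asarray(descriptor)    # 128-D SIFT descriptor
        self.position_3d = np.asarray(position_3d)  # 3D point from stereo
        self.observations = 1                       # frames in which it was matched

def match(feature, candidates, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test."""
    if len(candidates) < 2:
        return None
    dists = sorted(((np.linalg.norm(feature.descriptor - c.descriptor), c)
                    for c in candidates), key=lambda pair: pair[0])
    best, second = dists[0], dists[1]
    return best[1] if best[0] < ratio * second[0] else None

def update_map(tracked, new_features, min_obs=3):
    """Update feature tracks; return features newly promoted to landmarks."""
    promoted = []
    for f in new_features:
        m = match(f, tracked)
        if m is not None:
            m.observations += 1
            if m.observations == min_obs + 1:  # observed more than three times
                promoted.append(m)
        else:
            tracked.append(f)                  # start a new track
    return promoted
```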

Sample Results

[Oct 25, 2005]: Here's a run where the robot successfully closes a loop (about 30m tall) over 3000 frames, using 500 particles, vision-based odometry, and a mixture proposal distribution for global localization.
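The mixture proposal can be sketched roughly as follows: with some mixing probability a particle's new pose is drawn from a global-localization estimate (for example, obtained by matching the current SIFT observations against the particle's map) rather than from the visual-odometry motion model. This is only an illustrative sketch; the names (`sample_pose_mixture`, `phi`, `global_pose_fn`), the parameter values, and the simplified additive pose state are assumptions, not our actual implementation.

```python
import numpy as np

def sample_pose_mixture(prev_pose, visual_odom, global_pose_fn=None, phi=0.2,
                        motion_noise=0.05, rng=None):
    """Draw a particle's new pose from a two-component mixture proposal.

    With probability (1 - phi) the pose comes from the visual-odometry motion
    model (previous pose composed with the frame-to-frame estimate plus
    Gaussian noise); with probability phi it comes from a global pose
    estimate, e.g. obtained by matching current observations against the map.
    The global component is what lets the filter close loops after the
    dead-reckoned estimate has drifted. The pose is treated as a simple
    additive (x, y, theta) vector for brevity.
    """
    rng = rng or np.random.default_rng()
    if global_pose_fn is not None and rng.random() < phi:
        return np.asarray(global_pose_fn())         # global-localization component
    noisy_odom = np.asarray(visual_odom) + rng.normal(0.0, motion_noise, size=3)
    return np.asarray(prev_pose) + noisy_odom       # motion-model component
```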

More Results

Legend:
Yellow: Max-weight particle trajectory
Blue: Robot odometry
Pink: Visual odometry (see below)
Odometry

A map constructed using the robot's odometry alone (for comparison purposes only-- odometry was not used for the other maps). The start of the trajectory is at the far left. Click to enlarge...
Visual Odometry

A map constructed using 'visual odometry'-- in other words, the frame-to-frame motion estimate. The individual motion estimates were used to compute the proposal distribution for propagating the RBPF; a sketch of how such an estimate can be computed appears below.
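For readers curious about what a frame-to-frame motion estimate involves, the sketch below shows one standard way to recover a rigid transform from matched 3D points triangulated from stereo: the closed-form Horn/Umeyama least-squares alignment. This is an illustrative example of the general technique, not necessarily the estimator used in our system (which must also deal with outlier matches, e.g. via robust estimation).

```python
import numpy as np

def estimate_motion(prev_pts, curr_pts):
    """Least-squares rigid transform (R, t) aligning matched 3D points.

    prev_pts, curr_pts: (N, 3) arrays of 3D points triangulated from stereo
    and matched between consecutive frames. Returns R, t such that
    curr ~= R @ prev + t (the standard Horn/Umeyama closed form).
    """
    mu_p = prev_pts.mean(axis=0)
    mu_c = curr_pts.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (prev_pts - mu_p).T @ (curr_pts - mu_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so the result is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_p
    return R, t
```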
KEY RESULTS: RBPF Map

The coup de grâce: the map constructed by the RBPF (actually, the map corresponding to the particle with the highest weight-- the RBPF maintains the equivalent of 400 maps simultaneously).

More images...

Detail


A detail view of part of the map. The red blob is the set of particles. The green lines indicate the viewing direction of the camera. The "knots" (regions where the trajectory jumps around) in the yellow trajectory correspond to locations where the robot was rotating in place. There is some bias in the translation estimate, which can be seen as slowly curving sections in the pink (visual odometry) trajectory, and we had to inject some noise into the proposal to correct for this.
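As an illustration of that last point, one way a proposal distribution can compensate for a biased translation estimate is to inflate the sampling noise, particularly while the robot is rotating in place. The sketch below is purely illustrative; the function name, thresholds, and noise magnitudes are made up, not the values used in our system.

```python
import numpy as np

def sample_translation(est_translation, rotation_rate, rng,
                       base_sigma=0.02, rot_sigma=0.08, rot_threshold=0.1):
    """Sample a particle's translation from the proposal distribution.

    The visual translation estimate can be biased, especially while the
    robot rotates in place, so extra Gaussian noise is injected then so
    that some particles land on the correct side of the bias and survive
    resampling. All parameter values here are illustrative.
    """
    sigma = rot_sigma if abs(rotation_rate) > rot_threshold else base_sigma
    return np.asarray(est_translation) + rng.normal(0.0, sigma, size=3)
```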

Robert Sim, Last modified: 13 Jun 2006