Look, Ma, no lasers! Real-time exploration and mapping using vision.
Click on the image at right to view an
animation of vision-based occupancy
mapping. Over the last few months I've implemented an autonomous exploration
and mapping system that uses *only* vision to explore an environment
and construct a useful map. The system uses our
RBPF SLAM implementation to keep an accurate
map, and stereo occupancy grids for planning and obstacle avoidance.
The key to achieving accuracy is to perform SLAM on a sparse landmark
map and to compute the occupancy grid as a by-product of the RBPF; we
never try to localize in the grid. This work is described in our
upcoming
IROS paper, which
has been nominated for best paper. :-)
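To make the division of labour concrete, here is a minimal Python sketch of the pattern (a hypothetical illustration, not our implementation): each particle carries its own trajectory and sparse landmark map, and the grid is rasterized afterwards from the most likely trajectory, purely for planning.

    import numpy as np

    class Particle:
        """One RBPF sample: a pose hypothesis plus its own sparse landmark map."""
        def __init__(self):
            self.pose = np.zeros(3)              # (x, y, theta)
            self.trajectory = [self.pose.copy()]
            self.landmarks = {}                  # id -> per-particle landmark EKF
            self.weight = 1.0

    def grid_from_best_particle(particles, stereo_scans, size=200, res=0.1):
        """Rasterize an occupancy grid from the most likely trajectory.

        The grid is a by-product, built for planning and obstacle avoidance
        only; the filter never localizes against it."""
        best = max(particles, key=lambda p: p.weight)
        grid = np.zeros((size, size))            # log-odds, world origin at centre
        c = size // 2
        for pose, scan in zip(best.trajectory, stereo_scans):
            for r, bearing in scan:              # stereo-derived (range, bearing) hits
                th = pose[2] + bearing
                hit = pose[:2] + r * np.array([np.cos(th), np.sin(th)])
                # cells along the ray are evidence of free space...
                for t in np.linspace(0.0, 1.0, max(2, int(r / res)), endpoint=False):
                    i, j = np.clip(((pose[:2] + t * (hit - pose[:2])) / res).astype(int) + c, 0, size - 1)
                    grid[j, i] -= 0.4
                # ...and the endpoint is evidence of an obstacle
                i, j = np.clip((hit / res).astype(int) + c, 0, size - 1)
                grid[j, i] += 1.3
        return grid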
Why doesn't FastSLAM scale?
In theory, RBPF-based SLAM filters scale linearly with the number of
samples. In practice they don't. Click
here for the reason.
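One common culprit, sketched below as an illustrative assumption (read the linked note for the full story): the update is only linear in the number of samples if resampling is free, and the naive way to resample a particle filter whose particles carry maps is to deep-copy every survivor, which costs O(particles x landmarks) in time and memory.

    import copy
    import numpy as np

    def naive_resample(particles, rng=np.random.default_rng(1)):
        """Multinomial resampling with deep copies.

        Each survivor duplicates its entire landmark map, so a single
        resampling step costs O(particles x landmarks) -- one way a filter
        that is 'linear in the samples' on paper slows down in practice."""
        w = np.array([p.weight for p in particles])
        idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
        return [copy.deepcopy(particles[i]) for i in idx]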
6DoF Vision-Based Mapping with Rao-Blackwellised Particle Filters
I've been involved in an ongoing project to implement vision-based
mapping for large-scale environments (bigger than a single room).
With the help of Matt Griffin, Pantelis Elinas, Alex Shyr, and Jim
Little, we've implemented a system that can handle hundreds of
thousands of visual landmarks over long
6DoF trajectories, without control or odometric information. Click
here for more details. This work received the
"Best Robotics Paper" award at CRV 2006.
From the Vaults: Vision-based robotics, ca. 1997
Vintage QuickTime footage of Invader at the AAAI 97 Mobile Robotics Competition
in Providence, RI. The robot used supervised colour-space training to
detect objects and obstacles with a monocular camera at 10 Hz. We took
home first place two years running. (Sorry for the low image
quality; this was high-tech digital imagery at the time.) The second
link is more of a 'making-of', featuring yours truly as the mad scientist.
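For the curious, a supervised colour-space classifier of that era amounts to a lookup table; the sketch below is a guess at the general shape (the quantization, names, and choice of RGB space are assumptions), not the original 1997 code.

    import numpy as np

    def train_colour_lut(pixels, labels, bins=32):
        """Supervised colour-space training: vote labelled RGB pixels into a
        quantized lookup table, then take the majority label per colour cell."""
        lut = np.zeros((bins, bins, bins, labels.max() + 1), dtype=np.int64)
        q = pixels // (256 // bins)           # quantize 0..255 down to 0..bins-1
        for (r, g, b), lab in zip(q, labels):
            lut[r, g, b, lab] += 1
        return lut.argmax(axis=-1)            # class id per colour cell

    def classify(image, lut, bins=32):
        """Label every pixel of an HxWx3 image with one table lookup each --
        cheap enough to run at frame rate even on 1997-era hardware."""
        q = image // (256 // bins)
        return lut[q[..., 0], q[..., 1], q[..., 2]]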
Bearings-only SLAM
I'm currently working on exploration
strategies for constructing accurate maps from bearings-only sensor
observations. Click the image at left for more details.
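The core difficulty is that a single bearing carries no depth: a landmark is only pinned down by triangulating bearings from well-separated viewpoints, so the exploration strategy effectively has to buy parallax. A tiny worked example (a hypothetical helper, not project code):

    import numpy as np

    def triangulate(p1, b1, p2, b2):
        """Intersect two bearing rays (world-frame angles b1, b2, taken from
        robot positions p1, p2) to estimate a landmark position."""
        d1 = np.array([np.cos(b1), np.sin(b1)])
        d2 = np.array([np.cos(b2), np.sin(b2)])
        A = np.column_stack([d1, -d2])        # solve p1 + t1*d1 = p2 + t2*d2
        t = np.linalg.solve(A, np.array(p2, float) - np.array(p1, float))
        return np.array(p1, float) + t[0] * d1

    # a wide baseline gives a well-conditioned solve:
    print(triangulate((0, 0), np.pi / 4, (2, 0), 3 * np.pi / 4))   # -> [1. 1.]
    # nearly-parallel rays (a tiny baseline) make A close to singular and the
    # estimate blows up, which is why the path the robot drives matters so
    # much for map accuracy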
Visual Maps
My
graduate work focused on the problem of learning a representation
of the visual world from an ensemble of images, in particular on
modeling the behaviour of salient features extracted from a
scene as a robot moves through the environment. Click on the
image at left for a short video of a set of feature models
rendered along an imaginary trajectory.
See also my
Ph.D. thesis.
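In spirit, each feature model is a learned mapping from camera pose to the feature's expected behaviour in the image, which is what makes it possible to render features along a trajectory the robot never drove. The sketch below uses a plain linear fit as a stand-in for the richer generative models in the thesis (all names here are hypothetical):

    import numpy as np

    def fit_feature_model(poses, feat_xy):
        """Fit a linear-in-pose model of where one salient feature appears
        in the image, from an ensemble of (pose, observed position) pairs."""
        X = np.column_stack([poses, np.ones(len(poses))])   # rows: [x, y, theta, 1]
        W, *_ = np.linalg.lstsq(X, feat_xy, rcond=None)     # least-squares fit
        return W                                            # 4x2 weight matrix

    def render_feature(W, pose):
        """Predict the feature's image position at a novel (imagined) pose."""
        return np.append(pose, 1.0) @ W

    # toy usage: learn from a few training poses, render at a new one
    poses = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0.1]], float)
    feat_xy = np.array([[160, 120], [170, 121], [181, 119]], float)
    W = fit_feature_model(poses, feat_xy)
    print(render_feature(W, np.array([1.5, 0.0, 0.05])))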
Aqua
I'm involved with the McGill/York/Dalhousie
AQUA underwater
robot project. In January 2004 we were at the Bellairs marine research station in
Barbados for
field
trials (including pictures and video footage). More footage is
available at this
CIM ARL
page. York also has some more footage
here.
Update
(2004/03/12): Today Wired ran a good
story about the project.
Pose-calibrated images
Some archives of
pose-calibrated images for localization and structure-from-motion research.