Hi STeven et al.,
I was wondering whether you are still finalizing the forthcoming Visual Odometry module. I ask because I have just completed a fairly extensive series of tests using omnidirectional images for robot navigation, and I think I have squeezed as much as I can out of my own methods.
I thought some of the forum readers might be interested in my results, so here is a brief summary.
First, it turns out to be fairly easy to find the relative orientation between two omnidirectional images--for example, the current image and one stored in memory. Starting with the raw image, I unwrap it using RoboRealm's Polar module. Then I crop away the top and bottom of the image so we don't see the robot chassis. Rotations of the robot now map into left-right shifts of this rectangular image, with a wrap-around from the right edge back to the left edge. After reading a number of papers on the subject, I found that it suffices to collapse the image down to a 1-pixel-high horizontal grayscale "slice". Thanks to
the latest enhancements to the Pixelate module, this can now be done by setting X=1 and Y=100. The result looks like a grayscale barcode of the scene. We now have two such slices or barcodes--one from memory and one from the current image. I use the RoboRealm API to pull the image into a C# program, where I run a 1-D circular cross-correlation of the two patterns to find the best alignment. The shift (in pixels) needed to align the slices maps directly to the angle the robot must rotate to face the same direction as in the earlier image--each pixel of shift corresponds to 360 degrees divided by the slice width. So that part is done.
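For anyone who wants to try the alignment step, here is a rough Python/NumPy sketch of the 1-D circular cross-correlation. My actual code is in C#, so the function and variable names here are just illustrative:

```python
import numpy as np

def circular_align(reference, current):
    """Return the circular shift (in pixels) that best aligns
    `current` with `reference`, via the FFT convolution theorem."""
    ref = reference - reference.mean()
    cur = current - current.mean()
    # Cross-correlation of the two slices at every circular shift.
    corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    return int(np.argmax(corr))

# Sanity check: a slice rolled by 25 pixels should be recovered exactly.
np.random.seed(0)
slice_a = np.random.rand(360)            # stored "barcode" slice
slice_b = np.roll(slice_a, 25)           # current slice, robot rotated
shift = circular_align(slice_a, slice_b)
angle = shift * 360.0 / len(slice_a)     # pixels -> degrees (360-px slice)
print(shift, angle)                      # 25 25.0
```

The FFT version is overkill for a 100-pixel-wide slice (a brute-force loop over all shifts is fine too), but it stays fast if you keep more horizontal resolution.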
The harder part is figuring out the displacement between the two images. The above procedure tells us how to align the robot with the direction of the stored image, but it doesn't tell us whether we need to move left, right, backward, or forward to get to the same *place* the earlier picture was taken from. STeven suggested in an earlier posting that I use the Radial module to get a look-down perspective and then use image matching techniques to figure out the displacement--hopefully this is what the forthcoming Visual Odometry module will let me do. In the meantime, I discovered the following technique, which seems to work fairly well.
Using the same horizontal slices from above, we take four small sub-slices, each about 1/6 the width of the image. The first two sub-slices are centered at the front and rear of the image. As the robot moves left or right, these two sub-slices shift relative to the reference image. By template matching the two sub-slices against the stored slice, we can determine whether we have moved left or right and by how much. To measure front-back motion we use the same procedure, but the two small sub-slices are taken from the left and right of the image. A forward or backward motion will cause these to shift accordingly.
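Here is a small Python/NumPy sketch of the sub-slice template match. Again, my real code is C#; `subslice_shift`, the centers, the sub-slice width, and the search window are all just illustrative choices:

```python
import numpy as np

def subslice_shift(stored, current, center, width, search=20):
    """Cut a sub-slice of `current` (`width` pixels, centered at
    `center`) and slide it +/-`search` pixels along `stored`,
    returning the shift with the lowest sum of squared differences."""
    n = len(stored)
    idx = np.arange(center - width // 2, center + width // 2) % n
    template = current[idx]
    best_shift, best_err = 0, np.inf
    for s in range(-search, search + 1):
        err = np.sum((template - stored[(idx + s) % n]) ** 2)
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift

# Sanity check of the matcher itself on a pure circular shift
# (real lateral motion shifts the front and rear sub-slices by
# different amounts, which is exactly what we want to measure).
np.random.seed(1)
stored = np.random.rand(360)
current = np.roll(stored, -7)        # scene features shifted 7 px
front = subslice_shift(stored, current, center=0, width=60)
rear = subslice_shift(stored, current, center=180, width=60)
print(front, rear)                   # 7 7
```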
Combining the left-right and front-back displacements, we can use some simple geometry (arctangent) to find the displacement angle. I have tested this technique with various displacement directions and magnitudes (up to about 30 inches) and it works well enough to have the robot servo in on the location of the original image.
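The geometry at the end is just an arctangent on the two displacement estimates; a minimal sketch (the numbers are made up):

```python
import math

# Hypothetical displacement estimates from the sub-slice matches,
# already converted to inches.
lateral = 12.0       # left-right
longitudinal = 20.0  # front-back

# atan2 handles all four sign combinations (quadrants) correctly,
# which a plain arctan of the ratio would not.
heading = math.degrees(math.atan2(lateral, longitudinal))
distance = math.hypot(lateral, longitudinal)
print(round(heading, 1), round(distance, 1))   # 31.0 23.3
```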
However, I think I will get more reliable results if I can use some kind of image registration technique on the radial versions of the images rather than the polar unwrapped versions. This would operate the same way as the Visual Anchor module but would allow the comparison of two arbitrary images. The one catch is that the robot chassis forms a fairly large fixed portion of the image, which might throw off a registration algorithm. That said, I could crop out four smaller rectangles away from the chassis (one from each quadrant of the image) and run the registration module separately on each rectangle.
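For what it's worth, here is how I picture the quadrant idea in Python/NumPy, using 2-D phase correlation as a stand-in for whatever registration the module actually performs (all names and parameters here are hypothetical):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the (row, col) translation of patch `b` relative to
    patch `a` by phase correlation. Exact for circular shifts and a
    reasonable approximation for small real translations."""
    f = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the halfway point wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

def quadrant_rois(img, margin, size):
    """Crop one size-by-size patch per image quadrant, `margin`
    pixels in from each corner, keeping clear of the chassis at
    the center of the omnidirectional image."""
    h, w = img.shape
    ys, xs = (margin, h - margin - size), (margin, w - margin - size)
    return [img[y:y + size, x:x + size] for y in ys for x in xs]

# Sanity check on a synthetic, circularly shifted patch.
np.random.seed(2)
patch = np.random.rand(64, 64)
moved = np.roll(patch, (5, -3), axis=(0, 1))
print(phase_correlation_shift(patch, moved))   # (5, -3)
```

Averaging the four per-quadrant shifts (or discarding an obvious outlier) should give a more robust displacement estimate than one big registration that includes the chassis.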