visual anchor
Malbee Scott from United States  [6 posts] 15 year
Hi Stephen,
I wanted to get your input before I try something that I may tie myself in a knot over.  Using the visual anchor module, do you think it would be possible to load previous images from disk and use them as waypoint guides for movement?  In other words, if you wanted the machine to travel a known path, could previously saved images be used as references for the bot?  I would like to try it, but if you think it is a futile effort, I'll go a different way.

Thanks for any input.
Mal
Malbee Scott from United States  [6 posts] 15 year
Hi again Steven,
Sorry about the spelling of your name.  My best friend is named Stephen, old habits are hard to break.
Mal
from United States  [214 posts] 15 year
I would love to know this too!  Some time ago I tried to get the Visual Anchor module to work with a previously saved, then loaded image to compare to the current image, but I could never get it to work as I would like--i.e. with the result being the displacement and rotation of the current image compared to the loaded image.

Thanks!
patrick
Anonymous 15 year
Mal & Patrick,

While it is possible to use the visual anchor for this, I don't think that would be the best way to go. A better way to do this is to define feature points in an image (not unlike Harris points) that can then be matched from each new frame against past reference frames. The trick is to get repeatable feature points. That's an ongoing research area for universities, and for us too. If you can get two images to generate similar feature points regardless of translation, rotation, scale and some affine distortion between the two images, then you have a good idea of which reference frame a new image is most similar to. Given that frame you would know approximately where you are in the sequence and which direction to go. There are a couple of university papers that use this technique.
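To give a rough idea of what I mean (this is not RoboRealm code, just an OpenCV-flavored sketch; the waypoint and frame file names are made up), matching a live frame against a set of saved reference frames might look something like this:

import cv2

# Sketch: pre-compute ORB feature descriptors for a set of saved reference
# frames (the "known path"), then pick the reference that best matches the
# current camera frame.  File names below are hypothetical.
orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

references = []
for name in ["waypoint_00.png", "waypoint_01.png", "waypoint_02.png"]:
    img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    kp, desc = orb.detectAndCompute(img, None)
    references.append((name, desc))

current = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)
kp_cur, desc_cur = orb.detectAndCompute(current, None)

best_name, best_score = None, -1
for name, desc_ref in references:
    if desc_ref is None or desc_cur is None:
        continue
    matches = matcher.match(desc_cur, desc_ref)
    score = sum(1 for m in matches if m.distance < 40)  # count "good" matches
    if score > best_score:
        best_name, best_score = name, score

print("Closest waypoint:", best_name, "with", best_score, "good matches")

The hard part, as noted above, is getting feature points that are repeatable enough that the match counts are meaningful.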

This is an extremely valuable technique that we have ongoing research into. Again, the issue is generating good feature points that are very repeatable while being invariant to many image distortions. That is what we are currently researching.

So unfortunately the visual anchor is not what you want to use, but the actual technique is still in research and thus will not be available in the next few weeks unless we get a breakthrough. :-(

Keep your fingers crossed!

STeven.
Malbee Scott from United States  [6 posts] 15 year
Thanks for the reply Steven.
I guess I'll go a different way.  To be honest, I'm not sure if I'm smart enough to try the other method you discussed, but given time, maybe.  I would like to take a moment to thank you for your efforts in producing such an awesome visual system.
Mal
from United States  [214 posts] 15 year
Thanks for your reply STeven!

My quick followup question is: would it be possible to modify the Visual Anchor module so that one can load a previously saved image as the target and then compare it to the current image?  I realize that this will generally fail to produce match parameters but in the cases where the saved image is very similar to the current image, might it not work?

Here now is the longer version of my question.

Yes, I agree that this is a challenging problem and I have read a number of papers on the subject, as well as played with some techniques such as cross-correlation of image patches and matching arrays of Harris Corners but without much success.

One of the reasons I thought I might be able to use the Visual Anchor module is that I have an omni-directional vision system on my robot that can give a "look down" snapshot of the current location.  I thought such images might be easier to compare and match than two forward-facing images with a more limited FOV, especially when the two images are taken from two relatively close locations.

We discussed this a little in an earlier thread (see http://www.roborealm.com/forum/index.php?thread_id=2988#2) where you suggested using radial distortions of the omni-directional images.  I have attached two such images--the original omni-directional images and a radially distorted version of each that roughly straightens out the lines.  The second image was taken with the robot displaced 15 inches to the right of the first image.  So the challenge is to extract this displacement from the image.  In fact, I'd even be happy with just knowing that the displacement is "directly to the right" rather than some other direction.
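In case it is useful to anyone else, the unwarping step can also be reproduced outside of RoboRealm; here is an OpenCV-style sketch only, where the file name, centre and radius are placeholders that depend on the actual camera/mirror geometry:

import cv2

# Sketch: unwrap an omni-directional image into a panorama so that radial
# lines become roughly straight.  The file name, centre and radius are
# placeholders; they depend on the actual camera/mirror geometry.
omni = cv2.imread("omni_start.png")
h, w = omni.shape[:2]
center = (w / 2.0, h / 2.0)        # assumes the mirror is centred in the image
max_radius = min(w, h) / 2.0       # outer edge of the mirror

# In the output, each row corresponds to one viewing angle and each column
# to a distance from the centre.
pano = cv2.warpPolar(omni, (360, 720), center, max_radius,
                     cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
cv2.imwrite("omni_start_unwrapped.png", pano)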

So what I tried doing was loading the first radial image, then the Visual Anchor module, then the second radial image.  But try as I might by clicking the Re-Target and Re-Acquire buttons, I could not get the Visual Anchor module to treat the first image as the target and the second image as the current image.  Even if this might not work very well, it would be nice if there were a way to try it.
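As a fallback I may also try a plain phase correlation between the saved and current radial images outside of RoboRealm; here is a rough sketch of what I have in mind (OpenCV-style, with placeholder file names standing in for the two attached images):

import cv2
import numpy as np

# Sketch: estimate the (x, y) shift between a saved image and the current one
# with phase correlation.  File names are placeholders for the two attached
# radial images.
saved = cv2.imread("radial_start.png", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("radial_right_15in.png", cv2.IMREAD_GRAYSCALE)

# phaseCorrelate expects floating-point, single-channel images of equal size.
(dx, dy), response = cv2.phaseCorrelate(np.float32(saved), np.float32(current))
print("Estimated shift: dx=%.1f px, dy=%.1f px (response %.3f)" % (dx, dy, response))

# A low response means a translation-only model no longer fits, which is
# likely once the two viewpoints are far apart.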

In the meantime, I have been reading up on two popular feature extraction techniques: SIFT and SURF.  I understand that SIFT is patented but was wondering if you were looking into something similar for the image matching problem.

Thanks again for all your work!

--patrick

Here now are my test images.  First the raw omni-directional versions:

- Starting position
- Displaced 15 inches to the right

Now the radial versions:

- Starting position
- Displaced 15 inches to the right



John from Australia  [13 posts] 15 year


This machine seems to be moving left to right?

Have you viewed this with pixel flow?

I think there will be a limit to how accurately you can determine how far it has moved. The forward direction, for example, only appears to have changed by one pixel to the nearest line in the direction of travel. The closer the "objects", the more pixel changes you will get for any distance moved. A camera under the robot viewing floor texture, for example, would give you the most accuracy.

It comes down to locating features that can be recognized in both images. In your example I have used the red blobs' centroids.

I doubt there is any advantage in changing the images. A feature will have an angle and a distance from the center.
John from Australia  [13 posts] 15 year
The image seemed to fail to upload?
from United States  [214 posts] 15 year
Hi John,

Thanks for your suggestions.  Using the centroid of the red blob gave me some ideas.  If I use the Segment Colors module together with the Blob Filter module, I may be able to use standard image registration techniques to map one blob set into the other, thereby getting back the required translation parameters.  I'm not sure if this will be any better than using Harris Corners as the features to match, but some experimentation should tell.
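Roughly, the registration step I have in mind would look something like this (a NumPy sketch only; the centroid coordinates are made up, and in practice they would come from the Blob Filter results):

import numpy as np

# Sketch: given red-blob centroids found in the saved image and in the current
# image (e.g. from Segment Colors + Blob Filter), estimate the translation by
# matching each current centroid to its nearest saved centroid and averaging
# the offsets.  The coordinates below are made up for illustration.
saved_blobs = np.array([[120.0, 80.0], [200.0, 150.0], [310.0, 95.0]])
current_blobs = np.array([[138.0, 82.0], [219.0, 151.0], [329.0, 97.0]])

offsets = []
for c in current_blobs:
    d = np.linalg.norm(saved_blobs - c, axis=1)    # distance to every saved blob
    offsets.append(c - saved_blobs[np.argmin(d)])  # shift from saved to current

dx, dy = np.mean(offsets, axis=0)
print("Estimated translation: dx=%.1f, dy=%.1f pixels" % (dx, dy))

Nearest-neighbour matching like this will obviously only hold up when the displacement is small compared to the spacing between the blobs, so it remains to be seen how well it works in practice.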

As for pixel flow, there isn't necessarily any movement between the two images in real time.  What I mean is that the first image is pulled up "from memory" and the second image is what the robot currently sees.  The goal is to find out how to move from the current position so that the robot ends up in the position represented by the image in memory.  In the example given, the second picture was taken with the robot 15 inches to the right of the first picture location.

I already have a good algorithm for finding the rotation between the two images.  Since the images are omnidirectional, I simply use a circular cross correlation method that nicely spits out the angular disparity between the images.  The scale will also be the same since the camera is in a fixed position relative to the ground.  So all I need are the x and y translation parameters between the two images.
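For reference, the circular cross-correlation step is roughly the following (a NumPy sketch of the idea rather than my exact code; it assumes both panoramas were unwrapped to the same size with rows corresponding to angle, and the file names are placeholders):

import cv2
import numpy as np

# Sketch: find the relative rotation between two omni-directional views by
# circular cross-correlation of their angular intensity profiles.  Assumes
# both panoramas were unwrapped to the same size with rows corresponding to
# angle; file names are placeholders.
def angular_profile(filename):
    pano = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
    profile = pano.mean(axis=1)       # one average intensity per angle bin
    return profile - profile.mean()   # remove DC offset before correlating

p1 = angular_profile("omni_start_unwrapped.png")
p2 = angular_profile("omni_right_15in_unwrapped.png")

# Circular cross-correlation via the FFT; the peak location gives the shift.
corr = np.fft.ifft(np.fft.fft(p1) * np.conj(np.fft.fft(p2))).real
shift_bins = int(np.argmax(corr))
print("Estimated rotation: %.1f degrees" % (360.0 * shift_bins / len(p1)))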

--patrick
pranavtrehun007 from United Kingdom  [33 posts] 15 year
Does the C# API have the feature to load a pre-saved image for the visual anchor?

btw, visual anchor is really a cool feature. good work guys (esp STeven).
John from Australia  [13 posts] 15 year
The identification of features or feature configurations, which I guess you hope to do with RoboRealm, is the key problem; the rest is all math. I would suggest that until you get a robust system for recognizing natural features, it may be useful to play with a structured environment, like a ship at sea using beacons.

I assume you have googled to see how others have used omnidirectional vision?

http://www.ieeta.pt/atri/cambada/docs/neves.07a.pdf
http://citeseer.ist.psu.edu/old/hundelshausen01omnidirectional.html
from United States  [214 posts] 15 year
Thanks for the links.  Yes, as it turns out, I had already read those reports.  I have also read a number of papers on using omnidirectional vision to home in on previous snapshots and I have tried out several of the algorithms.  The best I was able to do was get back the relative rotational angle between the current image and the saved snapshot.  However, none of the methods for determining the translational parameters worked very well in my environment (typical household setting).  Most of the papers used simple lab environments with fairly structured features.  So yes, going to a structured environment would certainly help.

--patrick

