Optic flow from Visual Anchor module?
from United States  [214 posts]
9 years
Hi STeven,

I'm not sure if this is even remotely possible but I thought I'd ask anyway.  Is it possible that the algorithm behind your Visual Anchor module could be used as the basis for an "Optic Flow" module and/or a "Distance from Motion" module?  My understanding is that some optic flow algorithms match a number of interest points between two frames and then plot the vector differences between corresponding coordinates.  One could then return the vectors for optic flow or their magnitudes for distance; i.e., the size of the displacement between two corresponding interest points should be inversely related to the distance of the point from the camera when the camera moves.  Do I have this right?
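The idea described above can be sketched in a few lines: match points between two frames, take the coordinate differences as flow vectors, and treat a larger displacement as a closer point. The point names and coordinates below are made up for illustration; a real module would get its matches from a feature detector.

```python
# Sketch of optic flow from matched interest points, plus relative depth
# from flow magnitude. Assumes a translating camera; all names and
# coordinates here are hypothetical example data.

# (x, y) coordinates of the same interest points in two frames
frame1 = {"corner_a": (100.0, 120.0), "corner_b": (300.0, 240.0)}
frame2 = {"corner_a": (110.0, 128.0), "corner_b": (302.0, 242.0)}

def flow_vectors(pts1, pts2):
    """Displacement vector for each point matched in both frames."""
    return {name: (pts2[name][0] - p[0], pts2[name][1] - p[1])
            for name, p in pts1.items() if name in pts2}

def relative_depth(flow):
    """Bigger displacement -> closer point (relative depth ~ 1 / magnitude)."""
    return {name: 1.0 / max((dx * dx + dy * dy) ** 0.5, 1e-9)
            for name, (dx, dy) in flow.items()}

flow = flow_vectors(frame1, frame2)
depth = relative_depth(flow)
# corner_a moved ~12.8 px, corner_b ~2.8 px, so corner_a is the closer point
```

Note this only gives relative depth up to a scale factor; recovering metric distance would additionally need the camera motion and focal length.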

--patrick
Anonymous 9 years
Yes, your basics are correct. The issues are more about actual execution. We have an optical flow module which works about as well as most do ... which means they don't work very well at all. The issue is twofold: speed and accuracy.

Speed, in that the computation for each pixel is very high. What others do to reduce this requirement is not to work with every pixel but to look for specific features and track just those ... but there is no guarantee that the feature you need to track is one of them, especially if you are watching a small or thin object ... like a table leg. That's what is referred to as sparse optical flow. We prefer dense optical flow in that it gives better coverage ... but then we're back to the speed issue.
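The gradient-based tracking behind most sparse-flow implementations is some variant of Lucas-Kanade: solve for a window's motion from the image gradients by least squares. A minimal single-window sketch with numpy and a synthetic one-pixel shift (an illustration of the general technique, not RoboRealm's module):

```python
import numpy as np

# Single-window Lucas-Kanade sketch: recover motion (u, v) from image
# gradients. Frame 2 is frame 1 with the content moved one pixel to the
# left, so the recovered u should come out near -1 and v near 0.
rng = np.random.default_rng(0)
base = rng.random((32, 33))   # random texture keeps the system well-posed
f1 = base[:, :-1]             # frame 1
f2 = base[:, 1:]              # frame 2: content shifted one pixel left

Iy, Ix = np.gradient(f1)      # spatial gradients
It = f2 - f1                  # temporal difference
# Brightness constancy: Ix*u + Iy*v + It = 0, solved over the whole window
A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
(u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
```

Doing this densely means solving such a system around every pixel, which is exactly the per-pixel cost the reply is pointing at.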

The other issue, accuracy, comes down to lack of texture. Most rooms have rather flat walls/floors that have no texture and thus cannot be used to "anchor" any point except at the edges of the walls/floors ... which themselves are hard to match since they suffer from occlusions. What you need to do is expand what you are matching ... but that complicates the matching process tremendously ... which again comes back to an issue of speed!
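The texture problem has a crisp numerical signature: the 2x2 matrix of summed gradient products (the structure tensor used in Lucas-Kanade and in Harris corner detection) becomes singular on a blank patch, so the matching equations have no stable solution there. A quick sketch, again just illustrative:

```python
import numpy as np

# Smallest eigenvalue of the structure tensor as a "trackability" score:
# near zero on textureless patches, clearly positive on textured ones.
def min_eigenvalue(patch):
    Iy, Ix = np.gradient(patch.astype(float))
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.eigvalsh(G)[0]   # eigvalsh returns ascending order

flat = np.full((16, 16), 0.5)                         # blank wall
textured = np.random.default_rng(1).random((16, 16))  # busy surface
# min_eigenvalue(flat) is ~0 (untrackable); the textured patch scores high
```

This is the same score feature detectors use to pick "good features to track", which is why all the trackable points cluster on edges and corners rather than on the wall itself.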

You're right about the similarity between optical flow and stereo correspondence. They are basically the same problem, but with the limitation in stereo that the disparity is in the horizontal direction ... which makes the computation faster than optical flow.
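The horizontal-only search is what makes stereo cheaper: for each pixel you try a handful of shifts along the same row instead of searching a 2-D neighbourhood. A toy sum-of-squared-differences block matcher on made-up 1-D data (not a real stereo pipeline):

```python
import numpy as np

def disparity_for_pixel(left_row, right_row, x, block=5, max_disp=10):
    """Horizontal shift d whose block in the right row best matches
    the block at x in the left row (SSD cost)."""
    ref = left_row[x : x + block]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):   # 1-D search: horizontal shifts only
        if x - d < 0:
            break
        cand = right_row[x - d : x - d + block]
        cost = float(np.sum((ref - cand) ** 2))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

rng = np.random.default_rng(2)
left = rng.random(64)
right = np.roll(left, -3)   # "right" view: same scanline shifted by 3 px
# disparity_for_pixel(left, right, 20) recovers d = 3
```

A full 2-D flow search would multiply that inner loop by another (2*max_disp+1) vertical shifts, which is the speed gap the reply describes.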

Keep monitoring the tutorials section as we hope to reveal some of our progress on these issues in the next several months. If you find any other good implementations, please post them here too!

Thanks,
STeven.
from United States  [214 posts] 9 years
Hi STeven,

Many thanks for the explanation--makes sense.  I look forward to watching your progress over the next few months.  As I'm sure you are aware, the OpenCV library has some kind of optical flow module but I haven't tried it yet.  Here is a link to a pretty good demo of its use:

http://ai.stanford.edu/~dstavens/cs223b/

Probably more valuable to me than optical flow would be stereo disparity.  While I don't plan to use two cameras yet, I can imagine using two images taken from two vantage points and then computing a distance map--at least as well as can be done with whatever correspondence points can be computed (e.g. Harris corners?).  I can even imagine adding a "roll" servo to my robot's pan/tilt head mechanism so that it could tilt its head left then right, just like we do, and note how closer objects move more than objects farther away.
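For the two-vantage-point idea, the standard pinhole relation is depth = focal_length x baseline / disparity. The focal length and baseline below are made-up example values; with a calibrated camera you would measure both:

```python
# Back-of-the-envelope depth from two vantage points. The focal length
# (pixels) and baseline (metres) are hypothetical example numbers.
def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.06):
    """Depth in metres from pixel disparity between the two views."""
    return focal_px * baseline_m / max(disparity_px, 1e-9)

# a Harris-corner match that shifts 30 px between the two head positions
# sits about 1 m away; a 10 px shift is three times farther
near = depth_from_disparity(30.0)   # 1.0 m
far = depth_from_disparity(10.0)    # 3.0 m
```

This also shows why the head-tilt trick works: the inverse relation means nearby corners produce large, easily measured shifts while distant ones barely move.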

--patrick
