Andrew F. from United States  [5 posts] 17 years ago
Here are some sample images.  The images were taken without compression using 16mm lenses on two cameras separated by about 3 or 4 inches.  Currently, the main goal of the depth perception is foreground detection.  Our goal is to have our robot manipulate objects with its grippers.  We use line segment extraction and want to be able to tell which line segments belong to the item in the robot's hands and which don't.  We can paint its arms and grippers blue or green so that they are excluded.  While we are still writing the object manipulation code, we are using a lazy susan, which operates similarly in principle (so for now, we are trying to pick out which line segments are part of the object sitting on the lazy susan).
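In case it helps to make the idea concrete, here is a minimal sketch of that kind of disparity-based foreground test, assuming rectified grayscale image pairs, OpenCV, and line segments given as endpoint tuples.  The file names, disparity range, block size, and threshold are placeholders, not values from our actual setup.

import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparity (x16), so convert to pixels.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Near objects have large disparity; anything closer than the lazy susan
# plane counts as "foreground".  The threshold is scene-dependent.
FOREGROUND_DISPARITY = 20.0  # pixels, placeholder
foreground = disparity > FOREGROUND_DISPARITY

def segment_is_foreground(seg, mask, samples=20):
    """Keep a line segment (x0, y0, x1, y1) if most of its sampled pixels
    fall on the foreground mask."""
    x0, y0, x1, y1 = seg
    xs = np.linspace(x0, x1, samples).astype(int)
    ys = np.linspace(y0, y1, samples).astype(int)
    return mask[ys, xs].mean() > 0.5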

We actually have the cameras mounted so that they can turn inwards, so it would be incredibly useful for the algorithm to take an estimate of how "turned in" the cameras are (at first we are fine with parallel cameras, and that is how the sample images I'm sending were taken).  We are interested only in relative distance at the moment.  When we finally get this running on the robot (for now we are manipulating the objects by rotating them on a lazy susan a couple of feet away), the objects will be roughly 6 inches from the robot's cameras, which are separated by about 3 inches.
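For what it's worth, here is how I've been thinking about folding the toe-in angle into the depth estimate.  This is only a small-angle sketch, not our actual code: with symmetric toe-in the cameras converge at some distance where disparity is zero, so the measured disparity is offset by f*B/Z_conv.  The focal length in pixels below is an assumption (16mm lens, ~7.5 micron pixels on a 1/3" sensor).

import math

BASELINE_IN = 3.0          # camera separation, inches (from the setup above)
FOCAL_PX = 16.0 / 0.0075   # assumed focal length in pixels (~2100)

def depth_from_disparity(disparity_px, toe_in_deg=0.0):
    """Return depth in inches; toe_in_deg is the inward rotation of each camera.

    With toe-in, the cameras converge at Z_conv = (B/2)/tan(theta), where
    disparity is zero, so measured disparity is shifted by f*B/Z_conv.
    """
    offset = 0.0
    if toe_in_deg > 0.0:
        z_conv = (BASELINE_IN / 2.0) / math.tan(math.radians(toe_in_deg))
        offset = FOCAL_PX * BASELINE_IN / z_conv
    effective = disparity_px + offset
    return FOCAL_PX * BASELINE_IN / effective if effective > 0 else float("inf")

Since we only need relative distance for now, ranking segments by their mean disparity would probably be enough, but the toe-in term matters once the cameras are verged on a close object.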

Our lens is fairly zoomed in at the moment to approximate the resolution of the fovea (very roughly guesstimated as a 5-degree field of view over a 200-pixel diameter), which is why we are using a 16mm lens on a 1/3" sensor for now.
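A quick back-of-the-envelope check of that guess, assuming a nominal 1/3" sensor width of about 4.8mm and a 640-pixel-wide image (both assumptions, since I don't have the exact sensor spec in front of me):

import math

SENSOR_WIDTH_MM = 4.8   # nominal 1/3" sensor width, assumed
FOCAL_MM = 16.0
IMAGE_WIDTH_PX = 640    # assumed image width

hfov_deg = 2 * math.degrees(math.atan(SENSOR_WIDTH_MM / (2 * FOCAL_MM)))  # ~17 deg
deg_per_200px = hfov_deg * 200 / IMAGE_WIDTH_PX                           # ~5.3 deg
print(f"horizontal FOV: {hfov_deg:.1f} deg, 200-px patch: {deg_per_200px:.1f} deg")

So a 200-pixel patch spans roughly 5.3 degrees with this lens and sensor, which is in line with the fovea approximation above.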

Thanks for your useful questions.  It's helpful for me to get the parameters straight in my head as well.

-Andrew


