Region of Interest (ROI) ideas
from United States  [214 posts]
8 years
Hi STeven and other RoboRealm experts,

I'm trying to find the best method for sampling a region of interest in RoboRealm using some form of motion detection.  The idea is that I want to jiggle an object in front of my robot, then have the robot name its color.  To do this, I need to isolate the object in the image based on its movement, then I'll use the color statistics of the isolated region to feed a neural network to do the color naming.

The best I have been able to do so far is illustrated in the attached .robo file.  Using Optical Flow seems more effective than using the Movement module since it tends to "see" the moving pixels across the entire object rather than just its boundary area.  This then creates a better mask to capture more of the moving object.

If anyone has a better idea, I would love to hear it!

--patrick

program.robo
from United States  [214 posts] 8 years
Hi again,

I think I can explain my goal more clearly than in my original post.  Suppose you start with just the Movement module in RoboRealm.  Now move a colored ball back and forth a foot or two in front of the camera.  The image below is the result for a blue ball.  Note that much of the ball appears black, not blue, presumably because interior blue pixels are just trading places during the movement so their motion is not detected.

What I would like to see is the entire ball including its interior so I can get the correct color statistics.  I've tried a number of combinations of morphology operators but I can't seem to close up the blob into just a single blue disc.
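The "close up the blob" goal can be sketched with a morphological closing (dilate, then erode). This is a toy illustration in Python with NumPy and SciPy, not RoboRealm's actual morphology modules: the mask here is a hand-built ring standing in for the motion image of the ball, where only the boundary pixels were detected.

```python
import numpy as np
from scipy import ndimage

# Toy motion mask: only the boundary of the moving "ball" was detected,
# leaving a ring with an empty (False) interior -- as in the blue-ball image.
mask = np.zeros((11, 11), dtype=bool)
mask[3:8, 3:8] = True       # solid 5x5 block...
mask[4:7, 4:7] = False      # ...with its interior knocked out (the ring)

# Morphological closing = dilate then erode.  Two iterations with a 3x3
# structuring element are enough to bridge the 3x3 hole in this toy mask;
# a real motion mask may need a larger element or more iterations.
closed = ndimage.binary_closing(mask, structure=np.ones((3, 3)), iterations=2)

print(closed[5, 5])   # -> True: the interior pixel is now filled
```

The closed mask covers the ball's interior as well as its boundary, so color statistics taken under it would reflect the whole object rather than just its edges.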

Any ideas?

--patrick



Anonymous from United Kingdom  [99 posts] 8 years
No RoboRealm on this machine to test it, but it seems to me that you would use Movement -> Blob.  If it is not picking up a single blob that covers the entire object, then dilate it out -> erode it back; this should close up the blob.  Get the geometry of the blob and then crop THE ORIGINAL UNPROCESSED IMAGE accordingly.  This would give you a reduced rectangular image which you could then do the color match against.  (It would have color info for the surroundings, but then you could play with flood fill to get the most common colors.)   Off the top of my head, you would then just port the most common colors in using the API of your choice and then do a color distance algorithm.  (Just have a set of trained colors and minimize sqrt(dr^2 + dg^2 + db^2).)
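The trained-colors / minimum-distance idea can be sketched in a few lines of Python. The color names and reference RGB values below are purely illustrative; a real table would come from sampling your own camera under your own lighting.

```python
import math

# Illustrative set of trained colors (RGB, 0-255) -- replace with samples
# taken from your own camera.
TRAINED = {
    "red":    (200, 40, 40),
    "green":  (40, 180, 60),
    "blue":   (50, 60, 200),
    "yellow": (220, 210, 50),
    "white":  (240, 240, 240),
}

def name_color(rgb):
    """Return the trained color name minimizing sqrt(dr^2 + dg^2 + db^2)."""
    r, g, b = rgb
    return min(TRAINED,
               key=lambda name: math.sqrt((r - TRAINED[name][0]) ** 2 +
                                          (g - TRAINED[name][1]) ** 2 +
                                          (b - TRAINED[name][2]) ** 2))

print(name_color((210, 50, 60)))   # -> red
```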

have fun and I have enjoyed reading about your projects!
mmason
from United States  [214 posts] 8 years
Hi Prof Mason,

Many thanks for the great suggestion--it works quite well to isolate the main blob and get its mean color.  The only modification I made to your suggestion was to use the resulting blob to *mask* the original image instead of cropping it--then I don't have to deal with the rectangular corners.

I'll post my results as soon as I can keep RoboRealm from crashing--right now it is crashing whenever I try to use the Blob_Filter module to display just the largest blob...

--patrick

P.S. Your own obstacle avoidance tutorial from some time ago inspired my own attempts.  I hope you'll post some more of your work as it becomes available!
Anonymous from United Kingdom  [99 posts] 8 years
A fellow at the lab suggested that instead of using the minimum RGB color distance, you look at hue space (HLS) and try the minimum distance there (maybe with some weighting).  RoboRealm will automatically convert RGB to hue and you will probably have much better luck with it.   Apparently hue is much less sensitive to external lighting conditions.  You will also want to do some calibration.  He suggested that if you are holding the object in your hand, you could use your hand color for calibration, since you can apply techniques for skin identification.  Again these are largely done in hue.  (Apparently skin color in hue is largely independent of race!)
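A sketch of the hue-based comparison using Python's standard colorsys module (the reference hues are illustrative). Two details matter: hue is circular, so the distance has to wrap around at 1.0, and achromatic colors like white have no meaningful hue, so a real classifier would also check saturation before trusting the hue.

```python
import colorsys

# Illustrative reference hues in [0, 1), as returned by colorsys.rgb_to_hls.
REFERENCE_HUES = {"red": 0.0, "yellow": 1/6, "green": 1/3, "blue": 2/3}

def hue_distance(h1, h2):
    """Circular distance on the hue wheel (hue wraps at 1.0)."""
    d = abs(h1 - h2)
    return min(d, 1.0 - d)

def name_by_hue(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return min(REFERENCE_HUES, key=lambda n: hue_distance(h, REFERENCE_HUES[n]))

# A bright red and a much darker red share the same hue, which is why hue
# is less sensitive to lighting than raw RGB distance.
print(name_by_hue((220, 30, 30)), name_by_hue((90, 12, 12)))   # -> red red
```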

let us know how it goes!
mmason
from United States  [214 posts] 8 years
I managed to get a simple color naming script going using normalized RGB space.  But I took your skin idea and used it to subtract skin tones from the final blob which tends to reduce the error of calling smaller objects "Red" when they are not.

So I've attached the .robo file that I have so far.  It only knows about red, green, blue, yellow and white as color names but it is surprising how well it works even on a fairly wide range of colors within those classes.

The camera I am using is a Philips 1300NC with the auto setting turned on.  This helps with lightness constancy.  Normalizing the RGB values also helps against changing lighting conditions.
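The normalized-RGB idea is simple enough to sketch: divide each channel by the channel sum, reducing the color to its chromaticity. Scaling the overall brightness up or down leaves the normalized values unchanged, which is where the robustness to lighting changes comes from.

```python
def normalize_rgb(rgb):
    """Map (R, G, B) to chromaticity (r, g, b) with r + g + b = 1."""
    total = sum(rgb)
    if total == 0:
        return (1/3, 1/3, 1/3)   # pure black carries no chromaticity info
    return tuple(c / total for c in rgb)

# The same surface under bright and dim light: the raw RGB values differ
# by a scale factor, but the normalized values agree.
bright = normalize_rgb((200, 100, 50))
dim    = normalize_rgb((100, 50, 25))
print(bright, dim)
```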

One thing I noticed with this camera is that greens come out blue-ish.  So the Color_Balance module is set to Manual and I use a large negative number on the Blue component.  No doubt this would have to be tweaked depending on your camera.  Using a negative contrast value in the Contrast module also improved performance considerably.

Just for the fun of it, I threw in the Speak module so if your speakers are turned on, you should hear the TTS speak the color name.  This can get annoying after awhile...

Finally, using RoboRealm version 2.7.0, it was critical to set the Processing and Display frame rates to something other than 0 (I chose 20 fps)--otherwise, the program would crash regularly.  Furthermore, CPU load goes to 100% almost immediately if you use too high a resolution.  I used the lowest available, 160x120.  Even so, CPU load starts off around 65%, then quickly rises to 100% within the first minute or so for some unknown reason.

Perhaps STeven can test this .robo file to see if he sees the same crashing problem with higher processing rates as well as this odd drift in CPU load.

--patrick


program.robo
from United States  [214 posts] 8 years
I just discovered that it appears to be the Speak module at the very end of the pipeline that is spiking the CPU load to 100%.  The load is fine until the first color is named, then it spikes to 100% and stays there, even if the Run button is turned off.  So it might be a good idea to disable or delete the Speak module in the script posted above.

--patrick

from United States  [214 posts] 8 years
Here are a few sample images taken using the .robo file I posted above.  The frame resolution is 176x144 and the color of the object in my hand as computed by the script is displayed in the upper left corner.  To get the script to focus on the object, I simply move it a bit in front of the camera--maybe a couple of inches from side to side.

--patrick



Anonymous 8 years
Patrick,

Movement segmentation is a tricky task. The robofile that you included above does seem to work out quite nicely. I only tried playing around with the Movement module setting to check the last 100 frames, which gets more of the movement pixels in view and eliminates the need for the Erode and Close modules. I also replaced the Flood Fill with Segment Colors, which is faster and a little more stable than flood fill.

We took care of the CPU issue of the speech module in 2.7.2

The movement module is probably one of the most active modules in terms of additional developments. Your task is an interesting one with no easy solution unless you can segment the image prior to checking for movement. We'll have to think about this one a little more as it is a really interesting project ... identification using movement is a great way to focus attention. Perhaps there is a trick that will work with a few more rules at the pixel level.

Thanks for the interesting post and great images!

STeven.
from United States  [214 posts] 8 years
Hi STeven,

Thanks for your reply--I had forgotten that Segment Colors would be a better choice than Flood Fill for this kind of thing.

Yes, it would be terrific if you could tweak a module or two so we could completely isolate a moving object.  (Easy for me to say!)  One thought I had was this: since the Center of Gravity module can already put a nice bounding box around a group of pixels, could that be used as a starting point to fill in the group of pixels from their outer members inward?  Alternatively, for a short term solution, could you add an option to the Center of Gravity module so that the bounding box could have a fill option so we can then use it as a mask on the original image?  Even better would be a bounding shape that reflects the overall geometric properties of the pixel cloud--in other words, if the cloud is more circular, use a bounding disc or oval rather than a box.  If it is more rectangular, use a rectangle with the appropriate aspect ratio.
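One rough way to choose between the bounding shapes suggested above: compare the blob's pixel count to the area of its bounding box. A solid rectangle fills its box completely (ratio near 1.0), while a solid disc fills only about pi/4 (roughly 0.785) of it. A hypothetical sketch with an arbitrary 0.9 threshold, not anything RoboRealm provides:

```python
import numpy as np

def suggest_bounding_shape(mask, rect_threshold=0.9):
    """Pick 'rectangle' or 'ellipse' for a boolean blob mask, based on how
    much of its axis-aligned bounding box the blob fills."""
    ys, xs = np.nonzero(mask)
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    fill_ratio = mask.sum() / box_area
    return "rectangle" if fill_ratio >= rect_threshold else "ellipse"

# A solid square fills its box exactly; a disc fills about pi/4 of it.
square = np.ones((30, 30), dtype=bool)

yy, xx = np.mgrid[0:41, 0:41]
disc = (yy - 20) ** 2 + (xx - 20) ** 2 <= 20 ** 2

print(suggest_bounding_shape(square))   # -> rectangle
print(suggest_bounding_shape(disc))     # -> ellipse
```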

Of course, this is probably all much harder than I imagine to implement!

--patrick




from United States  [214 posts] 8 years
Oh, I forgot to mention--the Speak module in 2.7.2 no longer speaks the color name that I am storing in a variable.  It *does* speak when I manually enter text in the text box and click the Speak button, but it does not appear to speak the contents of variables (I tried some other variables too, like Image_Width).

--patrick

Anonymous 8 years
Doh, ok, 2.8.1 has that speech issue back to normal.

We're looking into the object movement and possible improvements to the COG module. Note that you could fill the COG box if you wanted it completely solid by ensuring that it does not "Display as Annotation" and then following with a fill module ... but that only gets you a square. Rectangular, circular and elliptical representations of the data would be nice too. Hmm ...

STeven.
from United States  [214 posts] 8 years
Thanks for the speech fix and COG box fill suggestion.  Looking forward to seeing what you might come up with down the road for other ways of grouping the data.

--patrick

This forum thread has been closed due to inactivity (more than 4 months) or number of replies (more than 50 messages). Please start a New Post and enter a new forum thread with the appropriate title.
