We start with a typical shot of the field from the point of view of a robot. The task at hand is to identify the yellow tote within the image so we can tell the robot how to move to approach the target. In this case, we will use two features of the tote to identify it correctly: its color and the retro-reflective pattern on the tote.
We then process the image using the RGB Filter module, keeping only pixels close to the color yellow.
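RoboRealm's RGB Filter is a GUI module, so its exact thresholds are not shown here; as a rough sketch, a yellow filter keeps pixels with strong red and green but weak blue. The threshold values below are illustrative assumptions, not RoboRealm's defaults:

```python
import numpy as np

def filter_yellow(rgb, min_rg=150, max_b=100):
    """Keep pixels that are strong in red and green but weak in blue (yellow).

    rgb: HxWx3 uint8 array. Thresholds are illustrative, not RoboRealm's.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r >= min_rg) & (g >= min_rg) & (b <= max_b)

# Tiny 1x3 test image: a yellow pixel, a blue pixel, a gray pixel.
img = np.array([[[255, 220, 40], [30, 40, 200], [128, 128, 128]]], dtype=np.uint8)
mask = filter_yellow(img)  # only the first pixel survives
```

In practice these thresholds would be tuned against real field lighting.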
Using color does a good job (in this case) of removing most of what we are not interested in. However, there can always be other items in view (like another robot) that could confuse this process. To be sure we are looking at a tote, we will continue to process for the visual targets on the totes. We start by removing objects that would be too small to be a tote, apply a convex hull to each remaining shape, and save the result as a marker.
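A minimal, dependency-free sketch of this step might label connected blobs, drop the small ones, and hull each survivor. The pixel threshold and the monotone-chain hull below are assumptions standing in for RoboRealm's internal modules:

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labeling via BFS; returns a list of pixel lists."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                q, blob = deque([(y, x)]), []
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    blob.append((cx, cy))  # store as (x, y)
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                blobs.append(blob)
    return blobs

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices of a point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def tote_markers(mask, min_pixels=4):
    """Drop blobs too small to be a tote, then hull each one as a marker."""
    return [convex_hull(b) for b in label_blobs(mask) if len(b) >= min_pixels]
```

The hull marker matters later: it covers the whole tote face, including the colorless target region inside it.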
We then restore the original RGB image and process for pixels that are gray (i.e., that lack color). This highlights the areas of the image lacking color, which is a good way to eliminate the floor and the yellow totes, leaving only a couple of objects.
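"Lacking color" can be approximated as a small spread between the RGB channels. The spread and brightness thresholds below are assumptions; the brightness check is added here only to favor the bright retro-reflective material:

```python
import numpy as np

def filter_colorless(rgb, max_spread=30, min_brightness=120):
    """Keep bright pixels whose channels are nearly equal (gray/white).

    max_spread and min_brightness are illustrative thresholds.
    """
    rgb = rgb.astype(int)
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)
    return (spread <= max_spread) & (rgb.max(axis=-1) >= min_brightness)

# Tiny 1x3 test image: near-white, yellow, dark gray.
img = np.array([[[200, 205, 198], [255, 220, 40], [40, 40, 45]]], dtype=np.uint8)
mask = filter_colorless(img)  # only the near-white pixel survives
```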
We then use the Blob Filter to remove those white groups of pixels that are nothing like the L-shaped visual targets. This is done by removing objects that are too small or too large and then keeping those that are somewhat triangular in shape.
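One crude stand-in for a "somewhat triangular" test is the fill ratio: a triangle covers roughly half of its bounding box, while a solid rectangle covers nearly all of it. The size limits and ratio bounds below are assumptions, not the Blob Filter's actual criteria:

```python
def is_triangular(blob_pixels, lo=0.4, hi=0.65):
    """Crude triangularity test: a triangle fills about half its bounding box.

    blob_pixels: list of (x, y) pixel coordinates. Bounds are assumptions.
    """
    xs = [p[0] for p in blob_pixels]
    ys = [p[1] for p in blob_pixels]
    box = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    return lo <= len(blob_pixels) / box <= hi

def filter_blobs(blobs, min_px=5, max_px=500):
    """Drop too-small and too-large blobs, keep the roughly triangular ones."""
    return [b for b in blobs if min_px <= len(b) <= max_px and is_triangular(b)]

# A filled right triangle passes; a filled square does not.
tri = [(x, y) for y in range(5) for x in range(y + 1)]
square = [(x, y) for y in range(5) for x in range(5)]
```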
While some unwanted objects still remain, the next step removes them. In that last step we overlap the current detections with the objects that were earlier detected as yellow. By intersecting these two features we are finally left with only those objects that are both yellow and contain a triangular non-colored inner pattern. This is done using the Blob Overlap module, which keeps objects that overlap each other in two images.
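The overlap idea can be sketched as: for each connected blob in the first mask, keep it only if it shares at least one pixel with the second mask. This is a simplified, dependency-free approximation of the Blob Overlap module, not its actual implementation:

```python
import numpy as np

def keep_overlapping(mask_a, mask_b):
    """Keep each 4-connected blob in mask_a that shares a pixel with mask_b."""
    h, w = mask_a.shape
    seen = np.zeros_like(mask_a, dtype=bool)
    out = np.zeros_like(mask_a, dtype=bool)
    for y in range(h):
        for x in range(w):
            if mask_a[y, x] and not seen[y, x]:
                stack, blob = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask_a[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                # Keep the whole blob only if any of its pixels overlap mask_b.
                if any(mask_b[py, px] for py, px in blob):
                    for py, px in blob:
                        out[py, px] = True
    return out
```

Note that this only works because the earlier convex-hull marker covers the entire tote face: the colorless target pixels themselves never overlap the yellow pixels directly.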
We now create an outline of each detection, colorize that outline, and merge it back into the original image to confirm that we have indeed selected just the totes.
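A simple way to get such an outline is to keep only the mask pixels that have at least one unset 4-neighbour, then paint those pixels in a highlight color on a copy of the original image. This is an assumed equivalent of the RoboRealm steps, with an arbitrary red highlight:

```python
import numpy as np

def outline(mask):
    """Boundary pixels of a mask: set pixels with at least one unset 4-neighbour."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

def draw_outline(rgb, mask, color=(255, 0, 0)):
    """Merge a colorized outline of the mask back into the original RGB image."""
    out = rgb.copy()
    out[outline(mask)] = color
    return out
```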
To try this yourself:
- Load and Run the following robofile in RoboRealm.
- The sample image is already embedded in the robofile.
- You can now click on each module to show the processing up until that point.
- Do you understand the individual stages? Can you create a robofile for the gray totes?