
Visual Targeting #2

The ideas introduced in this tutorial have not been verified against the FIRST rules. Please check the rule manual before making any modifications or unusual use of items provided in the FIRST kit of parts.

This tutorial assumes that you have acquired an image of some form and need to process it to determine the location of the upper targets. This information would then be used to determine how much the robot needs to rotate to center itself for shooting.

Let's assume the following three images are to be processed. These images are courtesy of FIRST, and 500 sample images can be downloaded from TeamForge. They show the target at different distances and angles.

#1 Original #2 Original #3 Original

The first thing we want to do is process the image for the retro-reflective tape. It is assumed that this is illuminated by a GREEN LED light. By separating the rectangle from the rest of the image we can proceed to isolate and extract just that object. First we use the RGB Filter module to convert the image into a black and green image.

#1 Green Filtered #2 Green Filtered #3 Green Filtered (oops)
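If you want to experiment outside of RoboRealm, the sketch below shows one way a comparable green filter could be written in Python with OpenCV/NumPy. It is not the RGB Filter module's actual implementation; the function name and the dominance threshold are illustrative and would need tuning for your camera.

```python
import cv2
import numpy as np

def green_filter(bgr_image, min_dominance=40):
    """Keep only pixels whose green channel clearly dominates red and blue."""
    b = bgr_image[:, :, 0].astype(np.int16)
    g = bgr_image[:, :, 1].astype(np.int16)
    r = bgr_image[:, :, 2].astype(np.int16)
    # A pixel counts as "green" if G exceeds both R and B by a tunable margin.
    dominant = (g - np.maximum(b, r)) > min_dominance
    mask = dominant.astype(np.uint8) * 255
    # Produce a black-and-green image similar to the screenshots above.
    output = np.zeros_like(bgr_image)
    output[:, :, 1] = np.where(dominant, bgr_image[:, :, 1], 0)
    return mask, output

# Example usage (file name is illustrative):
# mask, green_only = green_filter(cv2.imread("target1.jpg"))
```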

We can see that using that module extracts the green U shape quite nicely in the first two images but has difficulty with the last image. The reason for this is that the particular camera used loses the color when the pixels become oversaturated with light. In this case the U appears white instead of green. This shape can still be extracted, but NOT by the color filter. You can either switch to another segmentation technique (used in the previous tutorial) or adjust the camera exposure to capture LESS light so that the color is recorded correctly.

You will notice that the second target (angled more sharply) DOES get detected correctly. This is because less light is reflected back to the camera from this target, so it exhibits more of a green color than the other target, which reflects too much light.

For now, we will stop using image #3 since a hardware tweak would be needed to fix it.
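If you do want to try the exposure tweak in software rather than on the camera itself, the hedged sketch below lowers an OpenCV capture's exposure properties. Whether these properties take effect at all depends entirely on the camera and driver, and the numeric values are purely illustrative.

```python
import cv2

cap = cv2.VideoCapture(0)                        # camera index is illustrative
# Many drivers require auto-exposure to be disabled before a manual value is
# honored; the accepted values for CAP_PROP_AUTO_EXPOSURE vary by backend.
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
# Lower the exposure so the retro-reflective tape is not blown out to white.
cap.set(cv2.CAP_PROP_EXPOSURE, -6)

ok, frame = cap.read()
cap.release()
```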

Since we are looking only for U-shaped objects, we can exploit a couple of attributes of the shape to remove all those other blobs (collections of pixels) that do not match what we would expect of the U shape. We do this using the Blob Filter module to eliminate shapes that are not U shapes. The first property we use is size.

#1 Size Filtered #2 Size Filtered
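As a rough, non-RoboRealm illustration of the same idea, the sketch below drops connected components below an assumed minimum area; the 200-pixel threshold is a placeholder to be tuned against your own images.

```python
import cv2
import numpy as np

def filter_by_size(mask, min_area=200):
    """Remove connected components smaller than min_area pixels."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    keep = np.zeros_like(mask)
    for label in range(1, num):                  # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == label] = 255
    return keep
```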

The size filter eliminates objects that are too small to be the target (mostly the LED lights). This removes most of the unwanted parts of the image, but a reflection of the LEDs against the field wall can still be seen in image #1. To better guarantee that only a U shape remains, we add a boxed filter, which checks the shape's pixel count against its bounding box, and a skewness filter, which removes objects that are not skewed in the Y direction. Because the U shape has a large bounding box relative to its actual number of pixels and is strongly skewed in the Y direction due to the base of the U, we can eliminate everything but the U shape. These properties were also recommended on the WPILib site in Identifying and Processing the Targets under Coverage Area (aka Boxed) and Moments.

#1 Shape Filtered #2 Shape Filtered
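Again purely as an illustration of the same checks (not the Blob Filter module's internals), the sketch below computes a coverage ratio and the Y skewness from image moments. The thresholds are assumptions, and since the useful sign of the skew depends on how the target sits in the frame, only its magnitude is tested here.

```python
import cv2
import numpy as np

def is_u_like(blob_mask, max_coverage=0.5, min_abs_y_skew=0.3):
    """blob_mask: uint8 image containing a single blob (255 = blob pixel)."""
    ys, xs = np.nonzero(blob_mask)
    box_area = (xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
    coverage = len(xs) / float(box_area)         # a U fills little of its box
    m = cv2.moments(blob_mask, binaryImage=True)
    var_y = m["mu02"] / m["m00"]
    if var_y == 0:
        return False
    # Skewness of the blob's Y distribution about its centroid; the heavy
    # base of the U makes the magnitude noticeably non-zero.
    y_skew = (m["mu03"] / m["m00"]) / (var_y ** 1.5)
    return coverage <= max_coverage and abs(y_skew) >= min_abs_y_skew

def filter_by_shape(mask):
    """Keep only connected components that pass the U-shape checks."""
    num, labels = cv2.connectedComponents(mask)
    keep = np.zeros_like(mask)
    for label in range(1, num):
        blob = np.where(labels == label, 255, 0).astype(np.uint8)
        if is_u_like(blob):
            keep |= blob
    return keep
```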

Lastly, we remove all but the largest shape in the image. This ensures that we focus only on the larger, nearer target. You have to pick one, but you could use criteria other than size.

#1 Largest #2 Largest
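A minimal sketch of that last step, keeping only the largest remaining blob, might look like the following; other criteria (for example, the blob lowest in the frame) could be swapped in just as easily.

```python
import cv2
import numpy as np

def keep_largest(mask):
    """Zero out everything except the largest connected component."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num <= 1:
        return mask                              # nothing found
    areas = stats[1:, cv2.CC_STAT_AREA]          # skip background label 0
    largest = 1 + int(np.argmax(areas))
    return np.where(labels == largest, 255, 0).astype(np.uint8)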

To get a better sense of what we end up with, we color the resulting shape red and merge that back into the original image so we can see what has been highlighted.

#1 Final #2 Final
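A comparable overlay can be produced with a couple of lines of NumPy, as sketched below; this simply paints the blob pixels red on a copy of the original image rather than reproducing RoboRealm's merge step exactly.

```python
import numpy as np

def highlight(bgr_image, mask):
    """Paint the blob pixels red (BGR order) on a copy of the original image."""
    out = bgr_image.copy()
    out[mask > 0] = (0, 0, 255)
    return out
```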

An additional green arrow has been added to show the offset from the center of the screen to the X coordinate of the found target. This offset tells us which direction to move the robot. The technique is an iterative feedback approach rather than a specific calculation: the visual feedback constantly tells us how to change the robot's current state in order to reach a better position. It requires that the camera image be processed quickly and that new information be fed to the actuators, which update the robot's position, which is then observed again by the camera in near real time. This iterative approach does not require any precise calculations that may or may not change during the competition due to worn equipment or lower battery levels.

The actual values fed to the actuators can be very coarse (neutral, near right, right, extreme right, etc.) since, over time, the values are integrated based on the feedback of the camera and the reaction time of the robot.
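To make that concrete, the sketch below maps the target's X offset from the image center into coarse commands like the ones listed above. The pixel bucket boundaries are invented for illustration and would be tuned on the robot.

```python
import numpy as np

def steering_command(mask, image_width):
    """Map the target's X offset from image center to a coarse command."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return "no_target"
    offset = int(xs.mean()) - image_width // 2   # positive = target right of center
    side = "right" if offset > 0 else "left"
    if abs(offset) < 10:
        return "neutral"
    if abs(offset) < 60:
        return "near_" + side
    if abs(offset) < 150:
        return side
    return "extreme_" + side
```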

To try this out yourself:

  1. Download RoboRealm
  2. Install and Run RoboRealm
  3. Load and Run the following robofile, which should produce the above results.

If you have any problems with your images and can't figure things out, let us know by posting in the forum.