
Visual Targeting #1

The ideas introduced in this tutorial have not been verified to comply with FIRST rules. Please check the rule manual before modifying or making unusual use of any items provided in the FIRST kit of parts.

This tutorial assumes that you have acquired an image of some form and need to process it to determine the location of the upper targets. This information would be used to determine the rotation of the robot to center it for shooting.

Let's assume the following three images are to be processed. These images are courtesy of FIRST, and 500 sample images can be downloaded from TeamForge. They show the target at different distances and angles. One image is also brighter than the rest, in case the camera you are using does not provide sufficient exposure settings.

[Images: #1 Original | #2 Original | #3 Original]

The first thing we want to do is process the image for the retro-reflective tape. By separating the rectangle from the rest of the image we can proceed to isolate and extract just that object. First we use the Adaptive Threshold module to convert the image into a black and white image.

[Images: #1 Threshold | #2 Threshold | #3 Threshold]
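
If you are following along outside of RoboRealm, a minimal OpenCV sketch in Python of the same step might look like the following. The filename, block size, and offset are illustrative assumptions, not values taken from this tutorial.

    import cv2

    # Load the camera frame in grayscale ("target.jpg" is a placeholder name).
    gray = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)

    # Compare each pixel against the mean of its local neighborhood so that
    # bright retro-reflective tape passes even when overall exposure varies
    # from frame to frame. The negative offset keeps only pixels noticeably
    # brighter than their surroundings; both values will need tuning.
    thresh = cv2.adaptiveThreshold(
        gray, 255,
        cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY,
        51, -10)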

We can see that this module extracts the white U shape quite nicely. Since we are looking only for U-shaped objects, we can exploit a property of that shape that makes it unique in the image. Exploiting this property starts by replacing each blob, or collection of pixels, with its convex hull.

[Images: #1 Convex Hull | #2 Convex Hull | #3 Convex Hull]
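
Continuing the OpenCV sketch (assuming OpenCV 4's findContours signature), the hull replacement could be written as:

    import numpy as np

    # Replace every blob with its filled convex hull, turning each white
    # U shape into a solid rectangle.
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hull_img = np.zeros_like(thresh)
    for c in contours:
        cv2.drawContours(hull_img, [cv2.convexHull(c)], -1, 255,
                         thickness=cv2.FILLED)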

This converts each U shape into a full rectangle. We then subtract the thresholded image from this convex hull image in order to eliminate shapes that were already close to convex; what remains of each U shape is its filled-in interior.

[Images: #1 Minus Threshold | #2 Minus Threshold | #3 Minus Threshold]
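
In the OpenCV sketch this is a single saturating subtraction:

    # Hull minus threshold leaves only the filled-in interiors; blobs that
    # were already nearly convex cancel out almost completely.
    diff = cv2.subtract(hull_img, thresh)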

Next we use the Blob Filter module to remove objects smaller than 100 pixels, since they are probably not targets, leaving only the largest rectangular objects.

[Images: #1 Blob Size | #2 Blob Size | #3 Blob Size]
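
A rough OpenCV equivalent of this filtering step, continuing the sketch:

    # Drop connected components smaller than 100 pixels; label 0 is the
    # background, so iteration starts at 1.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(diff)
    filtered = np.zeros_like(diff)
    for i in range(1, num):
        if stats[i, cv2.CC_STAT_AREA] >= 100:
            filtered[labels == i] = 255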

To get a better sense of what we end up with, we color the resulting shape red and merge it back into the original image so we can see what has been highlighted.

[Images: #1 Final | #2 Final | #3 Final]
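
The highlighting can be sketched in OpenCV as well:

    # Paint the surviving blobs red on top of the original frame
    # (OpenCV stores color channels as BGR, so red is (0, 0, 255)).
    color = cv2.imread("target.jpg")
    highlighted = color.copy()
    highlighted[filtered > 0] = (0, 0, 255)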

An additional green arrow has been added to show the offset from the center of the screen to the X coordinate of the found target. These coordinates tell us which direction to move the robot. This iterative feedback is more of an approach than a specific calculation: the visual feedback constantly tells us how to change our current state in order to achieve a better position. This method requires that the camera image be processed quickly and the new information fed to the actuators, which update the robot's position, which the camera then captures again in near real time. The iterative approach avoids precise calculations that may or may not hold during the competition due to worn equipment or lower battery levels.
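
A sketch of the offset calculation, again building on the OpenCV example:

    # X centroid of the largest surviving blob, and its offset from the
    # image center; a negative offset means the target is left of center.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(filtered)
    if num > 1:
        biggest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        offset = centroids[biggest][0] - filtered.shape[1] / 2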

The actual values fed to the actuator can be very coarse (neutral, near right, right, extreme right, etc.), since over time the values are integrated based on the feedback of the camera and the reaction time of the robot.
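
One hypothetical way to bucket the offset into such coarse commands (the break points below are guesses to illustrate the idea, not values from this tutorial):

    def steering_command(offset, width):
        # Normalize the pixel offset to roughly -1.0 .. 1.0, then bucket it.
        frac = 2.0 * offset / width
        if abs(frac) < 0.05:
            return "neutral"
        side = "right" if frac > 0 else "left"
        if abs(frac) < 0.25:
            return "near " + side
        if abs(frac) < 0.60:
            return side
        return "extreme " + side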

To try this out yourself:

  1. Download RoboRealm
  2. Install and Run RoboRealm
  3. Load and Run the following robofile, which should produce the above results.

If you have any problems with your images and can't figure things out, post them in the forum and let us know.