Visual Targeting #2

This tutorial assumes that you have acquired an image of some form and need to process it to determine the location of the upper targets. This information would be used to determine the rotation of the robot to center it for firing.

Let's assume the following images are to be processed.

#1 Original #2 Original #3 Original

The first thing we want to do is process the image for the retro-reflective tape that is illuminated by a green light/LEDs. By separating the strips from the rest of the image we can proceed to isolate and extract just the target object. First we use the RGB_Filter module to convert the image into a black and green image.
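The idea behind the filter can be sketched as a simple per-pixel test (an illustrative sketch only; the function name and threshold here are assumptions, not the RGB_Filter module's actual algorithm): a pixel is kept when its green channel dominates the red and blue channels by some margin.

```python
# Minimal sketch of a green-dominance filter (illustrative only; the
# RGB_Filter module's real thresholding may differ).
def green_filter(pixels, min_diff=40):
    """Keep (r, g, b) pixels whose green channel exceeds both the red
    and blue channels by at least min_diff; all others become black."""
    out = []
    for (r, g, b) in pixels:
        if g - max(r, b) >= min_diff:
            out.append((0, g, 0))   # keep only the green component
        else:
            out.append((0, 0, 0))   # suppress non-green pixels
    return out

# A bright green pixel survives; a gray pixel does not.
print(green_filter([(20, 200, 30), (120, 130, 125)]))
# [(0, 200, 0), (0, 0, 0)]
```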

#1 RGB Filter #2 RGB Filter #3 RGB Filter

We can see that this module extracts the green strips nicely. The next step is to remove any objects that are simply too small to be the target, using the Blob Size module.
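The effect of this step can be sketched as dropping blobs whose area falls below a threshold (a toy illustration with a hypothetical threshold; the Blob Size module operates on actual connected components in the image):

```python
# Toy sketch of size-based blob filtering: each blob is represented by
# its pixel area, and anything under the threshold is discarded.
def filter_blobs_by_size(blob_areas, min_area=50):
    return [area for area in blob_areas if area >= min_area]

# The two large stripes survive; small reflections and noise are removed.
print(filter_blobs_by_size([400, 12, 3, 380, 25]))  # [400, 380]
```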

#1 Filter Size #2 Filter Size #3 Filter Size

While the results are already quite promising, we want to be sure that only the two target stripes remain. To verify this we can check the relationship between any two remaining objects within the image. Before we generate statistics from the image, we first correct the curvature of the stripes, which can make those measurements less precise; the Unshear module straightens the curved stripes into straight lines.

#1 Unshear #2 Unshear #3 Unshear
Once corrected, we begin the analysis by adding the Geometric Statistics module, which provides some basic statistics on each of the remaining blobs.

The qualities that we would like to measure are:

1. The two stripes should have very similar X centers of gravity, i.e. they should be aligned on top of each other.

2. The width of both stripes should be approximately the same.

3. The height of one stripe should be twice that of the other (4 in versus 2 in).

4. The height of the gap between the two stripes should be about the same height as the upper stripe.

All this information is gathered in pieces using the Geometric Statistics module, but the final score needs to be calculated using a custom script. In this case we use the VBScript module to perform this analysis, but any of the programming modules can be used (Python, CScript, JavaScript).

threshold = 0.75
bestScore = 0

cogXList = GetArrayVariable("COG_X")
heightList = GetArrayVariable("HEIGHT")
widthList = GetArrayVariable("WIDTH")
maxYList = GetArrayVariable("MAX_Y")
minYList = GetArrayVariable("MIN_Y")
maxXList = GetArrayVariable("MAX_X")
minXList = GetArrayVariable("MIN_X")

if isArray(cogXList) then

	number_of_objects = ubound(cogXList)

	for outer = 0 to number_of_objects

		outerCogX = cogXList(outer)
		outerHeight = heightList(outer)
		outerWidth = widthList(outer)
		outerMaxY = maxYList(outer)
		outerMinY = minYList(outer)

		for inner = outer + 1 to number_of_objects

			innerHeight = heightList(inner)
			innerWidth = widthList(inner)
			innerCogX = cogXList(inner)
			innerMinY = minYList(inner)
			innerMaxY = maxYList(inner)

			' only compare if objects have similar X center of gravity
			if (Abs(innerCogX - outerCogX) / innerWidth) < 0.1 then

				' score how close the two heights are to the ideal 2:1 ratio
				if outerHeight > innerHeight then
					if outerHeight > (innerHeight * 2) then
						score = (innerHeight * 2) / outerHeight
					else
						score = outerHeight / (innerHeight * 2)
					end if
				else
					if innerHeight > (outerHeight * 2) then
						score = (outerHeight * 2) / innerHeight
					else
						score = innerHeight / (outerHeight * 2)
					end if
				end if

				' score how similar the two widths are
				if outerWidth > innerWidth then
					score = score + (innerWidth / outerWidth)
				else
					score = score + (outerWidth / innerWidth)
				end if

				' determine height of combined stripes
				if innerMaxY < outerMaxY then innerMaxY = outerMaxY
				if innerMinY > outerMinY then innerMinY = outerMinY

				gapSize = ((innerMaxY - innerMinY) - outerHeight) / 2

				if gapSize > innerHeight then
					score = score + (innerHeight / gapSize)
				else
					score = score + (gapSize / innerHeight)
				end if

				if score > bestScore then
				  bestScore = score
				  bestMinY = innerMinY
				  bestMaxY = innerMaxY
				  bestMinX = minXList(inner)
				  bestMaxX = maxXList(inner)

				  if bestMinX > minXList(outer) then
				  	bestMinX = minXList(outer)
				  end if

				  if bestMaxX < maxXList(outer) then
				  	bestMaxX = maxXList(outer)
				  end if

				end if

			end if

		next

	next

	' normalize: three ratio tests each contribute at most 1.0
	bestScore = bestScore / 3

	if bestScore > threshold then

		SetVariable "TARGET_SCORE", CINT(bestScore*100)
		SetVariable "TARGET_MIN_X", bestMinX
		SetVariable "TARGET_MIN_Y", bestMinY
		SetVariable "TARGET_MAX_X", bestMaxX
		SetVariable "TARGET_MAX_Y", bestMaxY

	else

		SetVariable "TARGET_SCORE", 0

	end if


else

	SetVariable "TARGET_SCORE", 0

end if

The code calculates a final score for each combination of two objects. We are only interested in the highest scoring combination, and we then check that this score is sufficient to identify an actual target. The resulting combination is then highlighted by drawing a square around the target along with its score.

#1 Stats #2 Stats #3 Stats

An additional green arrow has been added to show the offset from the center of the screen to the X,Y coordinate of the found target. These coordinates can be used to determine which direction to move the robot. This iterative feedback is more of an approach than a specific calculation: we use the visual feedback to constantly tell us how to change our current state in order to achieve a better position. This method requires that the camera image be processed quickly and feed new information to the actuators, which then update the robot's position, which is in turn again processed by the camera in near real time. This iterative approach does not require any precise calculations that may or may not change during the competition due to worn equipment or lower battery levels.

#1 Arrow #2 Arrow #3 Arrow

The actual values fed to the actuator can be very coarse (neutral, near right, right, extreme right, etc.) since over time the values are integrated based on the feedback of the camera and reaction time of the robot.
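This coarse bucketing can be sketched as follows (the bucket boundaries and image width are hypothetical examples, not values from the tutorial):

```python
# Map the target's horizontal offset (pixels from image center) to a
# coarse steering command; the exact bucket thresholds are assumptions.
def steer_command(offset_x, image_width=320):
    half = image_width / 2
    ratio = offset_x / half          # -1.0 (far left) .. 1.0 (far right)
    if abs(ratio) < 0.05:
        return "neutral"
    side = "right" if ratio > 0 else "left"
    if abs(ratio) < 0.25:
        return "near " + side
    if abs(ratio) < 0.60:
        return side
    return "extreme " + side

print(steer_command(4))     # neutral
print(steer_command(-30))   # near left
print(steer_command(140))   # extreme right
```

Because the camera keeps reporting the new offset after each movement, even these crude commands converge on a centered target over a few iterations.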

To determine distance we can use the size of the detected pattern relative to the source pattern. The detected size of the pattern in image #1 is 1000, or 100.0%. This makes sense since that is the image the source pattern was read from (i.e. 10 6 9 10 9). For #2 and #3 we get different sizes that correspond to how much further or closer the target is relative to the source pattern. If you have a desired distance from which to launch your ball, you can use an image taken at that distance as the source pattern and move the robot until the detected pattern size is as close to 1000 as you can get it. This ensures that the source pattern size and the detected pattern size are very close, which implies the same distance.

For those that have the ability to target at different distances, one can use the source pattern 10 17 31 32 10, which is the target size in inches * 8. Note the 10's are just borders, so we ignore them when coming up with this pattern. The inner 3 numbers, 17 31 32, divided by 8 are approximately 2 4 4, which is the specification of the targets, i.e. the lowest strip is 2" thick, separated from the upper 4" thick strip by a 4" gap.
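The arithmetic behind that pattern can be checked directly (using the 1/8-inch units described above):

```python
# Source pattern with the two border 10's removed, in 1/8-inch units.
inner = [17, 31, 32]

# Dividing by 8 recovers the approximate physical spec in inches:
# 2" lower stripe, 4" gap, 4" upper stripe.
print([round(v / 8) for v in inner])  # [2, 4, 4]
```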

From this knowledge and the known field of view of the camera we can determine a distance from the camera to the target using:

minY = GetVariable("TARGET_MIN_Y")
maxY = GetVariable("TARGET_MAX_Y")

if GetVariable("TARGET_SCORE") > 0 then

	targetPixelHeight = maxY - minY

	' calibrated for an Axis camera
	imageHeight = GetVariable("IMAGE_HEIGHT")
	cameraFieldOfView = 47.5
	targetHeight = 100.0

	' determine distance in 8 x inches
	totalDistance = (((targetHeight*imageHeight)/targetPixelHeight)/2)/ _
		tan((cameraFieldOfView/2)*3.14159/180)

	' convert to ft (12 inch per ft * 8 inch multiplier) = 96
	totalDistance = CInt((totalDistance*100)/96)/100

	' save it for use in other modules
	SetVariable "Distance", totalDistance

end if
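The same pinhole-style relationship can be sketched in Python with example numbers (240-pixel-tall Axis image, 47.5° field of view, 100 eighth-inches of target height as in the VBScript above; the 60-pixel measurement is a made-up example, not a value from the tutorial images):

```python
import math

def target_distance_ft(target_pixel_height, image_height=240,
                       camera_fov_deg=47.5, target_height=100.0):
    """Distance to the target, mirroring the VBScript: target_height is
    in 1/8-inch units, so dividing by 96 (12 in/ft * 8) yields feet."""
    eighth_inches = (((target_height * image_height) / target_pixel_height)
                     / 2) / math.tan(math.radians(camera_fov_deg / 2))
    return round(eighth_inches / 96, 2)

# Hypothetical example: the target spans 60 pixels of a 240-pixel image.
print(target_distance_ft(60))
```

As the robot approaches, target_pixel_height grows and the reported distance shrinks, which matches the pattern-size behavior described above.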

To try this out yourself:

  1. Download RoboRealm
  2. Install and Run RoboRealm
  3. Load and Run the following robofile, which should produce the above results.

If you have any problems with your images and can't figure things out, let us know and post it in the forum.