Targeting system
MaxK from United States  [2 posts]  8 years ago
I have a target that I need my robot to shoot a ball into. The target has a U shape of retro-reflective tape under it. I was planning on using a ring of green LEDs around the camera to shine onto the tape and have it reflect back into the camera. I have figured out how to isolate the U of tape as a blob, but I still need to figure out the distance to the target and the angle from the camera to it, both vertical and horizontal. How can I do those things? Thanks in advance.
Steven Gentner from United States  [1446 posts]  8 years ago
Max,

That involves a bit of math ... there are a couple of places where you can find the appropriate formulas, including the FIRST site. But also see

http://www.roborealm.com/tutorial/FIRST/slide030.php

and

http://www.roborealm.com/tutorial/FIRST/slide020.php

and perhaps

http://www.roborealm.com/tutorial/FIRST/slide050.php

from the 2012 competition, whose target had a similar shape to this year's.

It's an involved process that requires good calibration of the image to get really accurate numbers.
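
To give a flavor of that math, the heart of it is the pinhole-camera approximation. Here is a rough Python sketch (the resolution, field of view, and target width below are placeholder numbers, not values from the tutorials; substitute your camera's actual specs and the real tape dimensions):

    import math

    # Placeholder camera parameters -- use your camera's real specs.
    IMAGE_WIDTH  = 320.0   # pixels
    HFOV_DEG     = 47.0    # horizontal field of view in degrees (assumed)
    TARGET_WIDTH = 0.61    # real width of the taped target in meters (assumed)

    # Focal length in pixels, derived from the field of view.
    FOCAL_PX = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(HFOV_DEG / 2.0))

    def horizontal_angle_deg(blob_center_x):
        # Angle from the camera axis to the target; positive = right of center.
        return math.degrees(math.atan2(blob_center_x - IMAGE_WIDTH / 2.0, FOCAL_PX))

    def distance_m(blob_width_px):
        # Range estimated from the apparent (pixel) width of the target.
        return TARGET_WIDTH * FOCAL_PX / blob_width_px

The vertical angle works the same way using the vertical field of view and the row offset. But again, getting numbers you can trust out of this requires careful calibration.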

Having said that ... I don't believe all of this is really necessary, since the precision you get from doing it FAR outperforms anything your robot will be able to do. Can your robot really move 1.5 degrees? It's unlikely, unless your team has significant resources to spend to get that kind of precision.

Instead, I'd recommend starting with a much simpler approach and seeing if that works. That approach can be thought of as iterative feedback.

Start by detecting the target. Once it is detected, decide if the robot needs to move left or right to align with the target. (This assumes the camera and shooting mechanism are physically aligned.) The amount you turn is not very important, just that it is large enough to actually move the robot. Then repeat the process: grab another image, process it, and determine the motor movements, until you are within, say, 40 pixels of the center of the target, then stop. Keep in mind that the center of a 320x240 image is (160,120) and NOT (0,0).

If you find that the robot turns left, then right, then left, etc., lower the motor values, since the robot is overshooting. If the robot does not move at all, or only attempts to, try increasing the values.

By running this loop fast enough and capturing a new image after each movement, the robot will align itself without your having to worry about calibration, degree calculations, etc. All of that math can be ignored as long as you have a closed system (i.e. the sensor dictates the movement and the process repeats), assuming your motor values and deadzone (i.e. the 40 pixels in the middle of the image) are set to reasonable numbers. Just play with those numbers to find the right values empirically. Because the camera system can react much more quickly than the robot motors, the motors serve as a smoothing filter on what the vision system sends them.
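
In rough Python pseudocode the loop looks something like this (grab_image, blob_center_x and set_motors are made-up placeholders for whatever your camera and drive interfaces actually provide):

    IMAGE_CENTER_X = 160   # center column of a 320x240 image
    DEADZONE_PX    = 40    # close enough: stop turning inside this band
    TURN_SPEED     = 0.3   # tune empirically: lower if oscillating, raise if stalled

    while True:
        frame = grab_image()              # placeholder camera call
        x = blob_center_x(frame)          # x of the detected U-shaped blob, or None
        if x is None:
            set_motors(0, 0)              # no target in view: stop (or search)
            continue
        error = x - IMAGE_CENTER_X
        if abs(error) <= DEADZONE_PX:
            set_motors(0, 0)              # aligned: stop and shoot
            break
        elif error > 0:
            set_motors(TURN_SPEED, -TURN_SPEED)    # target right of center: turn right
        else:
            set_motors(-TURN_SPEED, TURN_SPEED)    # target left of center: turn left

DEADZONE_PX and TURN_SPEED are exactly the two numbers to play with.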

This is typically a faster, more interactive, and more robust solution than working out all the math for a precision you can't achieve anyhow. It's also quick to update should you suddenly change your camera, or should your battery weaken and fail to move the motors as quickly.

But if you want to, you can always determine the precise values using the above links to the 2012 competition (i.e. the VBScript at the end of the second link). Personally, though, I'd just use the iterative approach.

STeven.
MaxK from United States  [2 posts]  8 years ago
Thanks for the reply! Regarding your suggestion to use iterative feedback to turn the shooter: it sounds great, but our shooter is actually stationary, with a wheel on either side of the ball. We will spin one wheel faster than the other to make the ball curve into the goal when we are not directly in front of it, so I was thinking the angle to the target could be useful in those calculations, although I haven't done them yet, so I could be totally wrong. Thanks again for your response!
Steven Gentner from United States  [1446 posts]  8 years ago
Fair enough! Watch out for the wheel speeds, though. You will want to put a PID feedback loop on them to ensure that each wheel spins at the desired rate. In the 2012 contest, one of the local teams we were working with didn't do that, which caused the wheel speed to change as the battery wore down, and that will really mess up your aim!
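
A bare-bones velocity PID looks something like this (read_wheel_rpm and set_wheel_power are placeholders for your encoder and motor controller; the gains have to be tuned on the real shooter):

    KP, KI, KD = 0.002, 0.0005, 0.0   # placeholder gains -- tune these
    integral = 0.0
    prev_error = 0.0

    def pid_step(target_rpm, dt):
        global integral, prev_error
        error = target_rpm - read_wheel_rpm()     # placeholder encoder read
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        # The output power rises as the battery sags, holding the rpm steady.
        set_wheel_power(KP * error + KI * integral + KD * derivative)

Call pid_step() at a fixed rate (dt), once per wheel, with separate state for each.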

If you are looking for an easier way, just base the wheel speeds on the distance in pixels from the target to the center of the image. This will not be quite linear (due to camera distortion) but should be accurate enough to give you a fighting chance. You could test at about 10 locations by manually rotating the shooter, recording the number of pixels from the image center and the speed you needed to shoot the ball in the right direction. At the end of this you will have a pixel-offset-to-wheel-speed table, and you can interpolate between the known data points to get a good guess.

The advantage of this table technique is that it again avoids any image calibration and robot wheel calibration. The wheel speed may not change linearly with the desired angle ... so I'm not sure what that equation would really look like. Using a lookup table and interpolating between values captures any side effects that you'd otherwise not be aware of. It's a bit more laborious to collect all those test points, but that's something you should do anyhow to verify the shooter's performance.
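
The interpolation itself is only a few lines of Python; the table values below are made-up placeholders for the ten or so points you would actually measure:

    # (pixel offset from image center, wheel speed difference) measured pairs,
    # sorted by offset. Replace with your own recorded data.
    TABLE = [(-120, -0.30), (-60, -0.15), (0, 0.0), (60, 0.15), (120, 0.30)]

    def speed_delta_for_offset(px):
        # Clamp outside the measured range, otherwise interpolate linearly.
        if px <= TABLE[0][0]:
            return TABLE[0][1]
        if px >= TABLE[-1][0]:
            return TABLE[-1][1]
        for (x0, y0), (x1, y1) in zip(TABLE, TABLE[1:]):
            if x0 <= px <= x1:
                return y0 + (y1 - y0) * (px - x0) / (x1 - x0)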

The data will also tell you whether or not your shooter is consistent ... which I think will be the biggest issue!

Let us know how it goes!

STeven.
