Sandy,
We had heard of the peg target before kickoff, but have not seen it even mentioned since. Do you have actual images of it? We never received any from FIRST and therefore assumed this was a change and the target had been dropped.
Assuming they exist, the Line Pattern should also be able to find those. In terms of driving the robot towards the peg, I would avoid any angle calculation since it's very unlikely that your robot will be able to drive that accurately. Instead, take the center of the Line Pattern result and use that to determine if the robot needs to drive hard right, right, center, left, or hard left, and just test to see what motor values actually work best. For example, if you find that the robot starts waggling towards the target, reduce the motor values.
For example, if the target is 200 pixels off from the center of the image, that would imply a stronger left or right move. If the target is within 100 pixels, then that's only a slight left or right. If it is within 50 pixels, then move forward. Note these are just guesses, as you will have to tune your own robot and camera setup (the above assumes a 640x480 image AND that the camera is centered on your peg-grabbing device). A rough sketch of that banding is below.
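Here is a minimal sketch of that banding logic in Java, just to make the idea concrete. The thresholds and motor values are the guesses from above and will need tuning; it also assumes image x increases to the right and that the Line Pattern's center x is available to your robot code.

  // Hypothetical sketch: map the horizontal pixel offset of the Line
  // Pattern result to one of five coarse steering commands.
  // Assumes a 640x480 image; all motor values are placeholders to tune.
  public class PegSteering {
      static final int IMAGE_CENTER_X = 320; // half of the 640 pixel width

      // Returns {leftMotor, rightMotor} in the range -1.0 .. 1.0
      public static double[] steer(int targetCenterX) {
          int offset = targetCenterX - IMAGE_CENTER_X;
          if (offset > 200)  return new double[] {  0.6, -0.6 }; // hard right
          if (offset > 50)   return new double[] {  0.5,  0.2 }; // slight right
          if (offset < -200) return new double[] { -0.6,  0.6 }; // hard left
          if (offset < -50)  return new double[] {  0.2,  0.5 }; // slight left
          return new double[] { 0.4, 0.4 };  // within 50 pixels: drive forward
      }
  }

If the robot waggles, shrink the turning values; if it reacts too slowly, grow them.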
Repeating these crude directions very quickly will give you much more reliable targeting than using any angles. BUT this only works when you are running the analysis over and over and processing the image as fast as possible (at least 10 Hz, i.e. 10 frames per second).
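A sketch of that fast loop, using the steer() function above. The getTargetCenterX() and setMotors() stubs are hypothetical stand-ins for however you read the Line Pattern result and command your drive motors:

  // Hypothetical homing loop: re-read the target and re-issue the coarse
  // steering command roughly 10 times per second.
  public class PegHoming {
      static int getTargetCenterX() { return 320; }         // stub: replace with vision read
      static void setMotors(double left, double right) { }  // stub: replace with drive code

      public static void main(String[] args) throws InterruptedException {
          while (true) {
              double[] cmd = PegSteering.steer(getTargetCenterX());
              setMotors(cmd[0], cmd[1]);
              Thread.sleep(100); // ~10 Hz; run as fast as your image processing allows
          }
      }
  }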
This also deals with the fact that most robots CANNOT even drive straight (unless you are using encoders). In essence it uses the target as an absolute measurement of where the robot needs to go (left, right, etc.) and thus will home the robot correctly even if you are not using encoders to drive the robot.
STeven.