Predicting latency
brian turner from United States  [1 posts] 1 year
We are successfully using RoboRealm with a WinBook 700 tablet and getting great results. We do, however, have to stop the robot to get a good heading to the target.

We continuously poll the API via a TCP connection using the code in your samples. When the driver presses fire, we take our gyro reading and then calculate the angle in LabVIEW. The robot then turns until the gyro indicates the correct heading and fires. We are really only using one frame plucked from a continuous flow.

I would like to be able to press fire without a pause and use more than one frame. But because I don't have a good way of detecting or predicting the latency between when the frame was captured and when the gyro reading was taken, we were getting poor results that way. Is there some way of getting a timestamp of the frame capture? Is a moving camera's image as accurate as a stationary one? Will averaging over several images improve accuracy? We originally tried not using the gyro; the robot would oscillate around the target and then hit about 60% of the time. We are now running well over 80%, though I can't tell you what percentage of the failures are the mechanics vs. vision vs. code.
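One way to make frame/gyro latency a non-issue is to stop trying to predict it and instead keep a short timestamped history of gyro headings, then look up (interpolate) the heading that was current when the frame was captured. Here is a minimal sketch of that idea in Python; the class name and buffer size are illustrative, and it assumes you can obtain a capture timestamp for each frame:

```python
import bisect
import time

class HeadingHistory:
    """Ring buffer of (timestamp, heading) pairs so the heading at the
    moment a frame was captured can be looked up after the fact."""

    def __init__(self, max_samples=200):
        self.times = []
        self.headings = []
        self.max_samples = max_samples

    def record(self, heading, timestamp=None):
        """Append one gyro sample; call this every control loop tick."""
        t = time.time() if timestamp is None else timestamp
        self.times.append(t)
        self.headings.append(heading)
        if len(self.times) > self.max_samples:
            self.times.pop(0)
            self.headings.pop(0)

    def heading_at(self, t):
        """Linearly interpolate the heading at time t, clamping to the
        oldest/newest sample when t falls outside the buffer."""
        i = bisect.bisect_left(self.times, t)
        if i <= 0:
            return self.headings[0]
        if i >= len(self.times):
            return self.headings[-1]
        t0, t1 = self.times[i - 1], self.times[i]
        h0, h1 = self.headings[i - 1], self.headings[i]
        frac = (t - t0) / (t1 - t0)
        return h0 + frac * (h1 - h0)
```

With this in place, the target heading becomes `history.heading_at(frame_time) + pixel_offset / 10.0`, and the variable delay between capture and gyro read drops out of the math entirely.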

We don't currently have enough data to be sure that the camera does not have pincushion or barrel distortion affecting the accuracy when we start to fire well off of the target. The angle calculation is very close to 10 pixels per degree. We are using the Microsoft LifeCam.
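For reference, the 10 pixels-per-degree mapping can be captured in one small helper. The image width of 640 is an assumption (a common LifeCam capture size); as noted below, the linear mapping is only trustworthy near the image center:

```python
PIXELS_PER_DEGREE = 10.0  # measured by the team: ~10 px per degree
IMAGE_WIDTH = 640         # assumed capture width; adjust to your settings

def pixel_offset_to_degrees(target_x, image_width=IMAGE_WIDTH):
    """Convert the target's x position to degrees off the camera axis.
    This linear mapping is only valid near the image center; lens
    distortion makes it increasingly wrong toward the edges."""
    center_x = image_width / 2.0
    return (target_x - center_x) / PIXELS_PER_DEGREE
```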

I tried to tune my latency prediction, but the target did not settle down. I learned that the FRC gyro code's rotation rate is an averaged value with an undocumented period, which makes it poor for the formula I was trying to use. The gyro heading is quite fast, but I don't know if it also has some latency. I think the latency is affected primarily by the polling interval. I also suspect that processing of the image may vary in latency. I think we are using the sample code from your website with just the camera settings adjusted. I had a student do that part, and it was so easy he did most of it himself and I did not have to learn RoboRealm. So I may be missing something obvious.
Steven Gentner from United States  [1370 posts] 1 year

If you insert the Watch_Variables module you will see additional variables that mention the capture time of the image. I think that's probably what you want ... that will, however, be very close to the actual machine time, as the USB delays will be very short. The issue is probably in the network transmission of the data from the WinBook to your dashboard. I'm not sure if you are also transmitting the image, but doing so will really slow things down (due to network issues). If you read the bottom of


under system configurations you can learn more about why you are getting oscillations. That's very common in bandwidth-limited environments where the round-trip time is too slow in comparison to reaction time.
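The capture-time variable mentioned above can be read over the same TCP API connection the samples use. As a rough sketch, the API speaks simple XML requests/responses; the exact variable name for the capture time is an assumption here (insert Watch_Variables to see what your install actually exposes), and the default port is assumed to be the one your sample code already connects to:

```python
import socket

def parse_rr_variable(response, name):
    """Pull <NAME>value</NAME> out of a flat XML response string.
    Crude string search is enough for this shape of reply."""
    open_tag, close_tag = "<%s>" % name, "</%s>" % name
    start = response.find(open_tag)
    end = response.find(close_tag)
    if start == -1 or end == -1:
        return None
    return response[start + len(open_tag):end]

def get_rr_variable(name, host="127.0.0.1", port=6060, timeout=1.0):
    """Ask the RoboRealm API server for one variable over TCP.
    The request framing mirrors the published API samples; verify the
    port and variable names against your own working sample code."""
    request = "<request><get_variable>%s</get_variable></request>" % name
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(request.encode("ascii"))
        reply = s.recv(4096).decode("ascii", errors="replace")
    return parse_rr_variable(reply, name)
```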

A still image will always be more accurate than a moving image ... but I don't think in your case that will make too much of a difference. As long as the camera has enough light, it will not blur the image enough to make that an issue. Unless you really do see the images being very blurred or smeared, I'd not worry about that too much.

You may find, however, that the robot vibrates a lot when moving ... which WOULD cause the images to be blurry, since the vibrations will be very high frequency. Snap a couple of screenshots (using the Prt Sc button on your keyboard) while moving quickly and see what the image looks like.

While the image distortion does play a role in accuracy, you would also see this issue when the robot is stationary, since the lens will distort regardless of whether the robot is moving or not. If you find that shots miss when the target is only visible at the extreme left or right of the image, then you have a distortion issue. Firing from slightly off-center will still work, since the distortion will be greater at the sides of the image than in the center.

The issue could be in the polling of the API. The network tables will do a better job of sending only changed data ... or if you use the WaitImage API command, that will ensure that only new data is polled. It is possible to poll faster than 30 fps, but you will not really need to, since new information is only created on a new frame.
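If you do stay with plain polling rather than WaitImage, the same effect can be approximated by watching a frame counter and acting only when it advances. A minimal sketch, assuming you have some callable that returns the current frame count (e.g. a wrapper around an image-count variable; check Watch_Variables for the exact name on your install):

```python
import time

def wait_for_new_frame(get_count, last_count, timeout=0.5, poll_interval=0.005):
    """Poll until the frame counter advances past last_count, so each
    reading is based on a new image instead of re-reading a stale one.
    Returns the new counter value, or None if the pipeline stalls."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        count = get_count()
        if count != last_count:
            return count
        time.sleep(poll_interval)
    return None
```

This also puts a hard bound on how stale a reading can be: anything older than `timeout` is rejected instead of being fed into the heading calculation.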

What I would check is what and how much information is flowing over the network. If you are transmitting the image, try reducing the image size (half the width and height is 1/4 the amount of data), which may suddenly run much faster. Keep in mind that any part of the system can have delays (including the LabVIEW calculation), so every part should be reviewed again.
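The arithmetic behind that 1/4 figure is straightforward: raw frame size scales with width times height, so halving both dimensions quarters the data per frame (the 3-bytes-per-pixel RGB assumption here ignores any compression the transport may apply):

```python
def raw_frame_bytes(width, height, bytes_per_pixel=3):
    """Uncompressed size of one RGB frame in bytes."""
    return width * height * bytes_per_pixel

full = raw_frame_bytes(640, 480)  # 921,600 bytes per frame
half = raw_frame_bytes(320, 240)  # 230,400 bytes: exactly 1/4 of full
```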

In terms of RR, keep an eye on the gray numbers to the right of the pipeline. They should all be quite low, and the frame rate should be around 10-20 fps (ideally higher). If a module's gray number is larger than 100 ms, that module may be slowing things down. That's how we profile to see which modules might need to be reviewed or upgraded to increase performance.


This forum thread has been closed due to inactivity (more than 4 months) or number of replies (more than 50 messages). Please start a New Post and enter a new forum thread with the appropriate title.
