Synchronization of two cameras and getting processing time?
Anonymous
15 year
Hello,
   I have an application that tracks a fast-moving object (about 3 m/s). There are two questions I need to resolve, but I have not found an appropriate way to do so.
1 Since I want to obtain 3D information, I have to use two cameras. Because the object moves very fast, I think synchronization is very important. Right now I use the API to connect to two RR instances and read the variables once both are connected, but I wonder whether synchronization is guaranteed this way. Could one RR instance connect to two cameras and read the variables of both cameras at the same time?
2 How can I get the processing time of each image in RR? I want to measure the object's velocity, so can FPS be used as the processing time?
Anonymous 15 year
I noticed a variable called "IMAGE_TIME", and it confuses me: whether I set the camera to 320x240 or 640x480 it is always 0.033 (33ms), but the fps changes. So I guess it is just the capture time, while the fps reflects the processing time; but the fps is sometimes not very stable, and I need an accurate time to calculate velocity. So how can I calculate the velocity of a fast-moving object? Any helpful suggestion would be appreciated.
   many thanks
                  Richard
    
Anonymous 15 year
Richard,

The IMAGE_TIME variable stores the time the frame was captured as reported by the DirectX interface. So that's the number that you are looking for. The fps will vary as the actual rate may depend on other processes taking CPU time or it may change when the image content changes based on the modules in use (some modules will take longer to run on certain image content).

In terms of accurate timing you will have difficulty with this scenario. As Windows is not a real-time OS, it will not guarantee capturing at a consistent rate. You may have to look into actual stereo cameras that are synced together in hardware and guarantee that the two images are captured and delivered at the same time.

Or you could try to construct a stereo image based on a single camera whose image is split in two by a prism or mirror. There are a couple of these designs on the web. This would then just use one camera (at half the resolution) but would at least eliminate the timing issue.

But with fast moving objects you will have issues with motion blur depending on your shutter time and available light.

STeven.
Anonymous 15 year
Steven,
   Thank you so much for your prompt reply. I understand now that “IMAGE_TIME” just represents the capture time, but I still need the image processing time. Since I want to calculate the object's velocity, I have to take the current position and the previous position, subtract them, and then divide by the processing time. That gives the current velocity of the object, right? I know the processing time can't be guaranteed very accurately, but my application does not need to be very precise, just a reasonable velocity. I wonder if FPS can be used, or if there is a better method.
You mean that two cameras can't be synced in RR? I wonder whether using a stereo camera is practical and how large the timing difference would be.
Anonymous 15 year
Richard,

That's correct. You would want to subtract the positions of the object and divide by the CAPTURE time, not the processing time. You want the time the image was captured, not the time the image finished processing; processing time does not add anything to the equation. IMAGE_TIME reflects the real time that the image was captured, which is what you use to determine velocity.

If I capture an image at time 100ms and another one at time 200ms, but it takes me an hour to finish processing image2 (extreme example here), the speed of the object is still distance/100ms regardless of the hour I spent processing the image. Does that make sense?
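
To put that into a few lines of Python (just a sketch; the numbers are made up, and converting pixel coordinates into real-world units is up to your stereo setup):

# Velocity from two detections of the same object.
# Positions are in metres (after your pixel-to-world conversion);
# capture times are the IMAGE_TIME values in milliseconds.
def velocity(pos_prev, pos_curr, time_prev_ms, time_curr_ms):
    dt = (time_curr_ms - time_prev_ms) / 1000.0   # seconds between CAPTURES
    if dt <= 0:
        raise ValueError("capture times must be increasing")
    return ((pos_curr[0] - pos_prev[0]) / dt,
            (pos_curr[1] - pos_prev[1]) / dt)

# Object moved 0.30m in x between frames captured 100ms apart -> 3.0 m/s.
print(velocity((0.00, 0.00), (0.30, 0.00), 100.0, 200.0))   # (3.0, 0.0)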

STeven.
Richard from China  [10 posts] 15 year
STeven.

   Thanks a lot. I think I understand what you mean: I should subtract the positions of the object and divide by the CAPTURE time. I measured IMAGE_TIME for my camera and it is very stable, always 33ms except right after connecting the camera; I guess that is because of the delay of the camera exposure time. That means I have to finish processing each image within 33ms, otherwise I should use two capture intervals as the time unit. Also, as you said, Windows is not a real-time OS, so I can't have a precise timer in it. But how should I use the API to connect to RR from my program? Do I have to use a longer timer, such as 66ms instead of the actual 33ms, when connecting to RR through my API?
Anonymous 15 year
Richard,

You should find enough examples, in whatever language you want to use, in the API examples download:

http://www.roborealm.com/downloads/API.zip

When connecting to grab an image I would also grab IMAGE_COUNT, which will tell you if you are missing 1 or 2 images and therefore what scale to divide by (i.e. 33, 66, 99ms, etc.). I would not count on each image being taken exactly 33ms apart, as you will very likely get images outside that timeframe.

Note that the RR API will wait for a new image before continuing, so you should not need a specific timer in your application. Just grab images as fast as possible and determine the actual time each image was taken by also grabbing the appropriate variable (after the image is received) for your timing calculations.
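
Roughly, the grab loop could look something like this (a Python sketch only; the RRApi wrapper and its connect/wait_image/get_variable calls are stand-ins for whatever the wrapper classes in API.zip actually provide, get_object_position is your own detection code, and the port is whatever you configured for the API server):

rr = RRApi()                     # placeholder wrapper around the RoboRealm API socket
rr.connect("localhost", 6060)    # use your configured API server port

prev_count = prev_time_ms = prev_pos = None

while True:
    rr.wait_image()                                  # blocks until RR has a new frame
    count   = int(rr.get_variable("IMAGE_COUNT"))    # frame counter, reveals skipped frames
    time_ms = float(rr.get_variable("IMAGE_TIME"))   # capture time of this frame (assuming ms)
    pos     = get_object_position(rr)                # your own object detection result

    if prev_count is not None:
        skipped = count - prev_count - 1             # 0 if no frames were missed
        dt = (time_ms - prev_time_ms) / 1000.0       # seconds; roughly 0.033 * (skipped + 1)
        if dt > 0:
            vx = (pos[0] - prev_pos[0]) / dt         # velocity in your units per second
            vy = (pos[1] - prev_pos[1]) / dt

    prev_count, prev_time_ms, prev_pos = count, time_ms, pos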

STeven.
Anonymous 15 year
STeven,
   I'm still not very clear about grabbing images as fast as possible. I want to create a new thread to connect to the RR API; I wonder if that is OK. The thread would always be running, so when a new image comes from RR, the API thread gets the image and the variables at once. Then, using IMAGE_COUNT as a time-scale marker together with the grabbed variables, I can do the timing calculations, and subtracting the positions and dividing by the time gives the velocity. Am I going in the right direction?
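
Something like this is what I have in mind (just a Python sketch; the RRApi wrapper, port, and get_object_position follow the sketch above and are only my assumptions):

import threading, queue

measurements = queue.Queue()        # (IMAGE_COUNT, IMAGE_TIME, position) tuples

def capture_worker():
    # Dedicated thread: block on each new frame from RR, read the variables,
    # and hand the measurement over for the velocity calculation elsewhere.
    rr = RRApi()                    # placeholder wrapper, as in the sketch above
    rr.connect("localhost", 6060)
    while True:
        rr.wait_image()                               # returns only when a new frame arrives
        count   = int(rr.get_variable("IMAGE_COUNT"))
        time_ms = float(rr.get_variable("IMAGE_TIME"))
        pos     = get_object_position(rr)             # my own detection code
        measurements.put((count, time_ms, pos))

threading.Thread(target=capture_worker, daemon=True).start()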
