assign xyz axis
Jamal from United Kingdom  [3 posts] 14 year
Hi,

Can RoboRealm assign a new reference axis? For example, I have 2 objects. Object1's axis is 0,0,0 (x,y,z), and Object2 will use Object1 as a reference. How can I do this? Is RoboRealm able to assign an x,y,z axis, and how do I assign it?

Thanks
Anonymous 14 year
Jamal,

You can assign a new x,y,z axis but it is done a little differently than in other applications that you might have used. Perhaps if you could explain why you need to reassign the axis we could better illustrate how to do so.

Typically, once you know the location of object 1 you can rotate/realign the image to object 1, which in essence realigns the axis to that object. As there are a couple of modules that can do this, knowing your specific task will allow us to be more precise in our answers.

Thanks,
STeven.
Anonymous 14 year
Hi Steven,

At present, I am doing an analysis of robot fingers. I need to find out the x,y,z location of each finger with respect to the reference axis, which is 0,0,0, before and after the fingers move. The reference is located at the robot's wrist. I consider object1 to be the reference and objects 2,3,4,5,6 to be the 5 fingers. I need to know the new locations of these 5 fingers, especially when a grasping task is performed. Hope this helps with the illustration.

Regards
Jamal
Anonymous 14 year
Jamal,

Ok, then your reference point is the wrist. Are you able to identify this point within an image? I.e. perhaps you could include two images of the hand before and after grabbing. You might be able to use the translate and rotate modules (or the transform module) to move the reference to the wrist point before calculating the finger positions. But this assumes that the wrist point can be identified, as it will move depending on the location of the hand.

If the wrist is identified and the image rotated with respect to this location, then the locations of the fingers would be relative to this location, which in effect translates/rotates the axis.

My concern would be more about how you can segment the individual fingers once the hand is grasping. Unless you modify the environment (i.e. add special finger markers) this will be a difficult process.

STeven
Jamal from United Kingdom  [3 posts] 14 year
Hi Steven,

Thanks for the help. However, I am still confused and struggling to understand the transform module. Attached is a photo of the robot hand. The round red mark is my reference at (0,0,0: x0,y0,z0), while the other green marks are the initial positions of the fingers. Once all the fingers move (grasping), the new locations of the green marks, (x2,y2,z2) for example, should be calculated. So my questions are:

1. How can I assign the reference coordinate (the red mark) as (0,0,0)?
2. How do I calculate the initial coordinates of all the green marks with respect to the reference (0,0,0: x0,y0,z0)?
3. How do I calculate the final coordinates (x2,y2,z2) of all the green marks with respect to the reference (0,0,0: x0,y0,z0)?

In the end, I will find the kinematic relationships for all the fingers, with respect to the reference coordinate (0,0,0: x0,y0,z0), when the grasping task is performed.

Regards
Jamal

 
Anonymous 14 year
Jamal,

Actually, in your case (now that we see what you're trying to do) you would not need the transform module. What we think you are asking for is quite straightforward: once you know the coordinates of the red circle, you simply subtract its coordinates from those of all the green marks. That then defines the green marks with respect to the red mark (i.e. the red mark becomes coordinate 0,0).

So doing that in RR requires detecting all the marks and their coordinates followed by a little VB to translate all the green marks with respect to the red one.

Note that once this is done the coordinates can no longer be displayed on the image, as they may contain negative values.
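To make the subtraction concrete outside of RoboRealm, here is a small Python sketch of the same idea. The marker coordinates below are made-up example values, not output from the attached script:

# Illustrative sketch: express marker coordinates relative to a reference
# point by subtracting the reference from each point. Example values only.

def to_reference(points, reference):
    """Return each (x, y) point expressed relative to the reference point."""
    rx, ry = reference
    return [(x - rx, y - ry) for (x, y) in points]

# Red wrist mark and five green finger marks (hypothetical pixel coordinates).
red_mark = (320, 400)
green_marks = [(250, 180), (290, 150), (330, 140), (370, 150), (410, 180)]

print(to_reference(green_marks, red_mark))
# [(-70, -220), (-30, -250), (10, -260), (50, -250), (90, -220)]
# The red mark is now the origin (0, 0); note the negative values.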

Hopefully the attached script clarifies this.

Note that using one red point only gives you translation invariance, i.e. you can calculate how far the green dots are from the red one ... To get true hand localization (orientation as well as position) you may need to track 3 stable dots. This will depend on what you are trying to do.
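If you do end up tracking 3 stable dots, recovering the hand's rotation and translation from them is a standard least-squares rigid fit. The Python sketch below shows the idea with invented marker positions; it is not a RoboRealm module:

# Fit a 2D rotation R and translation t so that dst ~ src @ R.T + t
# (Kabsch/Procrustes method). All marker positions below are made up.
import numpy as np

def fit_rigid_2d(src, dst):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

# Three stable markers before and after the hand moves.
before = [(0, 0), (10, 0), (0, 10)]
after  = [(5, 5), (5, 15), (-5, 5)]   # rotated 90 degrees and shifted
R, t = fit_rigid_2d(before, after)
print(np.round(R, 3), np.round(t, 3))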

STeven.
program.robo
Anonymous 14 year
Oh, and thanks for the really cool picture! I'd like to see that hand in motion!

:-)

STeven.
Anonymous 14 year
Hi Steve,

Thanks for your help. Just a simple question: how do I assign 3 axes in RoboRealm? Currently I only know of the x and y axes. I want x, y and z axes.

Regards
Jamal
Anonymous 14 year
Jamal,

By Z axis do you mean depth/distance from the camera?

STeven.
Anonymous 14 year
Hi Steven,

The center of gravity (COG) always shows the x axis and y axis coordinates. Is there any possibility of getting a z coordinate as well?

Jamal
Anonymous 14 year
In this case the Z would then be depth? Unless you have a stereo system running you will have to estimate the Z value by some other method as the 2D image generated by the webcam has no implicit depth measurement.

One method works if you know the size of the blob being tracked. There is a correlation between the number of pixels an object occupies and its depth. Naturally, the closer an object is to the camera the more accurate this figure is ... further away and you will lose accuracy. Thus you can think of the area of the blob as the Z coordinate, assuming you know that the blob is a fixed size.

The area of an object will depend on your specific configuration (camera, object size, etc.), so the best way to figure this out is to take area measurements with the object at different distances. Once you have this depth-versus-area data you can fit a function that provides the depth given an area.
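As a rough illustration of that calibration step, here is a small Python sketch. The distance/area numbers are invented, and a fixed-size blob whose area falls off roughly with the square of distance is assumed:

# Calibrate depth (Z) against blob area, then invert the fit for new readings.
# For a fixed-size object, area ~ 1/depth^2, so depth ~ a/sqrt(area) + b.
import numpy as np

# Hypothetical calibration samples: distance in cm, measured blob area in pixels.
distances = np.array([20.0, 30.0, 40.0, 60.0, 80.0])
areas     = np.array([9000., 4100., 2300., 1000., 600.])

# Least-squares line fit of depth against 1/sqrt(area).
a, b = np.polyfit(1.0 / np.sqrt(areas), distances, 1)

def estimate_depth(area):
    """Estimate Z (in the calibration units) from a measured blob area."""
    return a / np.sqrt(area) + b

print(round(estimate_depth(1500.0), 1))   # rough depth for an unseen area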

STeven.
zercastel from Spain  [1 posts] 12 year
Hello, I'm new to RoboRealm. Is there any coded example of this? Is there a complete step-by-step tutorial?

This forum thread has been closed due to inactivity (more than 4 months) or number of replies (more than 50 messages). Please start a New Post and enter a new forum thread with the appropriate title.
