Object Recognition via API
Jason von Nieda from United States [2 posts]
13 years ago
Hi RoboRealm,
I am using RoboRealm to do some basic vision tasks for a PCB pick and place system. I use RoboRealm via the API since I do my development on Mac. I talk to a (licensed) RoboRealm instance running on a VM using Windows.
I need to use the Object Recognition module but the controlling program is in charge of the images that need to be recognized. What is the best way to send template images to RoboRealm via the API and then use them in the Object Recognition module?
One method I was considering is using set_image to set an image, then save_image to store it in a directory and then specifying that directory when I load the module. Is that the best way to do it?
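The set_image / save_image route described above can be sketched as a small planning helper. The command names mirror the workflow in the question, but the argument shapes here are assumptions for illustration, not verified RoboRealm API signatures; check them against the RoboRealm API documentation before use.

```python
import os

def template_update_plan(template_dir, feeder_id, image_bytes):
    """Return the sequence of API calls (as (command, args) pairs) needed to
    push a template image to RoboRealm and persist it into the Object
    Recognition template directory. Command/argument names are assumptions
    modeled on the workflow described above."""
    path = os.path.join(template_dir, feeder_id + ".png")
    return [
        # 1. send the raw pixels to the RoboRealm instance
        ("set_image", {"source": "request", "data": image_bytes}),
        # 2. save the current image into the folder the OR module watches
        ("save_image", {"filename": path}),
    ]
```

A controlling program would walk this list and issue each call through whatever API transport it uses, then point the Object Recognition module at `template_dir`.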
Thanks for your help!
Sincerely,
Jason von Nieda
Anonymous
13 years ago
Jason,
That sounds about right in terms of the directory usage.
If you can provide a little more information on the usage we might be able to offer a better solution. Any sample images would be helpful too.
Do your templates keep changing over time or are you wanting to just add to the template base and have that grow?
Is it recognizing the entire image or just a portion of the image?
As the 'database' is just the file system, you can also use UNC folder paths (not sure if they are available on the Mac) and then use the monitor-folder option in the Training interface to retrain whenever a change in that folder is seen. That would let you manage the database remotely while still keeping the OR module up to date.
Or we could provide an API to automatically add an image with a specific portion of the image to the recognition database (i.e. save to the current OR folder) ... assuming that would work better for you.
STeven.
Jason von Nieda from United States [2 posts]
13 years ago
Hi Steven, thank you for the in-depth response.
Here are some more details:
I am the author of an open source pick and place system called OpenPnP ( http://www.openpnp.org ), and computer vision is one of the tasks the system needs to perform. There are three tasks of interest:
1. Drag Feeder Vision: Where computer vision is used to verify the distance a tape of parts has been dragged.
2. Fiducial Locating: Where vision is used to find a pattern of dots on a circuit board to identify and orient the board.
3. Part Locating: Where vision is used in a camera look up (instead of down) to find the center of a part that has been picked up and adjust an offset if it was picked up off center.
Currently I am working on #1. What happens here is that we have tapes that contain surface mount electronics parts. These tapes have drive holes along the edge and then parts in small pockets. Like this: http://www.antistat.co.uk/shopimages/sections/thumbnails/tape%20&%20Reel.jpg
To save expense, instead of using a mechanical tape feeder which positions and advances the tape, we use what is called a drag feeder. This is just a slot on the table of the machine that the tape fits into. The head of the machine then deploys a pin, inserts it into a hole in the tape and drags it forward by the distance of one part. It then removes the pin, moves to where the fresh part should now be, and picks it up.
The problem is that this becomes unreliable over time. Due to friction and bumping, the drag may be partially or completely unsuccessful, and then the new part is not where it is expected to be.
My plan is to use Object Recognition to determine where the part has actually ended up, pick from that position and then use the offset from where we expected it to be to determine the location to drop the pin on the next feed. This should repeatedly correct any error that comes in.
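The offset-correction idea above is simple arithmetic: the error between the expected and observed pick locations is applied to the next pin-drop position. A minimal sketch, assuming (x, y) machine coordinates (the function and parameter names are illustrative, not OpenPnP's):

```python
def corrected_pin_location(expected_pick, found_pick, nominal_pin):
    """Shift the next pin-drop location by the error observed between where
    the part was expected and where vision actually found it.
    All points are (x, y) tuples in machine coordinates."""
    dx = found_pick[0] - expected_pick[0]
    dy = found_pick[1] - expected_pick[1]
    # Applying the same offset to the pin-drop point re-centers the next drag,
    # so the error does not accumulate from feed to feed.
    return (nominal_pin[0] + dx, nominal_pin[1] + dy)
```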
When the user sets up each feeder they define the pick location, drag start position and drag stop position. They then use a selection rectangle to define an image template from the camera of what the part looks like in the tape. Finally, they define an area of interest where vision should search for instances of that template image. When the feeder runs it will use RoboRealm to search that area of interest for the image template and then use the center of the one found with the highest confidence as the pick point.
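Choosing the pick point from the recognition results can be as simple as taking the center of the highest-confidence match. A sketch, assuming each match is reported with center coordinates and a confidence value (the field names are assumptions; the OR module's actual variable names should be checked in the RoboRealm docs):

```python
def best_match_center(matches):
    """Return the (x, y) center of the highest-confidence match,
    or None if the search found nothing. Each match is a dict with
    'x', 'y', and 'confidence' keys (assumed field names)."""
    if not matches:
        return None
    best = max(matches, key=lambda m: m["confidence"])
    return (best["x"], best["y"])
```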
In general, the template images will probably not change often, but they are configurable from the OpenPnP user interface so the user can change them at will. Typically they would set up a feeder once and then only change the template image when they change the part that that feeder is feeding.
The trick, with RoboRealm, is that the system can have many such feeders and each will have a different template image and area of interest. In an ideal world I would be able to make an API call to set the template image, then set the image to be processed and then perform processing.
Here is an example of one of the template images:
And here is an example of a camera capture where I would be trying to find the template image:
I have tested this in RoboRealm and it works great, now it's just a matter of integrating it into my system.
UNC would work, but I feel it would be fragile. I much prefer sticking to the API where everything is well defined and my users have very little setup to do. All they have to do is launch RoboRealm, provide the IP of the RoboRealm server and OpenPnP takes care of the rest.
Thanks,
Jason von Nieda
Anonymous
13 years ago
Jason,
Thanks for the description. That really helps ... nice system! We are very interested in helping you with the integration, as it is a great use of vision. We will think about this a bit and add the ability to send templates via the API.
In terms of the multiple feed lines, would your system be able to send some sort of id or name that identifies each feed? This would allow templates to be updated in place instead of just accumulating. So whenever a template named "feed1" is sent, it would overwrite the existing template, keeping the number of templates small.
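The update-in-place behavior proposed here amounts to one template slot per feeder name. A minimal sketch of that bookkeeping (the file-naming convention is an assumption, not RoboRealm's own):

```python
import os

def template_path(template_dir, feeder_name):
    """Map a feeder name to a single template file so re-sending a
    template for the same feeder overwrites rather than accumulates.
    The '<name>.png' scheme is an illustrative convention."""
    return os.path.join(template_dir, feeder_name + ".png")

def store_template(store, feeder_name, image):
    """In-memory model of the update-in-place idea: one entry per
    feeder, replaced on each send."""
    store[feeder_name] = image
    return store
```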
More later ...
STeven.