|
Matching algorithms Gabriel [1 posts] |
18 years ago
|
I'm pretty new to RR, but it would seem to me that Image_Match and Object_Tracking are among the most useful tools in the kit. Unfortunately, I cannot find any information that gives me a glimpse of how they work, such as whether they operate on black/white, grayscale, or color images, etc. Could someone give me a brief explanation of these?
Also, being very new to vision, there seem to be a ton of really cool filters like edge detection in the package. Finding edges is cool, but what do I do with that? What is the general theory for how, once you have found the edges in a room, you can then navigate it?
Thanks,
Gabriel
|
|
|
Machine Vision Anonymous |
18 years ago
|
Gabriel,
Those tools are a little underdocumented since we're actually still working on how they function; we're not happy with their general performance yet. They are very useful, but only in specific circumstances. For example, the Image_Match module is ok at locating scenes that have not changed very much. It is good at detecting that the robot is looking at the living room again versus the kitchen, but can only do so when the robot is in approximately the same location as when the reference image was first taken. If significant things have changed, say furniture has been moved, or it is now evening but the initial image was taken in the morning, the matching will probably not work.
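To make that failure mode concrete, here is a toy sketch of scene matching by comparing grayscale histograms. This is not RoboRealm's actual Image_Match algorithm (which is undocumented); it just illustrates why a match that survives sensor noise still breaks when the lighting changes substantially.

```python
# Toy scene matching via grayscale histograms (illustrative only, not
# RoboRealm's actual Image_Match algorithm).

def histogram(pixels, bins=16):
    """Bucket 8-bit grayscale pixel values into `bins` equal-width bins."""
    counts = [0] * bins
    for p in pixels:
        counts[p * bins // 256] += 1
    total = len(pixels)
    return [c / total for c in counts]

def similarity(hist_a, hist_b):
    """Histogram intersection: 1.0 = identical distributions, 0.0 = disjoint."""
    return sum(min(a, b) for a, b in zip(hist_a, hist_b))

living_room = [30, 32, 31, 200, 198, 205, 90, 95, 92, 91]  # reference scene
same_scene  = [31, 33, 30, 199, 201, 204, 92, 94, 90, 93]  # slight sensor noise
darker      = [p // 2 for p in living_room]                # evening lighting

print(similarity(histogram(living_room), histogram(same_scene)))  # 1.0
print(similarity(histogram(living_room), histogram(darker)))      # 0.2
```

Small pixel jitter lands in the same histogram bins, so the match score stays high; halving every intensity shifts the whole distribution and the score collapses, which mirrors the morning-vs-evening problem above.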
The Object_Tracking module uses a technique called "mean shift tracking". It is a well-known technique for tracking objects across successive frames. It works well until the object moves out of the field of view; then the tracking stops and unfortunately does not resume once the object is back in view. Solving that is something we're working on, and it amounts to doing a better job on the Image_Match module above, since the two are closely related. These two modules also share significant similarities with other object recognition modules we are developing. The trick is to find the common ground where they all interrelate and function in a reliable and repeatable way. We hope to explore and document this area much more in the coming months, so please excuse the lack of functionality and documentation for now. We do realize that this is a very significant area of development, as you pointed out.
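Mean shift itself is simple enough to sketch. Real trackers run it on a 2-D color-histogram back-projection of the frame, but the principle survives in one dimension: repeatedly move a window to the weighted centroid of the "object likelihood" values inside it, climbing to the nearest local mode. The example below is a hedged illustration, not RoboRealm's implementation.

```python
# 1-D mean shift: shift a window toward the centroid of the weights it
# covers until it settles on a local mode (the tracked object's position).

def mean_shift_1d(weights, center, radius, iters=20):
    """Return the converged window center, or None if no weight is in view."""
    for _ in range(iters):
        lo = max(0, center - radius)
        hi = min(len(weights) - 1, center + radius)
        total = sum(weights[i] for i in range(lo, hi + 1))
        if total == 0:                # object left the window: tracking fails
            return None
        centroid = sum(i * weights[i] for i in range(lo, hi + 1)) / total
        new_center = round(centroid)
        if new_center == center:      # converged to a local mode
            break
        center = new_center
    return center

# Likelihood map with the object peaked around index 12; the window starts
# at 7 and climbs the slope to the mode.
frame = [0, 0, 0, 0, 0, 1, 2, 3, 5, 8, 9, 10, 12, 9, 7, 4, 2, 1, 0, 0]
print(mean_shift_1d(frame, center=7, radius=4))   # prints 11, near the peak
print(mean_shift_1d([0] * 20, center=7, radius=4))  # prints None: lost track
```

The second call shows the failure STeven describes: once the window contains no object weight at all, mean shift has no gradient to climb and the track is simply lost rather than re-acquired.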
You have a good point. What's the use in finding edges? Well, right now not much, but we have to walk before we can run. Edge detection has been and still is an active area of research, and the basis of machine vision can be said to rest heavily on detecting edges. Edges are the next abstraction up from points. The next level up from edges is shapes (circles, squares, polygons, etc.), and from there we get objects. So you can think of edges as one of the tools along the way to detecting and understanding objects. But as you probably realize, they are just a small part of this process, and a lot more needs to be done before we can start to understand arbitrary scenes. However, before we even get there we will have many interesting discoveries that are very useful. For example, most of the successful object recognition techniques simply look for points in one image and correlate them to the next. They don't know what it is they are tracking, nor do they really understand the image, but they do work and are very valuable in many aspects of machine vision.
Machine vision is still a very active area of research, as no one has all the answers ... but that's what makes it so much fun!
STeven.
|
|