Kamlesh Thakur from India  [32 posts]
13 years ago

Can I get some detailed literature on the Associative Video Memory technology, which is based on multilevel decomposition of recognition matrices?

I got a block diagram and brief introduction about AVM in AVM_main.html.
However, I am looking for more detailed information about AVM functionality, especially object detection/recognition (face, eye, etc.) within an image sequence.
EDV  [328 posts] 13 years ago
Unfortunately, I do not provide source code or detailed documentation of the AVM algorithm within the AVM Navigator project.

However, I recently found an open-source algorithm that uses a template principle like AVM.

-= BiGG – Algorithm =-


Source code:

I hope this information helps you in your project.
ronn0011  [73 posts] 12 years ago
The "Associative Video Memory" (AVM) algorithm uses a principle of multilevel decomposition of recognition matrices. The AVM search tree stores recognition matrices with associated data. The upper levels of the tree hold coarser matrices with a small number of coefficients, while the lower levels store more detailed matrices (with a larger number of coefficients). The matrices contain information about the location of brightness regions of objects, represented in an invariant form (the coefficients of the matrices do not depend on the level of total illumination). Scanning the image with a window, we obtain an input matrix and search for it in the AVM tree. If the absolute difference between the coefficients of the input matrix and a matrix stored in the AVM does not exceed a specified threshold, then the object is recognized.
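The coarse-to-fine matching described above can be sketched in Python. This is only a minimal illustration of the principle, not the AVM implementation: the level sizes, the min/max brightness normalization, and the `THRESHOLD` value are all assumptions.

```python
import numpy as np

THRESHOLD = 0.1  # assumed per-coefficient tolerance (not from the AVM source)

def to_invariant(patch):
    """Normalize brightness so coefficients do not depend on total illumination."""
    patch = patch.astype(float)
    rng = patch.max() - patch.min()
    return (patch - patch.min()) / rng if rng > 0 else np.zeros_like(patch)

def make_levels(patch, sizes=(2, 4, 8)):
    """Multilevel decomposition: coarse matrices (few coefficients) first,
    then progressively more detailed ones."""
    inv = to_invariant(patch)
    levels = []
    for s in sizes:
        h, w = inv.shape[0] // s, inv.shape[1] // s
        # Block-average the patch down to an s x s matrix of coefficients.
        coarse = inv[:h * s, :w * s].reshape(s, h, s, w).mean(axis=(1, 3))
        levels.append(coarse)
    return levels

def matches(input_patch, stored_levels):
    """Compare level by level; a coarse mismatch rejects early and cheaply."""
    for inp, ref in zip(make_levels(input_patch), stored_levels):
        if np.abs(inp - ref).max() > THRESHOLD:
            return False
    return True
```

Note how the illumination invariance falls out of the normalization: scaling or offsetting the brightness of the whole patch leaves the coefficients, and hence the match, unchanged.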

Hi EDV, thanks for the references. Some questions here:

1. As mentioned, the arrow keys provide the signal for odometry processing; the robot moves correspondingly in X and Y coordinates and azimuth (horizontal angle). How does the AVM tree play a role here: is the tree merely used for simulation, or does it do the matching for navigation?

For illustration: I train the robot in Marker mode from an initial point A toward a point B (assuming I placed a nicely patterned art poster at B).

Just like a single human eye: we see a poster, and as we move toward it the poster gets larger, and we keep seeing a similar object all the way to the destination (assuming there is nothing to the side, just clear space). In principle, is the recognition based on the pattern of my poster relative to the robot's movement? As we move we tend to accumulate some offset on the X and Y axes; is this data stored in the AVM tree?

So assume we use "Navigation by map" and intentionally place the robot slightly forward of its trained starting point before it initializes. Mathematically, by bringing the robot forward, the object looks bigger. Is the AVM able to recognize that this is not the initial starting point? Is some computation then done to estimate the difference between the trained initial image size and the current (slightly forward) one, so that the robot moves backward, even though we did not train it to?

After that it begins to recognize the initially trained object and moves forward. Naturally the robot will not move perfectly straight and will veer left and right due to mechanical issues such as floor friction, slippage, and mismatched motors in the differential drive. So as it veers, will the AVM recognize that the view no longer corresponds to the trained object and correct the navigation, triggering a turn to offset the error?

If the current object at point B is the reference, would this work mathematically or logically?

I am not sure about complex cases with a lot of objects, or how many objects are used for navigation.
Let me start with simple processing to understand the principle.

ronn0011  [73 posts] 12 years ago
I think this is the key point: "If the absolute difference between the coefficients of the input matrix and the matrix stored in the AVM does not exceed a specified threshold, then the object is recognized." What is that value expressed in terms of?

Is it the distance from the initial point to the object?

And the specified threshold: how is the threshold determined?
EDV  [328 posts] 12 years ago
In our case, visual navigation for the robot is just a sequence of images with associated coordinates memorized inside the AVM tree. The navigation map is represented entirely as a set of data (X, Y coordinates and azimuth) associated with images inside the AVM tree. We can imagine the navigation map as an array of pairs: [image -> X, Y and azimuth], because the tree data structure is needed only for fast image searching. As you probably know, the AVM algorithm can recognize an image that has been scaled, and this image scaling is taken into consideration when the actual location coordinates are calculated.

Let’s call the pair [image -> X, Y and azimuth] a location association.

So, each location association is indicated on the navigation map of the AVM Navigator dialog window as a yellow strip with a small red point in the middle. You can also see location association marks in the camera view as thin red rectangles in the center of the screen.
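The map-as-an-array-of-pairs idea can be sketched as a plain data structure. This is only an illustration of the concept: the names `LocationAssociation` and `recall`, and the use of an integer key as a stand-in for the recognition matrices, are assumptions, not AVM Navigator internals.

```python
from dataclasses import dataclass

@dataclass
class LocationAssociation:
    """One pair [image -> X, Y and azimuth] from the navigation map."""
    image_key: int   # stand-in for the recognition matrices stored in the AVM tree
    x: float
    y: float
    azimuth: float   # degrees

# Conceptually the navigation map is just an array of such pairs;
# the AVM tree only exists to make the image lookup fast.
nav_map = [
    LocationAssociation(image_key=101, x=0.0, y=0.0, azimuth=0.0),
    LocationAssociation(image_key=102, x=0.0, y=1.5, azimuth=0.0),
]

def recall(image_key):
    """Linear stand-in for the AVM tree search: recognized image -> coordinates."""
    for loc in nav_map:
        if loc.image_key == image_key:
            return loc.x, loc.y, loc.azimuth
    return None
```

Recognizing an image thus localizes the robot: the match returns the coordinates and azimuth that were memorized with that view during training.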

And so, when you point to a target position in "Navigation by map" mode, the navigator builds a route from the current position to the target point as a chain of waypoints. The navigator then chooses the nearest waypoint and starts moving in the direction where that point is placed. If the current robot direction does not correspond to the direction of the actual waypoint, the navigator tries to turn the robot body toward the correct direction. When the actual waypoint is reached, the navigator takes the direction to the next nearest waypoint, and so on until the target position is achieved.
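The waypoint-chasing loop described above can be sketched as follows. This is a simplified geometric model, not the AVM Navigator controller: the helper names, the 5-degree alignment tolerance, and the step/turn-rate values are all assumptions.

```python
import math

def heading_to(pos, waypoint):
    """Azimuth in degrees from the current position to a waypoint."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    return math.degrees(math.atan2(dy, dx))

def follow_route(pos, azimuth, route, step=0.2, turn_rate=15.0, tol=0.25):
    """Chase each waypoint in turn: rotate toward it, then advance in small steps."""
    for waypoint in route:
        while math.dist(pos, waypoint) > tol:
            target = heading_to(pos, waypoint)
            # Signed heading error, wrapped into (-180, 180].
            error = (target - azimuth + 180.0) % 360.0 - 180.0
            if abs(error) > 5.0:
                # Direction does not match: turn the body toward the waypoint.
                azimuth += max(-turn_rate, min(turn_rate, error))
            else:
                # Direction matches well enough: move one step forward.
                pos = (pos[0] + step * math.cos(math.radians(azimuth)),
                       pos[1] + step * math.sin(math.radians(azimuth)))
    return pos, azimuth
```

This also suggests an answer to the veering question above: because the heading error is recomputed on every iteration, any drift away from the route shows up as a growing error and is corrected by the next turn.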

This forum thread has been closed due to inactivity (more than 4 months) or its number of replies (more than 50 messages). Please start a New Post to open a new forum thread with an appropriate title.
