Now AVM is working faster
EDV  [328 posts]
5 years
Now AVM is working appreciably faster after a recent algorithm update. It achieved real-time performance in "Object recognition" mode (less than 40 ms per frame at 960x720 resolution and 20 ms at 640x480) with approximately 500 associations inside the AVM search tree (tested on a computer with an Intel Core 2 Duo E6600 CPU).

So, I advise downloading the recent version of RoboRealm + AVM Navigator for best results ;-)
 [29 posts] 5 years
That is GREAT!
Thanks for your hard work!
I look forward to testing it.
DM
ronn0011  [73 posts] 5 years
Hi, how do I update AVM Navigator? I purchased the last version.


I would also like to ask about Marker mode in AVM Navigator. I am using my netbook camera to track the floor in Marker mode, but I do not see the coordinates moving, although I tested it by moving the netbook in my hands with the camera pointing down.

I would like your advice: do I need a real robot controlled from AVM to get the coordinates and points moving?

As far as I know it just records vision, so I am curious how to get the concept right before running it.
EDV  [328 posts] 5 years
>> Hi, how do I update AVM Navigator? I purchased the last version.

You should have received a download link after registering on the RoboRealm site. Just use this link to download the recent version of the RoboRealm package with AVM Navigator.

You could also start the RoboRealm application, click the "Options" button, and then click "Download Upgrade".


>> I would also like to ask about Marker mode in AVM Navigator. I am using my netbook camera to track the floor in Marker mode, but I do not see the coordinates moving.

1. The camera should see a general view (not the floor or a bare wall, but textured images). Hold the camera forward and parallel to the floor.

2. When you have trained the robot on a route (in "Marker mode") and want to return it to the start position, first switch AVM Navigator to "Object recognition" or "Nova gate" mode (for manual control), as shown in the tutorial: http://www.youtube.com/watch?v=qVz9iBazqug

3. Use the arrow keys for robot control in "Marker mode" (this is important for route recording).


*What does your robot see, and how does it affect navigation?

In our case the robot uses only a limited sequence of camera images for navigation. The AVM Navigator application simply tries to recognize images in order to understand the robot's location. If you show it some sequence of images during route training (just lead your robot in "Marker mode"), then in automatic navigation mode the robot must be able to see the same sequence of images without any changes. Otherwise AVM Navigator will not be able to recognize the images, and localization and navigation will fail.

See this topic for more details:
http://www.roborealm.com/forum/index.php?thread_id=4253#
ronn0011  [73 posts] 5 years
Hi, thanks for the prompt reply. I would like to understand what you mean by "it is important to use the arrow keys to navigate the robot". As I understand it, when an arrow key like Up is pressed, the robot moves correspondingly; however, since it is recording images, it is the camera that records the action, not the arrow button of the program.

So are you saying that if we use a radio-frequency transmitter, moving the robot in Marker mode with the camera, I would not get the coordinates moving?

I have to use only the arrow buttons.
EDV  [328 posts] 5 years
The arrow keys are important for route recording in "Marker mode" because they signal the start of writing to AVM and also drive odometry processing when the robot moves forwards or backwards.

So, you should press the "Up" arrow key while you move your netbook (with its integrated camera) forwards by hand to imitate robot action in "Marker mode", or you should set the variable "NV_FORWARD" to "-1".

Note that you can also control your robot through AVM Navigator from an external application with the help of the control variables:
NV_FIRE, NV_LEFT, NV_RIGHT, NV_FORWARD, NV_BACKWARDS, NV_TURRET_LEFT, NV_TURRET_RIGHT.

The dual purpose of the control variables (in/out functions) has been implemented since AVM Navigator v0.7.2.3 was released. The variables (NV_FIRE, NV_LEFT, NV_RIGHT, NV_FORWARD, NV_BACKWARDS, NV_TURRET_LEFT, NV_TURRET_RIGHT) still indicate control status (if read after AVM Navigator in the pipeline) as in previous versions, but they now also have a secondary function as inputs for control signals from an external application.

Users can now set these variables to "-1" (before AVM Navigator in the pipeline) to trigger control actions from scripts, other modules, or an external application (through the API and a "control translator" script).
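The dual in/out convention can be illustrated with a small sketch in plain Python. Only the variable names and the "-1 means external request" convention come from the post; the status encoding (1 = active, 0 = idle) and the pipeline logic itself are assumptions for illustration, not AVM Navigator's actual code.

```python
# Sketch of the dual-purpose control variable convention described above.
# Variable names are from AVM Navigator; the logic here is a simplified
# illustration (assumed status encoding: 1 = active, 0 = idle).

CONTROL_VARS = ["NV_FIRE", "NV_LEFT", "NV_RIGHT", "NV_FORWARD",
                "NV_BACKWARDS", "NV_TURRET_LEFT", "NV_TURRET_RIGHT"]

def navigator_step(variables, keyboard_pressed):
    """One pipeline pass: a variable set to -1 *before* the module acts
    as an input request; after the pass it reflects the resulting status."""
    for name in CONTROL_VARS:
        requested = variables.get(name, 0) == -1       # external request
        active = requested or (name in keyboard_pressed)
        variables[name] = 1 if active else 0           # status output
    return variables

# An external application requests "forward" by writing -1:
state = navigator_step({"NV_FORWARD": -1}, keyboard_pressed=set())
print(state["NV_FORWARD"])   # forward reported active after the pass
```

In a real pipeline the write would happen through a Set_Variable module or the RoboRealm API, as EDV describes later in the thread.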
ronn0011  [73 posts] 5 years
Q1: Yes, I get it now. I wonder if we can improve the smoothness of the motion by using a joystick as the input instead of the keyboard. I find that when the robot is in Marker mode it does not move as straight as expected, and we have to toggle between the left and right buttons to compensate. When we press right, the robot stops for a second, and when we press an arrow to turn and compensate to keep straight and then go forward again, it pauses again.


Unlike with an RC transmitter, where we use a throttle, we can get different velocities and the response when toggling between the left and right buttons is almost negligible (meaning it does not pause). I have also tried half-pressing a keyboard key, intending to get lower acceleration, but I think that is not possible with keyboard input. Or do you think it is about the RoboRealm settings or my Arduino program, i.e. could I make different accelerations so that pressing a key half-way gives half acceleration?


Or do you think we can make the robot accelerate from 0 to 256 (just an example) so I can steer it better? Or is there another kind of joystick or RC input device that would be possible?


Q2: Would it be possible to add more buttons to AVM? There are 7 now; I think it would be better to have more. For example, with a humanoid robot that has motors in the hands, how can I add buttons with less complexity? Anyway, thanks for the continual improvement. I am also hoping that, as users, we can customize the background, so I can place a 2D floorplan on the background for a good interface after Marker mode.
EDV  [328 posts] 5 years
I made an example of controlling AVM Navigator from an external module like a joystick (see robo file below).

Could you specify how it differs from what you need for your robot control?

Note that the motor speed can be adjusted with the "Move speed" and "Turn speed" parameters in the AVM Navigator dialog window.
program.robo
ronn0011  [73 posts] 5 years
Thanks for the file. Does it support a gaming joystick, and what kind of joystick is supported?

My robot application actually requires smooth movement, as it is a humanoid and I am running the experiment with elderly users. I think it would be better for the robot to move like an RC car, where I can throttle the joystick forward to a predetermined speed, at fast or medium speed, meaning I have a variety of accelerations.

If I have a joystick, I hope it will let me steer at different accelerations in one run.

And do you think it is possible for me to insert an image, such as a floorplan, as the background of the navigation map?

And would it be possible for us, as users, to add more control variables?

Thanks
EDV  [328 posts] 5 years
If the motor control variables NV_L_MOTOR and NV_R_MOTOR, which range from -100 to 100 for motion control ("-100" = full power backwards, "100" = full power forwards, "0" = motor off), are not suitable for your case, then you can try to develop your own VBScript control program using the variables NV_LEFT, NV_RIGHT, NV_FORWARD, and NV_BACKWARDS.

See this VBScript example program for the Oculus robot:
http://www.roborealm.com/forum/program.php?post_id=23546
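As a sketch of what such a control program might do, here is the flag-to-motor translation written in Python for illustration. The -100..100 motor range and the variable roles come from the post above; the differential-drive mixing and the `speed`/`turn` parameters are a generic scheme, not the Oculus program's actual logic.

```python
def flags_to_motors(forward=False, backwards=False, left=False, right=False,
                    speed=100, turn=50):
    """Translate NV_FORWARD/NV_BACKWARDS/NV_LEFT/NV_RIGHT style flags into
    (NV_L_MOTOR, NV_R_MOTOR) values in the documented -100..100 range.
    Generic differential-drive mixing, for illustration only."""
    drive = speed if forward else -speed if backwards else 0
    steer = turn if right else -turn if left else 0
    clamp = lambda v: max(-100, min(100, v))
    return clamp(drive + steer), clamp(drive - steer)   # (left, right)

print(flags_to_motors(forward=True))              # straight ahead: (100, 100)
print(flags_to_motors(forward=True, right=True))  # forward + right arc
print(flags_to_motors(left=True))                 # turn in place to the left
```

A humanoid controller would map these same flags onto gait commands instead of wheel powers, which is why EDV suggests writing a custom script rather than using the wheel variables directly.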
ronn0011  [73 posts] 5 years
Do you mean the Oculus program has advantages over AVM?

May I know what it took for you to develop this AVM Navigator? Is it a combination of various software in terms of programming?

Anyway, I have limited programming skills, as I am not from a technical background.

And I know some successful applications of AVM, like the Vanessa robot. I am not sure whether the background of the floorplan can be altered in "Navigation by map"; I mean, can I place in a floorplan of my home?

Thanks
EDV  [328 posts] 5 years
>> Do you mean the Oculus program has advantages over AVM?

You said that you have a humanoid robot, and possibly it is not convenient for this robot to use the NV_L_MOTOR and NV_R_MOTOR variables to control its running motors.

So I advised you to develop your own VBScript program for your robot control, using the controlling program of the Oculus robot as an example.


>> May I know what it took for you to develop this AVM Navigator? Is it a combination of various software in terms of programming?

Since October 2007 I have been developing a new object recognition algorithm, "Associative Video Memory" (AVM):
http://edv-detail.narod.ru/AVM_main.html

The AVM algorithm uses a principle of multilevel decomposition of recognition matrices; it is robust against camera noise, scales well, and is simple and quick to train.

This algorithm gave me the possibility to develop robot navigation based on visual landmark beacons: "Follow me", "Walking by gates", and "Navigation by map":
http://www.youtube.com/watch?v=HTxNlOpm11U
http://www.youtube.com/watch?v=xbCpthKrL0o
http://www.youtube.com/watch?v=qVz9iBazqug

I embodied all these algorithms in the AVM Navigator plugin for use within the RoboRealm software.

>> And I know some successful applications of AVM, like the Vanessa robot. I am not sure whether the background of the floorplan can be altered in "Navigation by map"; I mean, can I place in a floorplan of my home?

Robot navigation with AVM Navigator is based on image recognition. The robot sees images (marks) from the camera, recognizes them, and from that determines its current location; it does not use any floorplan grid.

The author of the robot "Vanessa" used AVM Navigator in his space-orientation experiments simultaneously with his own original navigation algorithm based on floor grid recognition:
http://www.youtube.com/watch?v=LyBPsznn0Fc
ronn0011  [73 posts] 5 years
Hi, I am in the beginning stages of getting the robot moving. However, right now I do not have an external camera, so I am using a laptop, which has no turret. In Marker mode, after localization, I began steering the robot to the location I wanted, in a straight motion.

I am not sure about what comes after that: I carry the robot to the initial position and switch to navigation mode, and it shows erroneous movement, left-right and left-right, not straight motion.

Did I make a mistake here? Or should I switch the robot to "Nova gate" mode and carry it to the original location?

I watched the video on navigation using Marker mode, where you manually steer the robot to the next room and it creates a good path in between.

I would also like to know: once you reach the destination and there is a box, do you switch to recognition mode before carrying the robot back, or after? I cannot quite catch your steps. I am also curious: when you carry your robot, should you click "Stop navigation" and then clear the gate? Once at the initial position, you switch to "Navigation by map" and the robot localizes again; after that the robot will move automatically to the box (the end point). I wonder what the recognition is for, as I thought the robot moves based on the final navigation point.

I know my limitations: I do not have the servo and external camera yet.

However, I am also having a problem trying to use "Quake mode". I wish I could post a video of the problem to show you. I find the system keeps hanging and not responding. Secondly, when I try localization in Marker mode, the view does not move left and right; the control panel seems active on screen and responds to my input, but the robot does not move. This really frustrates me; I thought that after purchasing it would solve my problem, as I am working on a navigation project.
Hope at least
EDV  [328 posts] 5 years
Okay, let me try it myself ;)

I switched off the running motors of my robot, so now it is just a NetTop with a webcam, like your laptop with its integrated camera.

Then I set the variable "NV_FORWARD" to "-1" with the "Set_Variable" module (see robo file below), placed before AVM Navigator in the pipeline.

First AVM Navigator was in "Object recognition" mode and I cleared the mark data; then it was switched to "Marker mode".

Then I moved the NetTop with the webcam by hand to another room in "Marker mode"; after that AVM Navigator was switched to "Object recognition" mode and the NetTop was returned back. I switched it to "Navigation by map" and set the target position on the map. Then I again moved the NetTop by hand to the other room, and this advanced the robot location indication on the navigation map.

See video and robo file for more details:
http://www.youtube.com/watch?v=UBzWJz93jGs

program.robo
Anonymous 5 years
Thanks, AVM, you boost my confidence. I am curious whether my RoboRealm file has an error; I will post it here and show you my problem.

1. My coordinates are not visible as I move about 5 m straight using Marker mode.

My steps:
1. Bring the robot to the initial position
2. Clear the mark data
3. "Nova gate" activated; now localization: I try to move the robot by hand left and right
4. Use the keyboard to move the robot to the target (it goes straight with a slight left turn)
5. In recognition mode, carry the robot to the initial position
6. Switch to "Navigation by map" (here is where the problem is)

At step 6, I do not see the visible coordinates; I just click somewhere on the path and it hangs.

I would also like to check whether my real robot's movement not tallying with the arrow buttons could be the issue: as you may have noticed, my right arrow moves the robot straight, and the up arrow turns it left.

http://youtu.be/yuHPanzxDdI

I would appreciate your view on the problem :)

program.robo
EDV  [328 posts] 5 years
Thank you for your video!
I noticed several mistakes in your experiments with AVM Navigator:

First, look at these examples of successful navigation more attentively:
http://www.youtube.com/watch?v=qVz9iBazqug
http://www.youtube.com/watch?v=214MwcHMsTQ
http://www.youtube.com/watch?v=G7SB_jKAcyE

The robot's camera saw a general view of the room environment with good contrast textures (not the floor or bare walls, but well-textured images in front of the camera).

You do not have enough space inside your room for the number of general-view images that is necessary for recognition and navigation.

Try opening the door and driving between several rooms to get a larger number of textured images. You should not come too near to bare walls (AVM cannot recognize anything close to a bare wall).

Second, the arrow keys are important for route recording in "Marker mode" because they signal the start of writing to AVM and also drive odometry processing when the robot moves forwards or backwards.

You should necessarily set the variable "NV_FORWARD" to "-1" with the "Set_Variable" module. Just download and start this robo file before route recording in "Marker mode":
http://www.roborealm.com/forum/program.php?post_id=23653
ronn0011  [73 posts] 5 years
Thanks, I will repeat my experiment again. But I am using an Arduino 328; may I know what the robo file is for?

Did you observe that my arrow keys do not tally with yours? I press the right button to go straight. Could this be why I was not able to create a coordinate axis like in your successful navigation?

Or are my coordinate errors due to what you mentioned: "You should necessarily set the variable 'NV_FORWARD' to '-1' with the 'Set_Variable' module. Just download and start this robo file before route recording in 'Marker mode'"? I really didn't set this. However, I would like to check: is your robo file all right for my Arduino board?
EDV  [328 posts] 5 years
First try to repeat your experiment with AVM Navigator without the Arduino (only the laptop with its integrated camera).

As I mentioned earlier, the arrow keys are important for route recording in "Marker mode" because they signal the start of writing to AVM and also drive odometry processing when the robot moves forwards or backwards.

So, you should press the "Up" arrow key when you move the laptop forwards, or the "Down" key when you move it backwards, or use NV_FORWARD or NV_BACKWARDS with the value "-1".
EDV  [328 posts] 5 years
Here is a test of a new robot for the AVM Navigator project:
http://www.youtube.com/watch?v=F3u0rTNBCuA
ronn0011  [73 posts] 5 years
Wow, I can say the accuracy of the final destination is within 20 cm. May I know your camera spec? Are you using a Logitech?

I observed that the robot moved slowly, pausing to adjust left and right. Is that because in manual mode you cannot keep your robot straight all the time? I would like to know if you could possibly make the robot move continuously straight.

EDV  [328 posts] 5 years
In my experiments I use a Logitech HD Webcam C270 and a 3Q NetTop Qoo (Intel Atom 230, 1600 MHz) for image processing by AVM Navigator.

I think continuous motion of my robot is possible, but it requires some changes to the robot motion control algorithm.

For example, AVM Navigator currently sets the control variables to NV_L_MOTOR = 0 and NV_R_MOTOR = 100 when it needs a turn to the left side, without taking the previous motion history into consideration, and that causes the robot to stop on turn commands.

I will try to develop a more convenient motion control algorithm for the rover platform that I have now ;-)
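One common fix for this stop-and-turn behaviour is to slew-limit the motor commands so that a turn blends into the ongoing forward motion instead of replacing it abruptly. The sketch below is a generic technique written in Python for illustration; it is not the algorithm EDV implemented, and the step size is an arbitrary assumption.

```python
def slew_limit(current, target, max_step=20):
    """Move a motor command toward its target by at most max_step per frame,
    so an abrupt request like (0, 100) does not stall the forward motion."""
    if target > current:
        return min(current + max_step, target)
    return max(current - max_step, target)

# Robot is driving straight at (100, 100) when a hard left (0, 100) arrives:
left, right = 100, 100
trace = []
for _ in range(5):
    left = slew_limit(left, 0)      # left motor ramps down gradually
    right = slew_limit(right, 100)  # right motor holds full power
    trace.append((left, right))
print(trace)  # left ramps 80, 60, 40, 20, 0 while right stays at 100
```

The robot curves into the turn over several frames instead of stopping dead, at the cost of a slightly wider turning arc.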
ronn0011  [73 posts] 5 years
Hi, I don't get what you mean by "depending on the algorithm". Do you mean that if I don't want the robot to pause and want it to keep moving straight, the algorithm is changeable? As you mentioned, the cause of the pause is the control values NV_L_MOTOR = 0 and NV_R_MOTOR = 100.

I wish the straight movement could be further improved. I also wonder: since AVM in Marker mode stores a matrix of images, how are we able to verify that the images we capture are good images? Verifying by trial runs of the robot is very inefficient, so it would be good to have an estimate of how well the robot can execute the movement on the map.

One more thing I noticed is that the map does not create the real path correspondingly: as the robot goes straight and veers to the right, it tends to give me a bias to the right after I return the robot to straight motion. After that, the map shows a veer to the right. So when we navigate by map, the execution will not follow the real path.


EDV  [328 posts] 5 years
I have just assembled my new rover for the AVM Navigator project, and I plan to improve the motion control algorithm so that it will produce fewer jerks and stops while the robot is moving to the target position.

>> I wish the straight movement could be further improved. I also wonder: since AVM in Marker mode stores a matrix of images, how are we able to verify that the images we capture are good images?

You can already see the captured marks in "Marker mode" if you check the "Show/Marks" checkbox in the AVM Navigator dialog window.

>> One more thing I noticed is that the map does not create the real path correspondingly

You are right, the visual odometry algorithm of AVM Navigator is not perfect yet, but any visual odometry algorithm has errors while working. However, "Navigation by map" is based on image recognition, and that provides the possibility of getting the same coordinates at the same waypoints of the route (the robot sees images/markers that correspond to a specified location). So the robot can travel from the start point to the target position of the route successfully even if the map does not exactly correspond to the real path.

If you have your own robot model and can connect it to AVM Navigator, then it would be a great help if you could share your experimental results with AVM Navigator. It would help me in further development.
ronn0011  [73 posts] 5 years
Yes, once it is done I will share my robot. Anyway, I am building a humanoid; I would prefer the camera at a height of 1.3 m.

Would it be possible to place it at 1.3 m height and navigate? Should it still be positioned parallel to the floor?

2. Here I saw a very smooth robot navigation with mapping. I am not sure whether it uses encoders, sensors, and a digital compass, pre-programmed in matrix form.

http://www.youtube.com/watch?v=kxlHr75BJAw&feature=youtube_gdata_player

Maybe you can comment on that.


EDV  [328 posts] 5 years
>> Would it be possible to place it at 1.3 m height and navigate?

It seems like this should work. But you should develop a hardware obstacle detection system that will prevent collisions with walls, because if your robot has a large mass you must have a hardware safeguard that will prevent damage if the software fails.

>> Maybe can give comments on that

It looks like the Loki robot processes signals from different sensors for localization.

Its author says:

dshinsel007> Oh, he can go faster... but 40 lbs of robot hitting a wall at full speed makes my wife unhappy. :-)

Actually, the biggest limit is how quickly he can process all the sensor inputs for tracking his location and obstacle avoidance. I'm working on optimizing that.

===
But in my case only the images from the camera are available for robot navigation; there are no other sensors for additional information.
ronn0011  [73 posts] 5 years
Hi EDV, I have just bought an external HD camera and, as you said, I set the resolution to 320x240. I slowed the robot down to 50%; however, it still swirls left and right, and is really not as fast as yours because of the swirling motion. I am curious: in your demo, do your house interior and colorful carpet play a part, so that it is able to detect and match corresponding images and process them?

Also, I am not sure how the red dots are created. Do more red dots in "Navigation by map" indicate turns / images the robot captured / just algorithm points created at some interval? I need your comment on this.

There are also moments where the robot just goes straight without swirling left and right every 5 cm, and this is beyond my understanding. I intend to find the reason, and I hope to maximize these moments of going straight without zigzag.

What is the navigation mode used for? I am not really clear; I hope you can make a video on the application of navigation mode. So far I have only explored Marker mode.

EDV  [328 posts] 5 years
Thank you for your efforts!

And I think that you have a good chance of reproducing my navigation experiments on your robot. You should just provide video of your testing of AVM Navigator from time to time. That way I would be able to analyze it and could advise how to resolve current problems with adapting your robot to the AVM Navigator module.

>> I set the resolution to 320x240. I slowed the robot down to 50%; however, it still swirls left and right, and is really not as fast as yours because of the swirling motion.

1. Make sure that your camera can provide 25 fps at a resolution of 320x240; if it is less, AVM Navigator will not be able to control your robot.

2. Force the running motors of your robot to slow down more.

>> I am curious: in your demo, do your house interior and colorful carpet play a part, so that it is able to detect and match corresponding images and process them?

Color has no significance for AVM because this algorithm uses only grayscale matching. So a good object for recognition with AVM should have an appreciable texture (it should not look like a bare wall).

>> Also, I am not sure how the red dots are created. Do more red dots in "Navigation by map" indicate turns / images the robot captured / just algorithm points created at some interval? I need your comment on this.

First the navigator finds a path from the robot's actual position to the target position (maze solving), and then it plans a trajectory that consists of waypoints (the red dots). These can indeed be points where the robot turns, if needed, as it passes through them.
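The two stages (maze solving, then waypoint planning) can be sketched with a breadth-first search over a small occupancy grid. This is a generic illustration in Python; the grid representation and the waypoint-thinning step are assumptions, not AVM Navigator's internals.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search over a 4-connected grid of 0 (free) / 1 (wall);
    returns the shortest cell path from start to goal."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    path, cell = [], goal
    while cell is not None:          # walk predecessors back to the start
        path.append(cell)
        cell = prev[cell]
    return path[::-1]

# Plan from top-left to bottom-right around a wall.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 3))
waypoints = path[::2] + [path[-1]]   # thin the path into "red dot" waypoints
print(path)
```

Once the waypoints exist, the robot only needs to steer toward the next one, turning where consecutive waypoints change direction.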

>> What is the navigation mode used for?

Just follow this simple video tutorial for marking and navigation:
http://www.youtube.com/watch?v=qVz9iBazqug
ronn0011  [73 posts] 5 years
Great, yes, I will video my experiment. I am not sure if my webcam has 25 fps; when I purchased it, it stated 30 fps but never specified at what resolution.

http://www.microsoft.com/hardware/en-us/p/lifecam-hd-5000/7ND-00001#details

I noticed this webcam has autofocus, and I am not sure whether it affects the navigation, as there is probably a delay for autofocus. Maybe I should confirm whether the webcam is suitable, as I only have 3 days left to exchange it.

EDV  [328 posts] 5 years
If your "LifeCam HD-5000" works faster than 25 fps then it is not a problem, but it really would be better if you were able to set it to 320x240 at 25 fps.

You should also try to shut down any preliminary software processing in the camera driver, such as autofocus or lighting correction, to increase the camera FPS.

You could also buy the "Logitech HD Webcam C270" that I used in my experiments; it really is convenient for this task.
Anonymous 5 years
Turning off autofocus on the Microsoft HD-5000 webcam
I have tried to turn off the autofocus; hopefully it is also considered off in AVM. I have not tried it on the robot yet, as I am improving the base: there is a mechanical problem where one of the wheels is not running at the same speed.

http://youtu.be/oUJ2aAZy6OQ

Here is the experiment with the autofocus on. Maybe you can give comments and suggest ways of improvement.

I am still wondering how you check for 25 fps at 320x240. I know I have to change the camera setting to 320x240, but I do not know how to check the fps at that resolution.

Maybe you can tell me how you know this from your Logitech C270 (I know it is almost the same spec as mine, but yours does not have autofocus; for information, all high-end webcams have autofocus, so that is bad for AVM, right?).

And the photo shows the settings of my Microsoft camera; it seems my laptop can only go up to 1280x720 HD :) So are you telling me that at 320x240 the resolution is small, so the fps is faster, as the higher the resolution, the slower the fps?
EDV  [328 posts] 5 years
I noticed that you really have set the resolution to 320x240 at 30 fps in the settings of your camera:
http://www.youtube.com/watch?feature=player_detailpage&v=oUJ2aAZy6OQ#t=13s

However, the video looks as if there is still some delay that happens from time to time.

I think the additional software that you use for demo recording could cause this issue.

Can you record the video another way (using the "Content/Loading/Saving/Write_AVI" module, for example)?
Anonymous 5 years
The AVM module is not the issue here. You have the Sparkfun module, which is probably not connected to a device, and that is slowing down the fps. Disable the Sparkfun module for now and you should see the fps increase dramatically.

Those gray numbers at the right side of the pipeline show the time required by each module. Higher is worse. Thus it is clear from your video that the Arduino module is the issue and NOT the AVM.

STeven.
Anonymous 5 years
Okay, I will try recording with the RoboRealm module. But I don't get it: according to STeven, my Arduino is the issue here. How can it relate to AVM, when in manual mode the robot can move forward, backward, right, and left without any problem? Maybe you can elaborate on this.

I will experiment some more with the autofocus off. Anyway, it seems there is a misunderstanding here: the video I created was with the autofocus ON; I have not tried with autofocus OFF, as I am improving my base.

So the issue could be the autofocus and the delay while the camera focuses.
Anonymous 5 years
I had this error on start; are there any mistakes in the settings? I have attached the errors.
EDV  [328 posts] 5 years
You should just specify the correct file name: "C:\Users\newson\Desktop\h.avi" (the .avi was missing).
EDV  [328 posts] 5 years
Also I advise using the "XviD codec" for video compression instead of "Microsoft Video 1".

You can download it within "K-Lite Codec Pack" from here: http://www.codecguide.com/download_k-lite_codec_pack_full.htm
ronn0011  [73 posts] 5 years
Thanks. Is that K-Lite codec for playing the recorded video?
EDV  [328 posts] 5 years
No, it is not.
You will have several new codecs on your computer when the installation is done, and among them will be the XviD codec. Then you will be able to choose "XviD MPEG-4 Codec" instead of "Microsoft Video 1" in the "Write AVI" dialog window.
Anonymous 5 years
http://youtu.be/vlt7rdbdcLU

I am still having a problem getting the choice you advised.
Anonymous 5 years

http://youtu.be/aNSyCFpwJ_A

Sorry, this is the one; the previous one failed to upload.
Anonymous 5 years
Can you just try the Microsoft Video 1 codec instead and see if that works at all? It's not clear whether the issue is with the codec or perhaps with the way things are being recorded. The Microsoft Video codec is a default one that should work on any Windows system.

STeven.
Anonymous 5 years
Also be sure that you've added the ".avi" extension. It seems that in your video the extension is still missing.

STeven.
ronn0011  [73 posts] 5 years
Yes, precisely, the .avi was missing. I will record the process as soon as my new robot base is done and will update on the progress of the AVM.
Anonymous 5 years
Update on my robot:

1. I am recording using the Microsoft codec.
2. I tweaked the background with a cushion from my chair; travelling 1 m straight.
3. The problem still exists as I navigate the robot at 40% speed, really slow.
4. Other problems/bugs: AVM does not show the control bar, although the robot is moving accordingly. Solution: shut down and load the files again.
5. I wonder why there is an autosave: whenever we open RoboRealm, the latest settings are still there.

6. Video of the path movement in AVM:
http://youtu.be/INjOS3FZgtI

I purposely recorded twice with my screen recorder, as the first recording using the Microsoft codec was at the best quality, so the file is 325 MB. Anyway, when the robot was running I did not use any screen recorder.

7. Video of the robot movement:
http://youtu.be/S2n5qwFtkas
Anonymous 5 years
1. Ok, did that work?

3. Are you purposely driving slowly, or is the top speed of the robot slower than what you'd expect?

5. That's the default behaviour of RR. You can change that in Options Button->Startup tab

Ok, that looks good. I'd point the camera down a little to keep it as horizontal to the floor as possible. The robot moves as expected ...

Is there still a problem?

STeven.
ronn0011  [73 posts] 5 years
Yup, why is it walking in a zigzag? Is that normal?

I mean, the motor speed I am using is 50%. I am not sure whether increasing the speed would be any better.
EDV  [328 posts] 5 years
I see progress in your experiments. Keep up the good work!

Now you have navigation like in this video:
http://www.youtube.com/watch?v=G7SB_jKAcyE

The fluctuation of this robot was connected to an extremely high turn speed:
http://www.youtube.com/watch?v=FJCrLz08DaQ

But in your case we have an extremely low FPS reaching AVM Navigator in the input image stream. I can see it even by visual inspection of your video; it looks like step-by-step images with a 0.5 s delay:
http://youtu.be/INjOS3FZgtI

It is a good idea to turn off any outside software used for demo recording and also to remove the "Write AVI" module from the RoboRealm pipeline, etc. Anything that could produce a processing delay should be excluded.

Then you should just shoot a new video with an external camera, like in this case:
http://youtu.be/S2n5qwFtkas

Don't forget to show the monitor of your laptop while AVM Navigator is working in your new video.


If the cause of the problem is still the high turn speed of your robot, then you just have to set a slightly lower value of the "Turn speed" parameter in the AVM Navigator dialog window.
ronn0011  [73 posts] 5 years
>> But in your case we have extremely low FPS that has AVM Navigator in the input images stream. I can see it even by visual perception in your video it looks like step-by-step images with 0.5s delay:

Are you saying that low FPS means the 30 FPS? I am not too sure why we have to set it to 320x240; is this to ensure that AVM processing is as fast as possible before and after Marker mode? And I just realized we should repeat Marker mode. What if the second run does not start at exactly the same point: will navigation by map follow the second starting point or the first?


My netbook has a 1.3 GHz Intel processor and 1.5 GB of RAM; I know some things can slow down processing. Is there anything to solve this issue on my laptop, and does the processor type play a part here?

So you suggest I go as slow as possible in Marker mode. Can you tell me specifically how slow is slow? And in navigation by map, by what percentage should I increase the speed compared to manual navigation in Marker mode?


EDV  [328 posts] 5 years
I used a NetTop Qoo (Intel Atom 230, 1600 MHz) in my experiments with AVM Navigator, and everything worked just fine.

If the robot camera gives 25 frames per second, then our software (RoboRealm + AVM Navigator) has about 40 ms (1 s / 25 = 0.04 s) to process each captured image and produce control signals for the motors. Therefore, if your robot camera is capturing at 30 FPS, our software has only 1 s / 30 ≈ 0.033 s, or 33 ms, for processing.

* What is the solution?

1. Switch your robot camera to 320x240 resolution at 25 FPS.
2. Find and fix the cause of the delay in the RoboRealm pipeline (some module in your pipeline is adding more than 40 ms of processing delay, which makes real-time robot control by the AVM Navigator software impossible).

The motion speed of your robot depends on how quickly the image processing loop runs: faster processing allows a faster robot speed, and conversely, slower processing forces a slower robot speed.

It works like this: the robot sees from the current image that it must turn right to continue along the path, but because processing is slow, by the time it issues the "Right turn" command to the motors that action is already out of date. This causes your robot to oscillate from side to side.
ronn0011  [73 posts] 5 years
Although I don't fully get the if/else behaviour of the algorithm for the corresponding images, I appreciate your explanation. I'm glad you explained it, as I need to document it and get a deep understanding of AVM, since this is going to be my final thesis. For the benefit of future students at my university, I'll dig deeper and suggest further expectations from a user's point of view.


Are you planning to extend the features of your AVM modules?

Or are you just focusing on optimizing the current features? As you said, other visual mapping approaches have the limitation of accumulated errors on long and complex paths, so do you have an ideal visual mapping approach to benchmark against?


I believe you could expand it by just recording the keyboard movement inputs, which could then be saved into the system, and the mapping could maybe auto-generate the maze at intervals. Although we could choose to have the robot just run according to the saved inputs, I am sure the generated maze could act as a simulation corresponding to the saved keyboard inputs. Maybe this sounds too simple, but it could work effectively for autonomous navigation.


Anonymous 5 years
As per my earlier post ... the issue is not with the AVM module but with the Arduino module. You can clearly see from the pipeline images that this module is taking up to 100 ms to run. This is causing the slow framerate, which causes the robot to oscillate. Try disabling this module briefly and see if your FPS with the rest of the system increases. Naturally you cannot run the bot without this module, but this is just for testing purposes to check your fps.

What baud rate are you running the Arduino at? Is it 115K?

STeven.
EDV  [328 posts] 5 years
>> Are you planning to extend the features of your AVM modules?

Yes, I am really planning to improve the navigation accuracy for more complex routes and also I am thinking over a fully autonomous navigation mode.

But all this work requires more time and resources for realization.
ronn0011  [73 posts] 5 years
Hi Steven, yes, my Arduino is running at 115200 baud, so is it a compatibility issue? Do you mean I can't use this board for real-time navigation? What do you recommend or suggest if I still have to keep using this Arduino board? Using a different board is definitely not the best option.
ronn0011  [73 posts] 5 years
Hi EDV, to clarify regarding the processing: are you saying that if my camera runs at 30 FPS, then the processing of AVM and RR has 1/30 = 0.033 seconds, i.e. 33 ms? Is that the fluctuating indication on the left-hand side, "the processing speed of the images"? And how about the one on the right, "() / ()"?

So is our objective to bring the value on the left side up or down?

Thank you
Anonymous 5 years
Updates

http://youtu.be/CrP3bgY2bjc

Robot Movement
http://youtu.be/sxBkWoAjn50
Anonymous 5 years
Sorry, that was the wrong file for the processing.

http://youtu.be/phzH1pABA5c
EDV  [328 posts] 5 years
1. You should set the camera resolution to 320x240 at 25 FPS.

See picture below:
EDV  [328 posts] 5 years
Picture of video settings:

 
EDV  [328 posts] 5 years
2. Concentrate on finding the cause of the delay in pipeline processing.

You should try to shoot a new video with an external camera and focus it on the RoboRealm pipeline.

See picture below:
EDV  [328 posts] 5 years
Picture of RoboRealm pipeline:

 
EDV  [328 posts] 5 years
If you are able to show by visual demonstration that the Arduino module is the cause of the pipeline delay, then the RoboRealm team will be motivated to fix this problem.
ronn0011  [73 posts] 5 years
Sorry, I got it now. So you want me to show the pipeline; I just realized there are timings on it. Okay, I'll be motivated to give you more updates. Thanks for the visual illustrations.
Anonymous 5 years
Bmw318be,

Have you tried disabling the Arduino module to see if the FPS increases?

Did you make any changes to the Arduino code before downloading it to the robot? The module should not be taking that long unless the serial connection is being closed each time the robot is communicated with. This would happen if a bad response is received.

STeven.
Anonymous 5 years
Hi Steven
I have disabled the Arduino module and ran in Marker mode with the route created by navigation by map. However, RR froze and I needed to close it, so I am not too sure how to test the FPS without the Arduino module connected. Do you want me to test just by pressing the control panel? Please guide me.


Anyway, the SparkFun Arduino module in the pipeline now shows only a 0 value, as I recorded here, and there is a slight improvement in speed. I am posting it for you to analyze the pipeline.

Here I used a camera to record, and also recorded it with the screen recorder.

http://www.youtube.com/watch?v=6F-SU-NIjXE

Robot Movement

http://youtu.be/u8Ay0OHn6Kg

Observation:

There is a moment when the robot just moves straight without pausing.

The move speed for this run is 98 and the turn speed is 30.



EDV  [328 posts] 5 years
It looks much better :)

Which variables do you use for robot control: [NV_L_MOTOR and NV_R_MOTOR], [NV_L_MOTOR_128 and NV_R_MOTOR_128], or [NV_LEFT, NV_RIGHT, NV_FORWARD, NV_BACKWARDS]?

Can you provide the robo file of your RoboRealm pipeline?

It seems that your robot now follows the route without oscillation because the [Move speed = 98] and [Turn speed = 30] parameters were set.

So now it's time to shoot a video of simple navigation between two points placed close to each other. Try to use the "Write AVI" module to record video from the robot's eyes. If the video encoding inside the "Write AVI" module produces a pipeline processing delay that again causes oscillation, then you should discard the "Write AVI" module from the pipeline and use an external camera for the demo recording.
ronn0011  [73 posts] 5 years
Hi AVM, currently the robot is in the lab; I will shoot again after some report writing. I will
need your help on the literature review of AVM. Where do you think I can get an understanding of the logic of the AVM algorithm? It seems I will need it for my report.

I would also like to know about the recording: how is a point created as the robot moves? Is it capturing 25 frames per second, and is the capturing very selective, e.g. by size (big or small) or clustering? Does quality matter? Is every image considered, and how many references are there per movement point created for the path? I've tried reading about it over and over, but it is still hard to understand the steps of the visual odometry processing.
Can I ask you here, or should I create a new thread on understanding the AVM algorithm? I think it would be better in a new thread, and let this one be the progress of the AVM robot.
EDV  [328 posts] 5 years
You can find out more about AVM algorithm principle in these topics:
http://edv-detail.narod.ru/Library_AVM_SDK_simple_NET.html
http://www.roborealm.com/forum/index.php?thread_id=4171#

See more details about visual navigation here:
http://www.roborealm.com/forum/index.php?thread_id=4333#
EDV  [328 posts] 5 years
In our case, visual navigation for the robot is just a sequence of images with associated coordinates memorized inside the AVM tree. The navigation map is essentially a set of data (X, Y coordinates and azimuth) associated with images inside the AVM tree. We can think of the navigation map as an array of pairs: [image -> X, Y and azimuth]; the tree data structure is needed only for fast image searching. As you probably know, the AVM algorithm can recognize an image that has been scaled, and this scaling is taken into consideration when the actual location coordinates are calculated.

Let’s call the pair [image -> X, Y and azimuth] a location association.

Each location association is shown on the navigation map of the AVM Navigator dialog window as a yellow strip with a small red point in the middle. You can also see location association marks in the camera view as thin red rectangles in the center of the screen.

So, when you point to a target position in "Navigation by map" mode, the navigator builds a route from the current position to the target point as a chain of waypoints. The navigator then chooses the nearest waypoint and starts moving in the direction where that point is placed. If the current robot heading does not correspond to the direction of the current waypoint, the navigator tries to turn the robot body to the correct heading. When the current waypoint is reached, the navigator takes the direction to the next nearest waypoint, and so on until the target position is reached.
ronn0011  [73 posts] 5 years
Hi EDV, you asked me a while ago about the RoboRealm variables I used; here they are as follows.

Yes, this is very informative. So you prefer the questions to be asked here?

And you mentioned " In our case the visual navigation for robot is just sequence of images with associated coordinates that was memorized inside AVM tree"

For an object at the target location with empty space on the left and right, as the robot moves it can only see one object, so how is the memorizing done? Is it done at every second of robot movement, or does it just memorize the one object as a focal point, so that in "Navigation by map" mode the navigator builds the route based on the size of the object (as it gets near, the object looks bigger)?

program.robo
EDV  [328 posts] 5 years
Thank you for robo file!
And it seems that you have correctly connected the navigator module to your robot with its Arduino controller.

>> For an object at targeted location with empty space on the left and right, as the robot moves, it can only seen 1 object, So how can the memorized is done?

Please note that AVM (Associative Video Memory) is just an algorithm that provides image recognition with further access to associated data (X, Y and azimuth in our case).

One contrasting object in an empty room with bare white walls is not enough for good visual navigation, because the navigator writes the central part of the screen image with its associated data to the AVM tree, but it can’t memorize an empty image (how could it discriminate one empty image from another?). Every time the navigator sees an image that differs from the images already recorded in the AVM tree, it memorizes the central part of the screen from the robot camera as a new image in AVM (in “Marker mode”). An indoor room usually contains many different objects, which provides a sequence of unique images from the robot camera as it goes along the route. A sequence of unique images is the main condition for the visual navigation that AVM provides.
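The write rule described above could be sketched like this (the names, the threshold, and the structure are assumptions for illustration, not the real AVM code):

```cpp
#include <vector>

struct Pose { double x, y, azimuth; };

// Hypothetical sketch of the Marker-mode write rule: the navigator
// memorizes the central crop of the frame only when the current view is
// not recognized against what is already stored; a featureless frame can
// never be discriminated from another, so it is skipped.
struct LocationAssociation { int imageId; Pose pose; };

bool maybeMemorize(std::vector<LocationAssociation>& tree,
                   int centralCropId, const Pose& odom,
                   double bestSimilarity,     // 0..1 match vs. stored images
                   bool hasTexture) {         // featureless frames are skipped
    const double recognizedThreshold = 0.2;   // assumed value
    if (!hasTexture || bestSimilarity >= recognizedThreshold)
        return false;                         // already known, or unusable
    tree.push_back({centralCropId, odom});    // new [image -> pose] pair
    return true;
}
```

This is why a route through a room full of distinct objects yields many stored associations, while a bare white wall yields almost none.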
ronn0011  [73 posts] 5 years
Glad that it is right :)

So the target object at the target location is not sufficient to record in Marker mode, but I wonder about the minimum number of images in the environment. For example, when navigating indoors, our walls usually have the same pattern as the camera captures the central and side walls. So as long as there is a pattern on the side wall, it is able to memorize the image and associate it with X, Y and azimuth?

So every time images are captured by the camera (eye), they are recognized as one or multiple images as the robot moves on to capture similar wall patterns, and after recognition, each image is associated with X, Y and azimuth.

You stated "unique" images: are you saying that at every step the robot takes, it has to see different images? Because I find there is a possibility that what we see on the side wall is similar wallpaper, unless we intentionally create a case where:

We intentionally place a ball, doll, chair, pillow, book and so on at every 50 cm interval up to the final destination. When we navigate by map, are these items recorded in the AVM tree? So when the path is created, the navigator will plan the pathway and will be expected to steer the robot along the path while looking for these items.

You mentioned "every time the navigator sees images that differ from what has been recorded in the AVM tree, it leads to memorizing the central part of the screen as a new image to AVM (in Marker mode)". Can these new images be on the side wall that the camera is capturing, and is this memorized relative to the center of the image?

I am going to try to navigate along a bare wall with contrasting images in the center and see the behaviour. So although a path is created here, should we expect it to keep looking left and right, and when it is about to hit the wall the navigator will say "obstacle detected", and the robot will shift to the other side, move a bit forward, and keep repeating? Do we expect to see it reach the final point?


EDV  [328 posts] 5 years
>> Is this new images can be on the side wall where camera capturing, and this is memorized relative to the central of the image, is it?

You can see the indication of recorded location associations in the navigator camera view (as red rectangles), and it seems these rectangles are placed everywhere because you are seeing already-recorded images that were actually recognized by AVM. But when the navigator writes a new location association mark to AVM, it uses exactly the central part of the input image for writing.

* The obstacle signal is activated by the navigator when commands have been given (e.g. the "forward" command) but no changes were seen in the input image (nothing happened); the robot is then assumed to be stuck (e.g. pushing against a wall).

So, bare walls in front of the robot really can cause the obstacle signal to activate. In this case you should just turn off the “Obstacle detection” checkbox in the navigator dialog window (in “Navigation by map” mode).
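The stuck check described above (a motion command was given but the view did not change) can be illustrated with a toy example; the threshold and names are invented for illustration and are not the actual AVM Navigator code:

```cpp
#include <cstdlib>
#include <vector>

// Mean absolute per-pixel difference between two consecutive grey frames.
double frameDelta(const std::vector<int>& a, const std::vector<int>& b) {
    long sum = 0;
    for (size_t i = 0; i < a.size(); ++i) sum += std::abs(a[i] - b[i]);
    return a.empty() ? 0.0 : double(sum) / a.size();
}

// The stuck test: a motion command was issued but the camera view barely
// changed, so the robot is assumed to be pushing against something.
bool obstacleDetected(bool commandedMotion, double delta,
                      double minChange = 2.0) {   // assumed threshold
    return commandedMotion && delta < minChange;
}
```

This also shows why a featureless wall filling the view can trigger a false positive: the frames barely differ even when the robot is actually moving.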
ronn0011  [73 posts] 5 years
Hi edv,

1. What is the value on the extreme left (in ms), and the "some value / some value"?

2. I have seen that the azimuth is always measured from a vertical line to the robot's rotation; however, I see that as it turns counterclockwise it is recorded as positive 20, but when the counterclockwise rotation goes further, such as 100 degrees (meaning you turn the robot left), it actually gives a value of 340 degrees.

As observed here:
http://www.youtube.com/watch?v=G7SB_jKAcyE&feature=youtube_gdata_player

3. So actually, in navigation by map, the robot looks a few meters forward and processes the nearest waypoint in front (actually side wall images). So the steps are: see the image, recognize the image, and associate it with coordinates (X, Y) and an angle. So regardless of the robot's current orientation, the recorded coordinates are still able to bring the robot to the recorded location?

4. Can I say it is 2D visual odometry processing?

5. I also observe that it does not have negative X and Y, nor negative rotation; as it rotates 90 degrees, it considers the vertical as Y and the horizontal as X, which is the same coordinate system as home. Am I right?

EDV  [328 posts] 5 years
>> 1 What is the value on extreme left in ms, and some value / some value

Please note that English is not my native language. It would be better if you could make your questions more clear for understanding.

2. In this video < http://www.youtube.com/watch?v=G7SB_jKAcyE > the azimuth indication on the map is really appreciably less than it was in reality. But this issue is connected to the robot's extremely high turn speed (the navigator couldn't track the flow of image features required to produce a correct azimuth calculation).


Can you attach some pictures or videos that are a good illustration of your questions?

It would be really helpful for our discussion.
ronn0011  [73 posts] 5 years
Sorry, I'll give the illustration as follows, with numbers:


1. What is the value on the extreme left (in ms), and the "some value / some value"?



 
EDV  [328 posts] 5 years
1. Now I see that you meant the camera view of the navigator dialog window ;-)

For example we see string below:
4.3 ms  22/0.2097

Where: 4.3 ms is the processing time of the current input image, 22 is the total number of images memorized in the AVM tree, and 0.2097 is the similarity rate of the recognized object(s).

Actual location coordinates:
[6, 34] 16 Deg

Where:
6 is the current location X coordinate, 34 is the current location Y coordinate, and 16 is the horizontal angle of the robot at the current location (in degrees).

You can get it from these variables:
NV_LOCATION_X - current location X coordinate
NV_LOCATION_Y - current location Y coordinate
NV_LOCATION_ANGLE - horizontal angle of robot in current location (in radians)
ronn0011  [73 posts] 5 years
Heheh :) I wanted to upload the photo, but I was using my phone on my way back, so I was not able to give a clear illustration. Anyway, your written English explanation was excellent.

And I think the angle is in degrees, a full 360 degrees. Thanks for the formula.

I will verify again, as I am still working out some variables on the display.




ronn0011  [73 posts] 5 years
Hi EDV, sorry, I missed asking you about the similarity rate of the recognized object. Can you explain further?

And regarding the processing time of the current input: is the speed dependent on the laptop's processor, or could other third-party software slow it down? Is that why, when we troubleshot last time, you asked me to stop other running software?


EDV  [328 posts] 5 years
The similarity rate parameter shows the degree of similarity of a recognized object to the object image model stored in the AVM tree; it ranges from 0 (no match) to 1 (full resemblance).

You can get this rate in “Object recognition” mode from the array that holds the similarity for each recognized object:
NV_ARR_OBJ_SIM    - similarity rate (0...1) of each recognized object

In the case of the indicator at the top of the camera view of the navigator window, we show the average similarity rate over all objects recognized in the current input image.


The image search in the AVM tree is performed by a software algorithm, and its speed depends appreciably on CPU and RAM speed. Other applications can also take CPU time, which can slow down the AVM image search too.
ronn0011  [73 posts] 5 years
Thanks for the explanation.

Would there be a situation where a similar texture is recorded twice or more? For example, an object with a similar texture would look different from different camera angles.

I observed the camera capturing a lot of images, even though the distance traveled is small. Is it because one image could contain more than one texture, so the vision algorithm recognizes the textures as a lot of images?

Hopefully this is clear :) or I need to take a shot of the AVM windows.
EDV  [328 posts] 5 years
The navigator usually sees a lot of images recognized as location association marks, and this allows successful extraction of the actual location coordinates by generalizing the marks' data.

Even if the robot sees a wall with a monotonous texture, it also sees something else placed in front of it, or the plinth, or the same texture from another visual angle, and that makes the captured image unique among the others.
EDV  [328 posts] 5 years
Hey Bmw318be :) it would be nice if you could shoot a new video about your experiments with AVM Navigator. I think such a video would be pretty interesting for other users to see, and for me too ;-)
ronn0011  [73 posts] 5 years
Sure, at the next stage of the experiment I will:

1. Try to run the robot along a semi-textured wall, observe the number of images captured, and together we'll evaluate the objects or textures seen by vision and quantify them.

2. After understanding the image capturing, we will improve things by pasting some printed A4 texture paper at the locations where the robot is unable to capture an image (to see if the texture causes any deviation in robot movement).

3. I also want to measure the deviation of waypoint movement from one point to another, possibly by pasting blank paper on the floor and attaching a marker or pen to the robot base.

Hopefully we can quantify the navigation. Stay tuned :)
ronn0011  [73 posts] 5 years
Sorry, I was busy with some examinations. Here is the update, EDV. For a longer route, it seems my training is poor, as the robot is not mechanically stable; on straights it tends to veer, so I think it affects the initial navigation.

I hope you can give me some analysis of the part where the robot is not moving smoothly, and perhaps highlight the problems, or my camera angle.

http://youtu.be/ZHXaTrEUwkY
EDV  [328 posts] 5 years
First, can you shoot your video with the help of the "Write AVI" module and the XviD codec?

This way is most acceptable with regard to pipeline processing speed.

See picture below for more details:


 
ronn0011  [73 posts] 5 years
I am actually shooting it using the Microsoft codec in "Write AVI". I tried installing the K-Lite Codec Pack, as Steven mentioned, but XviD does not appear in the list. Can I see your configuration of the K-Lite codec? I have the K-Lite codec bar; how do I activate the XviD codec?

Attached is my compression mode and some K-Lite icons that appear in my Windows.

  
EDV  [328 posts] 5 years
It seems that "XviD codec" was removed from "K-Lite Codec Pack".

Uninstall the current version of the "K-Lite Codec Pack" from your computer and try to install the XviD codec from this source: http://www.xvid.org/Downloads.15.0.html
ronn0011  [73 posts] 5 years
Hi EDV, thanks for the link. It is now installed in RR.

I have a new problem: every time I plug the USB into the laptop, it keeps restarting, and the SparkFun module also shows a high delay in the pipeline. The servo turret keeps rotating CW to the end and keeps forcing the rotation even at the extreme end. It goes back to center after some time and then repeats the error, as you can hear from the rotation and see in the pipeline:

http://youtu.be/fLv8woR4LmA
ronn0011  [73 posts] 5 years
I have been trying to find the cause of this error but was not able to; the problem appears inconsistently. Sometimes the error disappears by itself after the turret erroneously moves a few times, but this time it happened nonstop.
I have tried the following:

1. Upload the program to the Arduino.
2. Shut down RoboRealm and plug the USB into the Arduino.
3. Deactivate the turret.

What happened:

1. The problem still exists.
2. When RoboRealm is shut down, the error is gone; when I plug the USB into the Arduino and assign the port in the SparkFun module, the problem appears.
3. This worked, but when I activated the turret again, the problem reappeared.


Other observation: when the robot is powered with the battery on, the robot moves by itself and the turret keeps going clockwise nonstop, repeatedly.

EDV  [328 posts] 5 years
You should specify some additional information regarding your Arduino controller so that we will be able to help with it:

1. What Arduino firmware version do you use?
2. Which pins do you use for the servo connections?
3. How is the power supply of the Arduino controller implemented?

Please try to provide more detailed information for analysis.
ronn0011  [73 posts] 5 years
1. I am using Arduino 0022.
2. The servo turret is connected to pin 9, and the port I use is port 7.
3. The Arduino is powered by a 12 V battery, and since the USB is connected, power first comes through USB.

That is why we see a high value in the pipeline, and once I turn on the battery power, the robot sometimes moves by itself when it is in the error state.

Is the info above sufficient?
EDV  [328 posts] 5 years
The number 0022 is the version of the IDE you use, but we need the version of the firmware program that you have uploaded/burned to the Arduino controller.

If you use your own user-made Arduino program, then you should give us the source code for analysis.

If you use an already-made Arduino program, then you should give us a link to the project from which you downloaded this firmware.
ronn0011  [73 posts] 5 years
Here is the code. Note: I did not use the ultrasonic sensor. I hope you can make a good analysis.

program.robo
ronn0011  [73 posts] 5 years
I think that was the RR code; here is the Arduino code:

#include <Servo.h>

Servo servos[12];
boolean pinModes[14];

unsigned int crc;
unsigned int command;
unsigned int channel;
unsigned int value;
unsigned int valueLow;
unsigned int valueHigh;
unsigned int streamDigital;
unsigned int streamAnalog;
unsigned int lastDigital;
unsigned int lastAnalog[8];

#define ARDUINO_GET_ID 0
#define ARDUINO_SET_SERVO 1
#define ARDUINO_SET_DIGITAL_STREAM 2
#define ARDUINO_SET_DIGITAL_HIGH 3
#define ARDUINO_SET_DIGITAL_LOW 4
#define ARDUINO_SET_ANALOG_STREAM 5
#define ARDUINO_DIGITAL_STREAM 6
#define ARDUINO_ANALOG_STREAM 7
#define ARDUINO_SET_ANALOG 8


void initializeAvmNav()
{
  int i;
  for (i=2;i<14;i++) pinModes[i]=-1;
  streamDigital=0;
  streamAnalog=0;
  lastDigital=-1;
  for (i=0;i<8;i++) lastAnalog[i]=-1;
}

void writePacketAvmNav()
{
  unsigned char buffer[2];  
  buffer[0]=command|128;
  buffer[1]=channel;
  Serial.write(buffer, 2);
}

void writeValuePacketAvmNav(int value)
{
  unsigned char buffer[5];  
  
  buffer[0]=command|128;
  buffer[1]=channel;
  buffer[2]=value&127;
  buffer[3]=(value>>7)&127;
  buffer[4]=(buffer[0]^buffer[1]^buffer[2]^buffer[3])&127;
  
  Serial.write(buffer, 5);
}

void readPacketAvmNav()
{
  // get header byte
  // 128 (bit 8) flag indicates a new command packet .. that
  // means the value bytes can never have 128 set!
  // next byte is the command 0-8
  // next byte is the channel 0-16

  do
  {
    while (Serial.available() <= 0) continue;
    command = Serial.read();
  }
  while ((command&128)==0);
  
  command^=128;

  while (Serial.available() <= 0) continue;
  channel = Serial.read();
}

int readValuePacketAvmNav()
{
  unsigned int valueLow;
  unsigned int valueHigh;
  
  // wait for value low byte    
  while (Serial.available() <= 0) continue;
  valueLow = Serial.read();
  if (valueLow&128) return 0;

  // wait for value high byte    
  while (Serial.available() <= 0) continue;
  valueHigh = Serial.read();
  if (valueHigh&128) return 0;
    
  // wait for crc byte    
  while (Serial.available() <= 0) continue;
  crc = Serial.read();
  if (crc&128) return 0;
  
  if (crc!=(((128|command)^channel^valueLow^valueHigh)&127)) return 0;

  value = valueLow|(valueHigh<<7);
  
  return 1;
}

/***************************************************************

Part : Ultrasound signal sensor functions for range-finder

***************************************************************/
int ultraSoundSignal = 9;

//Get the distance
float distCalc()                                                   // distance calculating function converts analog input to inches
{
  pinMode(ultraSoundSignal, OUTPUT);                                // Switch signalpin to output
  digitalWrite(ultraSoundSignal, LOW);                              // Send low pulse
  delayMicroseconds(2);                                             // Wait for 2 microseconds
  digitalWrite(ultraSoundSignal, HIGH);                             // Send high pulse
  delayMicroseconds(5);                                             // Wait for 5 microseconds
  digitalWrite(ultraSoundSignal, LOW);                              // Holdoff
  pinMode(ultraSoundSignal, INPUT);                                 // Switch signalpin to input
  digitalWrite(ultraSoundSignal, HIGH);                             // Turn on pullup resistor

  unsigned long echo = pulseIn(ultraSoundSignal, HIGH);             //Listen for echo
  //convert to CM then to inches
  return ((echo / 58.138) * 0.39 *2.2);
  
}        

/***************************************************************

Part : Main Arduino functions

***************************************************************/
void setup()
{
  //setup communication with roborealm PC
  Serial.begin(115200);
  initializeAvmNav();
  //setup for ultraSoundSignal
  //pinMode(ultraSoundSignal, OUTPUT);  // sets enable as output
}


void loop()
{
  //distCalc();                                                    
  //delay(40);
  while (Serial.available()>0)
  {
    readPacketAvmNav();
    
    switch (command)
    {
      // init
      case  ARDUINO_GET_ID:
        initializeAvmNav();
        Serial.print("ARDU");
      break;
      // servo
      case  ARDUINO_SET_SERVO:
        if ((channel>=3)&&(channel<=11))
        {
          if (readValuePacketAvmNav())
          {
            if (pinModes[channel]!=1)
            {
              servos[channel].attach(channel);
              pinModes[channel]=1;
            }
            servos[channel].writeMicroseconds(value);
            writeValuePacketAvmNav(value);
          }
        }
      break;
      //digital stream
      case  ARDUINO_SET_DIGITAL_STREAM:
        if (readValuePacketAvmNav())
        {
          streamDigital = value;
          writeValuePacketAvmNav(value);
          lastDigital=-1;
        }
      break;
      //set digital high
      case  ARDUINO_SET_DIGITAL_HIGH:
        if ((channel>=2)&&(channel<14))
        {
          if (pinModes[channel]!=2)
          {
            if (pinModes[channel]==1)
              servos[channel].detach();

            pinMode(channel, OUTPUT);
            pinModes[channel]=2;
            if (streamDigital&(1<<channel))
              streamDigital^=1<<channel;
          }
          
          digitalWrite(channel, HIGH);
          writePacketAvmNav();
        }
      break;
      //set digital low
      case  ARDUINO_SET_DIGITAL_LOW:
        if ((channel>=2)&&(channel<14))
        {
          if (pinModes[channel]!=2)
          {
            if (pinModes[channel]==1)
              servos[channel].detach();
              
            pinMode(channel, OUTPUT);
            pinModes[channel]=2;
            
            if (streamDigital&(1<<channel))
              streamDigital^=1<<channel;
          }
          digitalWrite(channel, LOW);
          writePacketAvmNav();
        }
      break;
      //analog stream
      case  ARDUINO_SET_ANALOG_STREAM:
        if (readValuePacketAvmNav())
        {
          streamAnalog = value;
          writeValuePacketAvmNav(value);
          for (channel=0;channel<8;channel++) lastAnalog[channel]=-1;
        }
      break;
      // analog (PWM) output
      case  ARDUINO_SET_ANALOG:
        if (readValuePacketAvmNav())
        {
          if ((channel>=3)&&(channel<=11))
          {
            if (pinModes[channel]!=2)
            {
              if (pinModes[channel]==1)
                servos[channel].detach();

              pinMode(channel, OUTPUT);
              pinModes[channel]=2;
            }
            analogWrite(channel, value);
            writeValuePacketAvmNav(value);
          }
        }
      break;
    }
  }
  
  for (channel=0;channel<8;channel++)
  {
    if (streamAnalog&(1<<channel))
    {
      value = analogRead(channel);
      // only send value if it has changed
      if (value!=lastAnalog[channel])
      {
        command = ARDUINO_ANALOG_STREAM;
        writeValuePacketAvmNav(value);
        
        lastAnalog[channel]=value;
      }
    }
  }

  if (streamDigital)
  {
    value=0;
    for (channel=2;channel<14;channel++)
    {
      if (streamDigital&(1<<channel))
      {
        if (pinModes[channel]!=3)
        {
          if (pinModes[channel]==1)
            servos[channel].detach();
            
          pinMode(channel, INPUT);
          pinModes[channel]=3;
          // pullup
          digitalWrite(channel, HIGH);
        }
        
        value |= digitalRead(channel)<<channel;
      }
    }

    // only send value if it has changed
    if (value!=lastDigital)
    {    
      command = ARDUINO_DIGITAL_STREAM;
      writeValuePacketAvmNav(value);

      lastDigital=value;
    }
  }
}
Anonymous 5 years
bmw318be,

The Arduino code looks fine. I would check that your power requirements are being met. Do you have external power for the Arduino, or are you just using USB? It seems that something is wrong there rather than with the code, which occasionally works.

The issue of the turret continuing past its extreme points is due to a lack of communication between the PC and the Arduino. The Arduino gets the initial messages but loses the connection before it can get the stop signal. Again, most likely a power issue.

If you download the latest Arduino code, it has a heartbeat that will stop this runaway behavior 2 seconds after losing the connection to the PC. This helps that issue, but you still have the concern as to why the connection is being dropped.

Obviously, check your connections too.

STeven.
ronn0011  [73 posts] 5 years
Hi, yes I had an external power supply other than USB. Could it be the USB cable that connects the Arduino to the PC?

Regarding the power supply, I am sure it has sufficient power, as my battery is 12 V.

The problem appears right away after connecting the Arduino to my laptop. It stopped after I stopped the run in RoboRealm.

The problem is then followed by the turret rotating and the robot going forward and back non-stop. At this stage of the error I can't stop the turret and am unable to control any movement.

Is the servo turret problem mainly a power issue as well?
Anonymous 5 years
What happens when you remove the variable from the Arduino module and just use the sliders to move the turret? Do you then retain control, or does it also exhibit the same error?

STeven.
EDV  [328 posts] 5 years
bmw318be,

You should download the RoboRealm package with the new AVM Navigator v0.7.3.5, because this update brings more accuracy to AVM recognition and provides better conditions for navigation by map.
ronn0011  [73 posts] 5 years
Thanks, I will update then. At the moment let me solve the start-up error issue. It happens often now, every time I connect the Arduino to the laptop.
ronn0011  [73 posts] 5 years
Hi Steven

How do I move the turret with the slider? Is it from the Sparkfun Arduino module? I just unticked the turret pin and moved the slider on the right-hand side. Please kindly advise, I am keen to know.
ronn0011  [73 posts] 5 years
Hi Steven, I think you are right: the problem is caused by the power supply. I should not power the Arduino directly over USB until the external power is "ON".

So it is important to power the board first with my 12 V battery; once the Arduino is powered, I should plug the USB connector into the laptop.

So from now on I will do as above.

I have also downloaded a new version of AVM, but my navigation is still not as successful when the robot turns into a room. I'll upload soon.
EDV  [328 posts] 5 years
Please upload the AVI file that was produced by the "Write AVI" module directly to YouTube, without involving any other software, like in this case:
http://www.youtube.com/watch?v=ZHXaTrEUwkY
ronn0011  [73 posts] 5 years
Updates on v7.3.5

http://youtu.be/1Xl2xPxQ6V4

http://youtu.be/w-aGe0hhid8

http://youtu.be/Ipp3uD28_UA

Note: the robot is supposed to go inside a room; however, it fails.

On the approach to the turn into the room I placed a shopping bag to help the vision, but I am not too sure if this texture is good. Can you recommend how to choose a good texture, or any images that I can print out to help visual recognition?

My observation: the robot does not turn fast enough and overshoots compared to the trained path, so it hits the object in front of it when making a turn.
EDV  [328 posts] 5 years
It looks better, but I see some mistakes in the training:

1. If you need to make a turn (to another room, for example), do not make it too smooth (like an arc). Make the turn on the spot, and slowly; or first turn the camera to the new direction, then straighten it back and start turning the robot body (like in this video: http://www.youtube.com/watch?v=214MwcHMsTQ ).

2. Your robot should keep to the middle of the road when you train it on a new route. Do not try to come too close to walls.
ronn0011  [73 posts] 5 years
Thanks. You mean that on the turn I can choose to turn the camera (turret), then turn the turret back to the home position and rotate the robot? I didn't get it.

OK, I will try and see in the next training.

What about the texture of the environment before the turn: is it sufficient texture, and does the "shopping bag" help the image processing?

Thanks
ronn0011  [73 posts] 5 years
I saw in the video at the given link that when it is about to turn left, the turret is moved 90 degrees to the left, and then it auto-returns to the home position when the signal is shown. Meaning he doesn't have to turn the turret right, am I right?

Is that what I am supposed to do to ensure smooth turning? And should we turn incrementally: for 90 degrees, say, turn in 5-degree increments and pause?

EDV  [328 posts] 5 years
>> You mean on the turning I can choose to turn the camera (turret)? And turn back the turret to home position and rotate the robot?

Yes, it is correct.

You can turn the robot camera (turret) with the "Delete" and "Page Down" keys (the "End" key will set the camera to the front position). If the camera was turned, you have to press "End" to realign it before continuing to move the robot.

>> What about the texture of the environment before turning, is it sufficient texture, would the "shopping bag" do help in image processing?

Your home indoor environment contains a lot of different objects (it is not bare walls). So don't worry about textures: if the robot sees a general view (there is no wall right in front of the robot), that is already good texture.

>> I saw in the video at the given link that when it is about to turn left, the turret is moved 90 degrees to the left, and then it auto-returns to the home position when the signal is shown. Meaning he doesn't have to turn the turret right, am I right?

In the video above, the "End" key was pressed, which set the variable NV_TURRET_BALANCE to 0; that is the signal for the servo to set the camera to the front position.

>> Is that what I am supposed to do to ensure smooth turning? And should we turn incrementally: for 90 degrees, turn in 5-degree increments and pause?

It is good idea ;-)
Ali Alfehaid from Ireland  [1 posts] 5 years
Hi EDV
I am using your plugin module (AVM Navigator). When I added the licence, somehow I can't see the Navigator module in the plugin list. As a result, my modification of the pipeline is not functioning, since AVM Navigator does not exist.

If you think that you are not the person to be addressed, I would appreciate help from others.

You got an innovative plugin module!
Thanks

 
Anonymous 5 years
Ali,

Can you use the site's "contact us" form and send us your email address? We can then check your account to see why that plugin may be missing.

Thanks,
STeven.
ronn0011  [73 posts] 5 years
Hi EDV / Steven, I would like to use an XBee module on the Arduino together with a wireless camera. Does my Arduino program need any changes other than changing the baud rate to 9600?

Thx
EDV  [328 posts] 5 years
Playing with Winky rover that was controlled by AVM Navigator:
http://www.youtube.com/watch?v=4tpwAvcmZf8
http://www.youtube.com/watch?v=pt2y7xkiTXo
ronn0011  [73 posts] 5 years
Hi EDV, any idea about my question on using an XBee module with AVM Navigator?
EDV  [328 posts] 5 years
Sorry, but I have no information regarding the XBee module. I use an OR-AVR-M128-DS + OR-USB-UART + NetTop Qoo Intel Atom 230 1600 MHz + Logitech HD Webcam C270 + Winky-based rover.
Anonymous 5 years
Bmw318be,

For the most part, yes, you only need to change the baud rate. BUT that will also depend on the XBee module you are using and what signals the Arduino requires via serial. I think you should probably be fine, but you will not know 100% until you try it out.

Note that you should be able to get an XBee module that permits 115K baud communication, which is not required but nice to have.

STeven.
ronn0011  [73 posts] 5 years
I am using an XBee Pro module. I am trying to interface it with RR and have been trying to communicate, but still no success. But you've assured me that the baud rate is the prerequisite.

The serial cable connection should just be replaced by the XBee wireless transmission, right? However, it seems my XBee on the Arduino is not receiving the signal from the transmitting side.

Have you seen anyone interface XBee with RoboRealm in this forum?

Would like to hear if there is such a case.

Thanks, Steven, EDV
Anonymous 5 years
We know of many situations where it has worked (SRV used to use XBee; the Serializer from RoboticsConnection also used XBee), but none integrating it with the Arduino. This is a case where you may or may not have to worry about what hardware support the Arduino needs in order to program/interface with it, which is a bit beyond our knowledge. If you look in the Serial module under flow control, that will give you an idea of what may be needed by the Arduino in order to work.

What I would do is create a very simple Arduino sketch that prints "hello" back over the serial connection, then use the Serial module to connect to it and see the printed text. That will ensure that it works with a wired connection. Then replace the wire with the XBee modules and do the same test. If things do not work, at least you know it has something to do with the XBee comms and not a programming error.
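The wired-loopback test described here can be as small as this Arduino-target sketch (it compiles for the board, not for a desktop host; the 1-second interval is just a convenient choice):

```cpp
// Minimal loopback sketch: print "hello" once a second at 9600 baud
// so the RoboRealm Serial module can confirm the link, first over the
// wire and then again after swapping in the XBee modules.
void setup() {
  Serial.begin(9600);   // match the baud rate the XBee is configured for
}

void loop() {
  Serial.println("hello");
  delay(1000);
}
```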

Also, for your test, be sure to use 9600 baud explicitly on the Arduino so that you can keep the same speed when switching to XBee.

STeven.
ronn0011  [73 posts] 5 years
Thanks Steven. Yes, it had to do with the XBee configuration; for RR we just need to ensure the baud rate is 9600, and it now works wirelessly. Will 9600 baud cause any delay in the AVM Navigator application, and would you say a wireless camera is able to run AVM Navigator or any RR module successfully?
Anonymous 5 years
Hi EDV,

I would like to clarify the use of the D-Link 6620 internet camera. Since it has pan and tilt, am I able to use this pan control for localization in AVM? Because, as you know, the Arduino is not controlling the IP camera's pan rotation; the network is.

I intend to test AVM with the IP camera module.

Thanks
EDV  [328 posts] 5 years
For object tracking with a pan-tilt camera it is important that the input images refresh at approximately 25 fps for real-time tracking.

So, if your camera is able to provide such an FPS, then it really will work for real-time object tracking.
ronn0011  [73 posts] 5 years
Hi, but how do I do the localization with pan and tilt? If I want to use pan rotation for localization in training mode, am I able to do it automatically?

This forum thread has been closed due to inactivity (more than 4 months) or number of replies (more than 50 messages). Please start a New Post and enter a new forum thread with the appropriate title.
