MSR-H01 Object Avoidance
James Dickinson from Australia  [49 posts]
6 years
I thought it best to move my stuff out of Valentina's thread

I have been working on vision based object avoidance for the MSR-H01 hexapod robot for a few days now, and I am really making some progress!

The RoboRealm file is very similar in function to the filter/module setup in the Vision Based Object Avoidance tutorial here at RoboRealm. The end result needs to be a binary image, with white representing where it's OK to move to and black representing no-go areas.

All the advanced stuff comes into play in the VBS file.
Using simple timing, the hexapod looks left, then centre, then right, recording a sample of the heights of the white pixels in the image at each view angle.
From this information the hexapod decides how fast to move forward or backward, how much to turn to face the highest point across the total field of vision, and how much to crab left or right to avoid getting too close to walls.
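The decision logic described above can be sketched roughly as follows. This is a hypothetical Python illustration, not the actual VBS code: the function name, the clamping, and the exact scaling are my assumptions; only the left/centre/right sampling and the speed/turn/crab outputs come from the post.

```python
# Hypothetical sketch of the look-left/centre/right decision logic.
# The real implementation is VBScript inside RoboRealm's VBS module.
# Image is 320x240; "heights" are open-floor distances measured in pixels
# up from the bottom of the image (bigger = more open floor that way).

IMG_W, IMG_H = 320, 240

def decide(left, centre, right, min_height=120):
    """left/centre/right: lists of floor heights sampled at each view angle.
    Returns (speed, turn, crab), each roughly in the range -1..1."""
    allviews = left + centre + right          # total field of vision
    highest = max(allviews)                   # most open direction overall
    # Forward speed scales with the clearance straight ahead, going
    # negative (reverse) when below the minimum height threshold.
    speed = (min(centre) - min_height) / (IMG_H - min_height)
    speed = max(-1.0, min(1.0, speed))
    # Turn toward the column holding the highest point across all views:
    # leftmost sample maps to -1 (full left), rightmost to +1 (full right).
    turn = allviews.index(highest) / (len(allviews) - 1) * 2 - 1
    # Crab away from whichever side is more closed in.
    crab = (sum(right) - sum(left)) / (len(left) * IMG_H)
    return speed, turn, crab
```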

Here are some example videos:

Roaming in a large space

Getting out of a dead end

My MSR-H01 HexEngine is configured with these settings:

ASF = 4
DGD = 1
DLT = 0.5
LZR = 27
LLH = 40
DPU = 90

The camera should be pointed so that the "horizon" is approximately 4/5ths of the way up the screen. To change the angle of the head, open the VBS file and change the following bit of code.


My pan/tilt kit is configured so that it can point almost straight up, which is why I have it set to point all the way down (and it's reversed LOL), so you will need to change this to suit your hexapod (remember that TI- and TI+ control what the actual maximum allowed movement of your tilt servo is).

MSRH01 Object Avoidance.zip
James Dickinson from Australia  [49 posts] 6 years
Oh, I forgot: the RoboRealm image size MUST be 320x240.
James Dickinson from Australia  [49 posts] 6 years

In this video I have added an array which contains the last 3 turn directions. When backing up or standing still, if a repeated pattern occurs (left, right, left, wanting to go right, for example) it will override and go in the opposite direction. This stops it getting caught in loops in tight corners.
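The turn-memory idea above can be sketched like this. A hypothetical Python illustration of the described 3-entry history, not the author's VBS: the function name and the flip rule's exact condition are my assumptions.

```python
# Hypothetical sketch of the 3-entry "turn memory": if the last three
# turn directions alternate (L,R,L or R,L,R) and the new choice would
# continue the oscillation, reverse it for one cycle to break the loop.
from collections import deque

history = deque(maxlen=3)   # most recent turn directions, e.g. 'L','R','L'

def choose_turn(wanted):
    """wanted: 'L' or 'R'. Returns the direction actually taken."""
    if list(history) in (['L', 'R', 'L'], ['R', 'L', 'R']) \
            and wanted != history[-1]:
        wanted = 'L' if wanted == 'R' else 'R'   # override: go the other way
    history.append(wanted)
    return wanted
```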

Next up is to make the sequence in which it looks around depend on the direction the robot is currently turning, so that it looks in the opposite direction first. This gives the turn more time to influence direction before data is captured on the turning side.
valentina from Italy  [43 posts] 6 years
Hope you will follow my thread anyway :D Sorry to see you move away from it, but good to see that you are making progress that needed a new thread ^___^
James Dickinson from Australia  [49 posts] 6 years
I have significantly improved the RoboRealm filters to better find the floor and remove artifacts, in combination with temporal filtering to stabilise the floor outline. While it still can't handle shadows on the ground (no idea how to get around that), in evenly lit areas most floors will now be isolated automatically.
In VBS I updated the panning system to look last in the direction it is turning towards. This provides a wider field of vision: it captures, say, the left side, then the hexapod turns a little to the right and captures the right side, with the centre evenly in the middle.
I then take the information from each view (left/centre/right) and add it all into one big array of coordinates representing the total area of vision, ordered from furthest left to furthest right.
From this data I find the highest point overall and the lowest point overall, and use these in combination with a minimum height to decide how much to move forward or backward and left or right.
Next, the "turn memory" system checks that we aren't repeating the same left/right movements over and over, and reverses movement for one command cycle to break the flow.
As a final measure, there is a realtime capture of the lowest point currently on camera, running at all times. If this lowest point is below the minimum height and the hexapod is moving forward, momentum is stopped (but turning continues). This ensures the robot doesn't accidentally walk into something it didn't see when it captured the info, or something that wasn't there before.
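That realtime safety check reduces to something like the following. A hypothetical Python sketch of the rule described above (the actual logic lives in the VBS module); the threshold value and function name are my assumptions.

```python
# Hypothetical sketch of the per-frame emergency stop: while moving
# forward, if the lowest floor point currently on camera falls below
# the minimum height, forward momentum is zeroed but turning continues.

MIN_HEIGHT = 120   # assumed clearance threshold, pixels from screen bottom

def safety_check(speed, turn, lowest_point_now):
    """Clamp forward speed to zero when an obstacle gets too close."""
    if speed > 0 and lowest_point_now < MIN_HEIGHT:
        speed = 0.0            # stop moving forward; keep turning
    return speed, turn
```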

(vbs code is included inside VBS module)
James Dickinson from Australia  [49 posts] 6 years

Here is a decent video of what I think is the final version of the object avoidance code. I'm going to run it through some scenarios before making my final verdict, but as you can see it moves around very well!

In this video you are seeing the RoboRealm output (view from the hexapod with overlay HUD) and the view from my phone in the bottom left corner, synced together so you can see what the robot is thinking as it moves around.

Here is a description of the HUD elements:

White Line : Border between the floor area and everything else
Horiz. Blue Line : Minimum distance. Full reverse is at the bottom of the screen, and full speed forward is at the top of the screen
Vert. Blue Line : Centre of view. All the way to the left is full turn left; all the way to the right is full turn right
Green Cross : Highest found position
Red Cross : Lowest found position below the horizontal blue line
Red X : Realtime nearest point to the bottom of the screen, used for the emergency stop
James Dickinson from Australia  [49 posts] 6 years
The First test : The Sock Test!
valentina from Italy  [43 posts] 6 years
Very good James! What gait did you use, and how long did the robot take to get back to the other side of the sock? :)

I took a look at your code, and this one alone is so much more complicated than our whole project :D Well, I hope I will do good projects too, in 20 years :)
James Dickinson from Australia  [49 posts] 6 years
Hi Valentina!

I use gait mode 1 for my robot as it's the most fluid looking one!

It took the robot around 3 minutes to get out.
valentina from Italy  [43 posts] 6 years
I agree ^^ I am also using gait 1 :)
James Dickinson from Australia  [49 posts] 6 years
I ended up adding an extra pattern to the turn system to promote getting out of corners quickly, and the closest point now also influences the turn amount when moving forward to help avoid objects.

Here are two videos:


James Dickinson from Australia  [49 posts] 6 years
Here is an update to my object avoidance. Attached is a zip file containing the latest VBS script. The robo file itself hasn't changed much, so you can download my previous one if you do not have it.

As you can see in this video, object avoidance by camera alone is probably not going to get any better without introducing mapping functions, which I am not going to delve into any time soon LOL

Major improvements are:

An avoidance system has been added which modifies the chosen turn amount depending on the location of the closest point it finds within the field of vision. This helps it turn away from walls when approaching them at an angle, instead of running into them and then turning. The end result is much smoother and faster movement around corners and obstacles.

I redid the crabbing system to be in effect at all times. Crabbing is now driven by the average travel distance seen from each perspective (left/centre/right): the system crabs to the right when the left perspective has the closest average distance, and vice versa. The overall strength of the crabbing (in either direction) increases as the average distance on the chosen side gets closer.
Combined with the avoidance above, this further increases the chances that the robot will turn to align itself with, and walk along, walls rather than running into them and turning away as it did before.
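The always-on crabbing rule can be sketched as below. Again a hypothetical Python illustration of the described behaviour, not the shipped VBS; the linear strength curve and sign convention (positive = crab right) are my assumptions.

```python
# Hypothetical sketch of the crabbing system: crab away from whichever
# side has the closest average travel distance, and crab harder as that
# side's average distance shrinks.

IMG_H = 240   # image height; distances are measured in pixels

def crab_amount(left, right):
    """left/right: lists of open-floor distances for each perspective.
    Returns crab strength in -1..1 (positive = crab right)."""
    avg_l = sum(left) / len(left)
    avg_r = sum(right) / len(right)
    if avg_l < avg_r:
        return 1 - avg_l / IMG_H      # left side closest: crab right
    elif avg_r < avg_l:
        return -(1 - avg_r / IMG_H)   # right side closest: crab left
    return 0.0                        # balanced: no crabbing
```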
MSRH01 avoidance 7 vbs only.zip
Anonymous 6 years
Nice! The new crabbing technique really makes the movement smoother. Pretty sophisticated VB code too.

What are the catches with the vision avoidance? Are the tile connections still causing issues?

James Dickinson from Australia  [49 posts] 6 years
Tiles are still causing some major issues; however, my pipeline is fairly robust at handling such things, at the cost of losing small objects. Mostly this isn't a problem, as things you want to avoid are generally bigger than the width of the grout, but it does blur the join lines between similarly coloured objects (like my floor and my walls). In situations where you don't have dark grout between white tiles, you can lower the mean filter strength to give more detail.

The most obvious downfall of vision based object avoidance is that the robot can only see in front of itself. When backing up or crabbing, it assumes there is nothing there, as it would not have walked into an area without room to move about. However, if you place it in a too-confined space it will hit the walls when it backs up.
I will overcome this at a later stage by mounting my SRF08 range finder on a 360 degree turret for range mapping. That's a long way down the track for me, though.

I ran this in my house for about half an hour and did not need to intervene once, so it works really well, even on my tiles.
