Head Tracking Demo using RoboRealm
from United States  [214 posts] 14 year
Hello,

I thought some of you might get a kick out of a few head tracking videos I just made using RoboRealm.  The camera used is a D-Link 920 wireless, so there is a small irreducible lag between object motion and servo action, but it's not too bad.  The resolution is set to 320x240 with medium image quality and I am able to get a full 30 frames per second.  The servos used for the pan and tilt are Dynamixel AX-12+.  The tracking algorithm sets the servo speeds proportional to the sum of the COG displacement and the COG velocity as measured over 3 frames.  One could also use the Optical Flow vector components to get the velocities, but I happen to compute them in my C# program from the raw COG values over time.
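In case it's useful, here is a stripped-down sketch of that speed calculation in C#.  It's not the code from my actual program; the gains, the 160-pixel image center, and the class and method names are just illustrative, and you would scale and clamp the result for whatever library you use to talk to the AX-12+.

using System.Collections.Generic;

// Simplified sketch only: pan speed proportional to the COG displacement
// from image center plus the COG velocity estimated over the last 3 frames.
class PanTracker
{
    const int ImageCenterX = 160;   // half of the 320-pixel frame width
    const double Kp = 0.5;          // displacement gain (tune for your setup)
    const double Kv = 0.3;          // velocity gain (tune for your setup)

    readonly Queue<int> history = new Queue<int>();  // recent raw COG x values

    public double Update(int cogX)
    {
        history.Enqueue(cogX);
        if (history.Count > 3) history.Dequeue();

        int displacement = cogX - ImageCenterX;

        double velocity = 0.0;
        if (history.Count == 3)
        {
            int[] c = history.ToArray();          // oldest first
            velocity = (c[2] - c[0]) / 2.0;       // pixels per frame over 3 frames
        }

        // Scale/clamp this and send it to the AX-12+ pan servo.
        return Kp * displacement + Kv * velocity;
    }
}

The tilt axis works the same way using the COG y value.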

The first video in each pair is the view from the robot's camera and the second video is the view from an external camera that can see both the robot and the target.

http://www.pirobot.org/videos/peppy/0002/
http://www.pirobot.org/videos/peppy/0003/

And here is an example of some faster tracking:

http://www.pirobot.org/videos/peppy/0004/
http://www.pirobot.org/videos/peppy/0005/

I have also attached the .robo file I use to isolate the orange balloon.

--patrick

http://www.pirobot.org




program.robo
from United States  [214 posts] 14 year
I wrote up a little description of how the head tracking was done in these videos.  The article can be found here:

http://www.pirobot.org/blog/0008/

--patrick

gioscarab from Italy  [2 posts] 14 year
Hi Patrick. Awesome robot!!!! I am going crazy: I can't find speech recognition for processing, some easy library to start experimenting with. Do you have any advice for me? What do you use to do that?
from United States  [214 posts] 14 year
I haven't played with speech recognition for a while, but the libraries you use will depend on your operating system.  I was using Windows XP, so I used the free SAPI 5.1 Speech SDK from Microsoft at:

http://www.microsoft.com/downloads/details.aspx?FamilyID=5e86ec97-40a7-453f-b0ee-6583171b4530&displaylang=en

This is a self-extracting Zip archive that expands into the current folder, so make sure you download it into an empty folder, then run setup.exe to install the SDK.  If you are using something like Visual Studio for development, you can then add a reference to the COM object called "Microsoft Speech Object Library".  Here is a link I found to a video tutorial for Visual Studio 2005 Express.
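If it helps, the basic pattern with that COM reference looks roughly like this -- a from-memory sketch of the usual SAPI 5.1 dictation setup, not code from my own project, and the Listener class name is just for illustration.  You would typically call Start() from something like a form's Load event so the COM events have a message loop to run on.

using SpeechLib;   // available after adding the "Microsoft Speech Object Library" reference

public class Listener
{
    SpSharedRecoContext context;
    ISpeechRecoGrammar grammar;

    // Sets up the shared recognizer with a simple dictation grammar.
    public void Start()
    {
        context = new SpSharedRecoContext();
        context.Recognition +=
            new _ISpeechRecoContextEvents_RecognitionEventHandler(OnRecognition);

        grammar = context.CreateGrammar(0);
        grammar.DictationLoad("", SpeechLoadOption.SLOStatic);
        grammar.DictationSetState(SpeechRuleState.SGDSActive);
    }

    // Fires each time SAPI recognizes a phrase.
    void OnRecognition(int streamNumber, object streamPosition,
                       SpeechRecognitionType recognitionType,
                       ISpeechRecoResult result)
    {
        // GetText(0, -1, true) returns the full recognized phrase as a string.
        string text = result.PhraseInfo.GetText(0, -1, true);
        System.Diagnostics.Debug.WriteLine(text);
    }
}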

--patrick
from United States  [214 posts] 14 year
Ooops--forgot the link to the tutorial:

http://idealprogrammer.com/videos/visual-studio-2005-express-part-5-speech-recognition/

from United States  [60 posts] 14 year
I also found this very interesting and went on to read your blog. It appears we are working in similar directions albeit with different 'bots. Only a few minutes ago I posted a message here asking for some advice on the use of RR.

I found your blog article on location interesting, and also the panoramic vision article. I'll have to think about using panoramic vision on my 'bot.

The goal I've set is to have the robot know its location in the house so I can tell it to go to the master bedroom, then the kitchen, and then the charger (in the office), all by using vision to find its way around.

I also want to build a bigger 'bot platform that will compete in RoboMagellan using vision.
Anonymous 14 year
Patrick ... thanks for this great tutorial. We *finally* got around to posting it to the homepage and our RSS feed. Nice analysis of color tracking ... not sure I've seen a better example/writeup on that topic!

STeven.
from United States  [214 posts] 14 year
Hi STeven,

Thanks for the nice feedback!  But of course, none of this would be possible without RoboRealm doing all the heavy lifting.

--patrick

from United States  [60 posts] 14 year
Patrick,

I have just been experimenting with the COG example that comes with RR. Glad I stumbled back into this thread so I was reminded of your COG work.

My goal is to have my bot follow a red dot. It is just printed on paper. I envision it being at the end of a stick so the bot follows along as if on a leash. (I expect to have a big bot at some point that I won't want to carry around. Plus this seemed like an easy RR task to undertake.)

My question is why you did the erode, dilate, and hull processing. It seems the COG example works okay without them. Did you find something that required them?

Rud
from United States  [214 posts] 14 year
Hi Rud,

I found that there are often other small blobs with the same color as my target, which is typically a fairly large balloon.  So I do an Erode operation to get rid of the small false targets, then a Dilate to get back some of the balloon that got eroded away.  The Convex Hull filter then helps round out some of the rough edges.  I'm not at all certain this is the best way to accomplish this, but if I don't do it, my COG tends to encompass my target and all the smaller blobs together.
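If it helps to picture what the Erode and Dilate steps are doing, here is a toy 3x3 version in C# -- just an illustration of the general morphology idea, nothing to do with how RoboRealm actually implements its modules.  Erosion keeps a pixel only when its whole neighborhood is set, which wipes out the small specks, and dilation then grows the surviving balloon blob back toward its original size.

// Toy 3x3 erode/dilate on a binary color mask (illustration only).
static bool[,] Morph(bool[,] src, bool erode)
{
    int h = src.GetLength(0), w = src.GetLength(1);
    bool[,] dst = new bool[h, w];
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
        {
            bool all = true, any = false;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                {
                    if (src[y + dy, x + dx]) any = true;
                    else all = false;
                }
            // Erosion: keep the pixel only if the entire 3x3 neighborhood is set.
            // Dilation: set the pixel if any neighbor is set.
            dst[y, x] = erode ? all : any;
        }
    return dst;
}

// Usage: mask = Morph(Morph(mask, true), false);   // erode, then dilate (an "open")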

--patrick
