I have been working on a PC-based robot ever since I saw the OAP project on the web. Mine uses XP and Java, although the principle is the same: affordable robots with sensors, vision, speech and speech recognition. So far the robot is starting to look good (http://robot.lonningdal.net). I haven't been working on it for some time, but recently I have started programming again. One of the challenges was, of course, how to make meaningful use of the camera for vision processing. My first attempt was to download OpenCV and use its face detection algorithms so that I could get my robot to look at the closest face (and greet the person too). This has worked very well, and the plan was to continue using OpenCV for other vision tasks (by creating additional JNI wrappers).
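To give an idea of what the Java side of such a JNI wrapper looks like: the "closest face" step can be approximated by picking the largest detection rectangle, since the nearest face is usually the biggest one in frame. The native method signature and the Rect type below are illustrative assumptions, not my actual wrapper:

```java
// Sketch of the Java side of a JNI face-detector wrapper, plus the
// "closest face" heuristic: the biggest rectangle is usually the nearest
// face. The native signature and Rect type are illustrative only.
public class FaceDetector {
    public static class Rect {
        public final int x, y, w, h;
        public Rect(int x, int y, int w, int h) {
            this.x = x; this.y = y; this.w = w; this.h = h;
        }
    }

    // Hypothetical native wrapper, implemented in C against OpenCV's
    // cascade detector and loaded via System.loadLibrary(...).
    public static native Rect[] detectFaces(byte[] grayImage, int width, int height);

    // Pick the largest face rectangle as the one to look at.
    public static Rect closestFace(Rect[] faces) {
        Rect best = null;
        for (Rect f : faces)
            if (best == null || f.w * f.h > best.w * best.h)
                best = f;
        return best;
    }
}
```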
Well, I think all that changed when I came across your brilliant little application. At first I thought it was just a nice tool for trying out image processing algorithms, but then I saw the API. The fact that I can send pictures from the robot to it and get results back in the form of variables is just what I needed. There are tons of things I can already achieve with RoboRealm, and the nice thing is that the processing is distributed, so I can simply add more computers as the vision workload grows. Thank you for thinking about this and making it an integral part of RoboRealm!
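For other readers: as I understand the API docs, RoboRealm listens on a TCP socket (port 6060 by default) and exchanges small XML messages, so reading a pipeline variable from Java is just a few lines. The tag names and port below are from my reading of the docs and should be checked against your version:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

// Minimal sketch of talking to RoboRealm's XML-over-TCP API.
// Port 6060 and the <request><get_variable> message shape are assumptions
// based on the API docs; verify both against your RoboRealm version.
public class RoboRealmClient {
    // Build the XML request asking for one pipeline variable.
    public static String buildGetVariable(String name) {
        return "<request><get_variable>" + name + "</get_variable></request>";
    }

    // Send a request and return the raw XML reply (no parsing here).
    public static String send(String host, int port, String xml) throws IOException {
        try (Socket s = new Socket(host, port)) {
            OutputStream out = s.getOutputStream();
            out.write(xml.getBytes("UTF-8"));
            out.flush();
            byte[] buf = new byte[4096];
            int n = s.getInputStream().read(buf);
            return n > 0 ? new String(buf, 0, n, "UTF-8") : "";
        }
    }

    public static void main(String[] args) {
        // e.g. ask for the centre of gravity of the tracked blob
        System.out.println(buildGetVariable("COG_X"));
    }
}
```

On the robot I would call `send("localhost", 6060, buildGetVariable("COG_X"))` each frame and parse the variable out of the reply.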
So far I think RoboRealm could do most of the things I want in combination with my face detector, which is really the only module missing here. Many new options open up for a robot once it has a face rectangle to relate to, although a lot of them can be partly solved by assuming a face based on your face color filter as well.
But as a suggestion for a new module, how about a general feature detector of sorts? I have often thought about the fact that many things around us basically consist of circles, ovals and rectangles (or, more generally, parallel lines connected to the same colored blob). From the proximity of these features you could make some assumptions about what an object is (for example, a cup is an oval with two parallel lines under it). You already have a circle detector, which is really nice; a couple more would do wonders. Naturally, segmentation by thresholds is the big issue here, since most detections need some information about the item's color so that its blob can be separated from the rest of the picture. Perhaps running the picture through several times with different color filters would eventually create enough separate blobs that the features can be detected and their proximity (and color) analyzed. Oh well, just a bit of vision ranting there. :)
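The cup example above boils down to a proximity rule over detected primitives. A toy sketch of such a rule, where the Shape type and all the thresholds are invented for illustration (a real version would be fed by the circle and line detectors):

```java
import java.util.List;

// Toy proximity rule over detected primitives: a "cup" hypothesis is an
// ellipse with at least two roughly vertical lines just below it.
// The Shape type and all thresholds are invented for illustration.
public class FeatureProximity {
    public static class Shape {
        public final String kind;   // "ellipse" or "line"
        public final double x, y;   // position in the image
        public final double angle;  // line orientation in degrees
        public Shape(String kind, double x, double y, double angle) {
            this.kind = kind; this.x = x; this.y = y; this.angle = angle;
        }
    }

    // Returns true if the detected shapes support a "cup" hypothesis.
    public static boolean looksLikeCup(Shape ellipse, List<Shape> lines) {
        if (!ellipse.kind.equals("ellipse")) return false;
        int verticalLinesBelow = 0;
        for (Shape l : lines) {
            boolean vertical = Math.abs(l.angle - 90) < 15;  // near-vertical
            boolean below = l.y > ellipse.y;                 // under the rim
            boolean near = Math.abs(l.x - ellipse.x) < 40;   // close horizontally
            if (l.kind.equals("line") && vertical && below && near)
                verticalLinesBelow++;
        }
        return verticalLinesBelow >= 2; // the two sides of the cup
    }
}
```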
Anyway, thanks for providing this tool - it will be a big enhancement to my robot and the things it will be able to do. Here is a list of the things I plan to add using it:
- Follow ball (as you show in your tutorial)
- Follow person (detect face, identify the big blob beneath it and follow that color)
- Hand detection (how many fingers are being held up)
- Detect shapes on a card I hold up in front of it
- Simple reading of commands on a card (OCR)
- Blob detection and placement in a mental space map (ask it to find a blue object and it will move to the closest one it has seen lately)
- Location analysis through triangulation of known blob shapes (will be a big challenge, but fun to try out)
- Locate wall socket for re-charging (at first it will just call out and ask to be plugged in, later it will plug itself in)
- Some time in the future, add a hand (LynxMotion SES) and use vision as input to control the hand so that it is able to pick up objects
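For the follow-ball and follow-person items, the steering step on my side reduces to comparing the blob's centre of gravity against the image centre. A hedged sketch of that step, assuming the centroid comes in as a variable like RoboRealm's COG_X (the image width and deadband values are arbitrary):

```java
// Turning a blob centroid into a pan command for the follow-person idea.
// The centroid is assumed to arrive as a variable such as COG_X;
// the image width and deadband values are arbitrary examples.
public class BlobFollower {
    // Returns -1 to pan left, 1 to pan right, 0 to hold, given the
    // blob's centre-of-gravity x coordinate.
    public static int panCommand(int cogX, int imageWidth, int deadband) {
        int error = cogX - imageWidth / 2; // offset from image centre
        if (Math.abs(error) <= deadband) return 0; // close enough, hold still
        return error < 0 ? -1 : 1;
    }
}
```

The same comparison on the y coordinate would drive the tilt servo, and scaling the command by the error would give smoother tracking than this bang-bang version.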
Looking forward to new modules! :)