Using the Kinect
The Kinect is typically used for depth perception, and while this could be used for detecting the totes, its depth range
is limited. The Kinect perceives depth by projecting a spotted pattern and measuring the displacement of those dots. You cannot see
these dots as they are in the IR (Infrared) wavelength, which is not perceptible to the human eye. These dots are captured by the
IR camera built into the Kinect and then processed in order to determine depth.
While calculating depth is useful, we are more interested in using the IR space to detect the retro-reflective tape and thus the visual
targets. The IR space is typically less noisy than the visible spectrum, especially if you can eliminate any visible light from the IR
camera (which the Kinect does). By using the IR space to detect the visual targets, you will get a much cleaner image with very few visible
objects, which requires a lot less processing in order to determine the location of the target.
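To illustrate why the cleaner IR image needs so little processing, here is a minimal sketch (not RoboRealm's actual pipeline) of isolating the bright retro-reflective pixels with a simple intensity threshold. The frame contents and the threshold value are hypothetical; on a real image you would tune the cutoff for your exposure.

```python
# Hypothetical 8-bit grayscale IR frame (rows of pixel intensities).
# Retro-reflective tape returns most of the IR illumination, so its
# pixels read far brighter than the rest of the scene.
frame = [
    [ 10,  12,  11,   9],
    [ 14, 250, 252,  13],
    [ 12, 248, 251,  10],
    [  9,  11,  13,  12],
]

THRESHOLD = 200  # assumed cutoff; tune for your setup

# Binarize: 1 where the pixel is bright enough to be tape, else 0.
mask = [[1 if px >= THRESHOLD else 0 for px in row] for row in frame]

for row in mask:
    print(row)
```

Because almost everything except the tape falls below the threshold, the resulting mask contains only the target pixels, which is exactly the "cleaner image" advantage described above.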
Because displacement of the dots is used for depth perception, the IR image captured from the Kinect's IR camera is already
calibrated. This means that distortions caused by lens warping have already been dealt with before you even have access to
the image. This is convenient as it is yet another step that you don't need to do or worry about. This is not the
case for the Axis camera and most webcams.
The disadvantage of using the Kinect is that you will need an onboard netbook/laptop/etc. on your robot to plug
the Kinect into. Because the Kinect is not a typical webcam, using smaller embedded devices (like the Raspberry Pi,
BeagleBone, or RockChip) as an image relay is possible but beyond the scope of this tutorial.
The Kinect's projector is very powerful and will produce a spotted pattern in the IR domain.
If we instead blur this projection
by placing a filter in front of the projector, it will produce a more even IR illumination. We used a piece of packing foam
taped on top of the projector to create this blurring. Effectively, this turns the Kinect into a generic IR camera.
With this dispersed IR illumination you can now point the Kinect at the retro-reflective target and isolate just the squares
despite bright areas in the visible spectrum. This is a great trick for removing a lot of unwanted noise pixels using
equipment that you already have. Without blurring the projector, the targets will appear spotted, causing
processing to treat those dots as individual objects instead of as coherent targets. It is possible to blur
these within software, but this causes a loss of precision.
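To see why a software blur costs precision, here is a minimal 3x3 box blur over a grayscale image (as a list of lists). This is a generic sketch, not RoboRealm's blur implementation; the image values are hypothetical. Notice how the sharp target edge smears across neighboring pixels, which is the precision loss mentioned above.

```python
def box_blur(img):
    # Average each pixel with its (up to) 8 neighbors.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out

# A hard edge: dark pixels on the left, a bright target on the right.
img = [[0, 0, 255, 255] for _ in range(4)]
blurred = box_blur(img)
# The crisp 0 -> 255 transition is now a gradient (85, 170, ...),
# so the target's true edge position becomes ambiguous.
```

Physically diffusing the projector avoids this entirely: the illumination is evened out before it hits the tape, so the captured edges stay sharp.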
Note that the ideal material for blurring is a clear filter that allows maximum pass-through without loss of intensity. The
example above uses a white plastic bag as the filter. Better filters are tracing paper or wax paper, which do not reduce
the intensity of the projection as much. Remember, the weaker the projection, the shorter the distance at which you will be able
to detect the target.
The following images were taken from a distance of 20+ ft, which is well beyond the depth range of the Kinect, but thanks to the
retro-reflective tape the targets are easily enhanced. Just be sure to blur the IR projector well enough, otherwise
you will see speckles that may cause detection failure when processing for a 'rectangle' shape later on.
The IR image does contain parts of the rest of the scene, so it is still important to process the results. For example,
we used the Color Balance module to generate the following image by maximizing
the Contrast. You can see that the other details of the room are picked up somewhat by the IR camera but are so dark
they are effectively skipped during processing.
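The effect of maximizing contrast can be approximated with a simple contrast stretch: remap intensities so the darkest pixel becomes 0 and the brightest becomes 255. This is a generic sketch of the idea, not the Color Balance module's exact algorithm, and the pixel values are hypothetical.

```python
def stretch(pixels):
    # Linearly remap intensities to the full 0..255 range.
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0] * len(pixels)
    return [(p - lo) * 255 // (hi - lo) for p in pixels]

# One row of a hypothetical IR image: dim room detail + bright tape.
ir_row = [30, 35, 40, 240, 250]
print(stretch(ir_row))  # -> [0, 5, 11, 243, 255]
```

The dim room details get pushed toward black while the bright targets stay near white, which is why they are effectively skipped by later processing.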
You can also purchase additional IR illuminators to further enhance the image. Team #2410 Metal Mustang Robotics
has reported very high quality results from using an IR LED and a lens to enhance the IR light received by the Kinect.
The brighter the illumination, the farther away you will be able to see the target, and the better the chance of removing
any other image artifacts that may confuse subsequent processing. Thanks Team 2410!!
- When using the Kinect module, be sure to untoggle the camera button in the main RoboRealm interface. Since you will be using the Kinect
there is no need to keep gathering data from your webcam, so be sure that camera button is up. Processing the webcam image in the
background without using it needlessly consumes USB bandwidth.
- Be sure to select JUST the IR radio button in the Kinect module. You do NOT want to also process the RGB image at the same time,
as this will throw off ALL your calculations and make processing 2x slower. Keep it to the IR image; if you want to use the
RGB image later, save it as a marker and use the Marker module to bring it back into view.
- Download RoboRealm
- Install the Kinect Drivers
- Run RoboRealm and add the Kinect module to the pipeline
- Once this is activated select the IR radio button and you should see the IR pattern the Kinect generates
- Use a semi-transparent piece of wax paper, packing foam, etc. and tape it over the lens that has a red spot in the middle. It's the one
isolated by itself (the other two being the IR camera and RGB camera).
- Once you have blurred things enough you should be able to see objects in IR without all the spots present. Move the Kinect to view the retro-reflective
tape and see what that looks like up close and from a distance.