
Using RoboRealm

Computing - RoboRealm runs on a Windows PC, so to use the vision processing it provides you will need a Windows-based machine. This can be either an extra onboard computing device such as a tablet, or a remote device such as your dashboard PC. Either way, you'll have to decide where to place the computing power depending on what you are looking to accomplish.

While simple color tracking does not require a powerhouse CPU, it's best to use as fast a CPU as possible. The Classmate PC once provided in the Kit of Parts will not be sufficient. An i7 is preferable, but an i3 or i5 (i.e. any multi-core CPU) should be sufficient. Keep in mind that vision processing is very CPU bound; a machine with limited graphics, screen size, disk size, etc. is fine for vision processing and can help to keep the cost under the FRC purchase limit. Refurbished laptops should be fine. For example, this i5 laptop from Micro Center at $290 should work fine and is well below the $400 purchase limit. We also like using lower-power tablets, which are a very cost-effective way to do basic vision processing. For example, this $60 WinBook can run RoboRealm for basic color processing tasks and is very inexpensive. For more embedded/smaller platforms please see RoboRealm's List of Embedded Computers.

The previous links are provided for convenience only. Purchase at your own risk!

Onboard versus Remote - While it is possible to use RoboRealm on a remote machine via streaming images, it is not recommended to do so. The reasons are as follows:

  • Bandwidth - Each team's bandwidth is throttled to less than what a 30fps 640x480 stream requires. Even with a 320x240 image, FIRST recommends a compression ratio that, while acceptable for human viewing, loses a lot of precision for image processing (see the rough estimate after this list).
  • Reality Lag - It takes time to compress, send and decompress a JPG image. By the time the remote laptop gets the image (even at 30fps), the image can be up to 1 second old. While this is fine for slow-moving and stationary analysis, if your robot is moving based on a processed image, it will be reacting to data up to 1 second in the past. This can cause oscillations or waggling of your robot.
  • Image Use - Often, once you set up your camera for image processing, its settings are no longer appropriate for human use, i.e. the brightness might be fine for visual targeting but too dark to see your opponents' actions. If you plan to use your camera to aid your driver, it is best left as a regular RGB image. (The recommended settings of the Axis camera for target detection are much too dark to see anything but the targets.)
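
To put rough numbers on the bandwidth point, the short Python sketch below estimates the network load of an MJPEG camera stream. The 7 Mbit/s per-team cap and the 20:1 JPEG compression ratio used here are illustrative assumptions only, not official figures; check the current FRC rules for the real limit.

    # Back-of-the-envelope bandwidth estimate for a streamed camera image.
    # The 7 Mbit/s cap and the 20:1 JPEG ratio are assumptions for illustration.
    def stream_bandwidth_mbps(width, height, fps, bytes_per_pixel=3, jpeg_ratio=20):
        """Approximate load of an MJPEG stream in megabits per second."""
        raw_bytes_per_frame = width * height * bytes_per_pixel
        compressed_bytes = raw_bytes_per_frame / jpeg_ratio
        return compressed_bytes * fps * 8 / 1_000_000

    FIELD_CAP_MBPS = 7  # assumed per-team cap, for illustration only

    for w, h in [(640, 480), (320, 240)]:
        need = stream_bandwidth_mbps(w, h, fps=30)
        status = "over" if need > FIELD_CAP_MBPS else "under"
        print(f"{w}x{h} @ 30fps needs ~{need:.1f} Mbit/s ({status} the assumed cap)")

At these assumed numbers, 640x480 at 30fps does not fit, and 320x240 fits only because of the heavy compression noted above.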

System Configurations - So what's the best configuration to use for image processing?

  1. Best - Onboard Computing, Kinect - The best configuration is to use an onboard computing solution plugged into the Kinect, with a blurring mask covering the Kinect projector, in its IR mode. This solution provides the cleanest image, free of most background objects, which can be processed the fastest; even an i3 will have no trouble keeping up with the needed processing. No bandwidth issues are present and no calibration of the camera is needed (the Kinect is precalibrated), which allows for maximum processing speed and the quickest results. Since the Kinect acts like a webcam, there is minimal lag from reality and the image information is not compressed or changed in any way. For those looking to further increase the range, IR LEDs such as the 315mW 850nm Infrared LED from SuperBrightLeds can help with the IR illumination seen by the Kinect's IR camera.

    The disadvantage is that you need an onboard laptop/netbook/tablet with appropriate padding to help protect it from any sudden crashes (although laptops are quite durable these days). It also requires mounting the Kinect, which is much larger than a typical webcam or the Axis camera, in an appropriate spot. The Kinect also requires a 12V power supply, which means cutting its power cable in order to attach it to your onboard battery.

  2. Good - Onboard Computing, Webcam, LED lighting - If you don't have a Kinect handy, the 3-ring LED lighting with any regular webcam that can be set to a fast shutter speed or low exposure will offer similar advantages. With enough lighting, an image very similar to the Kinect's can be produced that offers the same speed advantages as using the Kinect (see the thresholding sketch after this list). With the webcam directly connected to a PC you get the same minimal lag in communication and the quickest response time with no bandwidth concerns.

    An advantage of this solution is that a webcam is less expensive than the Kinect and typically much smaller. You still need to wire a power supply and purchase the additional LEDs, so in total the price will be similar to, but slightly under, the cost of a Kinect (~$120).

    A disadvantage is that you may need a calibration module such as the Radial Distortion module to straighten the target lines prior to processing with some (but not all) modules.

  3. Ok - Onboard Computing, Axis, LED lighting - The main disadvantage of this solution is that the time taken to compress and decompress the JPG image will cause a slight delay from reality. Since the Axis is local and connected directly to the PC, you can set the lowest compression (best quality image) so that JPG artifacts and color reduction problems are reduced. You will still have a slight reality delay, but since the WiFi network is not part of this solution it will be minimal.

    The disadvantage is that if you use the Axis for target detection, the image will not be very usable for the driver station (i.e. it will be too dark).

  4. Poor - Remote Computing, Axis, LED lighting - Sending images (even at 320x240) back to the driver station for processing is network dependent. While your tests on your isolated network may work just fine, keep in mind that during the actual competition you are not the only robot on the field streaming back video. This increased reliance on the network can cause hiccups or delays in images that may cause the robot to behave in unpredictable ways at critical points.

    Let's suppose that the network at the competition is 10x faster than what everyone needs. With this you could be streaming at 30fps (frames per second), which is great. The problem is that each frame will be delayed by as much as a second from reality. This is caused by the time it takes to capture an image, compress it to some image format, send it over the network and finally view it on your laptop. For slow-moving robots this should not be a huge concern, but if you plan to perform tracking while moving or want the best reaction time, processing on the driver station will create an unacceptable lag. This lag will typically manifest itself as an oscillation of the robot or targeting device.

    For example, thinking through a specific moment while tracking a target gives a better understanding of why this is an issue. Say your camera has just taken a picture. The picture at that moment represents reality and would indicate that the robot needs to move more to the left. Compressing it takes about 100ms (milliseconds). Transmitting it to your driver station takes about 300ms. Processing once received takes about 200ms (that would equate to 5fps), and sending back the commands based on what you processed takes about 100ms (the commands are just variables and much smaller than the image). Now your robot reacts to those commands ... but that means, in this example, that the robot is reacting to 700ms-old data. If the target has moved during that time and the robot should now instead move to the right, the robot will be moving in the wrong direction. Since the system will eventually catch up to the correct movement, it will tend to oscillate or zigzag as it moves (the lag simulation sketch after this list illustrates this effect). Reducing this lag from reality will create a much smoother movement and also allow you to move more quickly (alternatively, you can slow your robot down to reduce the zigzagging).

    Keep in mind that the times above are only for example purposes and do NOT reflect the actual times you may experience. The best way to experience this lag is to set up the camera on the robot streaming back to the driver station and, while looking at the driver station, open and close your fist in front of the camera to see how long the image takes to show the correct open or closed fist. This test allows you to 'feel' when your hand is open or closed and watch how long the video takes to update. See if you can create a frequency of opening and closing your fist that is completely out of sync with what you are seeing from the camera ... and yet you may still be getting 30fps.

  5. Forget it! - Classmate, Axis, no LED - Not even worth trying ... the Classmate is too slow, the images will be lagged from reality, and the target will not be reliably detected due to the lack of significant contrast.
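
Whether the clean, high-contrast image comes from the Kinect's IR camera or from an LED ring with a low-exposure webcam, the processing it enables is simple: keep the bright pixels and measure the largest blob. In RoboRealm this would normally be done with pipeline modules (e.g. a threshold or color filter followed by a blob filter); purely as an illustration of how little work is involved, here is a rough Python/OpenCV sketch of the same idea. The camera index, exposure setting, threshold value and minimum blob area are assumptions you would need to tune for your own setup.

    # Rough sketch of the "bright target on a dark background" approach enabled
    # by a low-exposure webcam with LED ring lighting (or the Kinect IR image).
    # Camera index, exposure, threshold and minimum blob area are assumptions.
    import cv2

    cap = cv2.VideoCapture(0)                    # onboard USB webcam
    cap.set(cv2.CAP_PROP_EXPOSURE, -8)           # low exposure; value is driver-dependent

    for _ in range(300):                         # process a few hundred frames
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)   # keep only bright pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > 100]    # drop specks
        if blobs:
            x, y, w, h = cv2.boundingRect(max(blobs, key=cv2.contourArea))
            print("target center x:", x + w / 2)  # horizontal offset drives steering

    cap.release()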
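
The oscillation described in configuration 4 can also be seen in a toy simulation. The sketch below drives a proportional steering loop with measurements that arrive roughly 700ms late, matching the example above; the gain, loop rate and delay are illustrative assumptions, not measured values.

    # Toy simulation of the lag problem: the robot steers toward a target
    # (error 0) using camera data that is ~700ms old. Gain, loop rate and the
    # 700ms delay are illustrative assumptions only.
    def run(delay_s, gain=2.0, dt=0.05, steps=60, start_error=1.0):
        """Proportional steering driven by measurements that are delay_s seconds old."""
        delay_steps = int(delay_s / dt)
        error = start_error
        pending = [start_error] * delay_steps      # measurements still "in flight"
        trace = []
        for _ in range(steps):
            measured = pending[0] if delay_steps else error   # robot only sees old data
            if delay_steps:
                pending = pending[1:] + [error]               # queue up the current frame
            error -= gain * measured * dt                     # steer on the stale reading
            trace.append(error)
        return trace

    onboard = run(delay_s=0.0)   # processed on the robot: negligible lag
    remote = run(delay_s=0.7)    # processed at the driver station: ~700ms of lag

    print("onboard: final error %+.3f (settles smoothly)" % onboard[-1])
    print("remote : error swings between %+.2f and %+.2f (overshoots and zigzags)"
          % (min(remote), max(remote)))

With no delay the error shrinks steadily toward zero; with the 700ms delay it overshoots past the target and swings back and forth, which is exactly the zigzagging behavior described above.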




