2017 FIRST® Robotics Competition
RoboRealm is a software application that lets you rapidly process images from the Kinect and/or Axis IP camera in order to visually detect field elements. You can use RoboRealm to track positional markers, follow lines, track balls, and augment camera images with additional visual graphics. RoboRealm is an ideal vision platform for the Hybrid Autonomous period and the human-controlled parts of the competition (i.e. Augmented Driving).
Although the 2017 season is over, you can still download a 15-day trial copy to experiment with before next season.
The following tutorials outline several possible ways to perform automated vision-based target tracking and targeting for the 2017 FIRST® STEAMWORKS Robotics Competition. The actual method you use may be a variation of the following techniques, but these examples should form a base from which to work. Keep in mind that the solutions presented here are not the only possible solutions, so if your final method differs from what is presented, that does not necessarily mean it is wrong.
These tutorials assume you are using some sort of visual hardware device on your FIRST robot and that you are interested in using an autonomous guidance mechanism to keep track of the vision targets. The targets are made of 4" retro-reflective tape, which reflects light directly back toward its source. To see this characteristic of retro-reflective tape, hold a flashlight right next to your eyes and shine it toward the tape: you will notice that the tape gives its best response only when your eye and the flashlight are in line.
Using the Axis IP Camera If you have decided to use the Axis camera on your robot for target tracking, this tutorial covers how to capture images from the Axis camera using RoboRealm.
Using a webcam If you have decided to use a webcam on your robot for target tracking, this tutorial covers image capture using RoboRealm and the control of digital zooming. This assumes you are running a netbook/laptop onboard your robot.
Using the Kinect If you have decided to use the Kinect on your robot for target tracking, this tutorial covers the setup of the Kinect drivers and initial image capture using RoboRealm. This assumes you are running a netbook/laptop onboard your robot.
Visual Targeting #1 In order to move your robot into position to fire at the upper visual target you will need to first find the target in an image. This tutorial shows one way of detecting the target within an image. It uses the premise that the target is much brighter than the surrounding image and that the target's ratio between the upper and lower strip is unique in the image.
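RoboRealm does this with its own GUI modules, but the premise can be sketched in plain NumPy: threshold the image so only very bright pixels survive, group the bright rows into strips, and check that the height ratio of the two strips matches the known target geometry. The threshold value and the synthetic test frame below are illustrative assumptions, not values from the actual tutorial.

```python
import numpy as np

def find_strips(gray, threshold=200):
    """Return (start_row, end_row) spans of contiguous rows containing
    at least one pixel at or above `threshold`.  A brightly lit
    retro-reflective target shows up as separate bright row bands."""
    bright = (gray >= threshold).any(axis=1)
    spans, start = [], None
    for row, on in enumerate(bright):
        if on and start is None:
            start = row
        elif not on and start is not None:
            spans.append((start, row))
            start = None
    if start is not None:
        spans.append((start, len(bright)))
    return spans

def strip_height_ratio(spans):
    """Height of the upper strip divided by the lower strip; accept the
    detection only when this matches the known target geometry."""
    if len(spans) != 2:
        return None
    (a0, a1), (b0, b1) = spans
    return (a1 - a0) / (b1 - b0)

# Synthetic frame: upper strip 4 px tall, lower strip 2 px tall.
img = np.zeros((20, 30), dtype=np.uint8)
img[3:7, 5:25] = 255    # upper strip
img[10:12, 5:25] = 255  # lower strip
spans = find_strips(img)
ratio = strip_height_ratio(spans)  # 4 / 2 = 2.0
```

A real pipeline would use full connected-component analysis rather than row grouping, but the ratio test is the same idea.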
Visual Targeting #2 This tutorial shows another way of detecting the target within an image. It uses the premise that the target is greener than the surrounding image and that the target shapes can be filtered using their size and proximity to each other.
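The "greener than the surroundings" premise can likewise be sketched outside RoboRealm: keep a pixel when its green channel dominates red and blue by some margin, then reject detections that are too small. The margin and pixel counts below are hypothetical, not the tutorial's tuned values.

```python
import numpy as np

def green_mask(rgb, margin=40):
    """True where the green channel exceeds both red and blue by
    `margin` -- a crude test for green-lit retro-reflective tape."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    return (g - r >= margin) & (g - b >= margin)

def big_enough(mask, min_pixels=50):
    """Crude size filter: reject detections with too few pixels."""
    return int(mask.sum()) >= min_pixels

# Synthetic frame: a 10x20 green-lit patch on a gray background.
frame = np.full((40, 40, 3), 100, dtype=np.uint8)
frame[10:20, 10:30, 1] = 220  # boost green inside the target region
m = green_mask(frame)
ok = big_enough(m)  # 200 green pixels -> passes the size filter
```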
Image Distortion Looks can be deceiving! This tutorial focuses on issues related to image distortion: lenses warp and change the image in ways that make it less precise than we'd like. Several teams have realized that measurements taken from images can differ considerably depending on which camera they are using and what focal length they believe it to have. We try to address some of the issues that are very common when working with image measurements from imperfect sources (i.e. webcams, Kinect, Axis cameras).
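To see how a focal-length error propagates into a measurement error, consider the pinhole-camera relation distance = real_height × focal_length / pixel_height. A short sketch with illustrative numbers (the focal lengths below are assumptions, not a real calibration):

```python
def distance_to_target(real_height_in, pixel_height, focal_length_px):
    """Pinhole-camera estimate: distance = H * f / h.  A wrong focal
    length feeds straight through into a wrong distance."""
    return real_height_in * focal_length_px / pixel_height

# A 4-inch-tall strip imaged 20 px tall, with an assumed 600 px focal length:
d = distance_to_target(4.0, 20, 600)        # 120.0 inches
# The same measurement, if the true focal length is actually 500 px:
d_wrong = distance_to_target(4.0, 20, 500)  # 100.0 inches, roughly 17% off
```

This is why calibrating against a target at a known distance, rather than trusting a datasheet focal length, is worth the effort.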
RoboRealm Computing RoboRealm runs on a Windows PC. In order to utilize the vision processing that RoboRealm provides you will need to use a Windows based machine. This can either be a local extra onboard computing device like a tablet or a remote device like your dashboard PC. Either way, you'll have to decide where to place the computing power depending on what you are looking to accomplish.
Communication Once you've determined the distance and location of the desired target, you'll need to communicate that to the roboRIO. Using the Network Tables protocol, RoboRealm can share information with the roboRIO or any other device on the network, just as the SmartDashboard does. Using the Network Tables module, you can send the resulting data to the roboRIO in order to move your targeting system appropriately.
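As a hypothetical sketch of that hand-off: compute a yaw angle from the target's horizontal position, then publish it to a Network Tables table for the robot code to read. The table name, keys, image width, and field of view below are all assumptions, and the third-party pynetworktables package is only needed on a real robot network, so the import is optional here.

```python
try:
    from networktables import NetworkTables  # pip install pynetworktables
    HAVE_NT = True
except ImportError:
    HAVE_NT = False

IMAGE_WIDTH = 320          # assumed camera resolution
HORIZONTAL_FOV_DEG = 47.0  # assumed horizontal field of view

def yaw_to_target(target_center_x):
    """Degrees the robot must turn so the target sits at image center
    (a linear small-angle approximation across the field of view)."""
    offset_px = target_center_x - IMAGE_WIDTH / 2
    return offset_px * HORIZONTAL_FOV_DEG / IMAGE_WIDTH

def publish(yaw_deg, distance_in):
    """Push results to a hypothetical 'Vision' table.  On a real robot,
    NetworkTables.initialize(server=...) must be called first."""
    if not HAVE_NT:
        return
    table = NetworkTables.getTable("Vision")
    table.putNumber("yaw", yaw_deg)
    table.putNumber("distance", distance_in)

yaw = yaw_to_target(240)  # target 80 px right of center -> 11.75 deg
publish(yaw, 120.0)
```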
For those looking for a ready-to-go solution, we recommend having a look at RoboSight from Kerkits. RoboSight is an IR-based Raspberry Pi solution that captures images in IR space, performs all target recognition onboard the Pi, and sends the results directly to Network Tables for you to use on the roboRIO. While RoboRealm requires you to either host a PC onboard your robot or stream a very small image back to your dashboard, RoboSight packages similar techniques onboard the robot in a plug-and-play solution. If you are in a rush to get vision targeting for FIRST STEAMWORKS done, you'll definitely want to review that solution.
For more information
FRC 2016 RoboRealm Tutorials
FRC 2015 RoboRealm Tutorials
FRC 2014 RoboRealm Tutorials
FRC 2013 RoboRealm Tutorials
FRC 2012 RoboRealm Tutorials
Robot programming with WPILib
FIRST Robotics Resource Center
TeamForge FIRST WPILib