Mindstorms +RR
17 year
I've read the tutorial twice but still don't understand how it works with Lego Mindstorms. 1. Can you provide any more information? 2. Will RR work with BricxCC + NQC?
Lego Mindstorms
17 year
Sure, RR works with Lego Mindstorms by sending commands through the IR tower. The RCX brick picks up these commands and changes motor values or variable values depending on what signal is communicated from RR to the brick. RR will work with NQC, since NQC is for writing programs on the Lego brick, which may or may not check variables set by a communicating PC running RoboRealm. Thus it is possible to use RoboRealm in direct mode (a background task in the RCX brick will execute the commands) or to communicate with NQC programs running on the brick, using variables to exchange information.

For the most part you can think of the RCX brick as acting as an RC-type receiver, i.e. the RCX brick simply receives and executes commands determined by RR on the PC it is running on. In this way the RCX is nothing more than an RC-type car. If you decide to use onboard programs written in NQC, then you can have more advanced communication that could, for example, decide not to execute a suggested command from RoboRealm. It really depends on what you are trying to accomplish.

STeven.
17 year
Well, I was thinking of the Vision Command thing where it followed you if it detected an object in a certain area, or a multi-color brick sorter, but I'm still confused.
Vision Command
17 year
Ok, let's then structure things from the Vision Command point of view. We're not 100% familiar with the Vision Command system, but I assume its structure is as follows: a camera (Logitech, if I'm not mistaken) is used to grab an image; the image is then processed for some rudimentary features (movement, color, etc.), which are then communicated to the RCX brick (not unlike a CMUcam-type setup) to perform the appropriate reactions based on what the camera 'sees'. Thus the closed loop is: visual sensing by the camera -> visual processing -> RCX brick -> motor movements, and then back to the camera, etc. This closed loop allows the RCX brick to react to visual stimuli.

We can build a similar and more powerful system by using a wireless camera and receiver (http://www.geeks.com/details.asp?invtid=203C-50MW-N ~$40.00), a digitizer (http://www.hauppauge.com/pages/products/data_usblive.html ~$50.00) and your PC (a faster, more powerful CPU is better). Using these 3 hardware components you can attach a small wireless camera to your Lego robot. The feedback loop would then be: camera -> wireless transmission -> wireless receiver -> digitizer -> USB 2.0 -> PC -> RoboRealm -> IR tower (or Bluetooth with the NXT) -> RCX brick -> motor movements, and then back to vision. This loop is somewhat longer than just using Vision Command, BUT it does have the PC in the middle of the loop, which is capable of a lot more processing than the onboard capabilities of Vision Command or the RCX brick (or the CMUcam, for that matter).

However, the loop is more complex and can be more expensive (I think Vision Command sells for under $30.00 on eBay?). The main difference is that the RCX is no longer used as the processing 'brain' of the system, at least from the vision point of view, and just becomes a slave to the PC.

The flexibility comes in using RoboRealm for vision processing. You can configure RoboRealm to perform the same commands as Vision Command. If you'd be willing to provide exactly what those capabilities are we can show you how to use them in the proposed setup.

Note that the tutorial uses a stationary webcam. In this setup the camera is not wireless but instead just a standard webcam. This removes the need for the wireless camera and the digitizer (assuming you get a USB based webcam) but has the limitation in that it does not move with the robot.

Does this help?

Anonymous 17 year
Yes it helps
(this should be in the manual)
So how do I make the brick perform an action, e.g. a hand on the right of the camera operates output A? E.g. a trigger in
task main()
Tracking Skin
17 year
You would first add into the RoboRealm processing pipeline the appropriate filters to determine the action that you want to trigger from. In your case with a hand, perhaps the Colors->Skin Color module followed by the Analysis->Center of Gravity (COG) module would create the COG_BOX_SIZE variable, which, if large enough (say > 30), would indicate the presence of something skin-like. With this information you would add an Extensions->VBScript module and run the following program to track the skin-like object:

if GetVariable("COG_BOX_SIZE") > 30 then

  ' get the center of the screen
  xcenter = GetVariable("IMAGE_WIDTH")/2

  ' if the blob is off to the right then increase the right motor
  if GetVariable("COG_X") > xcenter then
    rightMove = 230
    leftMove = 128
  else
    ' otherwise up the left motor in a similar manner
    rightMove = 128
    leftMove = 230
  end if

  ' set the variables so that the motor control module can use them
  SetVariable "LEFT_MOTOR", leftMove
  SetVariable "RIGHT_MOTOR", rightMove

end if

Your processing pipeline would look something like:

Skin Color
Center Of Gravity
VBScript
Lego

with the Lego module configured to use the LEFT_MOTOR and RIGHT_MOTOR variables.

Note that this will produce a somewhat crude jerky movement. You can refine the motor equation by increasing the motor movement by the amount that the blob is off center, i.e. move more when the object is further to the right or left than when it is on center.
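To make that proportional refinement concrete, here is a sketch of the idea in Python for illustration only; inside RoboRealm you would put the equivalent arithmetic in the VBScript module (the function name, the 320-pixel width, and the max_turn value are just assumptions for the example):

```python
def motor_speeds(cog_x, image_width, neutral=128, max_turn=102):
    """Scale the turn by how far the blob is from center.

    Returns (left, right) motor values, following the same convention
    as the script above: blob to the right -> right motor faster.
    max_turn=102 keeps 128 +/- turn inside the 0..255 motor range.
    """
    xcenter = image_width / 2
    offset = cog_x - xcenter                  # -xcenter .. +xcenter
    turn = int(max_turn * offset / xcenter)   # proportional correction
    return neutral - turn, neutral + turn

# blob dead center: both motors at neutral
print(motor_speeds(160, 320))   # -> (128, 128)
# blob at the far right: right motor gets the full correction
print(motor_speeds(320, 320))   # -> (26, 230)
```

This replaces the fixed 230/128 values with a correction that grows smoothly as the blob drifts off center, which is what smooths out the jerky movement.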

Does this make sense?

If you still have problems, can you attach a photo? It helps us to recommend appropriate filters.

17 year
Wow, that really is easy. Thanks.
