XML/Sockets API - Questions for a node.js / Javascript interface implementation
Carl W Ott from United States  [11 posts]
4 months
Some new questions surfaced as I'm working to update the node.js interface I wrote which allows my code to interact with RoboRealm via the XML/Sockets API.

This prior answer shows that RoboRealm API reads and writes only happen in-between pipeline cycles:

Q1. does the API have a buffer, and capture all <request> items on the socket even if they come in during a pipeline cycle?  Or is there a risk it can drop some?

Q2. does the API have a queue or FIFO, such that it can accept new <request> before completing and responding to older <request>?

Q2a. or can the API only accept one <request> at a time, such that I should ensure I've received a <response> before issuing another <request>?  Is this what was meant by the code snippet from that prior thread?
e.g.  Are these API interactions strictly issued and responded to in the sequence listed, one after the other?
   rr.SetVariable "XX", YY
   rr.WaitVariableChange("ZZ", 1000)

(sorry in advance, asynchronous javascript has been messing with the gray matter, where such lines could easily be acted on in a sequence other than the sequence they're written...)

Q2b. and if the RoboRealm API can accept multiple requests to a queue, does the API always provide <response> in the same FIFO order as the <request> queue?

Lastly, just to confirm what may be obvious: the thread from a year ago hints that multiple API reads and writes can occur if there is enough time between pipeline cycles, e.g. if the Max Processing FPS is low enough.  In that case one may be able to fetch several individual variables from the API, and they are guaranteed to come from the same pipeline cycle snapshot.  But it also seems that if there is not enough time between processing pipeline cycles, or if a <request> happens to arrive just before a new pipeline cycle starts, successive calls to the API may inadvertently fetch variables from different snapshots...

Q3. hence, where two or more variables need to come from the same processing pipeline cycle snapshot, is it considered best practice to fetch all variables (or several variables) in one request, and then parse out the subset one needs?
Carl W Ott from United States  [11 posts] 4 months
FWIW - some empirical data for Q2a and Q2b...

I've got a real world example where 3 <request> were sent over the XML/Sockets API to RoboRealm, before there was enough time for a <response> to come in for the first two.  It appears that RoboRealm accepted all three <request>, and gave <response> in FIFO order - or at least in a correct looking order with the middle <response> in the middle.

In this example, my node.js code issued

and then, about 43 ms after the first <set_variable> was written to RoboRealm, my node.js code received this entire string all at once:


I'm still hoping for official answers to those questions.  But this at least suggests keeping the interface simple by waiting for an expected <response> before issuing the next <request> to RoboRealm...
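For anyone else doing this in node.js, here is a minimal sketch of that "wait before issuing the next <request>" approach: a promise chain that guarantees each request is only written after the previous one's <response> has resolved. The RequestSerializer name and the sendRaw callback (an async function that writes one request string and resolves with its response) are illustrative, not part of any library.

```javascript
// Serialize RoboRealm API requests: each send() waits for the previous
// request's response before the next request goes out on the socket.
class RequestSerializer {
  constructor(sendRaw) {
    this.sendRaw = sendRaw;        // async fn: request string -> response string
    this.chain = Promise.resolve();
  }
  send(requestXml) {
    // Append to the promise chain so requests go out strictly one at a time.
    const result = this.chain.then(() => this.sendRaw(requestXml));
    // Keep the chain alive even if one request fails.
    this.chain = result.catch(() => {});
    return result;
  }
}
```

Calling `send()` twice back-to-back queues the second request; it is only written after the first response arrives, which matches the behavior observed above.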
Steven Gentner from United States  [1365 posts] 3 months

Q1. There is an internal structure that is populated based on requests via the API. It isn't a buffer per se, since if you send a set_variable it will set the value in a temporary table separate from the one currently being executed on. When the pipeline cycles, this entire table is merged into the current execution flow. So if you send 20 set_variable requests, all twenty will be merged at the same time (depending on the speed of the set_variable requests). So it is a type of buffer, but more of a structure that is very close to what the pipeline uses so that the merge is very fast.

Q2. There is an implied FIFO queue due to network communication. The requests are processed as they appear over a network connection. If you have more than one connection, the order with respect to the two connections will be random. Within one connection the order is FIFO.

Depending on how JS does its network transmission, it is VERY possible that one thread sends data and enters a wait state while a second thread reads its response! If the same network connection is used on two threads this will happen quite frequently. Normally in other languages one uses a lock or mutex to avoid this from happening ... or uses more than one network connection.
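For the node.js case discussed in this thread, a minimal async-mutex sketch might look like this. The Mutex name and shape are illustrative; JS is single-threaded, but the same interleaving hazard exists between concurrent async tasks sharing one socket, so the lock keeps one request/response exchange from overlapping another.

```javascript
// lock() resolves with a release() function; only one holder proceeds
// at a time, the rest queue up in FIFO order behind it.
class Mutex {
  constructor() { this._last = Promise.resolve(); }
  lock() {
    let release;
    const willRelease = new Promise(r => { release = r; });
    const willAcquire = this._last.then(() => release);
    this._last = willAcquire.then(() => willRelease);
    return willAcquire;
  }
}
```

Typical use around a shared socket exchange:

```javascript
const release = await mutex.lock();
try {
  // write <request>, await matching <response>
} finally {
  release();
}
```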

Q2a. Sort of already answered, but just to clarify another point. When you use a get_variable, the API will respond with the most current value as in the pipeline. This is expected. If you then issue a set_variable, that waits until the current pipeline iteration is done. Again this is expected. BUT if you now issue a get_variable BEFORE a new pipeline iteration happens, you WILL get back the value you just set even though it's NOT actually in the pipeline yet. The API knows that you intended to set the value on the next pipeline iteration, so it will update its own table that is then queried by the second get_variable. This is to ensure that when you issue a set and then a get, you WILL get the value you just set regardless of whether the pipeline iteration has run or not.

This was done since it was very confusing to issue a set followed by a get and NOT get the value you just set until a couple milliseconds later once the pipeline actually runs.
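A toy model of the read-through behavior described above may make it concrete. This is NOT RoboRealm code, just an illustration of the semantics: set_variable writes to a pending table that is merged in between iterations, and get_variable prefers the pending table, so a get right after a set sees the new value even before the merge.

```javascript
// Toy model of the described API variable tables (illustrative only).
class VariableTableModel {
  constructor() {
    this.live = {};     // values as the pipeline currently sees them
    this.pending = {};  // values set via the API, not yet merged
  }
  setVariable(name, value) { this.pending[name] = value; }
  getVariable(name) {
    // Prefer pending, so a set followed by a get returns the new value.
    return name in this.pending ? this.pending[name] : this.live[name];
  }
  pipelineIteration() {
    // Between cycles, the whole pending table is merged at once.
    Object.assign(this.live, this.pending);
    this.pending = {};
  }
}
```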

Q3. To guarantee that all variables are from the same snapshot you would have to issue a single get-all-variables request, since the variable table is only updated when requests are issued ... so when you fetch them individually it's very possible that you get the first from a previous snapshot and the second from a new snapshot that just ended.

But what is normally done in these cases is to avoid running a new pipeline iteration. I.e. one either pauses the pipeline and resumes it as needed, or stops the pipeline and runs it as needed. It depends on what you are doing. This is very common when accessing multiple arrays of information that should all be correlated to one image.
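Sketching that pause-then-read pattern in node.js: the client object and its pause/resume/getVariable methods are assumed wrappers around the corresponding API requests, not an existing library.

```javascript
// Fetch several variables from one consistent snapshot by pausing the
// pipeline first, then resuming it afterwards.
async function readSnapshot(client, names) {
  await client.pause();              // takes effect after current iteration
  try {
    const out = {};
    for (const name of names) out[name] = await client.getVariable(name);
    return out;                      // all values from one paused snapshot
  } finally {
    await client.resume();           // let the pipeline run again
  }
}
```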

You can also create your own buffered variables in RR. For example, you can use a single flag variable: when it is unset, an If Statement isolates a Set_Variable module which makes copies of multiple variables in one pipeline iteration and also sets the flag. As long as the flag is set, the copies will NOT be updated, so you can query them as needed. When done, your API call unsets the flag, which causes another copy to be made. Let me know if this is not clear.
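The API side of that flag scheme might look roughly like this in node.js. All names here (the client wrapper, the snapshot_ready flag, the *_copy variables) are illustrative assumptions, and the polling interval is arbitrary: the pipeline side copies the variables and sets the flag; the API side reads the copies and then clears the flag to request a fresh copy.

```javascript
// Read the buffered copies once the pipeline has raised the flag,
// then clear the flag so the pipeline makes a new copy next pass.
async function readBufferedSnapshot(client, copyNames) {
  while (!(await client.getVariable('snapshot_ready'))) {
    await new Promise(r => setTimeout(r, 10));   // poll until copy exists
  }
  const out = {};
  for (const name of copyNames) out[name] = await client.getVariable(name);
  await client.setVariable('snapshot_ready', 0); // request another copy
  return out;
}
```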

It is NOT recommended to switch on/off a camera as the reconnect can be very slow. Turning the pipeline on and off is very quick and immediate.

Finally, if you are really needing variables to be in sync you may want to think about writing a plugin instead of using the API, as a plugin is called within the pipeline iteration. It's effectively like doing a


via the API, but in the middle of the pipeline (i.e. a plugin is called based on its placement within the pipeline).

Hope this helps.

Carl W Ott from United States  [11 posts] 3 months

thank you very much for that excellent and detailed response.  That is extremely helpful.

FWIW, I've made modifications with promising results.  These ensure that a <request> is only sent to the RoboRealm API after the prior request has received its <response>.

With regard to Q2 - good question about node.js.  From what I've seen, it looks like one can control which threads and connections are used, so it should be possible to implement a mutex lock if needed. But at some point, of course, low-level socket details are handled by the OS kernel - a level I'm hoping to avoid. FWIW - there is a hint of the node.js-kernel interface in the docs for the library I'm using to connect to the RoboRealm API: https://nodejs.org/api/net.html#net_socket_write_data_encoding_callback

Thanks for the interesting thought on using a Plugin versus the API.  I'll keep that in mind, and perhaps look at that if the API path hits a wall.

Can you give a little background about the Watch_Variables module?

I've noticed that when my hard-coded delays are short enough to let my code execute too fast, an interesting thing happens... Interactions with RoboRealm behave as expected when the Watch_Variables module is open, but they do not behave as expected when Watch_Variables is closed.  At least part of this should be corrected once I replace the hard-coded delays with proper sequencing. But I am wondering what effect the Watch_Variables module might have aside from simply slowing down the processing pipeline FPS.

Can you describe briefly how the Watch_Variables module affects & interacts with the processing pipeline?
For example, does the processing pipeline do something fundamentally differently when a Watch_Variables dialog is open?
Are Watch_Variables values literally sent to the screen at precisely that point in the processing pipeline where it was opened from?

Also, with respect to Q3 above, if the API receives a pause command in the middle of a pipeline, does the pause command take immediate effect when it is received, or does it pause the processing pipeline only after the current iteration completes and before the next iteration starts?
Steven Gentner from United States  [1365 posts] 3 months

The Watch Variables module should be benign when it comes to any interaction with the underlying variables. It is simply a read only display.

The issue you experience is most likely due to a slowdown, since simply displaying the variables takes a bit of time due to the Windows GUI functions. See the gray numbers to the right of the pipeline when the dialog is visible versus not. As the system cannot display all changes as quickly as they happen, it may even skip values, since variable updates buffer up behind GUI commands and would not be displayed anyhow.

Variable values at the point of the Watch are used to update the display ... but again, if that's happening too quickly you may not even see a particular number as it gets overwritten by the next before you can see it.

Pause happens between pipeline iterations ... you cannot stop the pipeline in the middle of execution as that can cause an illegal state to persist. You can add in If statements to not execute certain parts of the pipeline but the pause command issued via the API ONLY pauses after the current iteration is complete.

Carl W Ott from United States  [11 posts] 3 months

thanks, that's perfect - should be just enough to clean up the interface on my side to match...
Carl W Ott from United States  [11 posts] 25 days

you mentioned that "the pause command issued via the API ONLY pauses after the current iteration is complete."

Does 'current iteration' refer to the iteration of the current custom function tab, or of the 'Main' tab?

For example, if RoboRealm is executing script in a function tab when the pause command comes in, should it complete that function tab, and then pause, or should it complete that function tab, unwind back to the 'Main' tab and complete that part of the pipeline iteration before pausing?

Also, let's say that RoboRealm starts executing an annotation module like 'Display_Arrow', and then the pause command comes in over API. Is it guaranteed that the annotation will be drawn before the pipeline is paused?
Steven Gentner from United States  [260 posts] 23 days
The pause will only take effect once the 'complete' pipeline has been executed. So if a pause is received anywhere in the middle (regardless of which tab/function/etc.) it will only pause once that entire pipeline is complete ... this is to avoid things like half-drawn arrows.

In other words, once the pipeline is complete the system will check if it needs to pause or not ... if not another iteration will run, if so, then it will wait until unpaused.

The entire pipeline is treated as Atomic (in terms of race conditions).

