Test Module
Thomas Wagner from Germany  [51 posts]
2 years
(Sorry, this belongs to test module, but I cannot select it, therefore the post will probably not show up there. No posts there so far)

I think that the test module is a great idea.

However, I have not really succeeded in making it work; maybe an example might help.

I have provided a script which counts the number of red blobs in two images I provide.

The variable to be inspected is blob_counts.

While image 1 is evaluated properly, image 2 is not (the script also counts the purple blobs, since my algorithm is still wrong).
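For readers without the attachment, the kind of mistake described above can be illustrated with a small standalone sketch (this is hypothetical Python, not the attached RoboRealm pipeline; all names are made up). A naive "red" test that only checks the red channel also matches purple, which has a high red component; requiring the other channels to be low excludes it:

```python
# Hypothetical illustration (not the attached pipeline): counting "red"
# blobs in a tiny RGB image represented as a 2D list of (r, g, b) tuples.

def is_red_naive(px):
    r, g, b = px
    return r > 200                          # purple (e.g. (220, 0, 220)) passes too

def is_red_strict(px):
    r, g, b = px
    return r > 200 and g < 80 and b < 80    # purple is rejected

def count_blobs(img, is_match):
    """Count 4-connected components of matching pixels via flood fill."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if seen[y][x] or not is_match(img[y][x]):
                continue
            blobs += 1                      # new blob found; flood-fill it
            stack = [(y, x)]
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and is_match(img[ny][nx]):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
    return blobs

RED, PURPLE, BLACK = (230, 10, 10), (220, 0, 220), (0, 0, 0)
img = [
    [RED,   BLACK, PURPLE],
    [RED,   BLACK, PURPLE],
    [BLACK, BLACK, BLACK],
    [RED,   BLACK, BLACK],
]
print(count_blobs(img, is_red_naive))    # 3: two red blobs plus the purple one
print(count_blobs(img, is_red_strict))   # 2: the purple blob is no longer counted
```

This is exactly the situation the Test module is meant to catch: the same pipeline gives a correct count on one image and a wrong one on another.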

1.) When initially making the pass/fail decisions manually, do I have to take these results from a scrap of paper, or can I directly check images or variables within the test module?

2.) How do I hand the "correct" values to the module?

Maybe I have misunderstood parts of the concept.

Best regards,


2015-08-28 Testmodul.zip
Steven Gentner from United States  [1365 posts] 2 years

Thanks for mentioning that. We've added a 'test button' selection in the list to allow for comments to show up correctly.

I think you have the basic premise of the Test dialog. A couple clarifications that should answer your questions:

1. The dialog allows you to select the variables that are to be checked. I.e., in your case, add the blob_count variable to Check Variables. That means this variable's value is relevant to the pass/fail of a particular test.

2. When you click on a particular image (once you have selected the required Folder), that image will be run through the current pipeline and presumably update the blob_count variable (this naturally depends on your pipeline creating/updating it). Based on the results of that processing (which you can see visually in the main RR GUI), you press Pass (the processing was good), Fail (the processing was bad), or Skip if you are not sure / will deal with it later.

What happens on a PASS is that the variable's value (blob_count) is saved. When the test is rerun, the current variable value is compared to the saved one; if they differ, the image is marked as a fail. For an image marked FAIL, once the value changes from the failed one you will have to press PASS again so that the good value is correctly remembered.
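The record/compare cycle described above can be sketched as a tiny harness. This is a conceptual model only, not RoboRealm's actual implementation; every name here is made up:

```python
# Conceptual sketch of the Test module's PASS/record and rerun/compare
# behaviour -- NOT RoboRealm's code; all names are hypothetical.

saved_values = {}   # image name -> variable value recorded on PASS

def mark_pass(image, current_value):
    """Pressing PASS records the current variable value as the good one."""
    saved_values[image] = current_value

def retest(image, current_value):
    """On a rerun, compare the current value to the recorded good value."""
    if image not in saved_values:
        return "SKIP"          # never passed, nothing to compare against
    return "PASS" if current_value == saved_values[image] else "FAIL"

# First round: the pipeline gives good values and the user presses PASS.
mark_pass("image1.jpg", 5)     # 5 red blobs, visually confirmed correct
mark_pass("image2.jpg", 3)

# After editing the pipeline, a rerun compares against the saved values.
print(retest("image1.jpg", 5))   # PASS: value unchanged
print(retest("image2.jpg", 4))   # FAIL: value drifted; fix, then PASS again
```

The key point the sketch makes is that the "expected" value is never typed in; it only ever enters the store through a PASS.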

Currently there is NO way to add in a good value for the variable using the Test interface. The idea is that you continue to work on the pipeline to get a good value for each image at least once so that this can be 'recorded' using the PASS button.

In theory, one could use the Set_Variable module to force a bad variable to a correct value so that it gets saved by a test ... but that works around what the module was designed for.

During initial testing the interface may seem a bit confusing. As more and more images produce correct values, its value becomes apparent: small changes can be quickly retested against all previous images to ensure they still work. It's then a task of getting the failed images back to their correct values.

Hopefully this clarifies the usage. If you have some ideas on what might be a good addition (like the ability to specify a good value per variable) let me know.


This forum thread has been closed due to inactivity (more than 4 months) or number of replies (more than 50 messages). Please start a New Post and enter a new forum thread with the appropriate title.
