Explore the tutorial rules using the Fault, Eliminators and Tests tabs. If there is anything you don't understand refer back to the Concepts or References pages.

We renamed the root fault group to bed_mice and renamed its contained fault groups to cat and infestation. Why? Because if we wonder why mice are appearing in beds there are two main possibilities. One - the end user has a cat and it's bringing in unwanted gifts. Two - mice are finding their own way in and the user has a rodent infestation problem. So we added corresponding fault groups. It's easy to try things out by adding and removing faults and fault groups (using the + and - buttons above the JSON editor) and moving them around (by dragging and dropping in the hierarchy editor, CTRL-drag to reorder children). The blank rules added using the + buttons don't do anything until you edit them, so this is a great way to organise your troubleshooting thoughts!
We also added a bed_mice_user test, which prompts the user for their name. Remember (from Concepts) that inference flows through a deep logic network from left-to-right and top-to-bottom, so the bed_mice_user test is invoked from the bed_mice_eliminator, because the test is listed in the eliminator's conditions clause. Inference starts with the root fault group, which evaluates its eliminator - causing the bed_mice_user test to be evaluated. The eliminator returns UNKNOWN by default (it has no results clause) so inference proceeds to the contained cat and infestation fault groups. They have no eliminators or tests so they also evaluate as UNKNOWN and return to the root fault group, which displays its UNKNOWN resource.
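The flow just described can be modelled in a few lines. This is only an illustrative sketch (eXvisory's real engine is JSON-configured and generates Java, not Python): a fault group first evaluates its eliminator, and if that returns UNKNOWN inference descends into the contained fault groups in order, falling back to the group's UNKNOWN resource when nothing is diagnosed.

```python
# Hypothetical model of the inference flow described above.
def evaluate_group(group, answers):
    """Return ELIMINATED, a diagnosed fault, or UNKNOWN for one fault group."""
    eliminator = group.get("eliminator")
    if eliminator is not None:
        if eliminator(answers) == "ELIMINATED":
            return "ELIMINATED"          # prune this whole subtree
    for child in group.get("children", []):
        child_result = evaluate_group(child, answers)
        if child_result not in ("UNKNOWN", "ELIMINATED"):
            return child_result          # a contained fault was diagnosed
    return "UNKNOWN"                     # fall back to the UNKNOWN resource

# The tutorial network at this stage: an eliminator with no results clause
# (always UNKNOWN) and two empty fault groups.
bed_mice = {
    "eliminator": lambda answers: "UNKNOWN",
    "children": [{"name": "cat"}, {"name": "infestation"}],
}

print(evaluate_group(bed_mice, {}))  # UNKNOWN
```

Because every node evaluates to UNKNOWN at this stage, inference ends back at the root's UNKNOWN resource, exactly as the chatbot shows.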
Next we added a cat eliminator and a query test. Select the cat fault group and look at the logic rules in the Eliminators and Tests tabs. Right-click on the cat_own condition in the eliminator and select Goto from the popup context menu to go to the cat_own query test. Click the back button (beside the choice boxes above the JSON) to return to the eliminator.
The cat_own query test asks if the user has a cat. Note that its prompt resource uses a template to refer to a previously established fact - the user's name returned by the bed_mice_user test. We can assume this test has already been evaluated (and the question asked) because to reach the cat fault group inference must already have flowed through the eliminator of its parent bed_mice fault group, which evaluates that test. If we wanted to be explicit we could add the bed_mice_user test as a condition of the cat_own query test. Doing so would have no adverse effect, as tests are only evaluated once and re-evaluating a test returns its original result.
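The "tests are only evaluated once" rule is effectively memoisation: the first evaluation asks the user, and every later evaluation returns the cached answer. A minimal sketch (class and method names are made up, not eXvisory's API):

```python
# Memoised test evaluation: re-evaluating a test returns its original result.
class TestRunner:
    def __init__(self, ask):
        self.ask = ask        # function that actually queries the user
        self.results = {}     # memo: test name -> first result

    def evaluate(self, name, prompt):
        if name not in self.results:
            self.results[name] = self.ask(prompt)
        return self.results[name]

# Simulate a user so we can count how often the question is really asked.
calls = []
def fake_user(prompt):
    calls.append(prompt)
    return "Alice"

runner = TestRunner(fake_user)
runner.evaluate("bed_mice_user", "What is your name?")
runner.evaluate("bed_mice_user", "What is your name?")  # cached, not re-asked
print(len(calls))  # 1
```

This is why adding bed_mice_user as an explicit condition of cat_own is harmless: the user is never asked the same question twice.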
The cat eliminator evaluates the cat_own query test (by asking the user if they have a cat) from its conditions clause and stores the test result in the ?own variable. Its result clause uses the ?own variable and effectively says that all cat-related mouse problems can be ELIMINATED if the user does not own a cat; otherwise they (do own a cat and) should continue to look into cat-related problems. If the user does own a cat, inference continues into the fault group and, for now, ends at the cat fault group's UNKNOWN resource.
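The eliminator's logic, stripped of its JSON clothing, is a simple conditional. This sketch only mirrors the behaviour described above (the real rule lives in the JSON editor, and the function names here are invented):

```python
# Illustrative model of the cat eliminator: bind the cat_own test result
# to ?own, then ELIMINATE all cat-related faults if the user has no cat.
def cat_eliminator(evaluate_test):
    own = evaluate_test("cat_own")       # conditions clause: bind ?own
    if own == "no":
        return "ELIMINATED"              # no cat -> prune the cat fault group
    return "CONTINUE"                    # keep investigating cat-related faults

print(cat_eliminator(lambda name: "no"))   # ELIMINATED
print(cat_eliminator(lambda name: "yes"))  # CONTINUE
```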
We then added new faults (to the cat fault group) and more cat-related tests. You can see the number of tests and eliminators on the hierarchy and the JSON editor tabs. These new tests illustrate synthetic tests and test variants. After identifying a cat as a suspect in our mice-in-the-bed mystery we asked ourselves how they are doing it - which boils down to how they are getting outdoors (to hunt) and back in (to return triumphantly, bearing mouse gifts). We thought of 3 different ways, with 3 similar but differing solutions. The cat could be using a cat_flap, a cat_window (left open) or an unsuspected secret_passage. Check out the cat_flap fault (the other new faults are still just placeholders, with updated names and comments to help our thought processes).
The cat_flap fault evaluates the cat_flap test in its conditions clause. Right-click on the cat_flap condition to go to the test, or type "cat flap" into the search box. Or find the test in the hierarchy by selecting the cat fault group and using its Tests tab.
You could add the cat_flap test to any fault group, but it would be less obvious to find in future and its name (used by resource templates) would be weird, because test names concatenate their _class and label fields (and the _class field comes from its fault group).
The cat_flap test is another query test that asks the user if their residence has a cat flap - but in its conditions clause it refers to a cat_facts synthetic test, which performs a series of tests to gather facts about the user's cat, one of which is the cat's name (used in the cat_flap "prompt" resource). Go to the cat_facts configuration using the choice boxes above the JSON editor.
The cat_facts synthetic test lists 5 other tests in its conditions clause. These are evaluated in order and store their results in variables like ?name, ?gender, etc. By convention variables are named after the test label, and all conditions (tests) are evaluated - unless they have already been evaluated, in which case they assign their original result to the variable. So here we have 2 query tests asking for the cat's name and gender, so we can refer to the cat by name and in a gender-appropriate manner (no cat wants to be called 'it'). The last 3 tests are synthetic tests that, given the gender of the cat, evaluate whether to use the pronouns him/her, his/her or he/she.
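The condition-to-variable binding can be sketched as below. This is only a model of the behaviour just described - the variable-naming convention (strip the _class prefix, keep the label) and the stub answers are assumptions for the illustration:

```python
# Model of a synthetic test's conditions clause: evaluate each condition
# test in order, binding its result to a variable named after its label.
def run_synthetic(conditions, evaluate_test):
    bindings = {}
    for test in conditions:
        label = test.split("_", 1)[1]           # e.g. cat_name -> name
        bindings[label] = evaluate_test(test)   # memoised in the real engine
    return bindings

# Stub results for the 5 condition tests of cat_facts:
answers = {"cat_name": "Felix", "cat_gender": "boy",
           "cat_him_her": "him", "cat_his_her": "his", "cat_he_she": "he"}
facts = run_synthetic(list(answers), answers.get)
print(facts["name"], facts["gender"])  # Felix boy
```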
The cat_facts synthetic test uses a resource expression x_test_cat_facts_summary to evaluate its result, using the "summary" resource and template substitution.
to evaluate its result using the "summary" resource and template substitution.cat_name
and cat_gender
query tests and make sure you understand all their fields. Then look at the JSON for the cat_him_her
synthetic test.{{ cat_gender == 'boy' ? 'him' : 'her' }}
evaluates as 'him' or 'her' depending on the value of cat_gender
. See the templates reference for syntax.cat_his_her
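That template is a conditional (ternary) expression. For readers unfamiliar with the ? : syntax, here is the same substitution written as plain Python - just a paraphrase of the template, not eXvisory code:

```python
# Python equivalent of {{ cat_gender == 'boy' ? 'him' : 'her' }}.
def him_her(cat_gender):
    return "him" if cat_gender == "boy" else "her"

print(him_her("boy"))   # him
print(him_her("girl"))  # her
```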
Now look at the cat_his_her synthetic test. In the choice boxes (on the Tests tab for the cat fault group) you should see 2 test variants - cat_his_her_boy and cat_his_her_girl. Using variants is an alternative way of doing the same kind of thing as in the cat_him_her synthetic test, but as variants are possibly the hardest concept to grasp in eXvisory we use them here as an example.
synthetic test but as variants are possibly the hardest concept to grasp in eXvisory we use them here as an example.cat_her_his_boy
test variant. This variant is evaluated if its first condition (the so-called variant condition) evaluates to TRUE, in this case if the cat_gender
is 'boy'. The result of this variant is the string 'his'. Now look at the cat_her_his_girl
test variant.cat_her_his_girl
evaluates the same cat_gender
test (variants must use the same test in their variant condition) but the empty string result means evaluate this variant if no other variant matches - so the cat_her_his_girl
variant is evaluated if the cat_gender
is not 'boy'. It would be equally valid to use the result expression "(eq ?gender girl)" provided you are sure that one variant will definitely be evaluated (given the possible cat_gender
values), otherwise an exception is thrown. The result of the cat_her_his_girl
variant is 'girl'.cat_his_her
synthetic test.cat
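Variant selection can be hard to grasp from prose alone, so here is a small model of the mechanism just described: each variant pairs an expected value of the shared variant-condition test with a result, the empty string marks the catch-all variant, and an exception is thrown if nothing matches and there is no catch-all. The data structure is invented for the sketch, not eXvisory's JSON:

```python
# Model of test-variant dispatch on a shared variant-condition value.
def evaluate_variants(variants, condition_value):
    default = None
    for expected, result in variants:
        if expected == "":
            default = result             # empty string: catch-all variant
        elif expected == condition_value:
            return result                # variant condition matched
    if default is None:
        raise RuntimeError("no variant matched")  # the exception case
    return default

# cat_his_her as two variants: _boy matches 'boy', _girl is the catch-all.
cat_his_her = [("boy", "his"), ("", "her")]
print(evaluate_variants(cat_his_her, "boy"))   # his
print(evaluate_variants(cat_his_her, "girl"))  # her
```

Replacing the catch-all with an explicit ("girl", "her") pair matches the "(eq ?gender girl)" alternative in the text - but then a cat_gender outside {boy, girl} would hit the exception branch.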
Finally we completed the cat faults and tests and added rules to diagnose infestation problems (unrelated to cats). Inspect the new rules to make sure you understand everything and can find it in the Concepts and Reference sections of this documentation. Try out the new rules in the web chatbot and see how straightforward it is to add new knowledge to your deep logic network.
Look at the Java generated for the infestation_bed_food test. With a little practice (and a little knowledge of Java) it is easy to see the correspondence between the JSON configuration of rules in the eXvisory dev editor and the generated Java implementation. This is important because there are lots of ways in which you can accidentally mis-configure the JSON so that it generates invalid Java - for example, creating an eliminator that refers to a non-existent test. These errors are easy to find, because every time you press the Build button eXvisory dev compiles the Java code and returns detailed error reports. But the error reports refer to the generated Java code, not the JSON source, so to resolve them you need to understand how the JSON generates the Java.
To see this, go to the infestation_bed_food fault and edit the infestation_bed_food condition to mis-spell the label as bed_foot. Pressing the Save button does not generate an error, as it just saves the JSON (it does basic validation, but not enough to spot this inter-rule error). But when you press Build you should see a compiler error popup, because the generated Java now refers to a non-existent test instead of infestation_bed_food_test.
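What Build catches and Save cannot is a cross-rule consistency check. This sketch is an analogy, not eXvisory's implementation: it scans a rule set for conditions that refer to tests that do not exist, which is essentially what the Java compiler does when the generated code references the mis-spelled bed_foot test:

```python
# Find conditions that refer to rules which are not defined anywhere -
# the inter-rule error that per-rule Save validation cannot see.
def undefined_references(rules):
    defined = {rule["name"] for rule in rules}
    return [cond for rule in rules
            for cond in rule.get("conditions", [])
            if cond not in defined]

rules = [
    {"name": "infestation_bed_food_test"},
    {"name": "infestation_bed_food",
     "conditions": ["infestation_bed_foot_test"]},  # mis-spelled label
]
print(undefined_references(rules))  # ['infestation_bed_foot_test']
```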