Our detailed documentation refers to a quite sophisticated sample deep logic network at https://dev.exvisory.ai/apps/sample-mobile because we believe that's more realistic than a trivial "Hello World" example. But if you've just started using eXvisory dev you will probably also appreciate a shorter tutorial you can work through in an hour or two. We hope you're not squeamish, as the premise of this tutorial is building a chatbot to figure out why mice appear in beds. Yuck!

This tutorial assumes you've read and understood the Concepts section!

To follow the tutorial, register with our developer program and we will provision you with an eXvisory dev instance like https://dev.exvisory.ai/apps/<org>-test-<project> (where <org> and <project> are your organisation and project names).

Step 1 - Inspect the default deep logic network

When we provision an eXvisory dev instance it comes pre-loaded with a deep logic network starter template you can edit to start building your own chatbot.

Starter deep logic network template

Select the various cyan fault groups and blue faults in the hierarchy and inspect their JSON configurations in the Fault, Eliminators and Tests tabs. If there is anything you don't understand refer back to the Concepts or References pages.

Step 2 - Upload version A

Rather than asking you to type in lots of JSON we provide evolving versions of our 'mice in the bed' tutorial for you to upload into your eXvisory dev instance.

Admin > Upload > JSON menu

The configuration of an eXvisory deep logic network is contained in a JSON source file, which you can download from (or upload to) your eXvisory dev instance as often as you like. Keep these JSON source files in a version control system like Git so you can use diff and merge tools to cooperate with other developers. Download version A of this tutorial's JSON source to your desktop from the link below and use the Admin > Upload > JSON menu to upload it to your eXvisory dev instance.

Don't worry, when you are finished with this tutorial you can replace its JSON source with the starter template, which is on our Downloads page.

Step 3 - Inspect version A

After uploading version A your eXvisory dev instance should look something like this.

We edited the starter template to call the root fault group bed_mice and renamed its contained fault groups to cat and infestation. Why? Because if we wonder why mice are appearing in beds there are two main possibilities. One, the end user has a cat and it's bringing in unwanted gifts. Two, mice are finding their own way in and the user has a rodent infestation problem. So we added corresponding fault groups. It's easy to try things out by adding and removing faults and fault groups (using the + and - buttons above the JSON editor) and moving them around (by dragging and dropping in the hierarchy editor; CTRL-drag to reorder children). The blank rules added using the + buttons don't do anything until you edit them, so this is a great way to organise your troubleshooting thoughts!

eXvisory dev does not have bulk Undo/Redo (beyond the JSON editor) so if you lose track of changes just re-upload the JSON source. During development make sure to regularly download your JSON source into version control (from the Admin > Download menu).

Step 4 - Try version A

Every time you make a change and select Save your chatbot is immediately available to test.

Select the Admin > App > instance_name menu item to open a new browser tab at our default web chatbot, loaded with version A of your deep logic network. Create a new eXvisory chat session called "mice in my bed!" and enter your name.

The default eXvisory web chatbot is a scripted, multiple-choice chatbot. It doesn't use natural language (except for automated translations - see its 'hamburger' menu) but it makes it easy to quickly try out different troubleshooting 'happy paths'. You can build hybrid chatbots that combine natural language with the same underlying deep logic network using our eXvisory webhook SDK, but that's outside the scope of this tutorial. Contact us for details.

Web chatbot - version A

It's not a long conversation because version A only has one bed_mice_user test, which prompts the user for their name. Remember (from Concepts) that inference flows through a deep logic network from left-to-right and top-to-bottom, so the bed_mice_user test is invoked from the bed_mice_eliminator, because the test is listed in the eliminator's conditions clause. Inference starts with the root fault group, which evaluates its eliminator - causing the bed_mice_user test to be evaluated. The eliminator returns UNKNOWN by default (it has no results clause) so inference proceeds to the contained cat and infestation fault groups. They have no eliminators or tests so they also evaluate as UNKNOWN and return to the root fault group, which displays its UNKNOWN resource.
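The inference flow described above can be sketched in Python. This is a hypothetical model for illustration only (the FaultGroup class and evaluate function are not the real eXvisory engine or API): a fault group evaluates its eliminator first and, if the result is UNKNOWN, recurses left-to-right into its contained fault groups.

```python
# Hypothetical sketch of depth-first inference through a deep logic network.
# FaultGroup and evaluate are illustrative names, not the real eXvisory API.

ELIMINATED, UNKNOWN = "ELIMINATED", "UNKNOWN"

class FaultGroup:
    def __init__(self, name, eliminator=None, children=()):
        self.name = name
        self.eliminator = eliminator  # callable returning ELIMINATED or UNKNOWN
        self.children = list(children)

def evaluate(group, trace):
    trace.append(group.name)
    # An eliminator with no results clause returns UNKNOWN by default.
    result = group.eliminator() if group.eliminator else UNKNOWN
    if result == ELIMINATED:
        return ELIMINATED
    # UNKNOWN: inference proceeds into the contained fault groups, in order.
    for child in group.children:
        evaluate(child, trace)
    return UNKNOWN

# Version A: bed_mice contains cat and infestation; nothing can eliminate yet.
net = FaultGroup("bed_mice",
                 children=[FaultGroup("cat"), FaultGroup("infestation")])
trace = []
print(evaluate(net, trace))  # UNKNOWN
print(trace)                 # ['bed_mice', 'cat', 'infestation']
```

Because every group evaluates as UNKNOWN, control returns to the root fault group, which displays its UNKNOWN resource, exactly as in the chat transcript.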

Eliminator for the bed_mice fault group

Read through this until you can clearly follow the flow of inference through the deep logic network (its fault groups, eliminators and tests) and understand how that inference flow generates the chat questions and answers (from the evaluated rule resources).

Step 5 - Add a cat eliminator

Don't worry, no cats are harmed in this tutorial. Download version B of this tutorial to your desktop and upload it into your eXvisory dev instance.

Cat eliminator

We've added a cat eliminator and a query test. Select the cat fault group and look at the logic rules in the Eliminators and Tests tabs.

Right-click on the cat_own condition in the eliminator and select Goto from the popup context menu to go to the cat_own query test. Click the back button (beside the choice boxes above the JSON) to return to the eliminator.

The cat_own query test asks if the user has a cat. Note that its prompt resource uses a template to refer to a previously established fact - the user's name returned by the bed_mice_user test. We can assume this test has already been evaluated (and the question asked) because to reach the cat fault group inference must have already flowed through the eliminator of its parent bed_mice fault group, which evaluates that test. If we wanted to be explicit we could add the bed_mice_user test as a condition of the cat_own query test. Doing so would have no adverse effect as tests are only evaluated once and re-evaluating a test returns its original result.

cat eliminator

{
  "_class" : "cat",
  "label" : "eliminator",
  "variant" : "no_cat",
  "comment" : [
    "ELIMINATED if user doesn't have a cat, otherwise UNKNOWN"
  ],
  "conditions" : [
    { "_class" : "cat", "label" : "own", "result" : "?own" }
  ],
  "result" : {
    "ELIMINATED" : "(not ?own)",
    "UNKNOWN" : ""
  },
  "resources" : {
    "unknown" : [
      "I don't want to point paws, but it's often your beloved feline ",
      "(bringing you gifts) who is the culprit!"
    ]
  }
}

cat_own query test

{
  "_class" : "cat",
  "label" : "own",
  "comment" : [
    "Query test: does user own cat?",
    "Example of default and override resources for y|n choices"
  ],
  "_values" : [ "y", "n" ],
  "resources" : {
    "prompt" : [
      "Hi {{bed_mice_user}}, do you have a cat?"
    ],
    "y" : "Yes (I have a cat)"
  }
}

The cat eliminator evaluates the cat_own query test (by asking the user if they have a cat) from its conditions clause and stores the test result in the ?own variable. Its result clause uses the ?own variable and effectively says that all cat-related mouse problems can be ELIMINATED if the user does not own a cat, otherwise they (do own a cat and) should continue to look into cat-related problems.
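As a rough illustration, the eliminator's decision amounts to the following. The result-expression syntax "(not ?own)" is eXvisory's; the Python function below is a hypothetical sketch that hard-codes the equivalent logic.

```python
# Hypothetical sketch: the cat eliminator's result clause.

def cat_eliminator_result(own: bool) -> str:
    # "ELIMINATED" : "(not ?own)" -- all cat faults go away if there is no cat.
    if not own:
        return "ELIMINATED"
    # "UNKNOWN" : "" -- otherwise keep investigating cat-related faults.
    return "UNKNOWN"

print(cat_eliminator_result(False))  # ELIMINATED
print(cat_eliminator_result(True))   # UNKNOWN
```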

All very logical, but why did we add that eliminator and test?

This is the heart of the eXvisory approach. The eXvisory dev editor visually and systematically guides you through the thought process of assembling a deep logic troubleshooter.

For each fault group you identify, ask yourself "What test could we ask the user to perform that would ELIMINATE the fault group (and all its contained faults), or SCOPE the fault to somewhere within that fault group (so we can ELIMINATE all faults outside it)?" In this case the most obvious way to eliminate a cat as the bringer-in-of-mice is to ask whether the user has a cat!

Step 6 - Try version B (cat eliminator)

Return to the web chatbot and resubmit your answer to the "What's your name?" question. Now you should see the "Do you have a cat?" question and, if you answer in the affirmative, the chatbot replies with the cat fault group's UNKNOWN resource.

Web chatbot - version B

Step 7 - Synthetic tests and variants

Download version C to your desktop and upload it into your eXvisory dev instance.

We have added 3 specific cat faults ('below' the cat fault group) and more cat-related tests. You can see the number of tests and eliminators on the hierarchy and the JSON editor tabs. These new tests illustrate synthetic tests and test variants. After identifying a cat as a suspect in our mice-in-the-bed mystery we asked ourselves how they are doing it - which boils down to how they are getting outdoors (to hunt) and back in (to return triumphantly bearing mouse gifts). We thought of 3 different ways, with 3 similar but differing solutions. The cat could be using a cat_flap, a cat_window (left open) or an unsuspected secret_passage. Check out the cat_flap fault (the other new faults are still just placeholders, with updated names and comments to help our thought processes).

cat_flap fault

The cat_flap fault evaluates the cat_flap test in its conditions clause. Right-click on the cat_flap condition to go to the test, or type "cat flap" into the search box. Or find the test in the hierarchy by selecting the cat fault group and using its Tests tab.

By convention tests should be created in the fault group for which they make the most sense. You could add the cat_flap test to any fault group, but it will be less obvious to find in future and its name (used by resource templates) would be weird, because test names concatenate their _class and label fields (and the _class field comes from its fault group).
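The naming convention can be shown with a tiny sketch (the concatenation rule is as described above; the helper function itself is hypothetical):

```python
# Hypothetical helper: a test's full name joins its fault group's _class
# with its own label, e.g. _class "cat" + label "flap" -> "cat_flap".

def full_test_name(_class: str, label: str) -> str:
    return f"{_class}_{label}"

print(full_test_name("cat", "flap"))       # cat_flap
print(full_test_name("bed_mice", "user"))  # bed_mice_user
```

This is why moving a test to an unrelated fault group produces a misleading name: the _class half of the name would no longer match the test's subject.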

cat_flap query test

The cat_flap test is another query test that asks the user if their residence has a cat flap - but in its conditions clause it refers to a cat_facts synthetic test, which performs a series of tests to gather facts about the user's cat, one of which is the cat's name (used in the cat_flap "prompt" resource). Go to the cat_facts configuration using the choice boxes above the JSON editor.

cat_facts synthetic test

Synthetic tests are straightforward. The cat_facts synthetic test lists 5 other tests in its conditions clause. These are evaluated in order and store their results in variables like ?name, ?gender, etc. By convention variables are named after the test label and all conditions (tests) are evaluated, unless they have already been evaluated in which case they assign their original result to the variable. So here we have 2 query tests asking for the cat's name and gender, so we can refer to the cat by name and in a gender-appropriate manner (no cat wants to be called 'it'). The last 3 tests are synthetic tests that, given the gender of the cat, evaluate whether to use the pronouns him/her, his/her or he/she.
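The evaluate-once behaviour can be modelled as simple memoisation. The sketch below is a hypothetical model, not the real engine: each test runs at most once per session, and later references receive the cached original result.

```python
# Hypothetical sketch: conditions are evaluated in order, but each test
# runs at most once; re-evaluation returns the cached (original) result.

class TestSession:
    def __init__(self):
        self.results = {}  # test name -> first result

    def evaluate(self, name, ask):
        if name not in self.results:   # only ask the user once
            self.results[name] = ask()
        return self.results[name]

session = TestSession()
first = session.evaluate("cat_gender", lambda: "boy")
# Re-evaluating returns the original result; the user is not asked again.
again = session.evaluate("cat_gender", lambda: "girl")
print(first, again)  # boy boy
```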

Note that the cat_facts synthetic test uses a resource expression x_test_cat_facts_summary to evaluate its result using the "summary" resource and template substitution.

Look at the JSON for the cat_name and cat_gender query tests and make sure you understand all their fields. Then look at the JSON for the cat_him_her synthetic test.

cat_him_her synthetic test

This simple synthetic test uses a resource expression to figure out whether to use 'him' or 'her' to refer to the cat (in other rule resources). Note that resource templates can evaluate their own logic expressions to return different resource strings based upon previously established facts. So in this example the template expression {{ cat_gender == 'boy' ? 'him' : 'her' }} evaluates as 'him' or 'her' depending on the value of cat_gender. See the templates reference for syntax.
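As a rough model of that template expression (the {{ ... ? ... : ... }} syntax is eXvisory's; this Python equivalent is a hypothetical sketch):

```python
# Hypothetical sketch of evaluating {{ cat_gender == 'boy' ? 'him' : 'her' }}
# against previously established facts.

def him_her(facts: dict) -> str:
    return "him" if facts.get("cat_gender") == "boy" else "her"

print(him_her({"cat_gender": "boy"}))   # him
print(him_her({"cat_gender": "girl"}))  # her
```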

Now look at the cat_his_her synthetic test. In the choice boxes (on the Tests tab for the cat fault group) you should see 2 test variants - cat_his_her_boy and cat_his_her_girl. Using variants is an alternative way of doing the same kind of thing as in the cat_him_her synthetic test but as variants are possibly the hardest concept to grasp in eXvisory we use them here as an example.

cat_his_her synthetic test - boy variant

Look at the cat_his_her_boy test variant. This variant is evaluated if its first condition (the so-called variant condition) evaluates to TRUE, in this case if the cat_gender is 'boy'. The result of this variant is the string 'his'. Now look at the cat_his_her_girl test variant.

cat_his_her synthetic test - girl variant

The variant condition of cat_his_her_girl evaluates the same cat_gender test (variants must use the same test in their variant condition) but the empty string result means evaluate this variant if no other variant matches - so the cat_his_her_girl variant is evaluated if the cat_gender is not 'boy'. It would be equally valid to use the result expression "(eq ?gender girl)", provided you are sure that one variant will definitely be evaluated (given the possible cat_gender values); otherwise an exception is thrown. The result of the cat_his_her_girl variant is 'her'.
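Variant selection can be sketched as first-match dispatch with an empty-condition fallback. Again, this is a hypothetical model for illustration, not the real engine:

```python
# Hypothetical sketch: a variant test evaluates the first variant whose
# variant condition matches; an empty condition ("") acts as the fallback.

def evaluate_variants(variants, gender):
    for condition, result in variants:
        if condition == "" or condition == gender:
            return result
    raise RuntimeError("no variant matched")  # no fallback and no match

cat_his_her = [
    ("boy", "his"),  # cat_his_her_boy: variant condition cat_gender == 'boy'
    ("", "her"),     # cat_his_her_girl: empty condition matches anything else
]

print(evaluate_variants(cat_his_her, "boy"))   # his
print(evaluate_variants(cat_his_her, "girl"))  # her
```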

References to cat_his_her synthetic test

When you refer to variant tests in other logic rules you treat them as a single rule. Use the location button (beside the +/- buttons) to see how other rules refer to the cat_his_her synthetic test.

Step 8 - Try version C (synthetic tests and variants)

Return to the web chatbot, resubmit your answer to the "What's your name?" question and follow through your extended troubleshooting chat, providing more facts about the cat suspect. For the first time you should be able to find the fault (a cat using a cat flap to smuggle mice inside) and a resolution (imprison the cat overnight in the room with the cat flap).

Web chatbot - version C

Try out different responses to see how they affect the course of the conversation. Notably, few other chatbot frameworks let you back up to previous questions and change your answers (to perform what-if experiments), despite how useful it is!

Step 9 - Final touches

Download the completed tutorial to your desktop and upload it into your eXvisory dev instance.

Completed tutorial (infestation_food_query test)

We've fleshed out the remaining cat faults and tests and added rules to diagnose infestation problems (unrelated to cats). Inspect the new rules to make sure you understand everything and can find it in the Concepts and Reference sections of this documentation. Try out the new rules in the web chatbot and see how straightforward it is to add new knowledge to your deep logic network.

Step 10 - Errors and code generation

Before we leave the tutorial there's one last important aspect of eXvisory dev to look at. Every time you press the Save button eXvisory dev generates Java programming code that implements your deep logic network. It's this Java code that powers the web chatbot (and other eXvisory APIs). To view this Java code select the Admin > Download > Rules menu link (open it in a new browser tab).

Java code generated by eXvisory dev

Search down through the generated Java code until you find the infestation_bed_food test. With a little practice (and a little knowledge of Java) it is easy to see the correspondence between the JSON configuration of rules in the eXvisory dev editor and the generated Java implementation. This is important because there are lots of ways in which you can accidentally mis-configure the JSON so that it generates invalid Java - for example creating an eliminator that refers to a non-existent test. These errors are easy to find, because every time you press the Build button eXvisory dev compiles the Java code and returns detailed error reports. But the error reports refer to the generated Java code, not the JSON source, so to resolve them you need to understand how the JSON generates the Java.

Add deliberate typo and press Build (to see error)

To see this in action go to the infestation_bed_food fault and edit the infestation_bed_food condition to mis-spell the label as bed_foot. Pressing the Save button does not generate an error, as it just saves the JSON (it does basic validation, but not enough to spot this inter-rule error). But when you press Build you should see a compiler error popup.

Compiler error

The full error is "Message: org.codehaus.commons.compiler.CompileException: File 'KB.java', Line 137, Column 48: A method named 'infestation_bed_foot_test' is not declared in any enclosing class nor any supertype, nor through a static import". With a little experience this error message is enough to show you the problem - the method is missing because it should actually be infestation_bed_food_test.

For more cryptic errors download the Java rules and load them into an editor (so you can see line and column numbers, more easily do searches, etc.). You will end up being quite familiar with the generated Java code as you also use it with our desktop SDK for automated regression testing.

That's all for now. We hope you enjoyed this tutorial and found it useful.