Execution Statistics: 20C New Feature in Intelligent Advisor

Whilst some have already upgraded, many are still (at least in their production sites) using 20B. In this post I wanted to look at one of the 20C new features in a little more detail. For some of my clients, these Execution Statistics features have been very high on their wishlist for a long time. So what are we talking about?

Have you ever needed to identify which parts of the execution process are consuming the most time? Have you longed for some instrumentation? Users of complex platforms like Siebel CRM are familiar with the Application Response Measurement platform and the ability to drill down into domain and subdomain data to find out why something is not performing well.

Intelligent Advisor is not Siebel CRM, nor is it trying to be; however, this 20C new feature is the start of greater visibility into execution data. So let’s look at what the Execution Statistics functionality introduced in 20C means for rule authors and developers.

Firstly, consider the following: I’ve got a heavy set of rules, and I want to find out what is consuming execution time. Is it a particular element, a specific attribute, or something else?

There are three pieces of functionality to mention, and before you start you need a project and some Test Cases. The first pieces of functionality are based around an obvious principle: since test cases are executed on the client PC, and execution statistics are available in the engine, why not make them available to the rule designer?

Execution Statistics - 20C1

So in the above example, this is a project that calculates the journey between two stations in the Paris underground network. It’s a pretty heavy set of rules with thousands of interconnected entity instances.

And there are some test cases in a Test Case file.

Execution Statistics - 20C2

We’re going to investigate the three bullet points shown. The first two are based on the Test Cases in your project. Run your Test Cases using the Run button and the links should appear (if you have never done this before, they will not be visible until you run). CSV files are generated. The first, testPerformance.csv, is generated in the root directory of the project, and once you have cleaned it up it looks like this:

Execution Statistics - Performance Analysis

So this is a pretty good start for performance analysis: the time is recorded with very high precision, and we can see which main components are taking time. In this case it is the child station amount, a big loop over thousands of combinations.
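If you want to dig into testPerformance.csv outside of a spreadsheet, a few lines of script are enough to rank the heaviest components. The sketch below is a minimal Python example; the column names "Name" and "Time" are my assumptions, so check the header row of the file generated for your own project and adjust them.

import csv

# Minimal sketch: rank the heaviest components from testPerformance.csv.
# The column names "Name" and "Time" are assumptions - check the header row
# of the file generated for your own project and adjust accordingly.
with open("testPerformance.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Sort descending by recorded execution time and show the top ten.
rows.sort(key=lambda r: float(r["Time"]), reverse=True)
for row in rows[:10]:
    print(row["Name"], row["Time"])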

The second link is more oriented towards understanding which of the test cases you gave the engine to run take longer than others. Clicking it provides you with a detailed set of statistics for the test cases. The file is stored in a new TestLogs folder under the root of your project, with a name like TestCaseLog_20200827144055.csv. Because each file is timestamped, the history of your runs is kept for you.

Performance Analysis Test Cases

So in my case I can investigate further by loading the most time-consuming test case into the debugger and looking at it in more detail.
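Because the TestLogs folder fills up with timestamped files over time, it can also be handy to pull the slowest cases out of the most recent log programmatically. Here is another hedged Python sketch: the glob pattern matches the naming convention above, but the column names "Test Case" and "Time" are assumptions to be checked against your own file.

import csv
from pathlib import Path

# Minimal sketch: open the most recent TestCaseLog_*.csv and list the slowest
# test cases. Column names "Test Case" and "Time" are assumptions - check the
# header row of your own file and adjust.
logs = sorted(Path("TestLogs").glob("TestCaseLog_*.csv"))
latest = logs[-1]  # timestamped names sort chronologically

with latest.open(newline="") as f:
    cases = list(csv.DictReader(f))

cases.sort(key=lambda r: float(r["Time"]), reverse=True)
for case in cases[:5]:
    print(case["Test Case"], case["Time"])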

The third button is used in a similar way, but it requires external content: specifically, a JSON file from one of your REST Batch runs. Click the Analyse Batch Request button and provide a file. In my case, a batch JSON file looks a bit like this (this is just one case; the full file has something like 500 cases):

JSON File Execution Statistics
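Before feeding a large batch file to the Analyse Batch Request button, it can be worth sanity-checking it. The sketch below simply loads the file and counts the cases; the filename is illustrative and the top-level "cases" key reflects the usual Batch Assess request shape, so adjust both to match your own export.

import json

# Minimal sketch: peek inside a Batch Assess request file before analysing it.
# The filename is illustrative, and the top-level "cases" key reflects the
# usual Batch Assess request shape - adjust if your file differs.
with open("batch-request.json") as f:
    request = json.load(f)

cases = request.get("cases", [])
print(len(cases), "cases in this batch request")
if cases:
    print(json.dumps(cases[0], indent=2))  # show the first case only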

When the file is selected using the button, after a few seconds (or longer, depending on your project) a new link appears next to the button. You get to choose the filename and location for this one:

Access JSON Statistics

Opening this file gives the following output: the same performance information, only this time for your JSON file. This is useful if you are using Batch Assess. JSON files like this can also be loaded into the Debugger.

Execution Statistics JSON Output

So you can see that the 20C release has made excellent strides in the direction of execution analysis – rule designers now have something to work with!

Author: Richard Napier

Richard Napier joined Siebel Systems in 1999 and took up the role of managing the nascent Siebel University in Southern Europe. He was subsequently Director of Business Development and Education for InFact Group (now part of Business & Decisions) for 8 years. He now runs his consulting and mentoring company, On Demand Consulting & Education Ltd (ODCE), which he has run since 2010. Owner of the OPA Hub, he is also Co-Founder of the Siebel Hub.
