The OPA Assess Method in Oracle Policy Automation Determinations #2
Following on from the first part of this tutorial about Oracle Policy Automation and the Determinations API, you finished the previous post with your system ready for a first use of the Web Service and its two methods. First you will use the ListGoals method, both for practice and to learn an important step; then you will move on to the Assess Method (Oracle Documentation). In the setup of the previous part, we deliberately ensured that anonymous access would not be possible, so you will need to do some extra work before you can try out your Web Service.
Viewing the ListGoals Request
If you open the ListGoals folder in the SOAP UI interface and examine Request 1, you will see something very similar to the following:
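As a sketch of what SoapUI generates, the request has roughly the following shape. Treat every namespace and element name below as an assumption: they are produced from your deployed Project's WSDL and will differ, so compare this against your own Request 1.

```xml
<!-- Sketch only: namespaces and element names come from your Project's WSDL -->
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:typ="urn:replace-with-your-listgoals-types-namespace">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:list-goals-request>
         <!-- SoapUI inserts "?" placeholders; edited here to read "true" -->
         <typ:show-version>true</typ:show-version>
      </typ:list-goals-request>
   </soapenv:Body>
</soapenv:Envelope>
```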
Notice that in the example, I have edited the “show-version” tag to read “true” instead of the “?” placeholder that SoapUI inserts. In the following examples you will often need to either edit such elements, or remove them completely if the information is not mandatory. Clicking the green triangle in the top left-hand corner of the window does not, however, return any kind of useful response. Instead we get an error, telling us that the request did not contain sufficient information to be authenticated.
Adding the Header information for the SOAP Request
The following screenshot shows the editing you will need to do in your request (on the left hand pane) in order to proceed any further. The header information will be needed in any request you make (ListGoals or Assess methods) in this environment at the current time.
Once you have made these changes, save the header somewhere useful since you will need it all the time in the following examples.
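As a sketch of that header: in this configuration the service expects a standard WS-Security UsernameToken. The wsse namespace and Password Type URIs below are the standard OASIS values; the username and password are placeholders for your own OPA Hub credentials, and you should verify the exact header your environment requires.

```xml
<soapenv:Header>
   <wsse:Security soapenv:mustUnderstand="1"
                  xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <wsse:UsernameToken>
         <wsse:Username>your-hub-user</wsse:Username>
         <!-- PasswordText sends the password in clear text, so use HTTPS -->
         <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">your-password</wsse:Password>
      </wsse:UsernameToken>
   </wsse:Security>
</soapenv:Header>
```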
Click the green triangle again. If you are still getting error messages in the right-hand pane, remember that the OPA Hub user you are authenticating with must have permission to use the Determinations API. Check this by logging in to the Oracle Policy Automation Hub as a Hub Administrator and viewing the details of the user in question. For ease of viewing, I have highlighted the setting that must be enabled for the user to perform these steps:
Viewing the Results
Hopefully, your next attempt at clicking the green triangle is more successful. Here is an example of the result you might see, if you are using the same Project as I demonstrated in the first part of this post.
The following annotations might be useful:
- Notice the version information. If you set the request “show-version” to false, this section will not appear.
- The global entity is clearly marked.
- Notice this attribute does not have a readable Id. If you have forgotten to add names to your important attributes, you will see auto-generated Ids like this one. This should be your cue to go back to your Project, add a name, upload and deploy your new version.
- This entity has a name.
- The entity name, text and type are all clearly visible.
TIP: You will probably want to go back and add names for all of the attributes (race date, imminent race, and so on), then deploy that new version.
What have you obtained?
The output has listed the top-level goals for the Project. In our case, there are Global and entity-level goals that can be inferred by providing the right information to perform an assessment. ListGoals lists, as its name suggests, the goals you can obtain outcomes for.
Armed with this information you are ready to go further. Let’s suppose you are interested in the goal called “h_status” in my example. We can attempt to obtain some output.
Assess Method Initial Call
Our first call will be made using a very cut-down version of the complete request. Our Project contains no Properties and no Change Points, we are going to ask for the same level of information about all of our attributes, and we want the same level of outcome information whether the outcome is certain or uncertain, so the request can be cut down to look like this:
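To illustrate the general shape of such a cut-down request (the element names, namespaces and the date value below are assumptions for illustration, not verbatim product output; check them against the request your WSDL generates):

```xml
<!-- Sketch of a cut-down assess request; verify names against your WSDL -->
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:typ="urn:replace-with-your-assess-types-namespace">
   <soapenv:Header>
      <!-- the same WS-Security header used for ListGoals goes here -->
   </soapenv:Header>
   <soapenv:Body>
      <typ:assess-request>
         <typ:global-instance>
            <!-- input: the base attribute we know -->
            <typ:attribute id="race_date" type="date">
               <typ:date-val>2018-06-01</typ:date-val>
            </typ:attribute>
            <!-- outcome: ask for the inferred horse instances and their status -->
            <typ:entity id="horse">
               <typ:attribute-outcome id="h_status" outcome-style="value-only"/>
            </typ:entity>
         </typ:global-instance>
      </typ:assess-request>
   </soapenv:Body>
</soapenv:Envelope>
```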
Notice the outcome section, where we have asked for the values of the horse and the horse status, together with the global attribute race_date which we have entered as our request input. The result, assuming you have been using the same Project, would be something like the response below.
In this output, the following areas are highlighted:
- The race date is reiterated
- The entity instances of the horse entity are shown, each with the name and the status
- The attributes are inferred as you would expect
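For orientation, a response with these characteristics has roughly the following shape. This is a sketch with placeholder ids and values, not the exact product output:

```xml
<!-- Sketch of the response shape; ids and values are placeholders -->
<assess-response>
   <global-instance>
      <!-- the race date input is reiterated -->
      <attribute id="race_date" type="date">
         <date-val>2018-06-01</date-val>
      </attribute>
      <entity id="horse">
         <!-- one block per instance of the horse entity -->
         <instance id="horse-1">
            <attribute id="h_status" type="text">
               <text-val>an inferred status value</text-val>
            </attribute>
         </instance>
      </entity>
   </global-instance>
</assess-response>
```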
Outcome Styles and Assess Method
In this first request, you set outcome style to “value-only”. Two other choices are possible at this juncture. Change the outcome style for the horse status to “base-attributes”, as in the example shown below.
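Only the outcome-style value needs to change. Sketching the single element (name and attributes assumed for illustration, as before):

```xml
<typ:attribute-outcome id="h_status" outcome-style="base-attributes"/>
```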
Upon executing the request, you will notice that the outcome has more detail. This information is very useful, since it highlights the base attributes that are needed to infer the horse status. In this case specifically, the date of the race.
Finally, change the outcome style of the same attribute to “decision-report”. Now the output will include the tree of decisions that lead to the output:
So far so good – Assess Method
So it is clear that the outcome style can assist us not only in understanding which attributes are needed as input (base attributes) but also in understanding the decision that was made (decision report). In the next part of this series we will investigate other tags in the request and response.