
What’s New in Intelligent Advisor 20C?


And so the wheel of time turns and before we know it, yet another version of Intelligent Advisor has hit the shelves. And this one is a big release, with some fantastic stuff in it, not just for Interview fans but also for those of us who are focussed on REST Services, performance and testing. So what’s new?

So, here is what’s new in Intelligent Advisor 20C. Let’s begin with the Interview Extensions. There are new extensions that help us format the styling for portrait entity containers and entity collects. They have styling extension names like portraitContainer and (as child elements of the container) headerCell, row, rowOdd, rowEven and so forth. Some of these were introduced in the second monthly update to 20B, but they are so fresh it’s worth mentioning them. They can easily produce nice effects. Here are some of them in technicolor, namely dynamic Buttons, Column Headers and Row colors.

What's New - Interview Extensions
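To make that concrete, a styling extension using those names might be registered something like this. This is a hedged sketch only: the colour values are invented, and the exact nesting of the child elements should be verified against the official styling extension reference. The registration is wrapped in a guard so the snippet can be read (and run) standalone.

```javascript
// Sketch of a styling extension for portrait entity containers.
// All colour values here are invented for demonstration purposes.
var portraitStyling = {
	portraitContainer: {
		style: { border: "1px solid #d0d0d0" },
		headerCell: { style: { background: "#1a355e", color: "#ffffff" } },
		rowOdd: { style: { background: "#f4f6f8" } },
		rowEven: { style: { background: "#ffffff" } }
	}
};

// Register it only when running inside an interview session.
if (typeof OraclePolicyAutomation !== "undefined") {
	OraclePolicyAutomation.AddExtension({ style: portraitStyling });
}
```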

That bunch of things would be enough to get me really excited, but there is a LOT more. For a while now I’ve been militantly asking for functionality where my customers can begin to profile their rules, get performance statistics and execution time data.

Now they can – firstly, directly through Intelligent Advisor REST calls (using the Batch API) via the addition of @time per case as well as processorCasesPerSec and processorQueuedSec in the REST response. This is fantastic for understanding what’s going on with cases in Batch.
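As a rough illustration, consuming those timing fields from a Batch response might look like the snippet below. The payload shape here is invented for demonstration; only the field names (@time, processorCasesPerSec, processorQueuedSec) come from the release notes.

```javascript
// Hypothetical Batch Assess response fragment with the new timing fields.
var response = {
	cases: [
		{ caseId: 1, "@time": 0.012 },
		{ caseId: 2, "@time": 0.045 }
	],
	summary: { processorCasesPerSec: 150.3, processorQueuedSec: 0.2 }
};

// Flag any case whose individual execution time looks abnormally slow.
var slowCases = response.cases.filter(function (c) {
	return c["@time"] > 0.04;
});

console.log(slowCases.length + " slow case(s), throughput " +
	response.summary.processorCasesPerSec + " cases/sec");
```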

Secondly the users of Policy Modeling can now leverage new functionality and obtain statistics from a set of Test Cases in Excel format or from a JSON batch request. For Test Case output, reports and timings (in CSV) are generated automatically and stored in a new TestLogs folder. As development continues, these CSV files will be good sources of information to understand where added rules or entities are causing a slowdown.

Thirdly, they can also delve into the details of execution times via a JSON batch request analysis. For example, shown below is the generation of the rule profiling report obtained by loading a JSON data set into Policy Modeling on the Test Cases tab via the new Analyze Batch Request button.

In summary, for performance hunters there are now three useful areas: batch execution timing, test case execution reporting and finally sample batch profiling.

But that is not all! There is an important accessibility enhancement – the ability to flag a control as leveraging the browser’s native auto-fill capability. Select the Screen Control and set the properties:

What’s New in 20C – Input Auto-fill

On the Hub there is a new feature called repository branching to allow copying of a project in the Repository directly on the Hub to create a visible relationship between the source and the new one. If you have ever had to create copies of projects to represent different strands of development or testing, this will be welcome.

When viewing (in either direction) you can see where the project has been branched / is the root of a branch:

There are other new features, notably a component versioning system that becomes apparent when viewing the new release in OPM. The product version might be 12.2.20.X but the component version is 42.0.0. Separation has occurred so that the product can evolve at a different rate to the component version. The “component” of which they speak is the core interface between Policy Modeling and the Hub. In light of some of the potential functionality on the roadmap, it makes sense to separate the two.

This release is a big one, no doubt about it. Not just for the new features which bring proper instrumentation and profiling to Intelligent Advisor – which was overdue – but also for the new Interview Extension features. The other elements are laying the groundwork for yet more new features in the future. Read the online notes here.

Have a good day.

JavaScript Extension : customEntityRemoveButton


Once again I came across this question on another forum, so I answered there and decided to reproduce it here in more detail. As some of you are no doubt aware, there are a number of smaller extensions in respect of Entity Collects that allow the developer to manage the process of deleting or adding instances without having to completely redesign the Entity Collect, which is a heavier task by an order of magnitude. customEntityRemoveButton is a good example.

The question, very pertinent I must say, concerned the deletion of instances, and the fact that there is no confirmation dialog. You click the button and the suppression just happens, without a chance to say “oops, didn’t mean that one”. Enter the customEntityRemoveButton!

customEntityRemoveButton 1

  1. An Entity Collect
  2. The Delete Instance Button that will be customized with a customEntityRemoveButton extension

The extension in question is known as a customEntityRemoveButton, and it has a unique handler which you might not have come across. This handler is known as setInstance – and the job of this handler is to provide the instance identifier. So, for example, if you create three instances, then obviously each button needs to know which instance it belongs to, so to speak. This handler fires first (even before the mount handler, it seems) so that you can recover the identifier and then use it to ensure the right button does the right job.

The example scenario is that the user would love to have a confirmation dialog. And since I’m often accused of using too much jQuery, this one is in native JavaScript. It would look like this perhaps:

/*  Generated by the OPA Hub Website 07/04/2020 17:54
Educational Example of Custom Remove Instance Button Extension for Oracle Policy Automation
I will remember this is for demonstration purposes only.
 */
 let mycontext = "";
OraclePolicyAutomation.AddExtension({
	customEntityRemoveButton: function (control, interview) {
		if (control.getProperty("name") == "xRemoveButton") {
			return {
				mount: function (el) {
					var div_parent = document.createElement("div");
					div_parent.id = "xButton_parent";
					el.appendChild(div_parent);
					console.log("Button mounted");
					makebutton(mycontext,el,control);
				},
				setInstance: function (el) {
					console.log("Button set Instance for " + el.toString());
					mycontext = el;
				},
				update: function (el) {
					console.log("Button updated for " + el.innerText);

				},
				unmount: function (el) {
					if (control.getProperty("name") == "xRemoveButton" ) {
						console.log("Unmount");
					}
				}
			}
		}
	}
})

function makebutton(instance, parentEl, control) {
	// Build a replacement Remove button for the given entity instance.
	var button = document.createElement("button");
	button.id = "xRemoveButton" + instance;
	button.innerHTML = "Remove Me " + instance;
	button.onclick = function () {
		// Ask for confirmation before suppressing the instance.
		var goahead = confirm("Are you Sure?");
		if (goahead === true) {
			control.removeInstance(instance);
		} else {
			alert("No delete");
		}
		return false;
	};
	parentEl.appendChild(button);
}

I repeat that this is just a quick demonstration that I pulled together from somewhere in my head. The online documentation is very thin concerning this particular handler, and indeed these extensions in general.

customEntityRemoveButton 2

In the screenshot above and below you can see the new improved custom Button!

customEntityRemoveButton 3

Users now relax safe in the hands of our sanity checking dialog box 🙂

If you are interested in the Zip Archive just leave a comment and I will share it. This was done in 20A.

Have a nice day!

Continuous Delivery – OPA and Postman, Newman and Jenkins #3


Following on from the two previous episodes, (part one / part two) the stage is almost set to move to automation over and above the simple concept of a Postman Collection. In this chapter we will see how to use data files in Postman to facilitate making use of CSV or JSON data in a dynamic way, and we will get Newman up and running. So let’s start with the first element.

In our Collection, we currently have one request which is sending information to OPA and getting a journey time and plan back in response. Great. But we want to test our journey planner with more than one journey. And we want to be able to easily change the journeys without having to mess with our request. So this is where the ability to load data into a Collection from a text (CSV or JSON) file comes in very handy indeed. In pictures:

  • Create a CSV file with your data in it. For the project used in this example we would need to have something like this

Collection Data File
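In text form, the data file might contain something like this (the journeys themselves are invented for illustration):

```
origin,destination
London,Manchester
Paris,Lyon
Edinburgh,Bristol
```

Each row becomes one iteration of the Collection run.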

Notice the headers are origin and destination. They will become variables that we need to add to our request.

  • Make sure that the request is updated

So now my request looks like this:

Request Modified

Don’t forget to save your request. Now if you go to Collection Runner and run your Collection, you can use the Select File button and select your file:

Using a Data File in Postman

As soon as you select the file, you will see that the number of iterations changes to take into account all the rows in your file. Now when you want to push a new set of journey tests all you need is to change that file. Awesome!

Armed with all of this, we can now move to the next challenge. Postman is great but not everyone wants to use the graphical user interface all the time. How about the ability to run a Collection from the command line? That would be great AND would open up lots of options to running Windows Scripts and the like. Well, the news is good. Newman is the CLI (Command Line Interface) for Postman. Setting it up if you are not familiar with Node.js can be a bit daunting, but let’s just make a shopping list first:

  1. Install Node.js (here)
  2. Install Newman using the Node Package Manager
    This step should be as easy as opening the Node Command Prompt (which got installed in your Windows Start Menu in the first part) and typing something like the following:

    Installing Newman

  3. Run Newman from the Node.js prompt to use Postman Runner with your chosen Collection and Data File using the ultra simple command line.

So the last step might look a little like the following – notice that in the command line you specify the Collection (which you must first export to a file from inside Postman) and the Data File (wherever it is – I put both files in the same location for ease of use). And once the job is done, Newman reports back with its own inimitable, Teletext-style display of the output. A word of advice – before exporting a Collection make sure you have saved all the recent changes!

Exporting Ready for Newman

 

Once the Collection is exported, you might run it like this. This is just a simple example. It is possible to also load environment variables and more to fine tune it.

Newman Command Line
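In case the screenshot is hard to read, the command is of roughly this shape – the collection and data file names here are invented for illustration:

```shell
newman run JourneyPlanner.postman_collection.json -d journeys.csv
```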

After a few seconds we get the output from Newman. Note the 4 iterations, and that iteration 3 failed due to an excessively long journey time.

Newman Output in Teletext

Now this is of course a major step forward – running Collections from the command line, feeding in data sets just by using a simple command line switch – these features mean we can really now start to think of automation in our approach to testing this project. But we are not done yet. The output from Newman is pretty ugly and we need to change that. By installing Newman HTML Reporter, we can get a much nicer file. Follow the steps in the link and then change your example command line to something like this:

Note the new command line option -r html (which means, use the HTML reporter) and the --reporter-html-export option (with two dashes) which ensures the output file is stored in the folder of my choice. And the HTML output is so much better:
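Putting those options together, the command might look something like this (the file and folder names are hypothetical):

```shell
newman run JourneyPlanner.postman_collection.json -d journeys.csv -r html --reporter-html-export C:\Reports\journeys.html
```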

Newman HTML Reporter

What a nice document. So far so good. We can spin these collections with just a command line. But wait a minute, what about running tests when you are not in the office – sure, you could write yourself some sort of Powershell Script in Windows. But what if you wanted to run a set of tests every day at 3pm, then again on Tuesdays, then once on the first Monday of every second month…it would get quite hard. And what if you wanted to send the HTML Report automatically to all your colleagues on the testing team…but only if there were failures?

And so, we march onward to the next phase. See you shortly!

Input Validation in an Input Extension – IBAN Validation


A funny thing happened today – I came across a forum post that was talking about exactly what we had been describing to someone else the same day. Odd. These confinement measures play tricks with your mind. Anyway here is the scenario, it’s about input validation.

The customer needs to validate a banking account number in IBAN format in Oracle Policy Automation as part of an HTML Interview. What are the options for input validation?

There are several ways to handle this

  1. If you are just validating IBAN numbers from a single country of origin, then you could certainly use a Regular Expression on your Input Text Attribute. A checksum rule would provide extra validation.
  2. Write a lot of Error or Warning Rules to achieve the same.
  3. But given there are many formats and many countries, if you need something bigger then you had better use some help!
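For a flavour of what the checksum part of option 1 involves, here is a sketch of the standard mod-97 check in plain JavaScript. The shape check below is the generic IBAN pattern, not a country-specific one; a real single-country rule would use a tighter expression.

```javascript
// Hedged sketch: the ISO 13616 mod-97 checksum for an IBAN.
function isValidIban(iban) {
	var compact = iban.replace(/\s+/g, "").toUpperCase();
	// Basic shape check: 2 letters, 2 check digits, then up to 30 alphanumerics.
	if (!/^[A-Z]{2}[0-9]{2}[A-Z0-9]{1,30}$/.test(compact)) {
		return false;
	}
	// Move the first four characters to the end, then map letters to numbers (A=10 ... Z=35).
	var rearranged = compact.slice(4) + compact.slice(0, 4);
	var digits = rearranged.replace(/[A-Z]/g, function (ch) {
		return (ch.charCodeAt(0) - 55).toString();
	});
	// Compute the remainder mod 97 piecewise to stay within Number precision.
	var remainder = 0;
	for (var i = 0; i < digits.length; i += 7) {
		remainder = parseInt(String(remainder) + digits.substr(i, 7), 10) % 97;
	}
	return remainder === 1;
}
```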

There are standalone JavaScript libraries like this one, or you might use the jQuery Validator instead. It comes with IBAN validation as part of the additional methods supplied with it.  Since a lot of people are familiar with jQuery, and since Oracle Policy Modeling already uses it, we decided to opt for that solution for input validation.

We downloaded the validator and the additional files. We ensured that jQuery was accessible to the Project (in short, we clicked the Styles button in the Interview, then the Custom.. button and said OK before pasting the files in the folder). If this was a real-life case we would load the files in the correct order programmatically (jQuery, Validator, then additional stuff) but for this quick hack we renamed the files in alphabetical order to make sure they loaded like A, B and C.

We created an Input attribute, and named it something like this:

Input Validation - Create Attribute

Then we added it to the Screen and hooked up a custom property to make sure the code is executed for the IBAN element.

Input Validation Setup Screen

Then I fired up my trusty Code Generator and created a template Input Extension. I edited the Validate handler and this is the complete input extension. Of course this is the bare bones but it will do for a start :

/*  Generated by the OPA Hub Website 18/03/2020 08:06
Educational Example of Custom Input Extension for Oracle Policy Automation
I will remember this is for demonstration purposes only.
 */
OraclePolicyAutomation.AddExtension({
	customInput: function (control, interview) {
		if (control.getProperty("name") == "xIBAN") {
			return {
				mount: function (el) {
					console.log("Starting name:xIBAN customInput Mount");
					var div = document.createElement("input");
					div.id = "xIBAN";
					div.value = control.getValue();
					div.setAttribute("data-rule-iban", "true");
					el.appendChild(div);
					console.log("Ending name:xIBAN customInput Mount");
				},
				update: function (el) {
					console.log("Starting name:xIBAN customInput Update");
					console.log("Ending name:xIBANcustomInput Update");
				},
				validate: function (el) {
					console.log("Starting name:xIBAN customInput Validate");
					// errorPlacement ensures the standard message is not visible
					var validator = $(".opa-interview").validate({
							errorPlacement: function (error, element) {
								return true;
							}
						});
					var returnvalue = validator.element("#xIBAN");
					console.log("Ending name:xIBAN customInput Validate");
					if (returnvalue === true) {
						control.setValue(document.getElementById("xIBAN").value);
						return true;
					} else {
						return 'Your Message Here - THIS IS NOT A VALID IBAN';
					}
				},
				unmount: function (el) {
					if (control.getProperty("name") == "xIBAN") {
						console.log("Starting name:xIBAN customInput UnMount");
						var xIBAN = document.getElementById("xIBAN");
						xIBAN.parentNode.removeChild(xIBAN);
						console.log("Ending name:xIBAN customInput UnMount");
					}
				}
			}
		}
	}
})



Input Validation

Input Extension – Summary

The basic idea is simple – when the Validate handler fires, pull the form and the input element and validate it as an IBAN number. If it passes, then there is no error message. Otherwise the validate fails and you can tell the user. You can have a more interesting message than this of course.

The Zip Archive is in the OPA Hub Shop. Have fun!

What’s New in Oracle Intelligent Advisor 20A?


The crop of new features in 20A this month can also be filled out with some extra new features that crept into 19D when it was updated about a month after the initial release.

20A General Release

In 20A, the focus is very much on enhancing the connection with Oracle Engagement Cloud. As those who work with it know, up until now getting set up with an Intelligent Advisor interview has required Groovy Script. More importantly, Oracle Engagement Cloud / Oracle Intelligent Advisor integration has lagged behind the others (notably the Oracle Service Cloud, or even Oracle Siebel integrations) in terms of functionality. And finally, in the past it was absolutely awful in performance terms: the GetMetadata operation was renowned, at least where I was working, for taking up to 4 minutes to provide a response.

The performance issues were worked on a while ago, and Oracle have made great strides in that direction, so now it is fantastic to see that the other aspects of the connector are getting some love too:

Dynamic reference data loading – the ability to load in additional data from Engagement Cloud (for example, product catalog or transaction history information) after the user is already part way through an advice experience. Yup, ExecuteQuery comes to Engagement Cloud integrations.

Native Intelligent Advisor control in Application Composer – adding an Intelligent Advisor interview into a subtab of an agent workspace no longer requires Groovy Script. Essentially a “plug and play” drop-in component for your Oracle Engagement Cloud subtab. You can read more about it here.

Connector support for the Case object – Given the broad reach of Oracle Intelligent Advisor (benefits, law enforcement and so on) in the Public Sector and other “case focused” industry use cases, Case records can now be loaded, updated and created directly from Intelligent Advisor interviews. This includes support for child objects of case (contact, household, message, resource and custom child) within the same interview. For further details, you can click here.

As a final, fun bonus – this time not related to Oracle Engagement Cloud – Image Control Extensions come to JavaScript! Very cool, since I have a friend and customer who has been waiting for this for a long time. Thanks to Oracle Intelligent Advisor Development for delivering it. We’ll be showing an example in the coming days. You get stuff like this:

  • getImageSource() – Returns the URL of the image to be displayed
  • getLinkUrl() – (Optional) Returns the URL of the link the user should be navigated to when they click on the image
  • openLinkInNewWindow() – Returns true if the image’s link URL should be opened in a new window
  • getHorizontalAlignment() – Specifies the horizontal alignment of the control
  • getWidth() – Returns the width of the control in pixels
  • getHeight() – Returns the height of the control in pixels
  • getCaption() – Returns the image description (alternate text)
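To give a flavour of how those getters might be consumed, here is a hedged sketch of a customImage extension. The markup-building part is pulled out as a plain function; the extension name and handler shape follow the usual extension pattern, but treat the details as assumptions until the official example appears.

```javascript
// Build the HTML for an image control from the new getters.
// All values ultimately come from the control object at runtime.
function buildImageMarkup(control) {
	var img = '<img src="' + control.getImageSource() +
		'" alt="' + control.getCaption() +
		'" width="' + control.getWidth() +
		'" height="' + control.getHeight() + '">';
	var link = control.getLinkUrl();
	if (link) {
		var target = control.openLinkInNewWindow() ? ' target="_blank"' : '';
		return '<a href="' + link + '"' + target + '>' + img + '</a>';
	}
	return img;
}

// Register only when running inside an interview session.
if (typeof OraclePolicyAutomation !== "undefined") {
	OraclePolicyAutomation.AddExtension({
		customImage: function (control, interview) {
			return {
				mount: function (el) {
					el.innerHTML = buildImageMarkup(control);
				}
			};
		}
	});
}
```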

If you want to learn about Control Extensions, you can read the book.

There are a few other enhancements, notably a versioned authentication in REST. But the OIA team have once again moved the bar higher and I think they deserve a big round of applause.

Thoughts on Installing OPA 19D On Premise


I addressed these thoughts in another forum, but figured I might put them here too, but under a different angle. Recently I discovered, as you do, that the installer for the On Premise Server Components of Oracle Intelligent Advisor 19D (aka Oracle Policy Automation Server Components) had a bit of an issue. As you probably know, it comes with three options (there used to be a fourth I think, in the 2014 timeframe, when the In Memory Analysis Server thing was being introduced).

Installing 19D

You can see them in the top part of the screenshot above, and you can see my issue in the lower section. The installer just dumped me out without any warning, complaining that I hadn’t given the deployment a name. Well, I had not had a chance to even think about it, because the script crashed as soon as I hit Return. Now, the Java magic behind this script has been reverted to “Update 1” rather than “Update 2”, according to the excellent response I got from Oracle. But that is not the purpose of this post.

In spite of the error above, and the errors in the other two options which stopped me from doing anything at all – the second option had the same problem:

And the third option couldn’t work without the database and randomize seed key being present. So I was stuck. This was just a test machine, so I was not under life or death pressure, but I didn’t want to give up as I had to do some work on 19D. So after checking everything was versioned correctly and installed (WebLogic 12c was already running and healthy, Java JDK and JRE installed and in the path, that sort of thing) we decided to do it manually.

The On Premise installation of Oracle Policy Automation can broadly be defined as three steps

  • Create the user and database
  • Populate the database with the tables
  • Populate the tables with the seed data (such as the user admin)

These can be done manually by a combination of the following :

  1. Reading and following the On Premise guide where it provides helpful examples of the creation of the OPA Hub database user. There is little more to do than make any changes to the password or username and just run it.
  2. Running the SQL Script in the /unzippedinstaller/opa/bin/sql folder that creates the database (“create_tables_oracle.sql”). Again, aside from making sure you are connected with the user you just created, just click and go.
  3. Running the SQL Script which injects the seed data (such as the admin user and the basic roles, configuration settings and so forth) which is in the private_cloud subdirectory and is called “seed_data_oracle.sql”.
  4. The final step is to reset the admin password for the future OPA Hub. This uses the admin command (the admin.sh or admin.cmd is present in the unzipped folders in the same location as install.cmd) with the -resetpassword switch. The various options are detailed in the documentation, if you are using Oracle DB don’t forget to use both the dbconn as well as the dbtype and dbuser and dbpass options. If you get a message about admin not being a user, go into the authentication_pwd table (there should only be one record) and change the status flag from 1 to 0 or vice-versa. Commit the changes then run the admin script again with your new password request.
  5. You can now run the third option to create the web applications and manually install them on the WebLogic server using the Deployment option. Don’t forget to create a Data Source pointing to your database before you install (it is documented here). On a test server I always put it as the default datasource as well to save me time.
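For orientation only, the manual steps above might translate into something like the following. Every path, connect string and credential here is invented, and the exact admin switches should be taken from the documentation rather than from this sketch:

```shell
# 1. Create the OPA Hub database user (script per the On Premise guide)
sqlplus sys@//dbhost:1521/ORCL as sysdba @create_opa_user.sql

# 2. Create the tables, connected as the user you just created
sqlplus opahub@//dbhost:1521/ORCL @create_tables_oracle.sql

# 3. Load the seed data from the private_cloud subdirectory
sqlplus opahub@//dbhost:1521/ORCL @private_cloud/seed_data_oracle.sql

# 4. Reset the admin password for the future OPA Hub
admin.cmd -resetpassword -dbtype oracle -dbconn //dbhost:1521/ORCL -dbuser opahub -dbpass ****
```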

Now you should have an up and running OPA 19D :

19D Welcome Screen

There are of course a number of other things you might have to fiddle with or that you can leverage. In my case, there was already a 19D database instance installed some days previously using the original installer, so a clone might have been an easier option.

Now let’s just make something clear. Under no circumstances am I telling you to do this. I’m putting this here because I thought it was interesting and educational. But you must use the installation tools and guide provided. I will not be held responsible for anything or anyone.

What’s New in Oracle Policy Automation 19D?


Another quarter rolls by and the Oracle Policy Automation team have released their latest version. This is the final one for 2019, and who knows what 2020 will bring us? Judging by the conversations at the different Focus Groups this year, I would say “lots of things”! So here is our traditional roundup of What’s New in Oracle Policy Automation 19D!

Don’t forget to reach out to the Oracle Policy Automation Blog team and hassle them to publish the 2020 Oracle Policy Automation Focus Group calendar, so you can start convincing your boss that you need to go there. And believe me, you need to go there. It’s by far the best way to get facetime with the great and the good of the community, both from Oracle and from the customer side. And it’s all in the spirit of sharing and collaboration. No selling!

Whilst I’m on this subject, don’t forget the Early Bird prices for Modern CX 2020 in Chicago run out soon. We’ll be there, so I look forward to meeting as many people as possible!

Back to the subject at hand – what’s new in Oracle Policy Automation 19D? Well, here is the list:

New Hub User Interface in Oracle Policy Automation 19D

There have been mutterings about this for a while so it’s with pleasure that I see the new UI has grown more responsive and more in line with the other Oracle applications in the Cloud:

Whats New in Oracle Policy Automation 19D? 1
You might think that was a bit of a disappointment since it’s pretty similar to the last one, but digging a bit reveals more news:

Whats New in Oracle Policy Automation 19D? 2
The fonts have changed. And…
Whats New in Oracle Policy Automation 19D? 3

So have the role names! And the deployment pages get a refresh too. Someone has been downloading icon sets I think!
Whats New in Oracle Policy Automation 19D? 4
And the detail pages get a wash and brush-up too. I find the metrics very small, however:

Whats New in Oracle Policy Automation 19D? 5

Drilling down on one of the metrics shows the updated screen below. Still no visibility on metrics for Web Services, or any other channel (sigh).
19d - Project Metrics
But there’s more. Wandering around in there, we can notice that the “collection” idea has been renamed the rather more sexy “Workspace”:

Whats New in Oracle Policy Automation 19D Workspace
This all seems to smack of “clearing the way for a bunch of new stuff” but hey, what do I know?

Entity Level Forms

So now if you have a household with 3 individuals applying for something, you can run off Forms for the individual members of the household.

Entity Level Forms

Resubmit Interview Data

Interview designers can now allow Screens to resubmit data. This is only available for Interviews using the Connector Framework.

Resubmit Data in Connector

Pass a cookie parameter in the OAUTH header for embedded interviews

For a web service connection, there is now the ability to name a cookie which will be passed through in the parameters of any Load, Save, GetCheckpoint or SaveCheckpoint request made. This enables customers that authenticate users of their interviews via an OAUTH token passed in a cookie to have that same token passed when the data adaptor is invoked during an interview.

Refresh Seed Data

This is a very interesting one. Suppose you need to refresh the seed data in the course of the Interview. I mean, reissue the load and pull in mapped data like you did at the start? An intriguing prospect, with lots of side effects, for example in the case of mapped entities (and this is taken from the online help):

This means that when seed data is reloaded:

  • any instance that currently exists in the session but not in the seed data will be deleted,
  • any instance that exists in the seed data but not the session will be created in the session, and
  • any instance that exists in both the session and the seed data will be left alone.
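Those three rules amount to a simple reconciliation by instance identifier. Purely as an illustration of the documented semantics (the identifiers below are hypothetical), a sketch in plain JavaScript:

```javascript
// Reconcile session instances against reloaded seed data:
// instances only in the session are deleted, instances only in the
// seed data are created, and instances present in both are left alone.
function reconcileSeedData(sessionIds, seedIds) {
	var seedSet = new Set(seedIds);
	var sessionSet = new Set(sessionIds);
	return {
		deleted: sessionIds.filter(function (id) { return !seedSet.has(id); }),
		created: seedIds.filter(function (id) { return !sessionSet.has(id); }),
		kept: sessionIds.filter(function (id) { return seedSet.has(id); })
	};
}
```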

Well, that started out looking like a modest release, but in fact there are lots of things going on in Oracle Policy Automation and Policy Modeling. Thanks as always to the whole team for another cracking release!

Table Headers : Tabular Layout Trick


Updated 18th March 2020

[Update Start

Thanks to assiduous reader Steven Robert (see the comments) who reached out and pointed out some annoying side effects and a requirement, now I get to revisit this topic after a while away. As luck would have it, I was working on a similar situation to the one Steven describes, and I had not been timely in updating this post. Thanks to him, and here is the updated version of the concept, with explanations.

  1. As the original post mentioned, and Steven pointed out, the selector used in the example is unreliable
  2. Steven proposed a new selector and solved that issue, making it multi-column as well

But the downside is the lack of display when the table is first instantiated. We need something capable of reacting before the think cycle kicks in.

In order to achieve something like this, our “payload” needs to be part of the table that is displayed automatically. So the obvious candidate in this case is the label extension, since the first column is actually a label anyway. You could extend this concept to include other controls, but it would require more heavy lifting, as you would probably end up, if you were unlucky, building an entire Entity Container. We covered that in the JavaScript Extensions book and it isn’t usually a short effort. Anyway, we have a label, so we are cool.

Our label extension has a mount key which fires when the label is displayed. So if the table is displayed, the label code will kick in. So we can be ready as soon as the table is ready. Secondly, we could in theory add several label columns and have custom headers for each of them (you could of course achieve that using non-JavaScript techniques).

Here is the walkthrough based on the previous Project (with the credit cards and visa cards which are derived as a subset of the credit cards).

  1. Add a label in the row in the table and add custom properties. It should display whatever attribute is appropriate.
  2. Generate a standard label extension and edit it a bit.
  3. Add the custom properties to trigger the code when the label is displayed.
  4. Add another label if you want (I ran out of originality but added a second one for demonstration purposes).
  5. Fire it up in the Debugger, and then in the Browser.

I didn’t go all the way to deployment but it looks like it could be elevated into a viable concept. Also, I wasn’t a very good citizen as I didn’t do any checking to avoid my code running every time the label is instantiated – I would most likely check to see if the header already had the text I wanted to avoid setting it again.
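For step 2 of the walkthrough, the edited label extension might look broadly like this. The control name, custom property and header text are my assumptions for illustration, the header-writing logic is pulled out as a plain function, and the registration is guarded so the sketch stands alone. Note it also does the good-citizen check described above, skipping headers that already have text:

```javascript
// Write our chosen text into an empty <th> of the enclosing table.
function setColumnHeader(tableEl, columnIndex, text) {
	var headers = tableEl.querySelectorAll("th");
	if (headers.length > columnIndex && headers[columnIndex].textContent === "") {
		headers[columnIndex].textContent = text;
	}
}

// Register only when running inside an interview session.
if (typeof OraclePolicyAutomation !== "undefined") {
	OraclePolicyAutomation.AddExtension({
		customLabel: function (control, interview) {
			if (control.getProperty("name") === "xHeaderLabel") {
				return {
					mount: function (el) {
						// When the label mounts, the table is ready too.
						var table = el.closest("table");
						if (table) {
							setColumnHeader(table, 0, control.getProperty("headerText"));
						}
					}
				};
			}
		}
	});
}
```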

And finally, of course if you want the Project Zip just leave a note in the comments!

Here is a walkthrough video. Hope it makes sense!

Update End]

Original Post :

There is always much discussion when users first discover the Interview tab. Let’s be honest – not all of the comments are exactly positive. It all feels a bit, well, basic.

There are a number of things that catch you out at first (and indeed, later). So let’s take a moment to study tabular layouts and a common issue.

For this example I’m going to use the same project (Credit and Visa Cards) as the previous post, since that gives us two entities to work with.

Tabular Layout

Let’s consider that you want to display both entities using tabular layout. You create a Screen and set them both to tabular display. But let’s assume that you want to display the Visa Card with a couple of specifics. You want to include the provider of the credit card. So let’s set that up as a Value List and use an attribute on the Credit Card, and infer it on the Visa Card:

Table Headers Tabular Layout Trick 1

So, with that now done, we want to display the Visa Card provider in the Entity Container (as it is an inferred entity, we cannot use an Entity Collect). But we want to display it as a label, as you can see in this screenshot (we added a name to the attribute in step one so we can reference it in our Screen):

Table Headers Tabular Layout Trick 2

Notice how we added a label and used it to display the text of the provider? Using a label ensures three things:

  1. It is read-only
  2. If the user is tabbing from input to input, the cursor will not get stuck in that field
  3. It looks like a plain label rather than a read-only input (which is what we want).

But the downside is that the label does not have a table header in that column, since the Interview designer only adds those for Inputs:

Table Headers Tabular Layout Trick 4

I find it a shame that we cannot put table headers in this “tabular” column, since in HTML a table should have column headers. In fact, if we take a moment to inspect this table in the browser, we note that, annoyingly, a table header cell already exists in the table – it is just empty:

Table Headers Tabular Layout Trick 5

So, we need to get that table header populated with our chosen text. But how? We don’t want to create an Entity Container extension, since that would mean building the whole thing from top to bottom – we only want a tiny change. We have a couple of choices:

  1. Create a Style Extension for the Entity Collect
  2. Create a Label Extension for stealth modification

Let’s try the first option, since it reveals some interesting facts about Styling Extensions. Firstly, get ready by doing the following: change the text associated with your entity in the Interview by double-clicking where the rectangle is, and enter whatever text you would like to display in the missing header.

Then add a compound Styling Extension to your Project. Tabular Containers allow for nested styling, like this:

OraclePolicyAutomation.AddExtension({
    style: {
        tabularContainer: function (control) {
            if (control.getProperty("name") === "xContainer") {
                style: {
                    headerRow:
                        YOUR STUFF GOES HERE
                }
            }
        }
    }
});

Notice that “headerRow” is a child of “tabularContainer”, and notice the line that says YOUR STUFF GOES HERE. Now for an interesting fact about Styling Extensions: they behave, to a reasonable degree, just like Control Extensions. They are really one and the same thing – the main difference, of course, is the set of handlers exposed in Control and Interview Extensions.

Drop jQuery into your resources folder, and then replace YOUR STUFF GOES HERE with the following line:

$("#opaCtl4th0").text(control.getCaption());

Of course, the jQuery selector may be different for you but it is easy to find the “header” I illustrated in the previous screenshots. Open your Project in a real Browser (Ctrl+F5) for debugging and take a look at the results:
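Since that hard-coded id is exactly the part most likely to differ between Screens and product versions, a more defensive variant might locate the empty header cell by position instead. Here is a sketch, with the index logic separated out so it can be exercised outside the browser; the jQuery selector in the comment is an assumption to adapt:

```javascript
// Return the index of the first header cell whose text is empty, or -1 if none.
function emptyHeaderIndex(headerTexts) {
  for (var i = 0; i < headerTexts.length; i++) {
    if (headerTexts[i].trim() === "") {
      return i;
    }
  }
  return -1;
}

// Browser-only wiring, replacing the hard-coded "#opaCtl4th0" selector:
// var headers = $("th").toArray();                                  // assumed selector
// var idx = emptyHeaderIndex(headers.map(function (h) { return h.textContent; }));
// if (idx >= 0) { $(headers[idx]).text(control.getCaption()); }
```

Keeping the search logic in a plain function like this also makes it trivial to unit-test without an Interview session.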

Final Header Result

Our Styling Extension has added the text to the header, drawing it from the Interview Screen Entity Container definition, and we have it where we want it. Of course, you could style it as well.

But it goes to show that Styling Extensions are really not very different to Control Extensions!

Back to Basics : InferInstanceFor

It’s one of those functions that people often ask questions about. At first glance, InferInstanceFor appears to be just another of those Instance functions, but it actually hides something very interesting: the ability to create copies of instances, to a certain degree.

So, copies of instances, huh? Why would I want to do that? There are many reasons, but before we look at them, let’s look at how it works. Start with a simple data model: the credit card as an entity, and the visa card as well. Let’s say that (logically enough) the visa card will host instances of the credit card that are Visa cards.

So if we wanted to actually infer the existence of these Visa cards (as opposed to inferring membership of a relationship) then InferInstanceFor is going to come in very handy.

What we need, in order to be able to do this, is to establish an inferred associative relationship. We need to connect the credit and Visa cards with a relationship, so that we can use that in our upcoming rules.

InferInstanceFor 1

Suppose we enter 3 credit cards into the Debugger. Note that the Visa card is an inferred entity – we are about to add the rules to infer its instances. Here is the rule, at the start of the Word document:

InferInstanceFor Rule

And so, let’s tidy up the Interview and make a nice Screen layout with the two entities on the Screen – the credit card as a New Input and the Visa card as a new Control > Entity Container. I’ve added some labelling and put it into Tabbed layout for clarity:

InferInstanceFor Screen Layout

So now let’s run the Debugger and see the result, with 3 cards entered:

InferInstanceFor Debug 1

Hmm, that’s seriously underwhelming – what’s with the unknown?! The reason is that InferInstanceFor simply creates the “copy” instance; it does not clone the attribute values in any way. So right now you have a Visa card, but there is no information at all in the instance.

That’s why, whenever you see an InferInstanceFor, you are highly likely to see another rule right after it, making sure that some of the data is actually populated. A useful function in this context would be For(), since it is designed for cross-entity reasoning – which you are now doing since you have two entities. A sample rule might look like this:

InferInstanceFor Rule For

Note that this rule assumes your credit card entity has an attribute called the credit card number. I renamed the attribute from the credit card to the credit card number in my project. And of course, the argument separator “,” might be different in your region.
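Since the rule images don’t reproduce here, a rough, unverified sketch of the pair of rules follows. Treat this purely as an illustration of the shape: the relationship texts (“the credit cards”, “the source credit card”) are assumptions, and the exact grammar of InferInstanceFor and For should be checked against the Intelligent Advisor function reference for your version.

```
the visa card = InferInstanceFor(the credit cards, the credit card number)

the visa card number = For(the source credit card, the credit card number)
```

The first rule creates the inferred instances; the second is the follow-up rule that actually copies a value across, using For() for the cross-entity reasoning.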

After a bit more tidying up it looks like this. Nice!

Debug Screen Final

Job done. So InferInstanceFor is useful for creating mirror copies of instances, but you will need to leverage other functions and rules to actually populate the instances you create.

If anyone wants the Zip File, just leave a comment!

Input REST Batch Requests into Debugger

One of the new features introduced in 19C is the ability to use REST batch sessions (or requests, to give them their real name) directly in the Debugger. This is a great leap forward. Until now, at my current client, we had built a tool to translate the REST into XML, but it was still less than optimal.

So you can imagine how excited I was when I saw this new feature arrive in the product. There is, however, one major issue that I still find very frustrating, which is best shown with an example. I have a batch of 10,000 REST Batch cases that have been used in our testing and saved in JSON format. Now I want to open one of them in the Debugger to investigate what is happening. I open the project in Oracle Policy Modeling and rush to the Debugger.

REST Batch

The pop-up window shows that my JSON file has been loaded, and shows me… the case id. Which, from a functional point of view, tells me nothing at all. Most of the testers here would be unable to remember which case represents which testing scenario. What we would have loved (and are going to ask for) is the ability to choose what to display in that window – for example, the identifying attribute from one of the entities being used, which would be more than enough for us to recognize which case it is.
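For context, a Batch Assess request file is a JSON document whose cases carry little more than an `@id`. As a hedged sketch (the entity name `creditcards`, the attribute names, and the outcome name are illustrative, not taken from the actual project), the shape is roughly:

```json
{
  "outcomes": ["visa_card_number"],
  "cases": [
    {
      "@id": 1,
      "creditcards": [
        { "@id": "cc-1", "credit_card_number": "4111111111111111" }
      ]
    }
  ]
}
```

Which illustrates the complaint: when the Debugger lists these cases by `@id` alone, nothing functional distinguishes case 1 from case 9999.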

This might not seem a big deal, but when you have an operations department who simply sends you the file and says it does not work (they don’t necessarily know anything about Oracle Policy Automation apart from how to run it), it can be frustrating working backwards from REST case numbers to scenarios that we can relate to our Test Cases in Excel.

I’m going to be accused of mixing everything up, but it would be nice to have something easier to recognize, or perhaps a parameter that we could change.
