Category: Integration

Intelligent Advisor as Lightning Web Component (LWC)

This post discusses two ways to embed Intelligent Advisor as an LWC into Salesforce Lightning applications. It is a good opportunity to review the two ways to embed an Interview into any modern web application, and to revisit the specifics of passing parameters to the Interview.

The two options are as follows:

  • Embed the Interview using an IFRAME
  • Embed the Interview using the OraclePolicyAutomationInterview object and either the Start or the BatchStartOrResume method

Both are implemented as Lightning Web Components. In Salesforce, it is not possible to execute external JavaScript, even if the location has been added to the Content Security Policy Trusted Sites. The JavaScript file (in this case Interviews.js) needs to be uploaded as a Static Resource and loaded into the component. The same is true for any styles or other resources (images and so forth) that your Interview might need – for example if you have implemented extensions.

The content of an LWC is essentially three different files: a template for the HTML, a JavaScript file for the code and a manifest file for the details of the deployment and where it can be used. In scenario one, the IFRAME, the code is very short and simple. The HTML template uses a variable in the source attribute, and the detail is defined in the JavaScript. The following screenshots assume you have created an SFDX Project in Visual Studio Code, added an LWC and authorised the Org so that you can upload the files. For the demonstration we used a Trailhead instance.

The simple HTML Template for the IFRAME embed

Then the JavaScript for the fullUrl() code.

The JavaScript file with the fullUrl() code.

Note how the selector uses this.template.querySelector() rather than the traditional document.querySelector() (or jQuery, for that matter).
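In case the screenshots are hard to read, here is a minimal sketch of what the JavaScript side of such a component might look like. The hub address, URL pattern, deployment name and class name are placeholder assumptions, and this is only one possible way of wiring the URL to the IFRAME rather than the exact code from the demonstration.

import { LightningElement } from 'lwc';

export default class OpaIframeInterview extends LightningElement {
    // Build the full Interview URL; the hub address, URL pattern and deployment name are placeholders
    get fullUrl() {
        const hubUrl = 'https://yourhub.example.com/web-determinations';
        const deployment = 'MetroJourneyPlanner';
        return hubUrl + '/startsession/' + deployment;
    }

    renderedCallback() {
        // Elements inside the template are reached with this.template.querySelector(), not the global document
        const frame = this.template.querySelector('iframe');
        if (frame && !frame.src) {
            frame.src = this.fullUrl;
        }
    }
}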

Once that is up and running, the manifest file is a doddle and looks like this.

In our case the LWC is available on App, Record and Home pages.

The result is as ugly or as pretty as you want it to be, within the constraints of the IFRAME which nobody really wants to see any more!

We made it ugly so you wouldn’t want to use it

Let’s face it: having spent all that time and money on modern web components and ace software like Intelligent Advisor, it feels a bit disappointing to end up with an IFRAME. So the other option is more enticing. Let’s use the JavaScript library to launch the Interview!

Another LWC is needed and the difference should be immediately visible. In the first instance, the HTML template is a tiny bit more complicated, but not much.

Note the lwc:dom attribute.

So that the LWC framework leaves us alone and doesn’t start to steamroller over the embedded Interview, we add a blank DIV and ensure it is marked as lwc:dom manual so that we can add child content to this DIV without getting an error.

As a prerequisite, we need to ensure that the interviews.js library and any styles we use have been added to the Org as Static Resources. Then we can import them into our code. So we make sure we’ve done that in Setup first.

Interview and InterviewStyle. We didn’t use jQuery for this demonstration.

Now that we have set that up, we can whitelist the relevant sites on both sides, in the Access List (Intelligent Advisor) and the CSP Trusted Sites list (the Org), to ensure we don’t get any CORS or security errors. For the record, it’s probably obvious, but you cannot use mixed mode here: everything has to be HTTPS.

Our LWC looks a little like this, with the static resources loaded (and a couple of other things needed to get our extension up and running).

OpenIA() will be the code that actually calls Intelligent Advisor

The Interview is called using the BatchStartOrResume method of the OraclePolicyAutomationInterview object, for no reason other than the ability to package parameters, seedData and anything else into a tidy object. Recall that this method can be used to launch multiple Interviews on a single page, which is not the case here. For this demonstration we used an Interview that calculates the travel time between stations in the Paris metro.

Open IA
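For readers following along without the screenshots, a hedged sketch of the component's JavaScript is shown below. It assumes the Static Resources are named Interview and InterviewStyle and that the template contains a DIV marked lwc:dom="manual"; the hub URL, deployment name and, in particular, the exact argument structure passed to BatchStartOrResume are illustrative assumptions to be checked against the Intelligent Advisor embedding documentation.

import { LightningElement } from 'lwc';
import { loadScript, loadStyle } from 'lightning/platformResourceLoader';
import INTERVIEWS_JS from '@salesforce/resourceUrl/Interview';
import INTERVIEW_CSS from '@salesforce/resourceUrl/InterviewStyle';

export default class OpaEmbeddedInterview extends LightningElement {
    interviewStarted = false;

    renderedCallback() {
        // Load the interviews.js library and the stylesheet once, then start the Interview
        if (this.interviewStarted) {
            return;
        }
        this.interviewStarted = true;
        Promise.all([loadScript(this, INTERVIEWS_JS), loadStyle(this, INTERVIEW_CSS)])
            .then(() => this.openIA())
            .catch((error) => console.error('Could not load Static Resources', error));
    }

    openIA() {
        // The DIV marked lwc:dom="manual" in the template receives the Interview content
        const target = this.template.querySelector('div');
        // OraclePolicyAutomationInterview is the global provided by interviews.js;
        // the argument object below is illustrative, so check the embedding documentation
        // for the exact properties expected by BatchStartOrResume
        OraclePolicyAutomationInterview.BatchStartOrResume([
            {
                el: target,
                webDeterminationsUrl: 'https://yourhub.example.com/web-determinations',
                deployment: 'MetroJourneyPlanner',
                params: [{ name: 'origin', value: 'Bastille' }]
            }
        ]);
    }
}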

Much of it will look very familiar since it is the standard embedding method. The end result, once deployed to the Org and added to the Home Page (for example), looks much nicer than the IFRAME and plays well in the space allocated, with the Intelligent Advisor styles adapting so that I can see the Screen even in a small view window.

And so, Intelligent Advisor lives in a Salesforce Lightning Web Component. Have a nice day! If you are interested in seeing the Zip of the code, just leave a comment.

Links : Setting Up Development Tools / LWC QuickStart

Continuous Delivery – OPA and Postman, Newman and Jenkins #4

In this part of our series, we will take our existing platform composed of Postman, Newman and Oracle Intelligent Advisor and plug it into Jenkins, to allow us to run lights-out tests according to a schedule, and to enable the testing team to receive emails when the tests have completed.

For reference, the other parts of this series are here: Part 1, Part 2, Part 3. For the record, let’s be absolutely clear about one thing: this is not the only way to automate this process, and there are other tools and platforms that can do the same job. Heck, if you are lucky you will not even have to worry about this, because some other team or department has already thought it through and all you have to do is deliver projects. But you might want to understand the mechanics of the process anyway; or, if you are unlucky (or lucky, depending on your point of view), when the musical chairs stop and you are the one left standing, you will inherit responsibility for this, and at least you will have a head start.

So, off to work. Many of you will know Jenkins, the stable, powerful open-source automation platform that lets us build, deploy or test pretty much anything. The biggest bonuses of Jenkins are that it is free to use, easy to install and comes with an incredibly large set of plugins that extend its functionality in all sorts of directions.

If you have installed Jenkins, which is about as easy as it can be, you should find yourself looking at the basic setup screen:

If you are pressed for time, just accept the suggested plugins first, and then return to the list of plugins later. You will need to add several, if they are not already present. The most important one is the NodeJS plugin, since we want Jenkins to run Newman, and you will remember from the last post in this series that Newman runs on Node.js.

 

  1. Manage Jenkins, Plugin Manager
  2. Click Available to find more plugins
  3. Filter for nodejs
  4. Select the plugin
  5. Install with or without a restart, as you wish, depending on whether you have more plugins to install.

Other important plugins include the Email Extension Plugin, which we will put to use later in this post, and potentially lots of others. You will also need to make sure that you have installed any plugins you want to use with Newman; recall that in the last post in the series you added HTML report generation to Newman. If you didn’t do that yet, you should do it now. There are a few other settings in Jenkins that you need to set up, notably the System Admin Email Address, but most of it should be fairly straightforward and you will find out what is missing when Jenkins complains.

Now you need to tell Jenkins a little more about Newman. In the Global Tool Configuration of Jenkins, via the Manage Jenkins option, scroll down to the Node.js section and ensure you have added the details of your Node.js installation, and that you want to use Newman with it.

Jenkins Setup NodeJS

  1. Add NodeJS
  2. Give it a name (whatever you would like to see in the logs)
  3. Install Automatically
  4. Choose the version that you installed (the installer would have told you that)
  5. Add this to ensure that Newman and the HTML Reporter you used with Newman are available

Save all of that and you are just about ready to use Jenkins for real.

Create a new Freestyle Project in Jenkins and give it whatever name you wish. Then, in the Build Environment section, ensure that you signal the need for Node.js. In the Build Step, choose Execute Windows batch command (unless you are not running Windows, in which case there is a non-Windows equivalent). Add, for example, a command line that runs Newman with Node.js and the HTML reporter, using your data file from the previous post in this series. Fixing the report file name in that command ensures that the file is overwritten each time you run this project; if you don’t, you will end up with lots of files, each using a timestamp as its name, which can get confusing.

Having set up our Jenkins Project, we are going to add a Post Build Action to ensure that your team (or just you) gets an email with the report from Newman. The screenshot below assumes you have added the Email Extension Plugin mentioned above. Note the items below the image.

Jenkins Post Build Step

  1. Without going into too much detail, the extended email plugin provides all sorts of options for creating email groups and deciding which email (there can be many templates) goes to which person.
  2. This is where you add the Newman HTML report. Note that the syntax of the attachment path may not be what you are used to.
  3. The Jenkins build log can also be included with the mail, and can be useful if you have added many build steps and want to see what they did.

At a minimum you might want to set up an email that is sent when there are failures:

You can click the advanced options to set up who receives emails under different trigger circumstances.

Save your project. Now, as a final point, you may wish to schedule these tests to run when you are not in the office. So you can add Build Triggers from the top of the Project Configuration and use CRON-style syntax to decide when this will run.

Jenkins Setup Build Triggers

Now all you have to do is save your Jenkins Project. Of course, if you don’t want to wait until midnight you can always Build Now. You will recall from the third part of this series that my data file had an iteration in it that was set up to cause the Postman test to fail, because the journey time was too long. So, sure enough, I get an email in my Inbox:

Jenkins Email 5

The HTML report is attached to my email, and I can view the details of the failure, which is indeed the iteration 3 from my data file:

So we are now in a position to test, in an automated fashion, our Web Service from Oracle Intelligent Advisor. Although the example given used the REST Assess service, Postman will happily let you run SOAP XML calls as well, so this would be applicable too. In the next part of the series we will look at using Jenkins to test our HTML-based Interview for the same project.

See you soon!

Continuous Delivery – OPA and Postman, Newman and Jenkins #3

Following on from the two previous episodes (part one / part two), the stage is almost set to move to automation, over and above the simple concept of a Postman Collection. In this chapter we will see how to use data files in Postman to make use of CSV or JSON data in a dynamic way, and we will get Newman up and running. So let’s start with the first element.

In our Collection, we currently have one request, which sends information to OPA and gets a journey time and plan back in response. Great. But we want to test our journey planner with more than one journey, and we want to be able to change the journeys easily without having to mess with our request. This is where the ability to load data into a Collection from a text (CSV or JSON) file comes in very handy indeed. In pictures:

  • Create a CSV file with your data in it. For the project used in this example we would need something like this:

Collection Data File

Notice the headers are origin and destination. They will become variables that we need to add to our request.
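Since Postman accepts JSON data files as well as CSV, the equivalent file could also look like this sketch (the station names here are just invented examples):

[
    { "origin": "Bastille", "destination": "Nation" },
    { "origin": "Opera", "destination": "Gare de Lyon" },
    { "origin": "Concorde", "destination": "La Defense" }
]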

  • Make sure that the request is updated

So now my request looks like this:

Request Modified
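To give an idea of what "updated" means here: wherever the body previously contained hard-coded station names, it now references the data file columns using Postman's double-brace variable syntax. The fragment below is only a sketch; the wrapper structure and the journey_time outcome name are assumptions, and the exact body depends on your project's data model and on the assess service you are calling.

{
    "outcomes": ["journey_time"],
    "cases": [
        { "@id": 1, "origin": "{{origin}}", "destination": "{{destination}}" }
    ]
}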

Don’t forget to save your request. Now if you go to Collection Runner and run your Collection, you can use the Select File button and select your file:

Using a Data File in Postman

As soon as you select the file, you will see that the number of iterations changes to take into account all the rows in your file. Now when you want to push a new set of journey tests all you need is to change that file. Awesome!

Armed with all of this, we can now move to the next challenge. Postman is great, but not everyone wants to use the graphical user interface all the time. How about the ability to run a Collection from the command line? That would be great AND would open up lots of options for running Windows scripts and the like. Well, the news is good: Newman is the CLI (Command Line Interface) for Postman. Setting it up can be a bit daunting if you are not familiar with Node.js, but let’s just make a shopping list first:

  1. Install Node.js (here)
  2. Install Newman using the Node Package Manager
    This step should be as easy as opening the Node Command Prompt (which got installed in your Windows Start Menu in the first part) and typing something like the following:

    Installing Newman

  3. Run Newman from the Node.js prompt to use Postman Runner with your chosen Collection and Data File using the ultra simple command line.

So the last step might look a little like the following. Notice that in the command line you specify the Collection (which you must first export to a file from inside Postman) and the Data File (wherever it is; I put both files in the same location for ease of use). And once the job is done, Newman reports back with its own inimitable, Teletext-style display of the output. A word of advice: before exporting a Collection, make sure you have saved all the recent changes!

Exporting Ready for Newman

 

Once the Collection is exported, you might run it like this. This is just a simple example; it is also possible to load environment variables and more to fine-tune the run.

Newman Command Line

After a few seconds we get the output from Newman. Note the 4 iterations, and that iteration 3 failed due to an excessively long journey time.

Newman Output in Teletext

Now this is of course a major step forward: running Collections from the command line and feeding in data sets with a simple command-line switch means we can really start to think about automation in our approach to testing this project. But we are not done yet. The output from Newman is pretty ugly, and we need to change that. By installing the Newman HTML Reporter, we can get a much nicer file. Follow the steps in the link and then change your example command line to something like this:

Note the new command-line option -r html (which means: use the HTML reporter) and the --reporter-html-export option (with two dashes), which ensures the output file is stored in the folder of my choice. And the HTML output is so much better:

Newman HTML Reporter
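As an aside, the same run can also be driven from Newman's Node.js API rather than the command line, which becomes handy once you start scripting things. A minimal sketch, with the file names and paths as assumptions:

// run-tests.js: runs the exported Collection with the data file and the HTML reporter
const newman = require('newman');

newman.run({
    collection: require('./OPA-Journey.postman_collection.json'), // exported from Postman
    iterationData: './journeys.csv',                              // the data file from earlier
    reporters: ['cli', 'html'],
    reporter: { html: { export: './newman-report.html' } }        // where to write the HTML report
}, function (err) {
    if (err) { throw err; }
    console.log('Collection run complete');
});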

The HTML report is a nice document, and so far so good: we can spin these Collections with just a command line. But wait a minute, what about running tests when you are not in the office? Sure, you could write yourself some sort of PowerShell script in Windows. But what if you wanted to run a set of tests every day at 3pm, then again on Tuesdays, then once on the first Monday of every second month? It would get quite hard. And what if you wanted to send the HTML Report automatically to all your colleagues on the testing team, but only if there were failures?

And so, we march onward to the next phase. See you shortly!

Continuous Delivery – OPA and Postman, Newman and Jenkins #2

Following on from the previous post in this series, we now have Oracle Policy Automation and Postman playing nicely together. In this chapter we will use some scripts in our Collection, both to streamline the authentication process and to provide us with some feedback about the execution of our Collection.

So, first things first, let’s add some scripts to our Postman Collection. In Postman there are two sorts of scripts, which you can think of as "before" and "after" scripts. For example, you might run a script to prepare some data, or to authenticate your user with Oracle Policy Automation, before you actually run a request. Similarly, after the test has run, you might investigate an output attribute from Oracle Policy Automation, or look at the response time of your request and compare it to others. That sort of thing. Let’s look at some useful examples. And just before we continue, remember that "before" and "after" scripts are available at both the Collection level and the individual Request level.

  1. Authenticating before the Collection is run

This is a common need: to make sure the Collection can actually run without you having to go and authenticate manually if your 1800 seconds are up. Of course, more sophisticated scripts could store the current expiry and check to see if authentication is needed, but this is just a simple example of how to authenticate. For example, you could paste this into the Pre-request Script of your request or of your Collection:

// Build the OAuth2 client credentials query string (replace YOURUSER / YOURPASSWORD)
var myVar = "?grant_type=client_credentials&client_id=YOURUSER&client_secret=YOURPASSWORD";
console.log("Updating Token");

// Request a new access token from the Hub and store it in the OAuth_Token environment variable
pm.sendRequest({
    url: "http://YOURSERVER/determinations-server/batch/auth" + myVar,
    method: 'POST'
}, function (err, res) {
    pm.environment.set("OAuth_Token", res.json().access_token);
});
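As mentioned above, a slightly more sophisticated variant could store the token expiry and only re-authenticate when needed. A sketch of that idea follows; it assumes the authentication response contains a standard OAuth2 expires_in value and uses an extra OAuth_Expiry environment variable of our own invention:

// Only refresh the token if the stored expiry time has passed
var expiresAt = Number(pm.environment.get("OAuth_Expiry") || 0);
if (Date.now() >= expiresAt) {
    var myVar = "?grant_type=client_credentials&client_id=YOURUSER&client_secret=YOURPASSWORD";
    console.log("Updating Token");
    pm.sendRequest({
        url: "http://YOURSERVER/determinations-server/batch/auth" + myVar,
        method: 'POST'
    }, function (err, res) {
        var body = res.json();
        pm.environment.set("OAuth_Token", body.access_token);
        // expires_in is expressed in seconds (1800 in our case), so convert it to a timestamp
        pm.environment.set("OAuth_Expiry", Date.now() + (body.expires_in * 1000));
    });
}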

This requires you to set up a variable called OAuth_Token in your Postman Environment, and to ensure that your Collection knows to use it. Follow these steps.

  1. Create the Environment Variable by clicking the Eye icon and then Edit. Leave the value blank; your script will populate it automatically. Ignore the other variables in the screenshot; they are not used by this simple script.
  2. Add the Script to manage the Variable.
  3. Add a reference to the new variable in Authentication.

So in pictures, first the Environment Variable to be added:

Postman Variables 3

And the Collection or Request Pre-Request Script Tab:

Postman Script Before

And in the Authentication Tab (note the double braces):

Postman Variable Usage

Now we can launch the Collection and know that the authentication token will be acquired as well, without having to remember to do it manually.

This is the first step towards our automation platform being truly useful. But before we do that, let’s add a couple more things to our Collection. We will add a test script that actually tests something. In my example Project, one of the outputs is the duration of the trip between the two stations. Let’s say that for whatever purpose, we consider a journey of greater than 20 minutes to be a functional failure. So we want to write an “after” script that looks at that for us. And let’s also say that we want to look at the response time and ensure that it meets our internal guidelines. So we might have a script like this:

// Helper: standard deviation of the collected response times
function standardDeviation(values, avg) {
    var squareDiffs = values.map(value => Math.pow(value - avg, 2));
    return Math.sqrt(average(squareDiffs));
}

// Helper: arithmetic mean
function average(data) {
    return data.reduce((sum, value) => sum + value) / data.length;
}

// Only record statistics for successful responses
if (responseCode.code === 200 || responseCode.code === 201) {
    // Append this run's response time to the response_times global (stored as a JSON string)
    response_array = globals['response_times'] ? JSON.parse(globals['response_times']) : []
    response_array.push(responseTime)
    postman.setGlobalVariable("response_times", JSON.stringify(response_array))

    // Recalculate and store the running average and standard deviation
    response_average = average(response_array);
    postman.setGlobalVariable('response_average', response_average)

    response_std = standardDeviation(response_array, response_average)
    postman.setGlobalVariable('response_std', response_std)
}

This script uses three global variables, which you create in the same way as environment variables; one of them is an array holding, for example, the response times from each run of your request. In pictures:

Add an “after” Test Script:

Add the Variables :

Run the Collection or Script and observe the variables are evolving:

And let’s add another test, this time to check that the journey duration is not too long (paste this after or before the existing "after" script).
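The exact script depends on the shape of your response, but a minimal sketch of such a check, assuming the journey duration comes back as a numeric value in the JSON response (the property path below is a placeholder to adjust), might look like this:

pm.test("Journey duration is 20 minutes or less", function () {
    var body = pm.response.json();
    // Placeholder path: point this at wherever your response exposes the inferred duration
    var duration = body.cases ? body.cases[0].journey_time : body.journey_time;
    pm.expect(duration).to.be.at.most(20);
});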

As you can see above, you are able to access the output of the request as JSON and then test a condition against it. In the example above, the test has failed because I chose two stations that are very far apart. This is getting rather interesting. The next step is to use some data that makes our requests a bit more useful, and to automate them. See you soon for the next part!

Kudos to the following pages :

How to Automate OAuth2 in Postman

A Postman script to calculate average response times

Oracle Policy Automation and Siebel

My co-writers over on the Siebel Hub participated in a recent successful online event with OnTheMove Software, who provide mobility solutions for Siebel and other CRM systems. They are good friends of the Siebel and OPA Hub Websites, and it was a pleasure to take part in their popular event. You may recall last year’s event.

Although the subject of the day was very much Siebel, The OPA Hub Website was able to deliver a presentation about the ability of Oracle Policy Automation to “fill some gaps” left by Siebel. To be more specific, Siebel 2018 has recently been released and is a fantastic, elastic modern application with a fully Cloud-aware architecture. All the old legacy artefacts (files like the compiled SRF, web templates and so on) have been removed and the proprietary elements of the architecture replaced with more modern, open-source, cloud-friendly elements.

But the development tools are, at the moment, not keeping pace with this move to the Cloud. And as the Cloud is perceived as more accessible, more friendly (more fun!), users and developers have a hard time reconciling that with the need to provide modern platforms for developing, deploying and using the CRM.

That’s where Oracle Policy Automation steps in. With its incredibly easy, business-friendly natural language approach to putting business policy online and making it available to any application, with its drag-and-drop no-code user interface, with its built-in BI Publisher, and with its agnostic approach to data model integration, Oracle Policy Automation is the solution. When I meet Siebel customers shaking their heads at their thousands of lines of eScript code, when I see them wondering how to accelerate these aspects of development and make them more accessible and more efficient all at the same time, I just want to shout out "Oracle Policy Automation".

Unusually for me, this presentation also manages to squeeze in three quotes from William Shakespeare. So if you are interested in how Oracle Policy Automation can help accelerate your Siebel CRM deployments, just watch this video.

Thanks as always to David at OnTheMove for his impeccable organisational skills.

video

Oracle Policy Automation and Siebel Innovation Pack 16 #6

The final post in this series looks at some of the "extras" that facilitate the integration of Oracle Policy Automation and Siebel Innovation Pack 16. By "extras" I mean other Web Services provided by Oracle Policy Automation, which need to be taken into consideration when designing how these two applications can best work together, but which are not directly related to getting the two applications to integrate using Applets, Integration Components, Workflow Processes and so on. Some of the content that follows is license-dependent, but should be of interest to any Oracle Policy Automation person.

Overview

Given that there are a number of different services to review, this post is necessarily a mixture of many things. To summarise, there are three main areas:

  • Administrative Services : The REST API of Oracle Policy Automation allows the creation of users of all the main types (integration users as well as normal ones), the automation of deployments, and the retrieval of associated information.
  • Execution Services : Assess, Interview and Answer (and the Server service, although it does not really need to be covered here).
  • Batch Execution Service : The REST API for Batch Execution allows for the batched execution of goal determinations.

Together these are referred to as the Determinations API. The API is version-specific in the sense that features are constantly being added (for example, integration user management is new to release 18A), so make sure you are using the correct WSDL file. For a specific Oracle Policy Automation rulebase you can download the WSDL easily, as shown in the videos below.

Assess Service

The Assess Web Service is probably the most famous service from a Siebel developer perspective, since it allows Siebel Enterprise to call Oracle Policy Automation and obtain an XML response (in the manner of a typical SOAP Web Service). It is therefore often used when no user interface is required.

The above video provides a short overview of how to derive the necessary information from Oracle Policy Automation and to use it in standard Web Service fashion. Developers should note that the post-processing of the Response will most likely occur in a Siebel Workflow Process or Script, in order to parse the response and deal with it.

As such, accessing an Oracle Policy Automation rulebase with Assess can be done very simply indeed. If the Oracle Policy Automation rulebase you are working with has a Connection in it (to Siebel or anything else) then you may also wish to use the Answer Service (see below).

Interview Service

The Interview Web Service was heavily used in the Oracle Policy Automation and Siebel Innovation Pack 15 integration, in order to mimic the behavior of the standard Interview using the Siebel Open UI framework. This Service is best suited to applications that need to provide the Interview User Interface in another technology (a Java application, a Silverlight Client, a Visual Studio application or whatever). It has a number of specifics, and developers must manage session control, as the short video below illustrates.

Answer Service

The Answer service is reserved for Projects where there is a Connection object in Oracle Policy Automation, and as such provides a SOAP-based tool to pass data sets to the Project and receive the response. Amongst other things, therefore, it can be used to test the behaviour of an Oracle Policy Automation project when the external application (for example Siebel Enterprise) is not available.

REST API Services

As outlined above, there are in fact two REST API areas of interest : the administrative platform and the Batch Assessment service. Both require OAuth2 authentication and session management.

What’s Left – Oracle Policy Automation and Siebel Innovation Pack 16

So what is there still to do, for the Siebel Developer who has followed all the different posts and videos in this series? Well, of course it is not possible to show everything, so there are some things you will now need to finish on your own, but most of them are entirely non-specific to Oracle Policy Automation and Siebel Innovation Pack 16.

There are of course many different things that you might want to do with Oracle Policy Automation and Siebel Innovation Pack 16, so at the OPA Hub Website we are always happy to hear from our readers with comments and questions: all you have to do is post at the bottom of the article. We obviously cannot run your project from here (but if you want us to, just get in touch!), and you should feel free to contact us with questions, ideas for articles or anything else that is Oracle Policy Automation-related.

As Siebel Developers will know, Siebel Enterprise is now at version 17 and the next big thing, Siebel 18, is expected soon. The good news is that almost all of the steps shown here are completely identical in the newer version, since the changes are architectural rather than functional for the most part. If you come across anything completely different then, again, just let us know. We do plan on providing an update to this post series as and when Siebel 18 is made generally available.

Finally

The OPA Hub hopes you all enjoyed the different posts in this series. For your bookmarks, here are the other posts in the series:

Oracle Policy Automation and Siebel Innovation Pack 16 #5

So, following on from the previous post in this series, where we looked at testing the Load and Save operations using Oracle Policy Automation and Siebel Innovation Pack 16 (as opposed to simple SOAP UI testing which is good, but will only get you so far), this post takes a slightly different turn and investigates two operations that are not strictly speaking required to be implemented.

The definition of the Oracle Policy Automation Connector Framework contains a boolean tag indicating whether checkpoints are enabled for a given Connection. And these checkpoints are the subject of this post. Firstly, what is a checkpoint?

A checkpoint is a point in an interview after which the contents of the Screens (the Controls, for example data you have entered) are saved in a specific format, namely as a Base64-encoded string. This string of course needs to be saved somewhere: for example, in a table in your Siebel database. Once it is saved, it can be used to open the Interview once again, through the integration between Oracle Policy Automation and Siebel Innovation Pack 16, and the session can be resumed. Obviously this has the great advantage of being simpler than trying to save all the data you have into Siebel Business Components, especially given that the Interview might not be complete yet.

So you can think of checkpoints, and their two operations SetCheckpoint and GetCheckpoint, as sort of temporary saves. When you save the checkpoint you do so with an identifier (so, an id as in previous operations). But the process of SetCheckpoint and GetCheckpoint is completely separate from Load and Save : they are two different mechanisms to handle two different business needs.

Here is a screenshot of what it looks like residing in a Siebel Table, which you will learn more about in the videos and presentation:

Oracle Policy Automation and Siebel Innovation Pack 16 - Checkpoints

The use of the Siebel Row Id means that it is relatively simple to create an Applet that sits on top of the Business Component; for example, you might use it as a Child Applet with the Obj Id Val as your key for finding stored sessions for your Customer or whatever it is.

The usage of these stored sessions requires a slightly modified URL to open the Interview, which you will learn about in the videos as well. In both cases (starting or resuming an Interview) a Symbolic URL, or a JavaScript embed, will be enough to call the Interview from the Siebel side.

From the Design perspective, implementing Checkpoints in your Screens is very simple, assuming you have selected a Connection that supports them. For example, the screenshot below illustrates the options available when designing the Interview. Note how you can select the relevant Screens or all of them. Selecting all Screens ensures that the Base64 string is pushed to the storage table after each Screen.

OPA 12 - Oracle Policy Automation and Siebel Innovation Pack 16 Checkpoints 2

Now that you have the details of this new part of the Oracle Policy Automation and Siebel Innovation Pack 16 integration, here are the videos to help you go further, and the links to the other parts of the series.

In this topic, learn about the two optional (but very useful) methods called GetCheckpoint and SetCheckpoint. This presentation explains the prerequisites and pitfalls.

Presentation

In this topic, learn how to implement these methods in Siebel, build them into your Oracle Policy Automation Project, and test and verify their functionality.

Implementation

Links to Oracle Policy Automation and Siebel Innovation Pack 16 Series

Next…

In the next part of this series we will look at other Services available to Siebel developers in Oracle Policy Automation and Siebel Innovation Pack 16.

Oracle Policy Automation and Siebel Innovation Pack 16 #4

Welcome back to part four of our ongoing series about Oracle Policy Automation and Siebel Innovation Pack 16. This post continues with the setup and testing that began three posts ago. For reference, here are the links to the previous parts of the series:

Oracle Policy Automation and Siebel Innovation Pack 16 Load and Save

This particular article continues working on the core data transfer operations, namely Load and Save. I also have a tendency to call the Save operation Submit, because it reminds me that not only must the request be submitted to Siebel to save any mapped out data, but a response needs to be sent back from Siebel to Oracle Policy Automation to, for example, display a message in Oracle Policy Automation confirming that the save was a success (or whatever).

This need for a two-step approach (Save in Siebel and Respond to Oracle Policy Automation) means your Workflow Process is likely to have not only the typical Siebel Operations to update the database but also the typical transformation and response creation seen in the previous operations.

The example Workflow Process for Save will require, therefore, quite a bit of work before it is fully functional. In the video I try to highlight this, but it is worthwhile mentioning the key issues again here:

  • You will need to extract any data from the hierarchy sent to you by Oracle Policy Automation.
  • You might well need to use scripting if the hierarchy you receive has multiple entity instances (for example, the Oracle Policy Automation Project infers multiple vehicles and you want to save each of them in Siebel).
  • You will need to make sure that you create a Response that updates one of the input-mapped, load-after-submit attributes so that the result can be shown in the Interview.

In this video, which follows on from the previous set of SOAP UI tests, you build and troubleshoot your Save operation with Siebel CRM and check for errors. There are lots of places where you will need to put in a bit of work on the example Workflow Processes (since they do not actually save much at all), and more complex (and therefore more interesting) business requirements may require a Business Service approach, namely iterating through multiple instances of data returned to Siebel.

Whilst the videos cannot give you all the details, they definitely will put you in the right direction!

Oracle Policy Automation and Siebel Innovation Pack 16 Load and Save Testing in Siebel

Remember you can find the White Paper and associated files (at time of writing) at this Oracle Website location.

Next…

In the next part of this series, we look at two supplementary operations, GetCheckpoint and SetCheckpoint : whilst a Connection does not have to support these operations, if you plan on allowing users to stop and resume their interview before it is finished then you definitely need these operations. See you next time!

 

Oracle Policy Automation and Siebel Innovation Pack 16 #3

This is the third post in this Oracle Policy Automation and Siebel Innovation Pack 16 series, following on from the first two, which dealt with the "design time" or "metadata" related operations CheckAlive and GetMetadata. If you want to catch up, here are the links to the previous parts.

Oracle Policy Automation and Siebel Innovation Pack 16 Workflow Process

Both of those operations are fundamental to allowing the Oracle Policy Automation Hub to understand the availability of your data source and the structure thereof. Once they are operational, there are two main things to take into account. Firstly, the pattern of Workflow Process plus Inbound Web Service Operation is one that is maintained in every case, no matter what set of data you are retrieving. Secondly, the next stages of the Connection setup are common to many Siebel integrations, but there will be Oracle Policy Automation specifics: in the Load and Save operations you will handle getting data from Siebel to an Oracle Policy Automation Rulebase, and then returning any output to Siebel.

As in the previous cases the Oracle White Paper provides, in the associated Zip file, Workflow Processes and other objects that will be needed. As before, according to your business requirement and technical setup, you will need to edit those Objects in Siebel Tools and make further objects. Changes can be frustrating as you are likely going to be searching the Repository for variable names, or Object references, and sometimes you miss one or two.

In the examples shown in the video presentations and walk-through I have deliberately kept this Oracle Policy Automation and Siebel Innovation Pack 16 overview as simple as possible, for example by eliminating the processing of attachments, and by concentrating on the key steps in the Workflow Processes. So for today we will look at the Load operation. Because this operation will require testing, this post will look at setup and SOAP UI, and the following post will take that a step further and look at testing it with real Siebel data.

The Save (a.k.a Submit) operation is necessarily the most complex operation, dealing with the saving of data in Siebel but also the response back to Oracle Policy Automation – which means taking a request to deal with a response and responding with what feels like a request!

 

Oracle Policy Automation and Siebel Innovation Pack 16 Load And Submit Presentation

Oracle Policy Automation and Siebel Innovation Pack 16  Load and Submit Testing in SOAP UI

Testing

In this topic, take your first steps to testing your Load and Submit in the SOAP UI utility.

 

Oracle Policy Automation and Siebel Innovation Pack 16 #2

Oracle Policy Automation and Siebel Innovation Pack 16 - Hub Connection

Following on from the first post about Oracle Policy Automation and Siebel Innovation Pack 16 a few days ago, this post continues with a series of (hopefully) useful videos about the next steps. Last time, you had just built your Connection in the Oracle Policy Automation Hub and had checked to see if the green light came on. In the video sequence today, you will test both of the design-time methods (CheckAlive and GetMetadata) in your SOAP UI testing tool to ensure that you get something like the correct response.

Testing in SOAP UI can be very frustrating at first. You take the time to download the WSDL from Siebel Enterprise and import it into SOAP UI, fully expecting to work with it immediately. But there are a few traps. Firstly, unless you have switched off the requirement in the Oracle Policy Automation Hub (which would be very unwise in most circumstances), you need to add wsse tags to the Header and provide a user name and password. Secondly, you will probably need to remove some extraneous tags from the SOAP Request. And finally, if your Siebel environment is not up and running and the relevant Workflow Processes are not active, you won’t get much in the way of feedback :).

Presentation

In this brief overview, we talk about the different big-picture steps to set up communication and how to go about it.

Setting Up a Connection for Oracle Policy Automation and Siebel Innovation Pack 16

In this part you walk through the practical steps to build a Connection, add or import the different Workflow Processes and Inbound Web Services to implement the first two operations and get ready to test them.

Build CheckAlive and GetMetaData Operations

This video walks through the technical steps in Oracle Policy Automation, Siebel CRM and SOAP UI to build these two operations according to the White Paper.

Next…

In the next few days, the Load and Submit operations, the core of the integration, will be worked through and examined in Siebel and Oracle Policy Automation terms.
