Author Archive: jturrell

Intercompany Eliminations in PBCS

As part of a recent PBCS implementation, I had to design an Intercompany Elimination solution within a BSO plan type.  This is a relatively uncommon and somewhat complex requirement in a PBCS implementation, and as such, I was pretty excited to blog about it.  Unfortunately, the whole post nearly ended up in the garbage.

Every once in a while, you write a great post, only to find that someone else has covered the topic sooner than you, and better than you.  In my case, that person was Martin Neuliep.  In Developing Essbase Applications:  Hybrid Techniques and Practices, Martin shared a single formula that elegantly calculated Intercompany Eliminations in a BSO cube.  In addition, Martin was scheduled to discuss the topic of eliminations at Kscope in his presentation on June 28th, titled Eliminating the Eliminations Problem in Essbase.

Luckily, I had an ASO solution in addition to my BSO approach.  As it turns out, Martin was planning on sharing several elimination calculation methods in his presentation (some developed by him, and some developed by others).  Perhaps my ASO solution will make the cut?  (hint, hint)  Either way, you should check out his presentation – I will definitely be there.

With all of that said, here’s my approach to intercompany eliminations in an ASO plan type.

Why Eliminate Intercompany Transactions?

Most of my clients create their Budgets and Forecasts in a “fully eliminated” manner.  Within their financial plans, they pretend that certain transactions don’t occur, because these transactions don’t affect the company’s bottom line.  If one subsidiary has a planned (future) transaction with another subsidiary, these clients may not bother recording the transaction in their Budget or Forecast at all.  While this approach simplifies the planning process, it leaves out financial details that may be useful.

When a client’s requirements indicate that these transactions are relevant to the planning process, we need to build Intercompany Elimination logic into our PBCS applications.  The accounting behind these transactions can get pretty complex, but what we’re going to focus on today are the technical mechanics that facilitate these eliminations in PBCS, specifically in an ASO plan type.

So why do we eliminate these transactions?  Because depending on where we’re looking in the Entity dimension, we need to pretend that they never occurred.  As far as most investors are concerned, these intercompany transactions don’t count on a consolidated basis.  Imagine selling a car to your spouse . . . this transaction doesn’t change your combined income or net worth.  This leads us to an interesting question.  When and where do we eliminate these transactions?

The Land of Make Believe

Let’s start with a simple example – an imaginary company that mines raw materials and sells those materials to its manufacturing plants.  These plants then sell finished goods to a distribution company.  All of these entities are part of the same vertically integrated company.  The Entity dimension in such a company might look like this:

Image_1

To facilitate transactions between two entities in an OLAP database, it is generally beneficial to have separate Entity and Customer Dimensions.  The only unusual thing about this design is that many of the “Customers” are internal customers.  As such, our Customer dimension might look something like this:

Image_13

Note that all members under Intercompany Trading Partners correspond with a member from the Entity dimension exactly, but with an “ICP_” prefix.  This ASO solution will not work if there are discrepancies between Entity dimension members and the members underneath Intercompany Trading Partners.

Planning Intercompany Transactions

Intercompany transactions can occur across a variety of accounts within the Income Statement and Balance Sheet.  The simplest example is one subsidiary selling something to another subsidiary within the same company.  Let’s assume that US Mine 1 sells something to Plant 1 for $100.  Our level zero data would look something like this in PBCS:

Image_3

If we were to look at Total Sales for US_Mine_1, we would want to see $100 in sales.  But what if we wanted to see Total Sales for Total_Company?  Assuming that this was the only sale, we would want to see $0.  This is because the transaction must be eliminated at the first common ancestor between the Entity and the Intercompany Trading Partner.  Total_Company is the first common ancestor between US_Mine_1 and Plant_1.  What makes this calculation interesting is that the transaction should NOT be eliminated at ancestors before we arrive at the first common ancestor.  So we would definitely expect to see that $100 in sales show up in the parent member Mining.

The Dreaded “E-Company”

The ins-and-outs of these transactions can get tricky, especially when mixing many transactions together with both internal and external customers.  Developers will likely have multiple sets of accounts being eliminated.  (E.g., Sales & Cost of Sales, Receivables & Payables, etc.)  Ragged hierarchies and alternate hierarchies can add additional complexity.  For this reason, it can be helpful to introduce “E-Companies” into the Entity dimension.  These are basically fake companies where we only calculate and store elimination data.

Adding E-Companies to the Entity dimension might look something like this:

Image_4

Unfortunately, E-Companies can make an Entity dimension convoluted.  If your company structure is particularly large or volatile, E-Companies can create a significant amount of maintenance.  They can also be confusing to end-users who might not understand their purpose.

** NOTE – Most intercompany elimination solutions in BSO databases require E-Companies!

ASO to the Rescue!

One of the nice things about PBCS implementations is that they often include an ASO reporting database.  In fact, some clients never consolidate their BSO databases at all, and instead, simply map their data to an ASO cube that rolls everything up on the fly – no business rule required!  And here’s where things get really awesome – in an ASO database, we can calculate intercompany eliminations without the need for E-Companies.

Here are some things to consider when designing your ASO plan type:

  • Both an Entity dimension and a Customer dimension are required.
  • The Intercompany Trading Partner hierarchy (within the Customer dimension) must match the Entity dimension exactly, with the exception of “ICP_” prefixes. This includes intermediate parents.
  • A “Data Source” dimension of some type is required to separate regular data from eliminations.
  • Account dimensions in ASO databases are automatically set to Dynamic. The Data Source dimension will also need to be dynamic to support member formulas.

The Data Source Dimension

In this solution, all of the logic associated with eliminations lives in a dimension called Data Source (or something similar).  In this dimension, all base (non-eliminated) data is loaded into a member called Amount.

Data_Source

This dimension also includes a special “Do Not Use” system-only type of member.  Here, it is called Do_Not_Use_Elim.  We generally do not want users querying this member.  It’s just a temporary holding spot for storing the inverse balance for any intercompany accounts.  This member can be populated with a procedural calculation.

Image_14

It is important to note that the “Do Not Use” member should be populated in the same member combinations as the original budgeted amount, with the exception of the Data Source dimension.  Remember – this “Do Not Use” member is simply a holding spot.  Users should not query this member.

Abracadabra

The “real” magic happens in the member above called Elim.  In this member, a formula is added that filters when eliminations (stored in the “Do Not Use” member) are displayed and subsequently added to the base data.

Elim_Code

When the Elim member above rolls up to the Consolidated_Amount member, we see that the intercompany sales amount goes to zero.  In other words, it is eliminated.  (See row 15 below)

Image_16

The example above shows our original sale (row 3) and the inverse amount stored in the “Do Not Use” member (row 4).  Rows 7-10 show our original Entity and its ancestors.  We can see that the eliminations are only displayed at the first common ancestor and above (rows 9 – 10).  Finally, in rows 13 – 16, we see the view that most users will retrieve, using the member Consolidated_Amount.  This member takes the base data in the Amount member and layers in the data in the Elim member.  As such, we can see that the sale exists in rows 13 & 14, but is eliminated in row 15 and above.

Wrap-Up

Like most calculations in PBCS (and Essbase in general), there are multiple options for solutions, each with its own pros and cons.  This solution works well against very large databases and has the added benefit of not requiring E-Companies.  Happy Eliminating!

Getting Groovy with the EPM Cloud Migration REST API – Part 2

This post is late.  It’s embarrassingly late.  But it’s finally finished, and I hope you find it useful.

In the previous post, we discussed Oracle’s Migration API for EPM Cloud, and walked through some basic Groovy examples on:

  • Retrieving the API Version
  • Listing Files in the Inbox/Outbox
  • Uploading Files to the Inbox/Outbox
  • Downloading Files from the Inbox/Outbox

The features above are useful for tasks like loading data and metadata into an application because we often have to place files into the Inbox/Outbox before we can reference them with Data Management or as part of a job.

In Part 2 of this post, we will focus on the migration capabilities of the API.  I generally don’t automate full migrations, as they often involve a lot of communication with developers and coordination regarding the specific artifacts being moved.  However, I almost always automate application backups.  Your needs may be different, so we’ll cover it all, including:

  • Listing Application Snapshots
  • Downloading Application Snapshots
  • Uploading Application Snapshots
  • Initiating an LCM Export
  • Initiating an LCM Import
  • Running the User Provisioning Report
  • Running the User Audit Report

Snapshots vs LCM Exports

Part of setting up PBCS involves defining a Maintenance Time.  During this one-hour window, the application is backed up (nightly) and an Application Snapshot is created.  The application snapshot contains everything necessary to restore the application, including data, metadata, forms, security, reports, business rules, mappings . . . you name it.  Only one Application Snapshot is maintained in the Inbox/Outbox.  It gets overwritten every night, so if you want to keep a rolling set of backups, you will likely want to automate and schedule the export of these files.

When we talk about an LCM Export, we are generally talking about a more focused artifact export, initiated by an admin.  For example, when a developer wants to migrate a specific set of artifacts from the TEST instance to PROD.  But technically speaking, an Application Snapshot is an LCM export . . . it’s just the automated nightly export that includes all artifacts and data.

Listing the Application Snapshots

We previously stated that the Inbox/Outbox only maintains the most recent Application Snapshot, so why would we bother listing these files?  There’s only one of them at any given point in time, right?  Well it turns out that when you use the Migration API to list Application Snapshots, you’re really just listing all files in the Inbox/Outbox.  In fact, you may have noticed that the listFiles() function in the Groovy Common Helper Functions uses the URL for “/application snapshots”.

It’s interesting to note that the files in the Inbox/Outbox are classified as either “LCM” or “External”.  LCM files include Application Snapshots and anything exported using LCM.  But curiously, this property also applies to internal Inbox/Outbox directories.  “External” files make up the remaining contents of the Inbox/Outbox.  These are usually files uploaded for data and metadata.

We can see this LCM vs External designation by adding the code circled in red below to the listFiles() function.

Image_16

If we then run the listFiles() function, we should see the current Artifact Snapshot (among any other files in the Inbox/Outbox):

Image_17
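
If you’d rather see what the helper function is doing, here’s a rough raw-REST sketch of the same listing.  The endpoint path and the JSON property names (items, name, type) are my assumptions from the Migration API documentation, and serverUrl, apiVersion and basicAuth are the setup variables from Part 1:

// Rough sketch only; endpoint path and JSON property names are assumptions
def listUrl = new URL("${serverUrl}/interop/rest/${apiVersion}/applicationsnapshots")
def listConn = listUrl.openConnection()
listConn.setRequestProperty("Authorization", basicAuth)

def files = new groovy.json.JsonSlurper().parseText(listConn.inputStream.text)
files.items.each { f ->
    println "${f.name}\t${f.type}"   // the type property is where LCM vs External shows up
}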

Downloading Application Snapshots

Downloading Application Snapshots is one of the more useful aspects of the Migration API, as most clients want to keep a rolling set of files over a given time period.  Because the Application Snapshot in the Inbox/Outbox gets overwritten every night, it’s important to download this file in an automated and scheduled manner.

Downloading an application snapshot is just like downloading any other file; however, there are a couple of “gotchas”.  While all of the Migration API documentation refers to the “Application Snapshot”, the actual file that gets produced nightly in the Inbox/Outbox is called the “Artifact Snapshot”.

Another unusual fact about this file is that it has no file extension, even though it is a .zip file.  If you want to easily browse the contents of this file, you’ll need to add the .zip file extension once downloaded.

Other than those two items, the Application/Artifact Snapshot can be downloaded using the downloadFile() function in the Groovy Common Helper Functions.  Please see the previous post here with some useful updates to this function.

Image_17a
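
If you want those rolling backups, a sketch of the nightly step might look like this.  It assumes the modified downloadFile(directory, fileName) function from Part 1, and the local backup folder is just an example:

// Sketch only; assumes the modified downloadFile(directory, fileName) from Part 1
def backupDir = "C:\\Backups"                        // hypothetical local folder
def stamp = new Date().format("yyyy-MM-dd")

downloadFile(backupDir, "Artifact Snapshot")         // the nightly file has no extension

// add a date stamp and a .zip extension so nightly backups don't overwrite each other
new File("${backupDir}\\Artifact Snapshot").renameTo(new File("${backupDir}\\Artifact_Snapshot_${stamp}.zip"))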

Uploading Application Snapshots

Uploading an application snapshot is similar to uploading any other file to the Inbox/Outbox.  With that said, I have not been successful in uploading a previously downloaded Artifact Snapshot without first updating the file name.  This is because the nightly “Artifact Snapshot” file has a space in the file name, and has no file extension.  Remove the space and add a “.zip” file extension and everything works as expected.  But I have to ask, why would Oracle produce a download file that cannot be uploaded in its original state?  It may have something to do with the fact that Oracle can’t be sure which operating system exists on the target file system, and different operating systems use different file compression programs.

Note the different file name below:

Image_20

Regardless of the filename, the Migration API will recognize that the file being uploaded is an Artifact Snapshot (as opposed to any other file being uploaded to the Inbox/Outbox); however, the file must have a “.zip” file extension.
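
Putting that together, re-uploading a previously downloaded snapshot might look something like the sketch below.  It assumes the modified uploadFile(directory, fileName) function from Part 1, and the folder and file names are just examples:

// Sketch only; rename first (remove the space, add .zip), then upload
def src = new File("C:\\Backups\\Artifact Snapshot")
def tgt = new File("C:\\Backups\\Artifact_Snapshot.zip")
src.renameTo(tgt)

uploadFile("C:\\Backups", tgt.getName())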

Initiating an LCM Export

LCM Exports initiated via the Migration API aren’t especially intuitive; however, once you know how they work, they’re pretty flexible.  I’ll explain . . . in order to execute an LCM export using the Migration REST API, users must first execute a successful export via the PBCS interface.  Once that has been done, developers can re-execute that same export using the API.

My initial reaction to this approach was concern.  What if I set up a process in the PBCS user interface that archives all of my forms?  Will that export contain a hard-coded list of forms, missing any future artifacts I might add to the application?  The answer is no (and yes, depending on how you specify your artifacts).  If I tell LCM to export all of my forms (without selecting any individual forms), then any new forms added at a later date will get picked up in future exports initiated using the API.  If I specify specific forms in the initial selection using the PBCS user interface, then only those specific forms will get exported in the future when using the API.

Let’s look at an example, but before we get started, you will need to add the function exportSnapshot() from the Groovy Common Helper Functions to your script.  Next we will log into PBCS, and select the Application icon.  Once expanded, select the Migration icon.  (Your icons may vary somewhat if your environment hasn’t been updated.)

Image_21

Navigate through the Artifact Categories, and select the objects you want to export.  Remember that selecting an artifact category will allow future exports to capture newly added artifacts, while selecting specific artifacts within categories will limit future exports to just those specific artifacts.

Select the Export button and provide an export name.  I selected “Planning” and named my export “Full_Planning”.  Assuming that I have copied the exportSnapshot() function from the Common Helper Functions for Groovy, I only need a single line of code to re-execute an LCM export:

Image_22
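
For reference, that single line is just a call to the helper function with the name of the export defined in the UI.  I’m assuming here that exportSnapshot() takes the export name as its parameter, so double-check it against your copy of the Common Helper Functions:

// Sketch only; assumes exportSnapshot() accepts the name of an export previously defined in the UI
exportSnapshot("Full_Planning")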

At this point, you should have a fresh LCM export sitting in the Inbox/Outbox.

Initiating an LCM Import

I wouldn’t typically automate an LCM import into an application; however, this could conceivably be useful if you’re refreshing a number of testing or training environments on a regular basis.

To import a snapshot using the Migration API, first copy the importSnapshot() function from the Common Helper Functions for Groovy.  Then add the line highlighted below and run the script.

Image_23
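
As with the export, the call is a one-liner.  I’m assuming importSnapshot() takes the snapshot name as its parameter, so verify the signature in your copy of the Common Helper Functions:

// Sketch only; assumes importSnapshot() accepts the name of a snapshot sitting in the Inbox/Outbox
importSnapshot("Full_Planning")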

A quick check of the Migration Status Report will confirm the success of your LCM import.

Image_24

Running the User Provisioning Report

Running user provisioning reports in an automated manner can be useful for organizations with rigorous IT audit and documentation requirements.  To run this report using the Migration API, copy the waitForCompletion() function from the Common Helper Functions.  You will also need to copy the provisionReport() function from the chapter “User Provisioning Report”.  (This function isn’t currently listed in the Common Helper Functions for Groovy.)

Then add the following line to your script:

Image_25

It is important to note that the User Provisioning Report will be placed into the Inbox/Outbox.  From there it can be downloaded automatically with the downLoadFile() function.

The User Provisioning Report itself will look something like this:

Image_26

Running the User Audit Report

While the User Provisioning Report will highlight all of the roles assigned to a given user (including the group through which they inherited a role), the User Audit Report shows when the user accessed the application and their associated IP address.  This report doesn’t address specific activities undertaken within the application, but rather the fact that the app was accessed.

To run the User Audit Report using the REST API, you will need to copy the userAuditReport() function from the chapter “User Audit Report”.  (This function isn’t currently listed in the Common Helper Functions for Groovy.)

Then add the following line to your script:

Image_27

Like the User Provisioning Report, the User Audit Report will be placed into the Inbox/Outbox.  From there it can be downloaded automatically with the downLoadFile() function.

The User Audit Report itself will look something like this:

Image_28

You may have noticed that both report functions allow the specification of a report type (either “provisionreport” or “userauditreport”).  The documentation suggests that the “provisionReport()” function goes with the “provisionreport” parameter, and the “userAuditReport()” function goes with the “userauditreport” parameter.  This makes sense, but can you run an audit report with the “provisionReport()” function and vice versa?  Here’s what I noted:

Image_29

Wrap-up

So there you have it!  You can now automate backups, migrations and usage reports using the REST API’s.


Getting Groovy with the EPM Cloud Migration REST API – Part 1

Oracle provides specific REST APIs for many of its cloud offerings; however, one API in particular is useful for multiple cloud products – the Migration API.  This API is sometimes referred to as the “LCM” API or the “interop” API.  Regardless of the name, this is the API that provides Lifecycle Management functionality for EPM Cloud products.

If you’ve never worked with any of Oracle’s EPM REST APIs, have a look at my previous post here on the PBCS REST API.  This will give you some context and background on the subject.  We will be using Groovy for all examples.  Steps for installing Groovy can be found within this post.

Using the Migration API, developers can perform the following tasks:

  • Get API Versions and Details
  • List Files in the Inbox/Outbox
  • Upload Files to the Inbox/Outbox
  • Download Files from the Inbox/Outbox
  • Delete Files from the Inbox/Outbox
  • List Available Services
  • Restart a Service Instance
  • Reset a Service Instance
  • Upload/Download Application Snapshots
  • Initiate LCM Imports & Exports
  • Send Feedback Reports to Oracle
  • Generate a User Provisioning Report
  • Generate a User Audit Report

Some of the tasks above are not great candidates for automation.  For example, it would be highly unusual to reset your service instance in an automated manner.  I also wouldn’t recommend sending Oracle feedback in any automated fashion.  They may have some “feedback” for you . . .  With that said, there are some extremely useful features of the Migration API, and we will cover them below.

A Word About the Common Helper Functions . . .

As noted in previous posts, the Common Helper Functions offer a significant head start in automating an EPM environment.  This post will reference many of these functions while illustrating how to put them together into a cohesive script.

** NOTE – Each section below has a link where readers may download the code samples.

Getting Started

After opening the Groovy Console, we will need to import some classes from the groovy.json package.  If you’re unfamiliar with JSON, it’s basically a format, commonly used by REST APIs, for passing structured data such as lists and arrays.  The code below will import the appropriate package (no need to download any additional jar files – these come with Groovy):

Image_1

Next, we must define variables for our EPM environment and authentication.  If you’ve read my previous post on the PBCS REST API, these should look familiar.  You will need to update these with your instance information and credentials.

Image_2
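
As a rough sketch (the URL, domain and credentials below are placeholders, and the variable names are simply the ones I’ll reference in later snippets), the setup might look like this:

// Placeholders only; substitute your own instance URL, identity domain and credentials
import groovy.json.JsonSlurper

def serverUrl = "https://planning-myDomain.pbcs.us2.oraclecloud.com"
def domain    = "myDomain"
def userName  = "first.last@example.com"
def password  = "myPassword"

// EPM Cloud REST calls use basic authentication: identitydomain.userID:password, Base64-encoded
def basicAuth = "Basic " + "${domain}.${userName}:${password}".toString().bytes.encodeBase64().toString()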

First Things First – Determine the API Version

It’s important to note that the API versions for the various EPM products can differ.  The current version for the PBCS API is “V3”, however the Migration API version is “11.1.2.3.600”.  Several functions require the API version, and they will not work if the wrong API version is passed as a parameter.

To get the current Migration API version, we will need the following functions from the Common Helper Functions:

  • getLCMVersions()
  • executeRequest()
  • fetchResponse()
  • fetchJobStatusFromResponse()

Copy and paste these functions into your Groovy script below the EPM Instance Variables, then insert a line to call the getLCMVersions() function.

Image_3

At this point, your code should look something like this.

Run your script and review the results in the Groovy Console output window:

Image_4

At this point, it’s a good idea to define a variable apiVersion, and assign that variable a value within the getLCMVersions() function.
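
If you’d like to see what’s happening under the covers, a raw request for the versions might look something like the sketch below.  The endpoint is the Migration API root, and the JSON property names (items, latest, version) are my assumptions from the documentation:

// Sketch only; JSON property names are assumptions
def versionUrl = new URL("${serverUrl}/interop/rest/")
def versionConn = versionUrl.openConnection()
versionConn.setRequestProperty("Authorization", basicAuth)

def versions = new JsonSlurper().parseText(versionConn.inputStream.text)
def apiVersion = versions.items.find { it.latest.toString() == "true" }?.version
println "Migration API version: ${apiVersion}"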

Listing the Files in the Inbox/Outbox

Now that we have the correct API version, we can perform some of the more interesting functions of the Migration API.  For example, we can list the contents of the Inbox/Outbox.  To do this, add the listFiles() function from the Common Helper Functions to your Groovy script.  Then call the function as follows:

Image_5

Run the script and review the list of files in the Groovy Console output window.  At this point, your code should look something like this.

Uploading Files to the Inbox/Outbox

To add files to the Inbox/Outbox, we need to copy the following four functions from the Common Helper Functions into our Groovy script:

  • fetchPingUrlFromResponse()
  • getJobStatus()
  • sendRequestToRest()
  • uploadFile()

Note that the sendRequestToRest() function leverages a third-party library called JSON-lib.  To use this library, download the java-json.jar file from http://www.java2s.com/Code/JarDownload/java/java-json.jar.zip and unzip it into the C:\Program Files (x86)\Groovy\Groovy-2.4.6\lib directory.  Add the following line (highlighted in yellow) to the top of your Groovy script and you’re ready to go!

Image_6

Out of the box, the functions above will get the job done; however, uploading files from a specific directory produces some undesirable behavior.  For example, if we have a local file called C:\Temp\Test_File.txt, and we upload that file, here’s what shows up in the Inbox/Outbox:

Image_7

Notice that the file path gets baked into the file name.  We didn’t want to upload a file called “C:\Temp\Test_File.txt”.  We wanted a file called “Test_File.txt”, and it just happened to be originally located in the C:\Temp directory (which is irrelevant once the file is uploaded).

We can rectify this situation by modifying the uploadFile() function, separating parameters for the directory and file name.  Modify the uploadFile() function by adding the sections highlighted in yellow below:

Image_8

Next, call the uploadFile() function with two separate parameters for the directory and file name.  Note the double-slashes below.

Image_9
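
In other words, the call ends up looking something like this (the local path is just an example; note the escaped backslashes that Groovy requires inside a double-quoted string):

// Assumes the modified uploadFile(directory, fileName) described above
uploadFile("C:\\Temp", "Test_File.txt")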

Running the script now uploads the file to the Inbox/Outbox with the correct file name.

Image_10

At this point, your code should look something like this.

Downloading Files from the Inbox/Outbox

To download files from the Inbox/Outbox, we need to copy the following function from the Common Helper Functions into our Groovy script:

  • downloadFile()

Developers may find that this function benefits from a couple of key changes.  The downloadFile() function doesn’t allow developers to specify a target directory for the downloaded file.  By adding a parameter for the directory, and updating the saveFilePath, we can specify the target directory.  Modify the function as follows (note the sections highlighted in yellow):

Image_11

Image_12

After successfully running the function, you may notice that Groovy doesn’t “release” the downloaded file.  Trying to delete the file in Windows File Explorer results in the following error:

Image_13

The only way to get past this error is to close the Groovy console completely.  Upon inspection, it looks like the code opens an output stream, but never closes it.  This is easily rectified by inserting the line below (highlighted in yellow) into the function:

Image_14
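
The underlying pattern, sketched in isolation, looks roughly like the snippet below.  The variable names are illustrative; the important part is the final close(), which releases the file handle:

// Illustrative pattern only; copy the HTTP response stream to a local file, then close it
def saveFilePath = "${downloadDir}\\${fileName}"     // downloadDir is the new directory parameter
def fos = new FileOutputStream(new File(saveFilePath))
fos << connection.inputStream                        // Groovy's << operator copies the InputStream
fos.close()                                          // the missing line; without it, Windows keeps the file locked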

After making the changes above, developers should be able to download files from the Inbox/Outbox to any local directory.

Test the downloadFile() function by calling it at the end of the Groovy script.  Note the two parameters used by the function.

Image_15

If you’ve been following along this whole time, that’s awesome!  Your code should look something like this.

Bed Time . . .

In Part 2 of this post we will cover the remaining aspects of the Migration API . . . Artifact Snapshots, LCM and User Reports.  Check back soon – I should have it posted by the end of next week!

One More Thing

I’m speaking at OpenWorld this year!  My presentation on Essbase Hybrid Aggregation Mode will be in Moscone South – Room 303 on Sunday 9/18 at 1pm.

PBCS Pro Tip: Manage Multiple Test Accounts with One Gmail Address

During the testing phase of most Planning implementations, developers need to create test user accounts.  I typically create at least one test user for each security group so I can verify that the correct access has been assigned.  With an on-premises Hyperion Planning implementation, this is easy – simply create user IDs in the Shared Services native directory.  With PBCS, creating bulk test IDs can be difficult, as each user ID requires a unique e-mail address.  If you need 50 test users, should you create 50 fake/temporary e-mail accounts?  Luckily, the answer is no.

The Easy Way . . .

Gmail has a clever feature that allows developers to use the same address (almost) for all test user e-mail accounts, and it involves the “+” sign.  Specifically, the “+” sign and everything to the right of it in a Gmail e-mail address are ignored.  Here’s an example.  Let’s say I want to create one test account per security group, and I have the following groups:

  • Sales_Read
  • Sales_Write
  • Marketing_Read
  • Marketing_Write
  • Executive_Read
  • Executive_Write

I created a free Gmail account with the e-mail address pbcstest@gmail.com.  If I try to use this same e-mail address for all six of my PBCS test accounts, I’ll get an error.  Each account requires a unique e-mail address.  However, I want to manage all of my test accounts from a single e-mail address.  To do this, I need to use a “+” sign in the e-mail addresses of my test users.

Here’s an example.  For the Sales_Read group, I might create an account called Test_Sales_Read.  When I assign an e-mail address to this account, I’ll use pbcstest+sales_read@gmail.com.  Gmail will ignore the plus sign and the text to the right of the “+”.  All confirmation e-mails will be sent directly to the pbcstest@gmail.com account.  I can create as many users like this as I want, as long as the text to the right of the “+” sign is unique for each account. PBCS will treat the e-mail addresses as if they are unique.  My six accounts above might look something like this:

User_Table
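
If you have a long list of groups, you can even generate the addresses instead of typing them.  Here’s a quick Groovy sketch using the group names from above (purely optional):

// Generates one test user name and e-mail address per security group using Gmail's "+" trick
def groups = ["Sales_Read", "Sales_Write", "Marketing_Read", "Marketing_Write", "Executive_Read", "Executive_Write"]
groups.each { grp ->
    println "Test_${grp}\tpbcstest+${grp.toLowerCase()}@gmail.com"
}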

When Oracle Cloud sends the user ID confirmation messages with temporary passwords, they will all be delivered to pbcstest@gmail.com.

On a Personal Note . . .

This feature isn’t just useful for PBCS.  Imagine creating rules within Outlook that route your personal e-mails to folders based upon the following:

  • myaccount+netflix@gmail.com
  • myaccount+bankstuff@gmail.com
  • myaccount+amazon@gmail.com
  • myaccount+travel@gmail.com
  • myaccount+odtug@gmail.com

Let your e-mail OCD run wild!  (and simplify the creation of PBCS test accounts)

Getting Groovy with the PBCS REST API

For many years, developers have had access to a variety of Essbase APIs that allowed them to write their own programs to automate and interact with their Essbase environments. Hyperion Planning, on the other hand, has had pre-built “utilities”, but it never had anything approaching a true API. With the release of Oracle Planning and Budgeting Cloud Service (PBCS), we now have access to the PBCS REST API. So why is this important?

Automation capabilities affect software adoption (aka sales for Oracle). When you think about it, PBCS offers some truly amazing benefits. It’s cost-effective compared to on-premises environments. It’s faster than many on-premises environments. Administrators don’t have to worry about upgrades or maintaining complex infrastructure. PBCS gets new features long before on-premises. PBCS practically sells itself, right? If PBCS is cheaper, better, faster and easier than on-prem, what could possibly make an organization hold on to their on-prem environment?

For many, it’s the thought of having to rebuild the automation and integration that has been built up around their on-prem applications over the years. Many of the tools and techniques for automating an on-prem environment will not work with PBCS. But while the idea of tearing out years of work related to data integration and application automation can be daunting, it is doable. Let’s look at some options.

Automating PBCS

The primary tool used to automate PBCS is aptly called the EPM Automate Utility. This command line utility allows admins to import and export data and metadata, refresh apps, run business rules, etc. It also allows administrators to upload and download artifacts to the PBCS repository (think automated LCM). The EPM Automate Utility has been discussed extensively in other blogs. It’s well documented and easy to use (albeit somewhat limited). We are going to cover something much more interesting.

RESTing in the Cloud

Oracle’s recent cloud offerings are designed to be RESTful. This means that they conform to a REST (representational state transfer) architectural style. More importantly, it means that apps in Oracle’s cloud are accessible with a REST API. As a matter of fact, the EPM Automate Utility was built on top of the REST API for PBCS.

05_PBCS_Automation

So why would a developer choose to build a program in Groovy, Perl or PowerShell if they could simply create a batch file that calls the EPM Automate Utility? Because, like all utilities, the EPM Automate Utility has a limited set of commands and invocation methods.  The PBCS REST API opens up possibilities for creative and elaborate automation and integration, using many different languages.

Let’s Build an Example

There are an infinite number of ways developers can invoke and use the PBCS REST API. We are going to build a simple example that uses Groovy to add a Product to the Vision application in PBCS. Before we get started, this example assumes:

Please don’t be too concerned with that last point. I didn’t know what JSON was before I started working with the REST API for PBCS. It’s basically a data structure for passing lists and arrays between web apps.

Step 1 – Download the JSON Jar File

In order to leverage JSON from within Groovy, developers will need the java-json.jar file. This can be downloaded from a variety of locations. It should be placed in the Groovy library. Mine is here:

C:\Program Files (x86)\Groovy\Groovy-2.4.6\lib

Step 2: Get Familiar with the Common Helper Functions

Oracle provides us with a set of Common Helper Functions for Java, Groovy and cURL. This is nice, because they provide a good head start when using the PBCS REST API. Unfortunately, there are defects in the Groovy samples. For example, one of the functions is supposed to return error details, but uses an incorrect property:

15_defect

Another function omits an important slash. This causes the function to return text/html content instead of application/json content. This was fairly confusing to a new REST API developer!

16_defect

I would personally like to see some comments added to these samples. With all of that said, they are extremely helpful when getting started with the PBCS REST API, and are definitely worth review.

Step 3: Start Coding

Let’s get started by opening the GroovyConsole. The settings below will enhance your Groovy experience:

  • <View>, <Show Script in Output> (Unselect this option as it clutters up your output.)
  • <View>, <Auto Clear Output on Run> (Select this so you only see your latest output.)
  • <Script>, <Allow Interruption> (Enable this so you can cancel scripts that go south . . . )

The settings above are “sticky”, so developers will only have to set them once.

To begin, we’ll need to import two libraries:

  • json.JSONObject
  • json.JsonSlurper

The JSONObject library will allow the script to transmit data in a JSON format. The JsonSlurper library (included when you install Groovy) will allow the script to parse JSON data. Type the code below into the GroovyConsole (or copy/paste from here).

10_Import_Libraries
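
For reference, the two imports might look like this.  I’m assuming the java-json.jar file supplies the org.json package; adjust the package name if your jar differs:

// Assumes java-json.jar provides org.json.JSONObject; JsonSlurper ships with Groovy
import org.json.JSONObject
import groovy.json.JsonSlurper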

Press the Execute Groovy Script button highlighted above. Ensure that the completion notice is successful. If the notice states anything other than “Execution complete. Result was null.” there is a good chance that the java-json.jar file was not extracted into the correct location, or the GroovyConsole was not restarted.

Next, we’ll add some variables that identify our user and PBCS instance:

12_Variables

Be sure to update the URL, domain, user ID and password for PBCS (redacted above). This example will be using the Vision sample application. If using a different application, be sure to update the app name.

Step 4: Encode Your User Credentials

The PBCS REST API expects credentials to be sent in the following format: identitydomain.userID:password. In addition, authorization information is sent in an encoded format. The code below accomplishes this.

14_Authorization
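
A minimal sketch of that encoding step, with placeholder values, looks like this:

// Placeholder values; replace with your own identity domain, user ID and password
def domain   = "myDomain"
def userName = "first.last@example.com"
def password = "myPassword"

// PBCS expects identitydomain.userID:password, Base64-encoded for the Authorization header
def credentials = "${domain}.${userName}:${password}".toString()
def basicAuth   = "Basic " + credentials.bytes.encodeBase64().toString()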

Step 5: Get Some Help from Oracle

Next, we will borrow several bits of sample code from the Common Helper Functions for Groovy. The two functions below read input from our REST requests and slurp it into JSON structures. This sounds complex, however it’s really just reading a long string and placing the data into a more usable format.

20_Fetch_Status

Another important bit of borrowed code is the executeRequest function (also included in the Common Helper Functions for Groovy). I have made some slight modifications to this code to make it more readable while adding some comments. This function calls the two “fetch” functions above, and is responsible for assembling and executing our REST “request”.

25_REST_Request

Step 6: Your Wish is REST’s Command

At this point, we have three functions, all geared towards asking the PBCS REST API to handle a request. Now it’s time to actually construct a request and execute it. We will do this by building a function called addMember. We are going to add a Product to the Vision PBCS application. It is important to note that the parent to which we are adding this member must be enabled for dynamic children. We are going to add our new Product to the parent P_TP3.

30_Enable_for_Dynamic

I wrote the function below to add members to a dimension. For demonstration purposes, we are only adding the required properties (parent and member name). In a “real” application, we would include more properties and they would vary depending on the type of dimension being addressed.

40_add_member

In the end, a REST request needs four things:

  • A URL
  • A Request Type (“GET”, “PUT”, “POST”, “DELETE”)
  • A Payload (usually a JSON data structure)
  • A Content Type

The function above simply assembles these four items and passes them to the executeRequest function.
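
As a rough illustration of how those four pieces fit together, a stripped-down version of the logic might look like the sketch below.  The endpoint path and the JSON property names (memberName, parentName) are based on my reading of the PBCS REST API documentation, the variable names (serverUrl, apiVersion, appName) stand in for whatever you called your instance variables earlier, and the parameter order of executeRequest() should be verified against your copy of the Common Helper Functions:

// Rough sketch; verify the endpoint path and payload property names against the PBCS REST API docs
def dimension = "Product"
def parent    = "P_TP3"
def member    = "P_999"

def url = "${serverUrl}/HyperionPlanning/rest/${apiVersion}/applications/${appName}/dimensions/${dimension}/members"

// the payload carries the minimum member properties: parent and member name
JSONObject payload = new JSONObject()
payload.put("memberName", member)
payload.put("parentName", parent)

// URL, request type, payload, content type: the four items handed to executeRequest()
executeRequest(url, "POST", payload.toString(), "application/json")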

We call the addMember function as follows (passing the dimension, parent and member name):

50_Add_Member

Step 7: Test the Code!

Before we start our test, let’s look at the Product dimension in the Vision app. Here we see that our parent member P_TP3 has four children.

60_Before_Code

Our function should add a new child called P_999. Let’s run our script and see what happens! Press the Execute Groovy Script button.

65_Run_Script

Our script output says that the member was added to PBCS. Let’s look in Planning and make sure.

70_Mbr_Added

What happens if we try to add the member again?

75_Already_There

As expected, PBCS won’t let us add a member twice. If you don’t receive the error details above, you probably didn’t fix the defects I showed you with the sample code.

Summary

This was a very simple demonstration of the PBCS REST API, but there are some complex use cases for this type of automation. Many other sites on the web are RESTful. Imagine a process that builds hierarchies automatically based upon data from other websites. What about a set of functions that automatically update FX rates in your Planning apps? With the PBCS REST API, you can get very, very creative.

Essbase Pro Tip: Compare Outlines with Groovy

I’ve had a recurring challenge at several of my more recent clients. They have had lots of apps, lots of environments, and lots of developers working on different initiatives at the same time. So when it came time to move a cube from DEV to TEST, or TEST to PROD, I was always a bit nervous. I wanted to fully understand the differences between my outlines across environments before I migrated anything, so I set out to build an Outline Comparison utility.

Reinventing the Wheel . . .

I am, of course, not the first person to attack this challenge. Terry Ledet at US-Analytics has a very sophisticated and fast utility that leverages a relational database. I’m hoping he’ll post a blog about his approach soon. Doug Bliss has a solution that appears to leverage the VB API. Unfortunately, this API is no longer supported as of Essbase 11.1.2.4. Evgeniy Rasyuk has a utility written with the Java API, however it requires a commercial license. Other people have used a combination of the Applied OLAP Outline Extractor and file comparison utilities. While useful, this approach produces less-than-readable results when there are a variety of differences across cubes.

In the end, I wanted something that met my specific needs and would allow others to leverage my code. I wanted a solution that had:

  1. Open Source Code – use it and modify it however you want
  2. No Cost – free for everyone
  3. Straightforward Results – simply list the differences between two outlines
  4. Simple Architecture – no relational database required

Note – Readers may download the latest version of the Outline Comparison utility here.

I am Not a Java Developer . . .

The Java API is clearly a part of Oracle’s path forward for Essbase, and this makes it a logical choice for developing a utility of this sort. However, I’m not a Java developer. But I was lucky enough to co-edit Cameron Lackpour’s first book Developing Essbase Applications. And in this book, Joe Aultman wrote a chapter on automating common Essbase activities with Groovy. Now, I’ve always had a lot of respect for Joe’s Essbase knowledge. But to be completely honest, I thought his love of Groovy was a bit . . . eccentric. After all, how many people are actually going to write Java API code to automate Essbase tasks? Well, it turns out I was wrong, because Groovy is really easy to use and very powerful when combined with the Essbase Java API. Groovy provided the “training wheels” I needed to build a Java-based Outline Comparison utility.

What is Groovy?

Groovy is a programming language that leverages the Java platform and takes a lot of the hard work out of developing Java apps. While I’m probably oversimplifying, it’s basically a “crutch” for Java. Rather than go into the specifics of Groovy, I’d encourage you to read Joe’s chapter in Developing Essbase Applications. I’m going to jump into the good stuff – let’s build something.

Download and Install Groovy

Click here to navigate to the Groovy distributions. For Windows, select the “Windows Installer”, circled in green below.

00_Download

Download and launch the installer.

Select your preferred language.

01_Language

Select OK.

02_Setup

Select Next.

03_Agree

Select I Agree.

04_Options

Perform a Full installation. Select optional components. (I selected them all.)

05_Location

Tell Groovy where to install. Select Next.

06_Shortcuts

Select Install.

Groovy will begin installing.

08_Install_Complete

Look for the word Completed. Select Next.

09_Env_Variables

Configure Environment Variables and Path. Select Next.

10_File_Assoc

Configure File Associations. Select Next.

11_Complete

Select Finish.

In your applications, find the shortcut to Start GroovyConsole. Launch Groovy with this shortcut. A blank GroovyConsole will be displayed.

13_Groovy_Console

The Groovy Console

The Groovy console is pretty straightforward. To make sure things are working after the install, we’ll write a simple program below that outputs the text “Is this thing on?”.

Console_Features

Enter the text println “Is this thing on?” in the white script section. Press the Execute Script button. If the text “Is this thing on?” was printed in the yellow output section, you’re ready to start building your Outline Compare Utility.


Developing the Outline Compare Utility

The basic steps for building an Outline Compare Utility are as follows:

  1. Import the appropriate Essbase API files
  2. Read variables from a config file
  3. Define output file
  4. Connect to Cube #1 and Cube #2
  5. Open the Essbase Outlines
  6. Loop through all members in both cubes
    1. For Members in Cube #1 . . . Make sure they exist in Cube #2
      1. For Members in Both Cubes . . . Make sure properties are the same
      2. For Members in Both Cubes . . . Make sure member order is the same
    2. For Members in Cube #2 . . . Make sure they exist in Cube #1
  7. Close and disconnect

Importing the Essbase API Files

To take advantage of the Essbase Java API when using Groovy, we need to help Groovy find the appropriate jar files. These files will not exist on your computer unless you have installed the Essbase client software.

The location of the Essbase Java API jar files will vary depending on your installation options. On my laptop, they were located in C:\Oracle\Middleware\EPMSystem11R1\common\EssbaseJavaAPI\11.1.2.0\lib.

Copy the files below into the C:\Program Files (x86)\Groovy\Groovy-2.4.5\lib directory.

  • ess_es_server.jar
  • ess_japi.jar
  • ess_maxl.jar

Please note – there are multiple ways to handle external jar files with Groovy. This is one of several methods.

Now we can import the jar files into our Groovy script with the code below:

15_Import_JAR_Files
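
For reference, the import block can simply pull in the main API packages.  Wildcard imports keep things simple, though you can certainly list the individual classes instead:

// Essbase Java API packages used by the outline compare
import com.essbase.api.base.*
import com.essbase.api.session.*
import com.essbase.api.domain.*
import com.essbase.api.datasource.*
import com.essbase.api.metadata.*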

Create a directory for your program, and save the file as OtlCompare.groovy.  (Place your properties file in the same directory.)

The Config File

I didn’t want to modify my code whenever I wanted to compare a new set of cubes. But I wasn’t ready to build a GUI to prompt users for login information, cube names, etc. This is something I may add at a later date. Instead, I decided to store this information in a config file and have the program read that file. Luckily, Groovy has a utility called ConfigSlurper() that reads config files. I simply created the file with the variables I wanted, defined the file and told ConfigSlurper() to go get my variables.

My Config file looks like this:

14_Config_File

Please note that I placed all of the files related to this application in the logDir referenced above, including the Groovy source code, properties, and output. You will need to modify this location.

Next, I updated my Groovy script to read the config file as follows:

15_Read_Config_Info

On line 12 above, we see the config file being defined. On line 13, ConfigSlurper parses the file. After that, we can reference any of the variables in the config file by prefixing the variable name with “config.”. We can see this on line 16, where we reference config.logDir. logDir was the location for the log file that we defined in our config file. This really highlights the beauty of Groovy. With two lines of code, we processed an entire config file, storing 13 separate variables.

On line 19, we create the log file. We will append to it in future steps. Starting on line 21, we write the current time to the log file.

Note – appending “\t” inserts a tab into a text file.

Note – appending “\r\n” inserts a carriage return into a text file.
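
To make the mechanics concrete, here’s a pared-down sketch of those steps.  The file locations and property names mirror my config file, so adjust them to match yours:

// Pared-down sketch of the config and log file setup
def configFile = new File("C:\\Groovy\\OtlCompare\\OtlCompare.properties")   // example location
def config = new ConfigSlurper().parse(configFile.toURI().toURL())

def logfile = new File(config.logDir + "\\OtlCompare_Log.txt")
logfile.write("Outline compare started:\t" + new Date().toString() + "\r\n")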

Running this script should produce output similar to the image below. It should create a new log file whose location is defined by your config file:

16_Write_to_Logfile

The log file should look like this:

17_Logfile

Note that we are sending the output to both the console (using println statements) and the log file (using logfile.append statements).

Connect to Essbase and Open Outlines

The lines below initialize the Essbase Java API and connect to Essbase. Note the use of variables defined in the config file. Any variable prefixed with “config.” is defined in the config file.

18_Connect_to_Cubes
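
In sketch form, the sign-on and outline-open sequence for the first cube looks roughly like this (repeat for cube #2; the config property names are mine):

// Sketch of the sign-on and outline-open sequence for cube #1
IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION)
IEssDomain dom = ess.signOn(config.user1, config.password1, false, null, config.provider1)

IEssOlapServer srv1 = dom.getOlapServer(config.server1)
srv1.connect()

IEssCubeOutline otl1 = srv1.getApplication(config.app1).getCube(config.db1).openOutline()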

Now that the two outlines are open, we can begin looping through the member names comparing properties.

Looping Through Members

To loop through the members in the outlines, we will create a function called checkMembers. What makes this function interesting is that it is recursive – it calls itself. The function starts by looping through the dimensions. It then loops through the children of each member until there are no more members to process, working its way through the entire tree.

We first loop through Outline #1, checking to see if each member exists in Outline #2. If the member exists in both outlines, we check to make sure the properties are the same. Anytime we run across a mismatch or missing member, we make a note in the log.

Next we loop through the members in Outline #2, checking to see if each member exists in Outline #1. But this time, if the member exists in both outlines, we don’t bother checking the properties, because we would have already done this in the first loop. Instead we’re simply looking for extra members that exist in Outline #2, but not in Outline #1. The function has a parameter that allows us to skip the member property check.

There is a significant amount of code in this section. Rather than list all of the code, readers can download the entire OtlCompare.groovy file here.
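
To give a feel for the shape of that recursion, here’s a heavily simplified sketch.  It only checks existence and one property (data storage), while the full script in the download checks much more, and the log messages are just examples:

// Heavily simplified sketch of the recursive member walk
def checkMembers(IEssMember parent1, IEssCubeOutline otl2, File logfile, boolean checkProps) {
    parent1.getChildMembers().getAll().each { child ->
        def name = child.getName()
        def match = null
        try {
            match = otl2.findMember(name)
        } catch (EssException e) {
            // some API versions throw here when the member doesn't exist in the other outline
        }
        if (match == null) {
            logfile.append("Missing in outline 2:\t" + name + "\r\n")
        } else if (checkProps && child.getShareOption().toString() != match.getShareOption().toString()) {
            logfile.append("Storage mismatch:\t" + name + "\r\n")
        }
        checkMembers(child, otl2, logfile, checkProps)   // recurse into this member's children
    }
}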

Close Outlines and Log Off

After we have looped through the members, we need to close the outlines and log off from Essbase.

19_Close_Logoff

In this example, I also write the ending time to the log and calculate the elapsed time for the procedure.

Final Thoughts

I’m really happy with the results of this script. It simplifies the process of comparing outlines, making it faster and more accurate. With that said, there’s always room for improvement. The script works well against small and medium sized outlines. But when I ran it against a pair of outlines with 100,000+ members each, it took almost 30 minutes. I’m sure I can make it faster. I’d also like to add support for checking all alias tables, attribute dimensions, non-unique member outlines, etc. Lastly, when the script flags members as missing, it then flags other members as “out of order”, when they really aren’t in the wrong order.  Their “previous member” or “next member” simply isn’t there. I should be able to clean this up.

In the end, it does exactly what I needed it to do – identify differences between outlines. And it does so in a tab-delimited file that can easily be sorted and filtered in Excel.

20_Results_in_Excel

EPM Pro Tip: Build & Maintain Security in Excel

Agatha Christie once argued against the notion that necessity is the mother of invention. She said, “Invention, in my opinion, arises directly from idleness, possibly also from laziness – to save oneself trouble”. This is the guiding principle I follow when designing and building security models for Hyperion Planning and Essbase. So how can a bit of laziness take the “trouble” out of EPM security? By leveraging Excel and letting formulas do the heavy lifting. This is an approach I’ve been using for over a decade, and I’m certain I’m not the only one. Although some of the tools have changed (think LCM), the general approach has been the same.

Step 1: Create a user-friendly, table-like input worksheet where administrators can enter security information.

Step 2: Create separate “export” worksheets for each of the basic components of EPM security (Shared Services, Essbase, Planning, etc.).

Step 3: Take the content from the input worksheet(s) and link it to the “export” worksheets with formulas, placing everything into the correct format.

Step 4: Import security & check for errors.

Process

Show & Tell

Let’s look at a simple example. Keep in mind that based upon your particular requirements, this approach can have a lot of variations. For example, you may not be able to create your own groups, because your organization prefers groups to be maintained in Active Directory. In our example, we’ll use native Shared Services users and groups. The same principles apply . . . you’ll just have to tweak things a bit.

In our example, our administrator has to maintain security in a Hyperion Planning application and a native Essbase reporting cube. The applications both have the same users, and they have similar security requirements. We’ll use my Starbucks Hyperion Planning demo. It contains all of the global Starbucks, Teavana and Seattle’s Best Coffee locations. (The locations are real, but everything else is made up . . .) We’ll create groups for the 12 main countries/regions in which Starbucks operates.

9-29-2015 9-00-25 PM

In the examples below, an Excel workbook will be highlighted.  The full workbook can be downloaded here.

The Simple Input Worksheet

In our “simple input worksheet”, we will need some basic information. We’ll need to know:

  • What are the groups with access to the EPM products?
  • What members can these groups access?
  • What is the level of access?
  • Who is in the groups?

Here is what our simple input worksheets look like. In our example, we will create 24 groups (one read and one write for each country/region). We will also add one test user to each group. We will assume that all groups have access to both Planning and Essbase.

What are my groups and what can they access?

9-30-2015 12-48-33 AM

Who is in my groups?

9-29-2015 9-22-59 PM

The Lazy Bit

Now that we have collected the basic information we need, we can begin to create our outputs. These are the files that we will import into Shared Services, Essbase and Planning to actually build our EPM security.

We’ll start by creating an LCM import file that creates our groups. However, we may not know how to format this file. How do we get around this? Follow these steps:

  • Manually create one native group in Shared Services.
  • Manually add a user to the group.
  • Manually provision the group.

I provisioned my sample group like this:

Provision

Next we need to export the security we just created manually in Shared Services. This will give us an idea of what the Shared Services import files should look like.

Here’s what we will export from Shared Services in LCM:

9-29-2015 10-03-25 PM

Note that even groups and users that were created in Active Directory will be exported when the “Native Directory” above is selected, assuming they have been provisioned.

Select the Export button. Once the export is complete, browse to the file under the File System. Right-click on the file and select Download. Save the file to your desktop and unzip it. The following file structure will be available:

9-29-2015 10-12-00 PM

If we edit the Groups.csv file, we can immediately see the format of the file we need to create that will ultimately be used to create the new native groups in Shared Services. In this file we see the group that was previously created manually and we see the test user that was assigned to the group.

9-29-2015 10-14-08 PM

** Note - Leave the internal_id above blank when creating the new import file.

** Note - If you’re using Active Directory users or groups, your providers will be slightly different.

Create the Shared Services Group Import File

Using the same basic format as our exported Groups.csv file, we can create a new file (within our EPM Security Workbook). This file will create our groups and associate our users with the groups.

It’s important to note that many of the cells below are populated with simple formulas that reference the Groups_and_Entities or the Groups_and_Users worksheet. Note the formula circled below.

9-29-2015 11-01-23 PM

This file can either be saved as the Groups.csv file (in the previously unzipped folder structure), or the administrator can copy and paste the results into the Groups.csv file (the one previously exported from Shared Services). Then, simply zip the files back up (starting at the top of the LCM file structure).

9-29-2015 11-05-44 PM

Upload and Import the Zipped Shared Services Files

From within Shared Services, right-click on the File System. Select Upload.

9-29-2015 11-08-05 PM

Browse to the zipped Shared Services files. Select Finish, then OK.

Next, expand the file within the Shared Services File System. Select the Groups.

9-29-2015 11-11-55 PM

Select Import. Ensure that the groups were created successfully and that the users were added as expected.

9-29-2015 11-16-52 PM

Provision the Groups

Now that the groups are imported and the users have been added to the groups, it’s time to provision the groups. Navigate to the Assigned Roles directory in the previously exported LCM file.

2015_09_29_SS_Security\HSS-Shared Services\resource\Native Directory\Assigned Roles\Planning

Modify the Planning application file to provision each of your groups, again, using formulas to reference the groups in your Groups_and_Entities tab. Pay special attention to the role numbers. (This is why we create a Group and User manually first, and then export the file. This way, we can copy items such as the role number – something we otherwise would probably not know.)

9-29-2015 11-24-10 PM

Re-zip the Shared Services LCM files, upload the file and import the Planning Assigned Roles.

9-29-2015 11-29-49 PM

At this point, all of the groups should be provisioned as Planners.

9-29-2015 11-34-37 PM

Repeat this process and provision the groups for Essbase and FR (based upon the files previously exported containing our manually created test group). Groups must be provisioned in Shared Services for Essbase and Planning before application-specific security can be applied.

Create Essbase Filters and Grant Them to Groups

Woo hoo! At this point, we’re finished with Shared Services and LCM. All of the groups have been created. Users have been added to the groups. All of the groups have been provisioned. And all of this has been done by importing files built based upon simple formulas.

Next we need to create our Essbase filters. For this, we will use Excel formulas to craft MaxL statements. We can paste these statements directly from Excel into the MaxL command prompt.

Let’s start with the basic MaxL that we want in our Security workbook. The MaxL syntax can be found here.

Filter

Note the common practice of locking down a dimension with “no_access” and then selectively opening it up with read or write lines.
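
To make that pattern concrete, here’s a minimal sketch of the kind of statement we’re after (the group name US_East_Read and the entity US_East are hypothetical; the application and database, SB_ESB, match the formulas that follow):

create or replace filter SB_ESB.SB_ESB.f_US_East_Read
  no_access on '@IDESCENDANTS("Location")',
  read on '@IDESCENDANTS("US_East")';

grant filter SB_ESB.SB_ESB.'f_US_East_Read' to 'US_East_Read';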

If we paste the MaxL into a MaxL command line, we’ll see that the filter is created and granted to the specified group. But what if we want to create these commands in bulk in Excel? We simply use formulas with the concatenate function.

Using Excel formulas, we can craft the following file:

9-30-2015 12-57-21 AM

The MaxL above is broken up into four columns. These columns have the following formulas:

Column A: =CONCATENATE("create or replace filter SB_ESB.SB_ESB.f_", Groups_and_Users!A3)

Column B: =CONCATENATE(" no_access on '@IDESCENDANTS(""Location"")', ")

Column C: =IF(RIGHT(A3, 4) = "Read",CONCATENATE(" read on '@IDESCENDANTS(""", Groups_and_Entities!B3, """)'; "),CONCATENATE(" write on '@IDESCENDANTS(""", Groups_and_Entities!B3, """)'; "))

Column D: =CONCATENATE("grant filter SB_ESB.SB_ESB.'f_", Groups_and_Entities!A3, "' to '", Groups_and_Entities!A3, "';")

Note the use of spaces and quotes in the formulas above. Also note that by placing the words “read” or “write” in the actual group names, we can reference this in the formulas when building the filters, thus granting either read or write access.

Once our formulas are sorted, we can simply paste the MaxL directly from Excel into a command prompt. Our filters are immediately created and granted to all of our groups. Subsequent changes are as easy as a copy/paste into MaxL.

9-30-2015 12-54-20 AM

Create a Hyperion Planning Security Import File

The last step in our process is to create the file to import security into our Hyperion Planning application. Again, we will use Excel and basic formulas. As with LCM, it helps to add a minimal amount of security to Hyperion Planning manually and then export a sample secfile.txt file; this shows us the exact format the import file requires.

To export a Planning security file, navigate to the Planning directory on the server:

Oracle\Middleware\user_projects\epmsystem1\Planning\planning1

Execute the following command (update the app, user ID and file location):

ExportSecurity.cmd /A=SB_BSO,/U=admin,/TO_FILE=C:\\temp\\SecFile

Review the security export file.

Our test security export contains the one piece of security information we assigned manually in Hyperion Planning:

US_Read,US,READ,@IDESCENDANTS

We can now craft our full import file using this same format.

In Excel, we can build a file like this:

9-30-2015 1-37-40 AM

Here is the Excel formula that concatenates this information from the Groups_and_Entities tab:

=CONCATENATE(Groups_and_Entities!A3, ",", Groups_and_Entities!B3, ",", Groups_and_Entities!D3, ",", Groups_and_Entities!C3)
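
Copied down for every group/entity row, that formula produces one line per group in the same group,member,access,relationship layout as the sample export above. With hypothetical group and entity names (and assuming READWRITE is the write-access flag accepted by your version of ImportSecurity), the output looks something like this:

US_East_Read,US_East,READ,@IDESCENDANTS
US_East_Write,US_East,READWRITE,@IDESCENDANTS
Canada_Read,Canada,READ,@IDESCENDANTS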

We can then copy the contents of this file into the secfile.txt in the planning folder. Unfortunately, the ImportSecurity utility must generally be run from the server, in the following directory:

Oracle\Middleware\user_projects\epmsystem1\Planning\planning1

Upload the security with the following command:

ImportSecurity.cmd "SB_BSO,admin,SL_COMMA,0,"

Check the ImportSecurity.log in the following directory:

Oracle\Middleware\user_projects\epmsystem1\diagnostics\logs\planning

Look for something like this:

9-30-2015 1-40-01 AM

In Summary . . .

Building security manually is a tedious, time-wasting exercise. And in the end, administrators are left with a manual process to maintain. By centralizing security development in an Excel workbook, developers can create exports that feed the various EPM components. While these exports require different formats for LCM, Essbase and Planning, the mechanics that produce the files are similar: developers use basic Excel formulas to craft the files into the correct import formats and then simply import them.

This approach has several benefits:

  1. EPM Security Maintenance is centralized in a single workbook.
  2. EPM Security is self-documenting. (It’s all in the Excel workbook.)
  3. The EPM Security Workbook makes a good security backup.
  4. The EPM Security Workbook can be versioned and archived.
  5. Security migrations are simplified because the same files can be imported across multiple environments.
  6. Importing files is infinitely faster than creating security manually.

A copy of the workbook used in these examples is available here.

Essbase Pro Tip: Build & Maintain Partitions in a Text Editor

Partitions in Essbase are a bit like views in a relational database. They allow the results of a query in one cube to be available in another cube. This data may be “materialized”, or physically stored in both the source and target. These are referred to as replicated partitions. Or the partition may simply provide a “window” from one cube into another. These are referred to as transparent partitions. A third type of partition, linked, isn’t really a partition at all . . . it’s more of a navigational aid. It’s important to note that linked partitions are deprecated as of Essbase 11.1.2.4. That’s a nice way for Oracle to say “stop using linked partitions even though the option is still available”.

Embrace the Command-Line

Most Essbase professionals (myself included) learned to build partitions in EAS. It has a nice tabbed GUI that walks users through the various steps:

  • Type (Replicated, Transparent or Linked)
  • Connection (Source & Target Connection Info)
  • Areas (Source & Target Details)
  • Mappings (How to Handle Specific Differences Between Source & Target)
  • Validation (Is everything correct?)

There’s nothing “wrong” with this GUI . . . in the same way that there’s nothing “wrong” with training wheels. But for those of us who are comfortable in a command-line environment, a text file with good ole MaxL is just more straightforward.

Why Ditch the GUI?

  1. MaxL is safer.
  2. MaxL is faster.
  3. MaxL is easier.
  4. MaxL is repeatable and migratable.

Have you ever “lost” a partition? Maybe it became orphaned and was no longer editable or repairable in EAS? It’s a frustrating experience, and unfortunately there are a variety of reasons partitions can head south. Building and maintaining partitions in a text editor using MaxL creates a tidy little backup. Re-creating the partition or migrating it to another environment is a snap. Simply paste some text into the MaxL command-line and you’ve got a partition!

And speaking of migrations, it’s important to note that LCM cannot currently migrate partitions (as of version 11.1.2.4). The LCM Guide recommends using MaxL for partition migrations. I recommend taking this a step further. Create and maintain your partitions in MaxL in the first place.

Show me the MaxL!

The typical MaxL statement for creating a partition can be broken down into seven sections:

  1. Login (Source)
  2. Create Command
  3. Source Area Definition
  4. Login (Target)
  5. Target Area Definition
  6. Mappings
  7. Logout

In the example below, I’m partitioning data from a Planning application (BSO) called “SrcApp” into an ASO reporting cube called “TgtApp”. The Planning app (source) has a Version dimension, but the target does not. The target has “Profit Center” and “Cost Center” dimensions, but the source does not.

cmd_text
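
The screenshot shows the actual statement; as a rough sketch of the same shape (the cluster names, database names, service account, password placeholder and the “No Profit Center”/“No Cost Center” members are all illustrative), the seven sections line up like this:

login svc_essbase identified by 'XXXXXXXX' on 'SourceCluster';

create or replace transparent partition SrcApp.Finance
  area '@IDESCENDANTS("Entity"), "Final Version"'
to TgtApp.Rpt at 'TargetCluster'
  as svc_essbase identified by 'XXXXXXXX'
  area '@IDESCENDANTS("Entity"), "No Profit Center", "No Cost Center"'
  mapped globally ('"Final Version"') to ('')
  mapped globally ('') to ('"No Profit Center"')
  mapped globally ('') to ('"No Cost Center"');

logout;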

You may have noticed that there are no area alias “nicknames” in the statement above, and there is only a single area definition on the source and target. Multiple areas are allowed, and mappings at the bottom of the statement can be area-specific. But I usually find that one area definition (and global mappings) gets the job done.

Here’s what you should expect to see when the partition is pasted into MaxL:

cmd_partition

Review your results, looking for errors, warnings and in particular, cell-count mismatches.

Final Pointers

Here are some tips that may help when creating your partitions in MaxL:

  • Enclose area definitions in single quotes.
  • Separate members and functions within area definitions with commas.
  • Do not separate mapping lines with commas.
  • The first set of parentheses in a mapping statement represents the source.
  • The second set of parentheses in a mapping statement represents the target.
  • Map a member to or from a null value when a dimension in the source does not exist in the target (or vice-versa). In the example above, “Final Version” is mapped to “”, because there is no Version dimension in the target cube.
  • End the entire statement with a semi-colon.
  • Do not store passwords in MaxL files unless they’re encrypted.
  • If you’re using Essbase clustering, make sure you login to the cluster name when you create your partition, and not the server name.
  • Create partitions with service accounts, preferably ones with passwords that do not expire.

Hyperion Planning 11.1.2.3.700 Patch

I’m on a team that recently decided to patch a client environment, upgrading Hyperion Planning 11.1.2.3 from the .501 patch to the .700 patch.  We quickly started having deployment issues related to unknown members in member formulas.  This was with a “classic” Hyperion Planning application, not an EPMA app.

We soon realized that we had issues with member formulas that referenced members with names that included a “%” sign.  An example is below:

Pre_Save

The screenshot above represents a brand-new formula.  It hadn’t been saved, and it hadn’t been validated.  (As you have probably guessed, these two actions led to issues.)

Upon validating (or saving) the formula, an error was introduced.  Interestingly, the validation returned with a “successful” message.

Post_Validate_Error

The number 25 was appended to the “%”.  Subsequent validations added another “25”.  (Likely not a coincidence: “%25” is the URL-encoded form of “%”, which suggests the formula text was being encoded on each save or validation.)  The application repository was clean – formulas there were correct.  LCM exports also looked good.  But saving formulas suddenly got risky.  This application had some very, very complex member formulas.

My immediate reaction was to start updating member names and formulas, replacing “%” with “Pct”.  This would normally get the job done; however, this particular application had a LOT of members with “%” in the name.  Some of these members were referenced in FDMEE maps . . . and reports . . . and business rules . . . and partitions.  Renaming was going to be a pain.  Luckily, we had another unpatched environment where development could move forward while we worked the SR.  This leads us to a conversation regarding patch best practices:

  • Read the Readme file!  It details known issues and fixes addressed by the patch.
  • Patch a Development or Sandbox environment first.
  • Have a back-out plan.
  • If you have apps in Production, make sure you plan for a thorough regression test.
  • Schedule your patches for “low-disruption” time periods.  (Don’t patch right before year-end close or go-live.)
  • If you can, wait for others to patch first.  (The best testers are strangers who don’t bill your project.)

Technical Blog Posts

Excuses . . . Excuses . . .

I do my best to publish at least one meaty, technical EPM blog post per month. But that’s just not going to happen this month. I have three excuses of varying quality:

  • Excuse #1:  My current project is in “sprint” mode with some aggressive development timelines.
  • Excuse #2:  I’m working on my presentation for Top Gun 2015 US.
  • Excuse #3:  I have a new toy.

I wouldn’t normally blog about that last excuse (the toy), but I’m going to try to tie it in with Excuse #2.

On September 17th, I’ll be speaking at Infratects’ Top Gun US 2015 conference. I’ll be presenting on Essbase Hybrid Aggregation Mode, and I’m very excited about the opportunity.

Top Gun US 2015 Banner

I have two rules for these types of presentations:

  • Rule #1: Show a live demo of something really interesting.
  • Rule #2: Have some “giveaways” for people who ask questions.

Because I needed some “giveaways”, and because Essbase databases are typically referred to as “cubes”, I obviously had to go and buy a MakerGear M2 3D Printer kit and make some 3D printed “Gear Cubes”. I had no choice, really . . . so if you attend my presentation and ask an engaging question, you could be the proud new owner of one of these:

GearCube_Small

Top Gun US 2015 has some amazing speakers and should be a great opportunity to network with Oracle product management and your EPM peers. I hope to see you there!