
Intercompany Eliminations in PBCS

As part of a recent PBCS implementation, I had to design an Intercompany Elimination solution within a BSO plan type.  This is a relatively uncommon and somewhat complex requirement in a PBCS implementation, and as such, I was pretty excited to blog about it.  Unfortunately, the whole post nearly ended up in the garbage.

Every once in a while, you write a great post, only to find that someone else has covered the topic sooner than you, and better than you.  In my case, that person was Martin Neuliep.  In Developing Essbase Applications:  Hybrid Techniques and Practices, Martin shared a single formula that elegantly calculated Intercompany Eliminations in a BSO cube.  In addition, Martin was scheduled to discuss the topic of eliminations at Kscope in his presentation on June 28th, titled Eliminating the Eliminations Problem in Essbase.

Luckily, I had an ASO solution in addition to my BSO approach.  As it turns out, Martin was planning on sharing several elimination calculation methods in his presentation (some developed by him, and some developed by others).  Perhaps my ASO solution will make the cut?  (hint, hint)  Either way, you should check out his presentation – I will definitely be there.

With all of that said, here’s my approach to intercompany eliminations in an ASO plan type.

Why Eliminate Intercompany Transactions?

Most of my clients create their Budgets and Forecasts in a “fully eliminated” manner.  Within their financial plans, they pretend that certain transactions don’t occur, because these transactions don’t affect the company’s bottom line.  If one subsidiary has a planned (future) transaction with another subsidiary, these clients may not bother recording the transaction in their Budget or Forecast at all.  While this approach simplifies the planning process, it leaves out financial details that may be useful.

When a client’s requirements indicate that these transactions are relevant to the planning process, we need to build Intercompany Elimination logic into our PBCS applications.  The accounting behind these transactions can get pretty complex, but what we’re going to focus on today are the technical mechanics that facilitate these eliminations in PBCS, specifically in an ASO plan type.

So why do we eliminate these transactions?  Because depending on where we’re looking in the Entity dimension, we need to pretend that they never occurred.  As far as most investors are concerned, these intercompany transactions don’t count on a consolidated basis.  Imagine selling a car to your spouse . . . this transaction doesn’t change your combined income or net worth.  This leads us to an interesting question.  When and where do we eliminate these transactions?

The Land of Make Believe

Let’s start with a simple example – an imaginary company that mines raw materials and sells those materials to its manufacturing units.  These plants then sell finished goods to a distribution company.  All of these entities are part of the same vertically integrated company.  The Entity dimension in such a company might look like this:

Image_1

To facilitate transactions between two entities in an OLAP database, it is generally beneficial to have separate Entity and Customer Dimensions.  The only unusual thing about this design is that many of the “Customers” are internal customers.  As such, our Customer dimension might look something like this:

Image_13

Note that all members under Intercompany Trading Partners correspond with a member from the Entity dimension exactly, but with an “ICP_” prefix.  This ASO solution will not work if there are discrepancies between Entity dimension members and the members underneath Intercompany Trading Partners.

Planning Intercompany Transactions

Intercompany transactions can occur across a variety of accounts within the Income Statement and Balance Sheet.  The simplest example is one subsidiary selling something to another subsidiary within the same company.  Let’s assume that US Mine 1 sells something to Plant 1 for $100.  Our level zero data would look something like this in PBCS:

Image_3

If we were to look at Total Sales for US_Mine_1, we would want to see $100 in sales.  But what if we wanted to see Total Sales for Total_Company?  Assuming that this was the only sale, we would want to see $0.  This is because the transaction must be eliminated at the first common ancestor between the Entity and the Intercompany Trading Partner.  Total_Company is the first common ancestor between US_Mine_1 and Plant_1.  What makes this calculation interesting is that the transaction should NOT be eliminated at ancestors before we arrive at the first common ancestor.  So we would definitely expect to see that $100 in sales show up in the parent member Mining.

The Dreaded “E-Company”

The ins-and-outs of these transactions can get tricky, especially when mixing many transactions together with both internal and external customers.  Developers will likely have multiple sets of accounts being eliminated.  (E.g., Sales & Cost of Sales, Receivables & Payables, etc.)  Ragged hierarchies and alternate hierarchies can add additional complexity.  For this reason, it can be helpful to introduce “E-Companies” into the Entity dimension.  These are basically fake companies where we only calculate and store elimination data.

Adding E-Companies to the Entity dimension might look something like this:

Image_4

Unfortunately, E-Companies can make an Entity dimension convoluted.  If your company structure is particularly large or volatile, E-Companies can create a significant amount of maintenance.  They can also be confusing to end-users who might not understand their purpose.

** NOTE – Most intercompany elimination solutions in BSO databases require E-Companies!

ASO to the Rescue!

One of the nice things about PBCS implementations is that they often include an ASO reporting database.  In fact, some clients never consolidate their BSO databases at all, and instead, simply map their data to an ASO cube that rolls everything up on the fly – no business rule required!  And here’s where things get really awesome – in an ASO database, we can calculate intercompany eliminations without the need for E-Companies.

Here are some things to consider when designing your ASO plan type:

  • Both an Entity dimension and a Customer dimension are required.
  • The Intercompany Trading Partner hierarchy (within the Customer dimension) must match the Entity dimension exactly, with the exception of “ICP_” prefixes. This includes intermediate parents.
  • A “Data Source” dimension of some type is required to separate regular data from eliminations.
  • Account dimensions in ASO databases are automatically set to Dynamic. The Data Source dimension will also need to be dynamic to support member formulas.

The Data Source Dimension

In this solution, all of the logic associated with eliminations lives in a dimension called Data Source (or something similar).  In this dimension, all base (non-eliminated) data is loaded into a member called Amount.

Data_Source

This dimension also includes a special “Do Not Use” system-only type of member.  Here, it is called Do_Not_Use_Elim.  We generally do not want users querying this member.  It’s just a temporary holding spot for storing the inverse balance for any intercompany accounts.  This member can be populated with a procedural calculation.

Image_14

It is important to note that the “Do Not Use” member should be populated in the same member combinations as the original budgeted amount, with the exception of the Data Source dimension.  Remember – this “Do Not Use” member is simply a holding spot.  Users should not query this member.

Abracadabra

The “real” magic happens in the member above called Elim.  In this member, a formula is added that filters when eliminations (stored in the “Do Not Use” member) are displayed and subsequently added to the base data.

Elim_Code

When the Elim member above rolls up to the Consolidated_Amount member, we see that the intercompany sales amount goes to zero.  In other words, it is eliminated.  (See row 15 below)

Image_16

The example above shows our original sale (row 3) and the inverse amount stored in the “Do Not Use” member (row 4).  Rows 7-10 show our original Entity and its ancestors.  We can see that the eliminations are only displayed at the first common ancestor and above (row 9 – 10).  Finally, in rows 13 – 16, we see the view that most users will retrieve, using the member Consolidated_Amount.  This member takes the base data in the Amount member and layers in the data in the Elim member.  As such, we can see that the sale exists in rows 13 & 14, but is eliminated in rows 15 and above.

Wrap-Up

Like most calculations in PBCS (and Essbase in general), there are multiple options for solutions, each with its own pros and cons.  This solution works well against very large databases and has the added benefit of not requiring E-Companies.  Happy Eliminating!

Getting Groovy with the EPM Cloud Migration REST API – Part 2

This post is late.  It’s embarrassingly late.  But it’s finally finished, and I hope you find it useful.

In the previous post, we discussed Oracle’s Migration API for EPM Cloud, and walked through some basic Groovy examples on:

  • Retrieving the API Version
  • Listing Files in the Inbox/Outbox
  • Uploading Files to the Inbox/Outbox
  • Downloading Files from the Inbox/Outbox

The features above are useful for tasks like loading data and metadata into an application because we often have to place files into the Inbox/Outbox before we can reference them with Data Management or as part of a job.

In Part 2 of this post, we will focus on the migration capabilities of the API.  I generally don’t automate full migrations, as they often involve a lot of communication with developers and coordination regarding the specific artifacts being moved.  However I almost always automate application backups.  Your needs may be different, so we’ll cover it all, including:

  • Listing Application Snapshots
  • Downloading Application Snapshots
  • Uploading Application Snapshots
  • Initiating an LCM Export
  • Initiating an LCM Import
  • Running the User Provisioning Report
  • Running the User Audit Report

Snapshots vs LCM Exports

Part of setting up PBCS involves defining a Maintenance Time.  During this one-hour window, the application is backed up (nightly) and an Application Snapshot is created.  The application snapshot contains everything necessary to restore the application, including data, metadata, forms, security, reports, business rules, mappings . . . you name it.  Only one Application Snapshot is maintained in the Inbox/Outbox.  It gets overwritten every night, so if you want to keep a rolling set of backups, you will likely want to automate and schedule the export of these files.

When we talk about an LCM Export, we are generally talking about a more focused artifact export, initiated by an admin.  For example, when a developer wants to migrate a specific set of artifacts from the TEST instance to PROD.  But technically speaking, an Application Snapshot is an LCM export . . . it’s just the automated nightly export that includes all artifacts and data.

Listing the Application Snapshots

We previously stated that the Inbox/Outbox only maintains the most recent Application Snapshot, so why would we bother listing these files?  There’s only one of them at any given point in time, right?  Well it turns out that when you use the Migration API to list Application Snapshots, you’re really just listing all files in the Inbox/Outbox.  In fact, you may have noticed that the listFiles() function in the Groovy Common Helper Functions uses the URL for “/application snapshots”.

It’s interesting to note that the files in the Inbox/Outbox are classified as either “LCM” or “External”.  LCM files include Application Snapshots and anything exported using LCM.  But curiously, this property also applies to internal Inbox/Outbox directories.  “External” files make up the remaining contents of the Inbox/Outbox.  These are usually files uploaded for data and metadata.

We can see this LCM vs External designation by adding the code circled in red below to the listFiles() function.

Image_16
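If the screenshot doesn’t render, the underlying idea is simply to print each item’s LCM/External designation alongside its name while looping over the JSON response.  Here is a self-contained sketch (not the helper function itself; the endpoint path and the items/type property names are assumptions based on the description above):

import groovy.json.JsonSlurper

// list every file in the Inbox/Outbox along with its LCM vs External designation
def listSnapshotFiles(String serverUrl, String apiVersion, String userName, String password) {
    def conn = new URL("${serverUrl}/interop/rest/${apiVersion}/applicationsnapshots").openConnection()
    conn.setRequestProperty("Authorization", "Basic " + "${userName}:${password}".bytes.encodeBase64().toString())
    def json = new JsonSlurper().parseText(conn.inputStream.text)
    json.items.each { item ->
        println "${item.name}\t${item.type}"    // "type" (LCM or External) is an assumed property name
    }
}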

If we then run the listFiles() function, we should see the current Artifact Snapshot (along with any other files in the Inbox/Outbox):

Image_17

Downloading Application Snapshots

Downloading Application Snapshots is one of the more useful aspects of the Migration API, as most clients want to keep a rolling set of files over a given time period.  Because the Application Snapshot in the Inbox/Outbox gets overwritten every night, it’s important to download this file in an automated and scheduled manner.

Downloading an application snapshot is just like downloading any other file, however there are a couple of “gotchas”.  While all of the Migration API documentation refers to the “Application Snapshot”, the actual file that gets produced nightly in the Inbox/Outbox is called the “Artifact Snapshot”.

Another unusual fact about this file is that it has no file extension, even though it is a .zip file.  If you want to easily browse the contents of this file, you’ll need to add the .zip file extension once downloaded.

Other than those two items, the Application/Artifact Snapshot can be downloaded using the downloadFile() function in the Groovy Common Helper Functions.  Please see the previous post here with some useful updates to this function.

Image_17a
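If the image above isn’t visible, a hedged sketch of the whole sequence looks like this (the downloadFile() parameter order follows the Part 1 modification and is an assumption, and the local directory is hypothetical):

downloadFile("Artifact Snapshot", "C:\\PBCS_Backups")                  // the nightly snapshot has no file extension
def snapshot = new File("C:\\PBCS_Backups\\Artifact Snapshot")
snapshot.renameTo(new File("C:\\PBCS_Backups\\Artifact_Snapshot.zip")) // add .zip so the contents can be browsed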

Uploading Application Snapshots

Uploading an application snapshot is similar to uploading any other file to the Inbox/Outbox.  With that said, I have not been successful in uploading a previously downloaded Artifact Snapshot without first updating the file name.  This is because the nightly “Artifact Snapshot” file has a space in the file name, and has no file extension.  Remove the space and add a “.zip” file extension and everything works as expected.  But I have to ask, why would Oracle produce a download file that cannot be uploaded in its original state?  It may have something to do with the fact that Oracle can’t be sure which operating system exists on the target file system, and different operating systems use different file compression programs.

Note the different file name below:

Image_20

Regardless of the filename, the Migration API will recognize that the file being uploaded is an Artifact Snapshot (as opposed to any other file being uploaded to the Inbox/Outbox), however the file must have a “.zip” file extension.

Initiating an LCM Export

LCM Exports initiated via the Migration API aren’t especially intuitive, however once you know how they work, they’re pretty flexible.  I’ll explain . . . in order to execute an LCM export using the Migration REST API, users must first execute a successful export via the PBCS interface.  Once that has been done, developers can re-execute that same export using the API.

My initial reaction to this approach was concern.  What if I setup a process in the PBCS user interface that archives all of my forms?  Will that export contain a hard-coded list of forms, missing any future artifacts I might add to the application?  The answer is no (and yes, depending on how you specify your artifacts).  If I tell LCM to export all of my forms (without selecting any individual forms), then any new forms added at a later date will get picked up in future exports initiated using the API.  If I specify specific forms in the initial selection using the PBCS user interface, then only those specific forms will get exported in the future when using the API.

Let’s look at an example, but before we get started, you will need to add the function exportSnapshot() from the Groovy Common Helper Functions to your script.  Next we will log into PBCS, and select the Application icon.  Once expanded, select the Migration icon.  (Your icons may vary somewhat if your environment hasn’t been updated.)

Image_21

Navigate through the Artifact Categories, and select the objects you want to export.  Remember that selecting an artifact category will allow future exports to capture newly added artifacts, while selecting specific artifacts within categories will limit future exports to just those specific artifacts.

Select the Export button and provide an export name.  I selected “Planning” and named my export “Full_Planning”.  Assuming that I have copied the exportSnapshot() function from the Common Helper Functions for Groovy, I only need a single line of code to re-execute an LCM export:

Image_22
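If the image doesn’t display, the single line is just a hedged call to the helper function, passing the export name created in the user interface:

exportSnapshot("Full_Planning")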

At this point, you should have a fresh LCM export sitting in the Inbox/Outbox.

Initiating an LCM Import

I wouldn’t typically automate an LCM import into an application, however this could conceivably be useful if you’re refreshing a number of testing or training environments on a regular basis.

To import a snapshot using the Migration API, first copy the importSnapshot() function from the Common Helper Functions for Groovy.  Then add the line highlighted below and run the script.

Image_23
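The hedged equivalent of that highlighted line, assuming the snapshot sitting in the Inbox/Outbox is named Full_Planning:

importSnapshot("Full_Planning")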

A quick check of the Migration Status Report will confirm the success of your LCM import.

Image_24

Running the User Provisioning Report

Running user provisioning reports in an automated manner can be useful for organizations with rigorous IT audit and documentation requirements.  To run this report using the Migration API, copy the waitForCompletion() function from the Common Helper Functions.  You will also need to copy the provisionReport() function from the chapter “User Provisioning Report”.  (This function isn’t currently listed in the Common Helper Functions for Groovy.)

Then add the following line to your script:

Image_25
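For reference, the call is a single hedged line; the output file name below is hypothetical, and the exact parameters of provisionReport() should be confirmed against the documentation chapter mentioned above:

provisionReport("userProvisionReport.csv", "provisionreport")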

It is important to note that the User Provisioning Report will be placed into the Inbox/Outbox.  From there it can be downloaded automatically with the downloadFile() function.

The User Provisioning Report itself will look something like this:

Image_26

Running the User Audit Report

While the User Provisioning Report will highlight all of the roles assigned to a given user (including the group through which they inherited a role), the User Audit Report shows when the user accessed the application and their associated IP address.  This report doesn’t address specific activities undertaken within the application, but rather the fact that the app was accessed.

To run the User Audit Report using the REST API, you will need to copy the userAuditReport() function from the chapter “User Audit Report”.  (This function isn’t currently listed in the Common Helper Functions for Groovy.)

Then add the following line to your script:

Image_27
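As with the provisioning report, the call is a single hedged line; the file name and parameter order are assumptions to be verified against the documentation:

userAuditReport("userAuditReport.csv", "userauditreport")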

Like the User Provisioning Report, the User Audit Report will be placed into the Inbox/Outbox.  From there it can be downloaded automatically with the downloadFile() function.

The User Audit Report itself will look something like this:

Image_28

You may have noticed that both report functions allow the specification of a report type (either “provisionreport” or “userauditreport”).  The documentation suggests that the “provisionReport()” function goes with the “provisionreport” parameter, and the “userAuditReport()” function goes with the “userauditreport” parameter.  This makes sense, but can you run an audit report with the “provisionReport()” function and vice versa?  Here’s what I noted:

Image_29

Wrap-up

So there you have it!  You can now automate backups, migrations and usage reports using the REST APIs.

Getting Groovy with the EPM Cloud Migration REST API – Part 1

Oracle provides specific REST APIs for many of its cloud offerings, however one API in particular is useful for multiple cloud products – the Migration API.  This API is sometimes referred to as the “LCM” API or the “interop” API.  Regardless of the name, this is the API that provides Lifecycle Management functionality for EPM Cloud products.

If you’ve never worked with any of Oracle’s EPM REST APIs, have a look at my previous post here on the PBCS REST API.  This will give you some context and background on the subject.  We will be using Groovy for all examples.  Steps for installing Groovy can be found within this post.

Using the Migration API, developers can perform the following tasks:

  • Get API Versions and Details
  • List Files in the Inbox/Outbox
  • Upload Files to the Inbox/Outbox
  • Download Files from the Inbox/Outbox
  • Delete Files from the Inbox/Outbox
  • List Available Services
  • Restart a Service Instance
  • Reset a Service Instance
  • Upload/Download Application Snapshots
  • Initiate LCM Imports & Exports
  • Send Feedback Reports to Oracle
  • Generate a User Provisioning Report
  • Generate a User Audit Report

Some of the tasks above are not great candidates for automation.  For example, it would be highly unusual to reset your service instance in an automated manner.  I also wouldn’t recommend sending Oracle feedback in any automated fashion.  They may have some “feedback” for you . . .  with that said, there are some extremely useful features of the Migration API, and we will cover them below.

A Word About the Common Helper Functions . . .

As noted in previous posts, the Common Helper Functions offer a significant head start in automating an EPM environment.  This post will reference many of these functions while illustrating how to put them together into a cohesive script.

** NOTE – Each section below has a link where readers may download the code samples.

Getting Started

After opening the Groovy Console, we will need to import some classes from the groovy.json package.  If you’re unfamiliar with JSON, it’s basically a lightweight text format for structured data that is commonly used in REST APIs.  The code below will import the appropriate package (no need to download any additional jar files – these come with Groovy):

Image_1
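If the screenshot isn’t available, the import block is short; a minimal sketch (your script may need one or two additional classes) looks like this:

import groovy.json.JsonSlurper
import groovy.json.JsonOutput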

Next, we must define variables for our EPM environment and authentication.  If you’ve read my previous post on the PBCS REST API, these should look familiar.  You will need to update these with your instance information and credentials.

Image_2
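A hedged sketch of these variables is below; the names and values are placeholders, not actual instance details:

serverUrl = "https://your-instance.pbcs.yourdatacenter.oraclecloud.com"   // EPM Cloud instance URL
domain    = "yourIdentityDomain"
username  = "yourIdentityDomain.yourAdminUser"
password  = "yourPassword"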

First Things First – Determine the API Version

It’s important to note that the API versions for the various EPM products can differ.  The current version for the PBCS API is “V3”, however the Migration API version is “11.1.2.3.600”.  Several functions require the API version, and they will not work if the wrong API version is passed as a parameter.

To get the current Migration API version, we will need the following functions from the Common Helper Functions:

  • getLCMVersions()
  • executeRequest()
  • fetchResponse()
  • fetchJobStatusFromResponse()

Copy and paste these functions into your Groovy script below the EPM Instance Variables, then insert a line to call the getLCMVersions() function.

Image_3

At this point, your code should look something like this.

Run your script and review the results in the Groovy Console output window:

Image_4

At this point, it’s a good idea to define a variable apiVersion, and assign that variable a value within the getLCMVersions() function.
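A quick illustration of why this works: a variable assigned without def (or a type) at the top of a Groovy script lands in the script binding, so a value assigned to it inside a method remains visible afterward.  Here is a minimal sketch of that idea (getLCMVersions() would perform the same assignment with the version it parses from the response):

apiVersion = ""                        // no "def", so the variable lives in the script binding

def rememberVersion(String version) {
    apiVersion = version               // getLCMVersions() would assign the parsed value here
}

rememberVersion("11.1.2.3.600")        // the Migration API version noted earlier
println apiVersion                     // prints 11.1.2.3.600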

Listing the Files in the Inbox/Outbox

Now that we have the correct API version, we can perform some of the more interesting functions of the Migration API.  For example, we can list the contents of the Inbox/Outbox.  To do this, add the listFiles() function from the Common Helper Functions to your Groovy script.  Then call the function as follows:

Image_5

Run the script and review the list of files in the Groovy Console output window.  At this point, your code should look something like this.

Uploading Files to the Inbox/Outbox

To add files to the Inbox/Outbox, we need to copy the following four functions from the Common Helper Functions into our Groovy script:

  • fetchPingUrlFromResponse()
  • getJobStatus()
  • sendRequestToRest()
  • uploadFile()

Note that the sendRequestToRest() function leverages a third-party library called JSON-lib.  To use this library, download the java-json.jar file from http://www.java2s.com/Code/JarDownload/java/java-json.jar.zip and unzip it into the C:\Program Files (x86)\Groovy\Groovy-2.4.6\lib directory.  Add the following line (highlighted in yellow) to the top of your Groovy script and you’re ready to go!

Image_6

Out of the box, the functions above will get the job done, however uploading files from a specific directory produces some undesirable behavior.  For example, if we have a local file called C:\Temp\Test_File.txt, and we upload that file, here’s what shows up in the Inbox/Outbox:

Image_7

Notice that the file path gets baked into the file name.  We didn’t want to upload a file called “C:\Temp\Test_File.txt”.  We wanted a file called “Test_File.txt”, and it just happened to be originally located in the C:\Temp directory (which is irrelevant once the file is uploaded).

We can rectify this situation by modifying the uploadFile() function, separating parameters for the directory and file name.  Modify the uploadFile() function by adding the sections highlighted in yellow below:

Image_8

Next, call the uploadFile() function with two separate parameters for the directory and file name.  Note the double-slashes below.

Image_9
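For reference, the call itself is one hedged line; the parameter order (directory first, then file name) matches the modification described above, and the double backslashes escape the Windows path separator:

uploadFile("C:\\Temp", "Test_File.txt")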

Running the script now uploads the file to the Inbox/Outbox with the correct file name.

Image_10

At this point, your code should look something like this.

Downloading Files from the Inbox/Outbox

To download files from the Inbox/Outbox, we need to copy the following function from the Common Helper Functions into our Groovy script:

  • downloadFile()

Developers may find that this function benefits from a couple of key changes.  The downloadFile() function doesn’t allow developers to specify a target directory for the downloaded file.  By adding a parameter for the directory, and updating the saveFilePath, we can specify the target directory.  Modify the function as follows (note the sections highlighted in yellow):

Image_11

Image_12

After successfully running the function, you may notice that Groovy doesn’t “release” the downloaded file.  Trying to delete the file in Windows File Explorer results in the following error:

Image_13

The only way to get past this error is to close the Groovy console completely.  Upon inspection, it looks like the code opens an output stream, but never closes it.  This is easily rectified by inserting the line below (highlighted in yellow) into the function:

Image_14

After making the changes above, developers should be able to download files from the Inbox/Outbox to any local directory.
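As a rough, self-contained sketch of the pattern (not the helper function itself), the download-to-a-directory-and-release idea looks something like this:

// stream a URL's contents into the chosen directory, then close the output
// stream so Groovy releases the file handle
def saveUrlToDirectory(String fileUrl, String fileName, String saveDirectory) {
    def os = new FileOutputStream(new File(saveDirectory, fileName))
    new URL(fileUrl).openStream().withStream { is ->
        os << is                       // copy the response stream to disk
    }
    os.close()                         // without this, the file stays locked until the console closes
}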

Test the downloadFile() function by calling it at the end of the Groovy script.  Note the two parameters used by the function.

Image_15

If you’ve been following along this whole time, that’s awesome!  Your code should look something like this.

Bed Time . . .

In Part 2 of this post we will cover the remaining aspects of the Migration API . . . Artifact Snapshots, LCM and User Reports.  Check back soon – I should have it posted by the end of next week!

One More Thing

I’m speaking at OpenWorld this year!  My presentation on Essbase Hybrid Aggregation Mode will be in Moscone South – Room 303 on Sunday 9/18 at 1pm.

PBCS Pro Tip: Manage Multiple Test Accounts with One Gmail Address

During the testing phase of most Planning implementations, developers need to create test user accounts.  I typically create at least one test user for each security group so I can verify that the correct access has been assigned.  With an on-premises Hyperion Planning implementation, this is easy – simply create user IDs in the Shared Services native directory.  With PBCS, creating bulk test IDs can be difficult, as each user ID requires a unique e-mail address.  If you need 50 test users, should you create 50 fake/temporary e-mail accounts?  Luckily the answer is no.

The Easy Way . . .

Gmail has a clever feature that allows developers to use the same address (almost) for all test user e-mail accounts, and it involves the “+” sign.  Specifically, the plus sign and everything to the right of a “+” in a Gmail e-mail address is ignored.  Here’s an example.  Let’s say I want to create one test account per security group, and I have the following groups:

  • Sales_Read
  • Sales_Write
  • Marketing_Read
  • Marketing_Write
  • Executive_Read
  • Executive_Write

I created a free Gmail account with the e-mail address pbcstest@gmail.com.  If I try to use this same e-mail address for all six of my PBCS test accounts, I’ll get an error.  Each account requires a unique e-mail address.  However I want to manage all of my test accounts from a single e-mail address.  To do this, I need to use a “+” sign in the e-mail addresses of my test users.

Here’s an example.  For the Sales_Read group, I might create an account called Test_Sales_Read.  When I assign an e-mail address to this account, I’ll use pbcstest+sales_read@gmail.com.  Gmail will ignore the plus sign and the text to the right of the “+”.  All confirmation e-mails will be sent directly to the pbcstest@gmail.com account.  I can create as many users like this as I want, as long as the text to the right of the “+” sign is unique for each account. PBCS will treat the e-mail addresses as if they are unique.  My six accounts above might look something like this:

User_Table

When Oracle Cloud sends the user ID confirmation messages with temporary passwords, they will all be delivered to pbcstest@gmail.com.

On a Personal Note . . .

This feature isn’t just useful for PBCS.  Imagine creating rules within Outlook that route your personal e-mails to folders based upon the following:

  • myaccount+netflix@gmail.com
  • myaccount+bankstuff@gmail.com
  • myaccount+amazon@gmail.com
  • myaccount+travel@gmail.com
  • myaccount+odtug@gmail.com

Let your e-mail OCD run wild!  (and simplify the creation of PBCS test accounts)

Essbase Pro Tip: Compare Outlines with Groovy

I’ve had a recurring challenge at several of my more recent clients. They have had lots of apps, lots of environments, and lots of developers working on different initiatives at the same time. So when it came time to move a cube from DEV to TEST, or TEST to PROD, I was always a bit nervous. I wanted to fully understand the differences between my outlines across environments before I migrated anything, so I set out to build an Outline Comparison utility.

Reinventing the Wheel . . .

I am, of course, not the first person to attack this challenge. Terry Ledet at US-Analytics has a very sophisticated and fast utility that leverages a relational database. I’m hoping he’ll post a blog about his approach soon. Doug Bliss has a solution that appears to leverage the VB API. Unfortunately, this API is no longer supported as of Essbase 11.1.2.4. Evgeniy Rasyuk has a utility written with the Java API, however it requires a commercial license. Other people have used a combination of the Applied OLAP Outline Extractor and file comparison utilities. While useful, this approach produces less-than-readable results when there are a variety of differences across cubes.

In the end, I wanted something that met my specific needs and would allow others to leverage my code. I wanted a solution that had:

  1. Open Source Code – use it and modify it however you want
  2. No Cost – free for everyone
  3. Straightforward Results – simply list the differences between two outlines
  4. Simple Architecture – no relational database required

Note – Readers may download the latest version of the Outline Comparison utility here.

I am Not a Java Developer . . .

The Java API is clearly a part of Oracle’s path forward for Essbase, and this makes it a logical choice for developing a utility of this sort. However, I’m not a Java developer. But I was lucky enough to co-edit Cameron Lackpour’s first book Developing Essbase Applications. And in this book, Joe Aultman wrote a chapter on automating common Essbase activities with Groovy. Now, I’ve always had a lot of respect for Joe’s Essbase knowledge. But to be completely honest, I thought his love of Groovy was a bit . . . eccentric. After all, how many people are actually going to write Java API code to automate Essbase tasks? Well, it turns out I was wrong, because Groovy is really easy to use and very powerful when combined with the Essbase Java API. Groovy provided the “training wheels” I needed to build a Java-based Outline Comparison utility.

What is Groovy?

Groovy is a programming language that leverages the Java platform and takes a lot of the hard work out of developing Java apps. While I’m probably oversimplifying, it’s basically a “crutch” for Java. Rather than go into the specifics of Groovy, I’d encourage you to read Joe’s chapter in Developing Essbase Applications. I’m going to jump into the good stuff – let’s build something.

Download and Install Groovy

Click here to navigate to the Groovy distributions. For Windows, select the “Windows Installer”, circled in green below.

00_Download

Download and launch the installer.

Select your preferred language.

01_Language

Select OK.

02_Setup

Select Next.

03_Agree

Select I Agree.

04_Options

Perform a Full installation. Select optional components. (I selected them all.)

05_Location

Tell Groovy where to install. Select Next.

06_Shortcuts

Select Install.

Groovy will begin installing.

08_Install_Complete

Look for the word Completed. Select Next.

09_Env_Variables

Configure Environment Variables and Path. Select Next.

10_File_Assoc

Configure File Associations. Select Next.

11_Complete

Select Finish.

In your applications, find the shortcut to Start GroovyConsole. Launch Groovy with this shortcut. A blank GroovyConsole will be displayed.

13_Groovy_Console

The Groovy Console

The Groovy console is pretty straightforward. To make sure things are working after the install, we’ll write a simple program below that outputs the text “Is this thing on?”.

Console_Features

Enter the text println “Is this thing on?” in the white script section. Press the Execute Script button. If the text “Is this thing on?” was printed in the yellow output section, you’re ready to start building your Outline Compare Utility.

Developing the Outline Compare Utility

The basic steps for building an Outline Compare Utility are as follows:

  1. Import the appropriate Essbase API files
  2. Read variables from a config file
  3. Define output file
  4. Connect to Cube #1 and Cube #2
  5. Open the Essbase Outlines
  6. Loop through all members in both cubes
    1. For Members in Cube #1 . . . Make sure they exist in Cube #2
      1. For Members in Both Cubes . . . Make sure properties are the same
      2. For Members in Both Cubes . . . Make sure member order is the same
    2. For Members in Cube #2 . . . Make sure they exist in Cube #1
  7. Close and disconnect

Importing the Essbase API Files

To take advantage of the Essbase Java API when using Groovy, we need to help Groovy find the appropriate jar files. These files will not exist on your computer unless you have installed the Essbase client software.

The location of the Essbase Java API jar files will vary depending on your installation options. On my laptop, they were located in C:\Oracle\Middleware\EPMSystem11R1\common\EssbaseJavaAPI\11.1.2.0\lib.

Copy the files below into the C:\Program Files (x86)\Groovy\Groovy-2.4.5\lib directory.

  • ess_es_server.jar
  • ess_japi.jar
  • ess_maxl.jar

Please note – there are multiple ways to handle external jar files with Groovy. This is one of several methods.

Now we can import the jar files into our Groovy script with the code below:

15_Import_JAR_Files
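If the image above is unavailable, the import block looks roughly like this (a sketch; the full utility may import additional classes):

import com.essbase.api.base.*
import com.essbase.api.session.IEssbase
import com.essbase.api.datasource.*
import com.essbase.api.metadata.*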

Create a directory for your program, and save the file as OtlCompare.groovy.  (Place your properties file in the same directory.)

The Config File

I didn’t want to modify my code whenever I wanted to compare a new set of cubes. But I wasn’t ready to build a GUI to prompt users for login information, cube names, etc. This is something I may add at a later date. Instead, I decided to store this information in a config file and have the program read that file. Luckily, Groovy has a utility called ConfigSlurper() that reads config files. I simply created the file with the variables I wanted, defined the file and told ConfigSlurper() to go get my variables.

My Config file looks like this:

14_Config_File

Please note that I placed all of the files related to this application in the logDir referenced above, including the Groovy source code, properties, and output. You will need to modify this location.

Next, I updated my Groovy script to read the config file as follows:

15_Read_Config_Info

On line 12 above, we see the config file being defined. On line 13, ConfigSlurper parses the file. After that, we can reference any of the variables in the config file by prefixing the variable name with “config.”. We can see this on line 16, where we reference config.logDir. logDir was the location for the log file that we defined in our config file. This really highlights the beauty of Groovy. With two lines of code, we processed an entire config file, storing 13 separate variables.

On line 19, we create the log file. We will append to it in future steps. Starting on line 21, we write the current time to the log file.

Note – appending “\t” inserts a tab into a text file.

Note – appending “\r\n” inserts a carriage return into a text file.
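Putting those pieces together, a minimal sketch of the config-and-log pattern might look like the lines below (the file locations are hypothetical, and the property names must match your config file):

def configFile = new File("C:\\OtlCompare\\OtlCompare.properties")
def config = new ConfigSlurper().parse(configFile.toURI().toURL())    // exposes config.logDir, config.server1, etc.

def logfile = new File(config.logDir.toString() + "\\OtlCompare.log")
logfile.append("Comparison started:\t" + new Date() + "\r\n")         // \t = tab, \r\n = carriage return
println "Logging to " + logfile.getPath()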

Running this script should produce output similar to the image below. It should create a new log file whose location is defined by your config file:

16_Write_to_Logfile

The log file should look like this:

17_Logfile

Note that we are sending the output to both the console (using println statements) and the log file (using logfile.append statements).

Connect to Essbase and Open Outlines

The lines below initialize the Essbase Java API and connect to Essbase. Note the use of variables defined in the config file. Any variable prefixed with “config.” is defined in the config file.

18_Connect_to_Cubes
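In case the screenshot is hard to read, the connection pattern with the Java API is roughly as follows (the config property names are illustrative; the sign-on arguments follow the standard IEssbase.signOn() signature, but verify them against your API version):

import com.essbase.api.session.IEssbase

def ess  = IEssbase.Home.create(IEssbase.JAPI_VERSION)
def srv1 = ess.signOn(config.user1.toString(), config.password1.toString(), false, null,
                      config.provider1.toString(), config.server1.toString())
def app1     = srv1.getApplication(config.app1.toString())
def cube1    = app1.getCube(config.db1.toString())
def outline1 = cube1.openOutline()
// repeat for the second cube (srv2, cube2, outline2)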

Now that the two outlines are open, we can begin looping through the member names comparing properties.

Looping Through Members

To loop through the members in the outlines, we will create a function called checkMembers. What makes this function interesting is that it is recursive – it calls itself. The function starts by looping through the dimensions. It then loops through the children of each member until there are no more members to process, working its way through the entire tree.

We first loop through Outline #1, checking to see if each member exists in Outline #2. If the member exists in both outlines, we check to make sure the properties are the same. Anytime we run across a mismatch or missing member, we make a note in the log.

Next we loop through the members in Outline #2, checking to see if each member exists in Outline #1. But this time, if the member exists in both outlines, we don’t bother checking the properties, because we would have already done this in the first loop. Instead we’re simply looking for extra members that exist in Outline #2, but not in Outline #1. The function has a parameter that allows us to skip the member property check.

There is a significant amount of code in this section. Rather than list all of the code, readers can download the entire OtlCompare.groovy file here.
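As a rough illustration of the recursive pattern only (heavily simplified, and not the downloadable utility, which checks many more properties as well as member order; method names follow the Essbase Java API but should be verified against your version):

// walk one outline, flagging members that are missing from the other outline or
// whose storage setting differs, then recurse into each member's children
def checkMembers(parent, otherOutline, logfile, boolean checkProperties) {
    parent.getChildMembers().getAll().each { mbr ->
        def match = null
        try { match = otherOutline.findMember(mbr.getName()) } catch (Exception e) { /* not found */ }
        if (match == null) {
            logfile.append("Missing member:\t" + mbr.getName() + "\r\n")
        } else if (checkProperties && mbr.getShareOption() != match.getShareOption()) {
            logfile.append("Property mismatch:\t" + mbr.getName() + "\r\n")
        }
        checkMembers(mbr, otherOutline, logfile, checkProperties)    // recurse into children
    }
}

outline1.getDimensions().getAll().each { dim ->
    checkMembers(dim.getDimensionRootMember(), outline2, logfile, true)     // cube 1 members vs cube 2
}
outline2.getDimensions().getAll().each { dim ->
    checkMembers(dim.getDimensionRootMember(), outline1, logfile, false)    // extra members in cube 2 only
}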

Close Outlines and Log Off

After we have looped through the members, we need to close the outlines and log off from Essbase.

19_Close_Logoff

In this example, I also write the ending time to the log and calculate the elapsed time for the procedure.
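A hedged sketch of the cleanup (the startTime variable is assumed to have been captured when the log file was created):

outline1.close()
outline2.close()
srv1.disconnect()
srv2.disconnect()
ess.signOff()

def endTime = new Date()
logfile.append("Finished:\t" + endTime + "\r\n")
logfile.append("Elapsed seconds:\t" + ((endTime.time - startTime.time) / 1000) + "\r\n")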

Final Thoughts

I’m really happy with the results of this script. It simplifies the process of comparing outlines, making it faster and more accurate. With that said, there’s always room for improvement. The script works well against small and medium-sized outlines. But when I ran it against a pair of outlines with 100,000+ members each, it took almost 30 minutes. I’m sure I can make it faster. I’d also like to add support for checking all alias tables, attribute dimensions, non-unique member outlines, etc. Lastly, when the script flags members as missing, it then flags other members as “out of order”, when they really aren’t in the wrong order.  Their “previous member” or “next member” simply isn’t there. I should be able to clean this up.

In the end, it does exactly what I needed it to do – identify differences between outlines. And it does so in a tab-delimited file that can easily be sorted and filtered in Excel.

20_Results_in_Excel

EPM Pro Tip: Build & Maintain Security in Excel

Agatha Christie once argued against the notion that necessity is the mother of invention. She said, “Invention, in my opinion, arises directly from idleness, possibly also from laziness – to save oneself trouble”. This is the guiding principle I follow when designing and building security models for Hyperion Planning and Essbase. So how can a bit of laziness take the “trouble” out of EPM security? By leveraging Excel and letting formulas do the heavy lifting. This is an approach I’ve been using for over a decade, and I’m certain I’m not the only one. Although some of the tools have changed (think LCM), the general approach has been the same.

Step 1: Create a user-friendly, table-like input worksheet where administrators can enter security information.

Step 2: Create separate “export” worksheets for each of the basic components of EPM security (Shared Services, Essbase, Planning, etc.).

Step 3: Take the content from the input worksheet(s) and link it to the “export” worksheets with formulas, placing everything into the correct format.

Step 4: Import security & check for errors.

Process

Show & Tell

Let’s look at a simple example. Keep in mind that based upon your particular requirements, this approach can have a lot of variations. For example, you may not be able to create your own groups, because your organization prefers groups to be maintained in Active Directory. In our example, we’ll use native Shared Services users and groups. The same principles apply . . . you’ll just have to tweak things a bit.

In our example, our administrator has to maintain security in a Hyperion Planning application and a native Essbase reporting cube. The applications both have the same users, and they have similar security requirements. We’ll use my Starbucks Hyperion Planning demo. It contains all of the global Starbucks, Teavana and Seattle’s Best Coffee locations. (The locations are real, but everything else is made up . . .) We’ll create groups for the 12 main countries/regions in which Starbucks operates.

9-29-2015 9-00-25 PM

In the examples below, an Excel workbook will be highlighted.  The full workbook can be downloaded here.

The Simple Input Worksheet

In our “simple input worksheet”, we will need some basic information. We’ll need to know:

  • What are the groups with access to the EPM products?
  • What members can these groups access?
  • What is the level of access?
  • Who is in the groups?

Here is what our simple input worksheets look like. In our example, we will create 24 groups (one read and one write for each country/region). We will also add one test user to each group. We will assume that all groups have access to both Planning and Essbase.

What are my groups and what can they access?

9-30-2015 12-48-33 AM

Who is in my groups?

9-29-2015 9-22-59 PM

The Lazy Bit

Now that we have collected the basic information we need, we can begin to create our outputs. These are the files that we will import into Shared Services, Essbase and Planning to actually build our EPM security.

We’ll start by creating an LCM import file that creates our groups. However, we may not know how to format this file. How do we get around this? Follow these steps:

  • Manually create one native group in Shared Services.
  • Manually add a user to the group.
  • Manually provision the group.

I provisioned my sample group like this:

Provision

Next we need to export the security we just created manually in Shared Services. This will give us an idea of what the Shared Services import files should look like.

Here’s what we will export from Shared Services in LCM:

9-29-2015 10-03-25 PM

Note that even groups and users that were created in Active Directory will be exported when the “Native Directory” above is selected, assuming they have been provisioned.

Select the Export button. Once the export is complete, browse to the file under the File System. Right-click on the file and select Download. Save the file to your desktop and unzip it. The following file structure will be available:

9-29-2015 10-12-00 PM

If we edit the Groups.csv file, we can immediately see the format of the file we need to create that will ultimately be used to create the new native groups in Shared Services. In this file we see the group that was previously created manually and we see the test user that was assigned to the group.

9-29-2015 10-14-08 PM

** Note - Leave the internal_id above blank when creating the new import file.

** Note - If you’re using Active Directory users or groups, your providers will be slightly different.

Create the Shared Services Group Import File

Using the same basic format as our exported Groups.csv file, we can create a new file (within our EPM Security Workbook). This file will create our groups and associate our users with the groups.

It’s important to note that many of the cells below are populated with simple formulas that reference the Groups_and_Entities or the Groups_and_Users worksheet. Note the formula circled below.

9-29-2015 11-01-23 PM

This file can either be saved as the Groups.csv file (in the previously unzipped folder structure), or the administrator can copy and paste the results into the Groups.csv file (the one previously exported from Shared Services). Then, simply zip the files back up (starting at the top of the LCM file structure).

9-29-2015 11-05-44 PM

Upload and Import the Zipped Shared Services Files

From within Shared Services, right-click on the File System. Select Upload.

9-29-2015 11-08-05 PM

Browse to the zipped Shared Services files. Select Finish, then OK.

Next, expand the file within the Shared Services File System. Select the Groups.

9-29-2015 11-11-55 PM

Select Import. Ensure that the groups were created successfully and that the users were added as expected.

9-29-2015 11-16-52 PM

Provision the Groups

Now that the groups are imported and the users have been added to the groups, it’s time to provision the groups. Navigate to the Assigned Roles directory in the previously exported LCM file.

2015_09_29_SS_Security\HSS-Shared Services\resource\Native Directory\Assigned Roles\Planning

Modify the Planning application file to provision each of your groups, again, using formulas to reference the groups in your Groups_and_Entities tab. Pay special attention to the role numbers. (This is why we create a Group and User manually first, and then export the file. This way, we can copy items such as the role number – something we otherwise would probably not know.)

9-29-2015 11-24-10 PM

Re-zip the Shared Services LCM files, upload the file and import the Planning Assigned Roles.

9-29-2015 11-29-49 PM

At this point, all of the groups should be provisioned as Planners.

9-29-2015 11-34-37 PM

Repeat this process and provision the groups for Essbase and FR (based upon the files previously exported containing our manually created test group). Groups must be provisioned in Shared Services for Essbase and Planning before application-specific security can be applied.

Create Essbase Filters and Grant Them to Groups

Woo hoo! At this point, we’re finished with Shared Services and LCM. All of the groups have been created. Users have been added to the groups. All of the groups have been provisioned. And all of this has been done by importing files built based upon simple formulas.

Next we need to create our Essbase filters. For this, we will use Excel formulas to craft MaxL statements. We can paste these statements directly from Excel into the MaxL command prompt.

Let’s start with the basic MaxL that we want in our Security workbook. The MaxL syntax can be found here.

Filter

Note the common practice of locking down a dimension with “no_access” and then selectively opening it up with read or write lines.

If we paste the MaxL into a MaxL command line, we’ll see that the filter is created and granted to the specified group. But what if we want to create these commands in bulk in Excel? We simply use formulas with the concatenate function.

Using Excel formulas, we can craft the following file:

9-30-2015 12-57-21 AM

The MaxL above is broken up into four columns. These columns have the following formulas:

Column A: =CONCATENATE("create or replace filter SB_ESB.SB_ESB.f_", Groups_and_Users!A3)

Column B: =CONCATENATE(" no_access on '@IDESCENDANTS(""Location"")', ")

Column C: =IF(RIGHT(A3, 4) = "Read",CONCATENATE(" read on '@IDESCENDANTS(""", Groups_and_Entities!B3, """)'; "),CONCATENATE(" write on '@IDESCENDANTS(""", Groups_and_Entities!B3, """)'; "))

Column D: =CONCATENATE("grant filter SB_ESB.SB_ESB.'f_", Groups_and_Entities!A3, "' to '", Groups_and_Entities!A3, "';")

Note the use of spaces and quotes in the formulas above. Also note that by placing the words “read” or “write” in the actual group names, we can reference this in the formulas when building the filters, thus granting either read or write access.

Once our formulas are sorted, we can simply paste the MaxL directly from Excel into a command prompt. Our filters are immediately created and granted to all of our groups. Subsequent changes are as easy as a copy/paste into MaxL.

9-30-2015 12-54-20 AM

Create a Hyperion Planning Security Import File

The last step in our process is to create the file to import security into our Hyperion Planning application. Again, we will use Excel and basic formulas. Like LCM, it can be helpful to add a minimal amount of security to Hyperion Planning manually, and then export a sample secfile.txt file. This can be helpful when determining the format of the import file.

To export a Planning security file, navigate to the Planning directory on the server:

Oracle\Middleware\user_projects\epmsystem1\Planning\planning1

Execute the following command (update the app, user ID and file location):

ExportSecurity.cmd /A=SB_BSO,/U=admin,/TO_FILE=C:\\temp\\SecFile

Review the security export file.

Our test security export contains the one piece of security information we assigned manually in Hyperion Planning:

US_Read,US,READ,@IDESCENDANTS

We can now craft our full import file using this same format.

In Excel, we can build a file like this:

9-30-2015 1-37-40 AM

Here is the Excel formula that concatenates this information from the Groups_and_Entities tab:

=CONCATENATE(Groups_and_Entities!A3, ",", Groups_and_Entities!B3, ",", Groups_and_Entities!D3, ",", Groups_and_Entities!C3)

We can then copy the contents of this file into the secfile.txt in the planning folder. Unfortunately, the ImportSecurity utility must generally be run from the server, in the following directory:

Oracle\Middleware\user_projects\epmsystem1\Planning\planning1

Upload the security with the following command:

ImportSecurity.cmd "SB_BSO,admin,SL_COMMA,0,"

Check the ImportSecurity.log in the following directory:

Oracle\Middleware\user_projects\epmsystem1\diagnostics\logs\planning

Look for something like this:

9-30-2015 1-40-01 AM

In Summary . . .

Building security manually is a tedious, time-wasting exercise. And in the end, administrators are left with a manual process to maintain. By centralizing security development in an Excel workbook, developers can create exports that feed the various EPM components. While these exports require different formats for LCM, Essbase and Planning, the mechanics that produce the files are similar. Developers can use basic Excel formulas to craft files into the correct import file formats. Then import the files.

This approach has several benefits:

  1. EPM Security Maintenance is centralized in a single workbook.
  2. EPM Security is self-documenting. (It’s all in the Excel workbook.)
  3. The EPM Security Workbook makes a good security backup.
  4. The EPM Security Workbook can be versioned and archived.
  5. Security migrations are simplified because the same files can be imported across multiple environments.
  6. Importing files is infinitely faster than creating security manually.

A copy of the workbook used in these examples is available here.

Essbase Pro Tip: Build & Maintain Partitions in a Text Editor

Partitions in Essbase are a bit like views in a relational database. They allow the results of a query in one cube to be available in another cube. This data may be “materialized”, or physically stored in both the source and target. These are referred to as replicated partitions. Or the partition may simply provide a “window” from one cube into another. These are referred to as transparent partitions. A third type of partition, linked, isn’t really a partition at all . . . it’s more of a navigational aid. It’s important to note that linked partitions are deprecated as of Essbase 11.1.2.4. That’s a nice way for Oracle to say “stop using linked partitions even though the option is still available”.

Embrace the Command-Line

Most Essbase professionals (myself included) learned to build partitions in EAS. It has a nice tabbed GUI that walks users through the various steps:

  • Type (Replicated, Transparent or Linked)
  • Connection (Source & Target Connection Info)
  • Areas (Source & Target Details)
  • Mappings (How to Handle Specific Differences Between Source & Target)
  • Validation (Is everything correct?)

There’s nothing “wrong” with this GUI . . . in the same way that there’s nothing “wrong” with training wheels. But for those of us who are comfortable in a command-line environment, a text file with good ole MaxL is just more straightforward.

Why Ditch the GUI?

  1. MaxL is safer.
  2. MaxL is faster.
  3. MaxL is easier.
  4. MaxL is repeatable and migratable.

Have you ever “lost” a partition? Maybe it became orphaned and was no longer editable or repairable in EAS? It’s a frustrating experience, and unfortunately there are a variety of reasons partitions can head south. Building and maintaining partitions in a text editor using MaxL creates a tidy little backup. Re-creating the partition or migrating it to another environment is a snap. Simply paste some text into the MaxL command-line and you’ve got a partition!

And speaking of migrations, it’s important to note that LCM cannot currently migrate partitions (as of version 11.1.2.4). The LCM Guide recommends using MaxL for partition migrations. I recommend taking this a step further. Create and maintain your partitions in MaxL in the first place.

Show me the MaxL!

The typical MaxL statement for creating a partition can be broken down into seven sections:

  1. Login (Source)
  2. Create Command
  3. Source Area Definition
  4. Login (Target)
  5. Target Area Definition
  6. Mappings
  7. Logout

In the example below, I’m partitioning data from a Planning application (BSO) called “SrcApp”, into an ASO reporting cube called “TgtApp”. The Planning app (source), has a Version dimension, but the target does not. The target has a “Profit Center” and “Cost Center” dimension, however the source does not.

cmd_text

You may have noticed that there are no area alias “nicknames” in the statement above, and there is only a single area definition on the source and target. Multiple areas are allowed, and mappings at the bottom of the statement can be area-specific. But I usually find that one area definition (and global mappings) gets the job done.

Here’s what you should expect to see when the partition is pasted into MaxL:

cmd_partition

Review your results, looking for errors, warnings and in particular, cell-count mismatches.

Final Pointers

Here are some tips that may help when creating your partitions in MaxL:

  • Enclose area definitions in single quotes.
  • Separate members and functions within area definitions with commas.
  • Do not separate mapping lines with commas.
  • The first set of parentheses in a mapping statement represents the source.
  • The second set of parentheses in a mapping statement represents the target.
  • Map a member to or from a null value when a dimension in the source does not exist in the target (or vice-versa). In the example above, “Final Version” is mapped to “”, because there is no Version dimension in the target cube.
  • End the entire statement with a semi-colon.
  • Do not store passwords in MaxL files unless they’re encrypted.
  • If you’re using Essbase clustering, make sure you log in to the cluster name, not the server name, when you create your partition.
  • Create partitions with service accounts, preferably ones with passwords that do not expire.

Hyperion Planning 11.1.2.3.700 Patch

I’m on a team that recently decided to patch a client environment, upgrading Hyperion Planning 11.1.2.3 from the .501 patch to the .700 patch.  We quickly started having deployment issues related to unknown members in member formulas.  This was with a “classic” Hyperion Planning application, not an EPMA app.

We soon realized that the issues involved member formulas that referenced members whose names included a “%” sign.  An example is below:

Pre_Save

The screenshot above represents a brand-new formula.  It hadn’t been saved, and it hadn’t been validated.  (As you have probably guessed, these two actions led to issues.)

Upon validating (or saving) the formula, an error was introduced.  Interestingly, the validation returned with a “successful” message.

Post_Validate_Error

The number 25 was appended to the “%”.  Subsequent validations appended another “25”.  (“%25” is the URL-encoded form of “%”, which suggests the formula text was being percent-encoded somewhere in the save/validate process.)  The application repository was clean – the formulas stored there were correct.  LCM exports also looked good.  But saving formulas suddenly got risky, and this application had some very, very complex member formulas.

My immediate reaction was to start updating member names and formulas, replacing “%” with “Pct”.  This would normally get the job done; however, this particular application had a LOT of members with “%” in the name.  Some of these members were referenced in FDMEE maps . . . and reports . . . and business rules . . . and partitions.  Renaming was going to be a pain.  Luckily, we had another unpatched environment where development could move forward while we worked the SR.  All of this leads to a conversation regarding patch best practices:

  • Read the Readme file!  It details known issues and fixes addressed by the patch.
  • Patch a Development or Sandbox environment first.
  • Have a back-out plan.
  • If you have apps in Production, make sure you plan for a thorough regression test.
  • Schedule your patches for “low-disruption” time periods.  (Don’t patch right before year-end close or go-live.)
  • If you can, wait for others to patch first.  (The best testers are strangers who don’t bill your project.)

The Ultimate EPM Demo Laptop

As part of my job, I’m often running demos for clients or presenting at conferences and user groups. As such, I need to run a good portion of the EPM stack on my laptop (running virtual machines). Because this requires a fairly beefy laptop, I typically gravitate towards “workstation replacement” type notebooks. I’ve had a Lenovo W520 for several years, and it has served me very well, but with the rapid pace of hardware improvements and my increasing needs, it was time to upgrade. Websites like Tom’s Hardware and AnandTech do a great job of highlighting the newest and fastest bits of hardware, and some are so new that you need an entirely new laptop to run them. I spent a fair amount of time selecting and configuring my demo laptop, and I thought others might benefit from my experience.

Calling a laptop the “ultimate” laptop is a risky proposition. Different people use laptops in different ways. Hardware that represents the epitome of high performance today can look embarrassingly low-tech tomorrow. The items below represent the values that guided my selection:

  • I need to run complex VMs during demos and presentations.
  • I need a machine that’s portable, but just barely.
  • I need a machine that supports 32 GB of RAM.
  • I need a machine that will support the latest hard drives.
  • I value overall performance more than finding a good deal.

The search for a workstation-class laptop usually starts with either Lenovo’s “W” line of laptops or Dell’s “Precision” line. I’ve had both in the past, and I’ve been happy in each case. But neither line currently pushes the performance envelope. As of May 2015, there are two big limitations with these lines:

  1. They are limited to mobile CPUs (currently the Intel Core i7 4940MX, up to 4.0 GHz).
  2. They do not support the latest PCIe 80mm M.2 SSDs (very fast hard drives).

Toyota vs Ferrari

Mainstream manufacturers need to supply laptops that can satisfy corporate purchasing departments that want reliable and affordable hardware . . . something akin to a Toyota. But what if you’re looking for more of a Ferrari? What if you want bleeding edge technology? The answer can be found in video games. Custom “gaming” laptop manufacturers often offer the latest, greatest components long before the mainstream manufacturers. And they also offer premium services (think monitor calibration, or custom paint jobs). If the idea of selecting the thermal paste used to mount your CPU to the motherboard excites you, this is an avenue you should explore.

As a khaki-pants-wearing, corporate type of guy, I didn’t think ordering a custom gaming laptop for work seemed like a mature or even sensible idea. However, I found that most of these builders have options that look fairly tame on the outside. These are laptops that won’t stand out in the corporate world. I ended up looking at laptops from companies like XOTIC PC, Sager, Eurocom and Digital Storm. Several of these companies leverage a chassis and other components from Clevo and, to varying degrees, offer their own special software and internals. In other words, don’t be surprised if you see laptops from several of these companies that look similar on the outside. Pay attention to what’s being offered on the inside, because it can vary, and the internal components are what really matter. Let’s discuss a couple of the more important components.

Go Big or Go Mobile?

The natural assumption is that laptops are limited to mobile processors, but this does not have to be the case. Several of the aforementioned companies will drop a desktop or even server CPU into a laptop. For example:

  • Sager NP9772-S (Desktop Processor: Intel Core i7 4790K, up to 4.4 GHz)
  • Eurocom Panther 5 SE (Server Processor: Intel Xeon with 12 cores)

These processors can offer better performance than the highest-spec mobile processors – sometimes faster, sometimes with more cores. Of course, the consequence is usually a battery that doesn’t last as long. But then again, no one who drives a Ferrari is worried about gas mileage, and the same should be true for a high performance laptop.

Good Things Come in Small Packages

Most of us are familiar with the old 2.5” laptop hard drives (HDDs). These are quite slow by modern standards. If you still have one of these, you are overdue for an upgrade.

HDD

More recent drives look like the SSD below. Unlike the HDD above, these do not have physical disks that spin within the drive. Instead, they leverage flash memory for dramatically improved performance.

SSD

Unfortunately, the drives above both leverage the SATA standard for connecting storage devices within a computer. The SATA standard works with SSDs, but can’t fully exploit the potential speeds of newer drives. In other words, as drives have become faster, the interface that connects those drives to the computer has become the limiting factor affecting performance. Most current laptops only support SATA drives. But certain newer ones support PCIe, which offers higher throughput than SATA.

The PCIe drive below is a 512GB Samsung SM951. Technically, this is also an SSD because it uses flash-based memory to store data, but it does not leverage the throughput-limited SATA interface like the Samsung 840 EVO above. It’s difficult to tell from the picture, but it’s about the size of a stick of Wrigley’s chewing gum. And it is very, very fast.

PCIe

Note the dramatically higher sequential read/write speeds of the PCIe drive (right), compared to a SATA SSD (left).

Crystal_Diskmark

Current PCIe laptop drives leverage the AHCI standard. AHCI was originally designed to work with older HDDs (as opposed to flash-memory based drives). A newer standard called NVMe has been developed specifically for PCIe drives; however, PCIe/NVMe drives are not currently available for laptops. Several have been announced, and they should be available within months. These newer NVMe drives should be even faster than the PCIe drive above, especially with regard to random input/output operations per second (IOPS).

The key takeaway is this . . . laptop hard drive performance has recently improved dramatically, but few laptops currently have the PCIe slots required for the new drives. In the next several months, PCIe drives that leverage the NVMe standard will further improve laptop hard drive performance. All of this will translate into better performing demos.

In summary . . .

Drive_Hierarchy

The “Ultimate” Laptop

After a significant amount of research, I decided on the following specifications for my “Ultimate” EPM Demo Laptop:

  • Sager NP9772-S (17.3” IPS LED-Backlit Matt Finish Display)
  • Intel Core i7 4790K, up to 4.4 GHz (Desktop Processor)
  • 32GB Kingston HyperX RAM
  • Nvidia GeForce GTX 980M GPU w/ 8GB Video Memory
  • 330W Power Adapters (2)
  • Hard Drives:
    • C:\ (OS) Micron M600 512GB M.2 SSD (SATA)
    • D:\ (Data) Samsung 840 Evo 1TB SSD (SATA . . . from old laptop)
    • E:\ (VMs) Samsung SM951 80mm M.2 512GB SSD (PCIe)

To say this laptop is fast is an understatement. It screams. It’s an upgrade from my old laptop in so many ways that it’s difficult to tell exactly what components are contributing to the improved performance. Everything is faster.

Of course, there are a few downsides . . .

  • It’s big and heavy. I wasn’t worried about that . . . I have a roller bag for my laptop.
  • The keyboard is just “ok”. Sadly, nobody makes keyboards like the old Lenovos.
  • The AC Adapter is so big and heavy that I bought two, just so I wouldn’t have to lug it around. On a positive note, it can double as ship ballast or a blunt weapon.
  • The latest hard drives aren’t always available directly from computer manufacturers. You will need to be comfortable opening up your laptop and swapping some components.
  • This laptop will definitely not improve your carbon footprint.


Putting it All Together

At the time I ordered my laptop, the Samsung SM951 wasn’t available from Sager, nor was it available from regular retail outlets. In fact, it’s currently an “OEM only” component, meaning it is only sold directly to computer manufacturers like Dell, Lenovo, etc. for use in a select few of their laptops. That meant I had to order it from Ram City . . . in Australia. On a positive note, the exchange rate is pretty good right now, and Ram City’s customer service is excellent. Everything else was purchased as part of a complete laptop directly from Sager Notebooks.

Laptop

Summary

If you’re looking for a laptop that will present your EPM demos in a positive light, it’s worth it to color outside the lines, and to step away from the mainstream manufacturers. Purchasing a gaming laptop will allow you to use the fastest processors and the latest components in your pursuit of ultimate speed.

Disclaimer:

I have not been compensated by the companies discussed in this post in any way. All opinions expressed are my own and are based upon my own research and experience.

DIY Hyperion Planning – Create a Planning App

Some Context . . .

This is the ninth (and final) post in a multi-part blog series educating readers on how to build a personal “sandbox” environment for Hyperion Planning. Click here to see all nine posts in the series.

Creating a Test Hyperion Planning Application

Once the EPM products are configured, there are three additional prerequisites for creating an application:

  • Create the Planning Application Repository
  • Create the Planning Application Repository Schema User
  • Create the Planning Data Source

If you’ve been following this tutorial from the beginning, you will have already created the Repository (PLANAPP1) and Repository User (also PLANAPP1). These were created when you pasted the SQL code into SQL Plus.

For the purposes of this example, we will be creating a “Classic” Hyperion Planning application. The alternative approach (using EPMA) is rapidly losing popularity. Oracle does not appear to be investing in this product, so “Classic” it is . . .

Start the EPM Services

Locate the “Start EPM System” icon within the apps on your VM. (I would recommend right-clicking on this icon and selecting “Pin to Start”. Do the same with the “Stop EPM System” script.) Run the start script to start up the EPM services on your VM. Some notes about the startup process:

  • On my laptop, startup takes a little less than 9 minutes.
  • Faster processors and hard drives will improve startup time.
  • Compact deployments will start faster than “regular” deployments.
  • The script will close when startup is complete.
  • Watch Task Manager on the VM during startup to monitor CPU and memory utilization.
  • Upon completion of the script, ensure that all services started as expected.
  • Don’t forget to stop the EPM services before you shut down your VM.

Create a Data Source

Once the services have started, open a browser on your host. Navigate to the following URL:

http://<INSERT SERVER NAME HERE>:8300/HyperionPlanning

2-17-2015 11-46-14 AM

Enter the Shared Services admin user ID and password.

Select “Sign In”.

2-17-2015 11-49-38 AM

Select “Manage Data Source”.

2-17-2015 11-50-02 AM

Select “Create”.

2-17-2015 8-08-47 AM

Enter the App Database connectivity information.

Enter the Essbase Server connectivity information.

Validate each connection.

Select “Save”.

Create the Planning Application

2-17-2015 8-10-22 AM

Select “Manage Applications”.

2-17-2015 8-12-30 AM

Select “Create”.

2-17-2015 8-55-03 AM

Select “Advanced Planning”.

2-17-2015 8-55-46 AM

Name your application and select “Next”.

2-17-2015 8-57-05 AM

Select the application parameters. The simplified Planning interface appears to force applications to have an ASO plan type.

(This would appear to be a bug, as the on-premise interface does not require this.)

Select “Next”.

2-17-2015 8-57-38 AM

Review the application settings. Select “Create”.

2-17-2015 8-58-10 AM

Planning will begin creating the application.

2-17-2015 8-58-35 AM

Select “OK”. The application interface will be displayed.

2-17-2015 8-58-58 AM

Next Steps

With the application created, users may now begin the typical Hyperion Planning development activities, including:

  • Building Dimensions
  • Building Input Forms
  • Loading Data
  • Building Calc Manager Rules
  • Building Reports
  • Creating Approvals Processes