Latest Posts

Getting Groovy with the EPM Cloud Migration REST API – Part 1

Oracle provides specific REST APIs for many of its cloud offerings; however, one API in particular is useful for multiple cloud products – the Migration API.  This API is sometimes referred to as the “LCM” API or the “interop” API.  Regardless of the name, this is the API that provides Lifecycle Management functionality for EPM Cloud products.

If you’ve never worked with any of Oracle’s EPM REST APIs, have a look at my previous post here on the PBCS REST API.  This will give you some context and background on the subject.  We will be using Groovy for all examples.  Steps for installing Groovy can be found within this post.

Using the Migration API, developers can perform the following tasks:

  • Get API Versions and Details
  • List Files in the Inbox/Outbox
  • Upload Files to the Inbox/Outbox
  • Download Files from the Inbox/Outbox
  • Delete Files from the Inbox/Outbox
  • List Available Services
  • Restart a Service Instance
  • Reset a Service Instance
  • Upload/Download Application Snapshots
  • Initiate LCM Imports & Exports
  • Send Feedback Reports to Oracle
  • Generate a User Provisioning Report
  • Generate a User Audit Report

Some of the tasks above are not great candidates for automation.  For example, it would be highly unusual to reset your service instance in an automated manner.  I also wouldn’t recommend sending Oracle feedback in any automated fashion.  They may have some “feedback” for you . . .  With that said, there are some extremely useful features of the Migration API, and we will cover them below.

A Word About the Common Helper Functions . . .

As noted in previous posts, the Common Helper Functions offer a significant head start in automating an EPM environment.  This post will reference many of these functions while illustrating how to put them together into a cohesive script.

** NOTE – Each section below has a link where readers may download the code samples.

Getting Started

After opening the Groovy Console, we will need to import some classes from the groovy.json package.  If you’re unfamiliar with JSON, it’s a lightweight format for representing objects and arrays, and it’s the format most REST APIs use to exchange data.  The code below will import the appropriate package (no need to download any additional jar files – these come with Groovy):
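A minimal sketch of that import (the helper functions lean on JsonSlurper):

import groovy.json.JsonSlurper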


Next, we must define variables for our EPM environment and authentication.  If you’ve read my previous post on the PBCS REST API, these should look familiar.  You will need to update these with your instance information and credentials.
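For example (the names and values below are placeholders – substitute your own instance details):

// Hypothetical values – no 'def', so these land in the script binding
// and remain visible inside the helper functions
serverUrl = "https://planning-test-mydomain.pbcs.us2.oraclecloud.com"
domain    = "mydomain"
userName  = "service_admin"
password  = "MyPassword1"

// Credentials in the identitydomain.userID:password format, Base64-encoded
userCredentials    = "${domain}.${userName}:${password}"
encodedCredentials = userCredentials.bytes.encodeBase64().toString()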


First Things First – Determine the API Version

It’s important to note that the API versions for the various EPM products can differ.  The current version for the PBCS API is “V3”; the Migration API maintains its own, separate version.  Several functions require the API version, and they will not work if the wrong API version is passed as a parameter.

To get the current Migration API version, we will need the following functions from the Common Helper Functions:

  • getLCMVersions()
  • executeRequest()
  • fetchResponse()
  • fetchJobStatusFromResponse()

Copy and paste these functions into your Groovy script below the EPM Instance Variables, then insert a line to call the getLCMVersions() function.


At this point, your code should look something like this.

Run your script and review the results in the Groovy Console output window:


At this point, it’s a good idea to define a variable apiVersion, and assign that variable a value within the getLCMVersions() function.
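Something along these lines (a sketch – the property names parsed inside getLCMVersions() depend on the helper code):

apiVersion = ""            // script-binding variable, visible inside functions
getLCMVersions()           // assign apiVersion inside this function when the latest version is parsed
println "Using Migration API version: ${apiVersion}"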

Listing the Files in the Inbox/Outbox

Now that we have the correct API version, we can perform some of the more interesting functions of the Migration API.  For example, we can list the contents of the Inbox/Outbox.  To do this, add the listFiles() function from the Common Helper Functions to your Groovy script.  Then call the function as follows:
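The call itself is a one-liner:

listFiles()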


Run the script and review the list of files in the Groovy Console output window.  At this point, your code should look something like this.

Uploading Files to the Inbox/Outbox

To add files to the Inbox/Outbox, we need to copy the following four functions from the Common Helper Functions into our Groovy script:

  • fetchPingUrlFromResponse()
  • getJobStatus()
  • sendRequestToRest()
  • uploadFile()

Note that the sendRequestToRest() function leverages a third-party library called JSON-lib.  To use this library, download the java-json.jar file (it is available from a variety of locations) and unzip it into the C:\Program Files (x86)\Groovy\Groovy-2.4.6\lib directory.  Add the following import to the top of your Groovy script and you’re ready to go!
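Assuming the jar supplies the org.json package (java-json.jar does), the import is:

import org.json.JSONObject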


Out of the box, the functions above will get the job done, however uploading files from a specific directory produces some undesirable behavior.  For example, if we have a local file called C:\Temp\Test_File.txt, and we upload that file, here’s what shows up in the Inbox/Outbox:


Notice that the file path gets baked into the file name.  We didn’t want to upload a file called “C:\Temp\Test_File.txt”.  We wanted a file called “Test_File.txt”, and it just happened to be originally located in the C:\Temp directory (which is irrelevant once the file is uploaded).

We can rectify this situation by modifying the uploadFile() function, separating parameters for the directory and file name.  A sketch of the modified function follows.
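The endpoint, query string and helper variable names below are assumptions based on Oracle’s sample code – treat this as a sketch, not the exact helper:

// Directory and file name arrive as separate parameters; only the bare
// file name is used in the URL, so the local path never reaches the cloud.
def uploadFile(String dirName, String fileName) {
    def url = new URL("${serverUrl}/interop/rest/${apiVersion}/applicationsnapshots/${fileName}/contents?q={isLast:true,chunkSize:444,isFirst:true}")
    HttpURLConnection connection = (HttpURLConnection) url.openConnection()
    connection.setRequestMethod("POST")
    connection.setDoOutput(true)
    connection.setRequestProperty("Authorization", "Basic " + encodedCredentials)
    connection.setRequestProperty("Content-Type", "application/octet-stream")

    // The local path is assembled from the two parameters
    new File(dirName, fileName).withInputStream { is ->
        connection.outputStream << is
    }
    println "Upload response code: ${connection.responseCode}"
    connection.disconnect()
}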


Next, call the uploadFile() function with two separate parameters for the directory and file name.  Note the double (escaped) backslashes in the Windows path below.
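For example:

uploadFile("C:\\Temp", "Test_File.txt")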


Running the script now uploads the file to the Inbox/Outbox with the correct file name.


At this point, your code should look something like this.

Downloading Files from the Inbox/Outbox

To download files from the Inbox/Outbox, we need to copy the following function from the Common Helper Functions into our Groovy script:

  • downloadFile()

Developers may find that this function benefits from a couple of key changes.  The downloadFile() function doesn’t allow developers to specify a target directory for the downloaded file.  By adding a parameter for the directory, and updating the saveFilePath, we can specify the target directory.  Modify the function as follows.
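Again, a sketch under the same assumptions as the upload example (and note that, as written, it still has the file-handle problem described next):

// Adds a saveDir parameter and builds saveFilePath from it
def downloadFile(String fileName, String saveDir) {
    def url = new URL("${serverUrl}/interop/rest/${apiVersion}/applicationsnapshots/${fileName}/contents")
    HttpURLConnection connection = (HttpURLConnection) url.openConnection()
    connection.setRequestMethod("GET")
    connection.setRequestProperty("Authorization", "Basic " + encodedCredentials)

    def saveFilePath = saveDir + File.separator + fileName
    def outputStream = new FileOutputStream(new File(saveFilePath))
    outputStream << connection.inputStream
    connection.disconnect()
}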



After successfully running the function, you may notice that Groovy doesn’t “release” the downloaded file.  Trying to delete the file in Windows File Explorer results in the following error:


The only way to get past this error is to close the Groovy Console completely.  Upon inspection, it looks like the code opens an output stream, but never closes it.  This is easily rectified by closing the stream at the end of the function.
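In the sketch above, that means one extra line before the disconnect:

outputStream.close()   // release the file handle so Windows can delete or move the file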


After making the changes above, developers should be able to download files from the Inbox/Outbox to any local directory.

Test the downloadFile() function by calling it at the end of the Groovy script.  Note the two parameters used by the function.
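For example, downloading to a hypothetical C:\Temp\Downloads directory:

downloadFile("Test_File.txt", "C:\\Temp\\Downloads")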


If you’ve been following along this whole time, that’s awesome!  Your code should look something like this.

Bed Time . . .

In Part 2 of this post we will cover the remaining aspects of the Migration API . . . Artifact Snapshots, LCM and User Reports.  Check back soon – I should have it posted by the end of next week!

One More Thing

I’m speaking at OpenWorld this year!  My presentation on Essbase Hybrid Aggregation Mode will be in Moscone South – Room 303 on Sunday 9/18 at 1pm.

PBCS Pro Tip: Manage Multiple Test Accounts with One Gmail Address

During the testing phase of most Planning implementations, developers need to create test user accounts.  I typically create at least one test user for each security group so I can verify that the correct access has been assigned.  With an on-premises Hyperion Planning implementation, this is easy – simply create user IDs in the Shared Services native directory.  With PBCS, creating bulk test IDs can be difficult, as each user ID requires a unique e-mail address.  If you need 50 test users, should you create 50 fake/temporary e-mail accounts?  Luckily the answer is no.

The Easy Way . . .

Gmail has a clever feature that allows developers to use (almost) the same address for all test user e-mail accounts, and it involves the “+” sign.  Specifically, Gmail ignores the “+” sign and everything to the right of it in an e-mail address.  Here’s an example.  Let’s say I want to create one test account per security group, and I have the following groups:

  • Sales_Read
  • Sales_Write
  • Marketing_Read
  • Marketing_Write
  • Executive_Read
  • Executive_Write

For illustration, let’s say I created a free Gmail account with the (made-up) address epm.tester@gmail.com.  If I try to use this same e-mail address for all six of my PBCS test accounts, I’ll get an error.  Each account requires a unique e-mail address.  However, I want to manage all of my test accounts from a single e-mail address.  To do this, I need to use a “+” sign in the e-mail addresses of my test users.

Here’s an example.  For the Sales_Read group, I might create an account called Test_Sales_Read.  When I assign an e-mail address to this account, I’ll use epm.tester+Test_Sales_Read@gmail.com.  Gmail will ignore the plus sign and the text to the right of the “+”.  All confirmation e-mails will be sent directly to the epm.tester@gmail.com account.  I can create as many users like this as I want, as long as the text to the right of the “+” sign is unique for each account.  PBCS will treat the e-mail addresses as if they are unique.  My six accounts above might look something like this:

  • epm.tester+Test_Sales_Read@gmail.com
  • epm.tester+Test_Sales_Write@gmail.com
  • epm.tester+Test_Marketing_Read@gmail.com
  • epm.tester+Test_Marketing_Write@gmail.com
  • epm.tester+Test_Executive_Read@gmail.com
  • epm.tester+Test_Executive_Write@gmail.com

When Oracle Cloud sends the user ID confirmation messages with temporary passwords, they will all be delivered to the epm.tester@gmail.com inbox.

On a Personal Note . . .

This feature isn’t just useful for PBCS.  Imagine creating rules within Outlook that route your personal e-mails to folders based upon the following:


Let your e-mail OCD run wild!  (and simplify the creation of PBCS test accounts)

Getting Groovy with the PBCS REST API

For many years, developers have had access to a variety of Essbase APIs that allowed them to write their own programs to automate and interact with their Essbase environments. Hyperion Planning, on the other hand, has had pre-built “utilities”, but it never had anything approaching a true API. With the release of Oracle Planning and Budgeting Cloud Service (PBCS), we now have access to the PBCS REST API. So why is this important?

Automation capabilities affect software adoption (aka sales for Oracle). When you think about it, PBCS offers some truly amazing benefits. It’s cost-effective compared to on-premises environments. It’s faster than many on-premises environments. Administrators don’t have to worry about upgrades or maintaining complex infrastructure. PBCS gets new features long before on-premises. PBCS practically sells itself, right? If PBCS is cheaper, better, faster and easier than on-prem, what could possibly make an organization hold on to their on-prem environment?

For many, it’s the thought of having to rebuild the automation and integration that has been built up around their on-prem applications over the years. Many of the tools and techniques for automating an on-prem environment will not work with PBCS. But while the idea of tearing out years of work related to data integration and application automation can be daunting, it is doable. Let’s look at some options.

Automating PBCS

The primary tool used to automate PBCS is aptly called the EPM Automate Utility. This command line utility allows admins to import and export data and metadata, refresh apps, run business rules, etc. It also allows administrators to upload and download artifacts to the PBCS repository (think automated LCM). The EPM Automate Utility has been discussed extensively in other blogs. It’s well documented and easy to use (albeit somewhat limited). We are going to cover something much more interesting.

RESTing in the Cloud

Oracle’s recent cloud offerings are designed to be RESTful. This means that they conform to a REST (representational state transfer) architectural style. More importantly, it means that apps in Oracle’s cloud are accessible with a REST API. As a matter of fact, the EPM Automate Utility was built on top of the REST API for PBCS.


So why would a developer choose to build a program in Groovy, Perl or PowerShell if they could simply create a batch file that calls the EPM Automate Utility? Because, like all utilities, the EPM Automate Utility has a limited set of commands and invocation methods.  The PBCS REST API opens up possibilities for creative and elaborate automation and integration, using many different languages.

Let’s Build an Example

There are an infinite number of ways developers can invoke and use the PBCS REST API. We are going to build a simple example that uses Groovy to add a Product to the Vision application in PBCS. Before we get started, this example assumes:

  • You have access to a PBCS instance
  • You have Groovy installed
  • You have at least a passing familiarity with JSON

Please don’t be too concerned with that last point. I didn’t know what JSON was before I started working with the REST API for PBCS. It’s basically a data structure for passing lists and arrays between web apps.

Step 1 – Download the JSON Jar File

In order to leverage JSON from within Groovy, developers will need the java-json.jar file. This can be downloaded from a variety of locations. It should be placed in the Groovy library. Mine is here:

C:\Program Files (x86)\Groovy\Groovy-2.4.6\lib

Step 2: Get Familiar with the Common Helper Functions

Oracle provides us with a set of Common Helper Functions for Java, Groovy and cURL. This is nice, because they provide a good head start when using the PBCS REST API. Unfortunately, there are defects in the Groovy samples. For example, one of the functions is supposed to return error details, but uses an incorrect property:


Another function omits an important slash. This causes the function to return text/html content instead of application/json content. This was fairly confusing to a new REST API developer!


I would personally like to see some comments added to these samples. With all of that said, they are extremely helpful when getting started with the PBCS REST API, and are definitely worth review.

Step 3: Start Coding

Let’s get started by opening the GroovyConsole. The settings below will enhance your Groovy experience:

  • <View>, <Show Script in Output> (Unselect this option as it clutters up your output.)
  • <View>, <Auto Clear Output on Run> (Select this so you only see your latest output.)
  • <Script>, <Allow Interruption> (Enable this so you can cancel scripts that go south . . . )

The settings above are “sticky”, so developers will only have to set them once.

To begin, we’ll need to import two libraries:

  • org.json.JSONObject
  • groovy.json.JsonSlurper

The JSONObject library will allow the script to transmit data in a JSON format. The JsonSlurper library (included when you install Groovy) will allow the script to parse JSON data. Type the code below into the GroovyConsole (or copy/paste from here).
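A sketch of those imports:

import org.json.JSONObject      // provided by java-json.jar
import groovy.json.JsonSlurper  // ships with Groovy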


Press the Execute Groovy Script button. Ensure that the completion notice is successful. If the notice states anything other than “Execution complete. Result was null.” there is a good chance that the java-json.jar file was not extracted into the correct location, or the GroovyConsole was not restarted.

Next, we’ll add some variables that identify our user and PBCS instance:
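For example (placeholder values – substitute your own):

// No 'def' here – binding variables remain visible inside functions
serverUrl = "https://planning-test-mydomain.pbcs.us2.oraclecloud.com"
domain    = "mydomain"
userName  = "service_admin"
password  = "MyPassword1"
appName   = "Vision"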


Be sure to update the URL, domain, user ID and password for PBCS (the values above are placeholders). This example will be using the Vision sample application. If using a different application, be sure to update the app name.

Step 4: Encode Your User Credentials

The PBCS REST API expects credentials to be sent in the following format: identitydomain.userID:password. In addition, authorization information is sent in an encoded format. The code below accomplishes this.
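A sketch of the encoding step:

// Format: identitydomain.userID:password, Base64-encoded for Basic authentication
userCredentials    = "${domain}.${userName}:${password}"
encodedCredentials = userCredentials.bytes.encodeBase64().toString()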


Step 5: Get Some Help from Oracle

Next, we will borrow several bits of sample code from the Common Helper Functions for Groovy. The two functions below read input from our REST requests and slurp it into JSON structures. This sounds complex, however it’s really just reading a long string and placing the data into a more usable format.
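To see what “slurping” means in isolation, here is a standalone example (not the helper functions themselves):

def slurper = new JsonSlurper()
def result  = slurper.parseText('{"status":0,"items":[{"name":"Test_File.txt"}]}')
println result.items[0].name   // prints: Test_File.txt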


Another important bit of borrowed code is the executeRequest function (also included in the Common Helper Functions for Groovy). I have made some slight modifications to this code to make it more readable while adding some comments. This function calls the two “fetch” functions above, and is responsible for assembling and executing our REST “request”.


Step 6: Your Wish is REST’s Command

At this point, we have three functions, all geared towards asking the PBCS REST API to handle a request. Now it’s time to actually construct a request and execute it. We will do this by building a function called addMember. We are going to add a Product to the Vision PBCS application. It is important to note that the parent to which we are adding this member must be enabled for dynamic children. We are going to add our new Product to the parent P_TP3.


I wrote the function below to add members to a dimension. For demonstration purposes, we are only adding the required properties (parent and member name). In a “real” application, we would include more properties and they would vary depending on the type of dimension being addressed.
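Here’s a minimal sketch of such a function. The endpoint follows the documented pattern for adding a member to a dimension, but treat the details (version segment, payload property names, executeRequest signature) as assumptions to verify against your environment:

// Adds a member beneath a parent that is enabled for dynamic children
def addMember(String dimName, String parentName, String memberName) {
    def url = new URL("${serverUrl}/HyperionPlanning/rest/v3/applications/${appName}/dimensions/${dimName}/members")
    def payload = new JSONObject()
    payload.put("memberName", memberName)
    payload.put("parentName", parentName)
    executeRequest(url, "POST", payload.toString(), "application/json")
}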


In the end, a REST request needs four things:

  • A URL
  • A Request Type (“GET”, “PUT”, “POST”, “DELETE”)
  • A Payload (usually a JSON data structure)
  • A Content Type

The function above simply assembles these four items and passes them to the executeRequest function.

We call the addMember function as follows (passing the dimension, parent and member name):
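For example:

addMember("Product", "P_TP3", "P_999")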


Step 7: Test the Code!

Before we start our test, let’s look at the Product dimension in the Vision app. Here we see that our parent member P_TP3 has four children.


Our function should add a new child called P_999. Let’s run our script and see what happens! Press the Execute Groovy Script button.


Our script output says that the member was added to PBCS. Let’s look in Planning and make sure.


What happens if we try to add the member again?


As expected, PBCS won’t let us add a member twice. If you don’t receive the error details above, you probably didn’t fix the defects I showed you with the sample code.


This was a very simple demonstration of the PBCS REST API, but there are some complex use cases for this type of automation. Many other sites on the web are RESTful. Imagine a process that builds hierarchies automatically based upon data from other websites. What about a set of functions that automatically update FX rates in your Planning apps? With the PBCS REST API, you can get very, very creative.

Essbase Pro Tip: Compare Outlines with Groovy

I’ve had a recurring challenge at several of my more recent clients. They have had lots of apps, lots of environments, and lots of developers working on different initiatives at the same time. So when it came time to move a cube from DEV to TEST, or TEST to PROD, I was always a bit nervous. I wanted to fully understand the differences between my outlines across environments before I migrated anything, so I set out to build an Outline Comparison utility.

Reinventing the Wheel . . .

I am, of course, not the first person to attack this challenge. Terry Ledet at US-Analytics has a very sophisticated and fast utility that leverages a relational database. I’m hoping he’ll post a blog about his approach soon. Doug Bliss has a solution that appears to leverage the VB API; unfortunately, that API is no longer supported in recent Essbase releases. Evgeniy Rasyuk has a utility written with the Java API; however, it requires a commercial license. Other people have used a combination of the Applied OLAP Outline Extractor and file comparison utilities. While useful, this approach produces less-than-readable results when there are a variety of differences across cubes.

In the end, I wanted something that met my specific needs and would allow others to leverage my code. I wanted a solution that had:

  1. Open Source Code – use it and modify it however you want
  2. No Cost – free for everyone
  3. Straightforward Results – simply list the differences between two outlines
  4. Simple Architecture – no relational database required

Note – Readers may download the latest version of the Outline Comparison utility here.

I am Not a Java Developer . . .

The Java API is clearly a part of Oracle’s path forward for Essbase, and this makes it a logical choice for developing a utility of this sort. However, I’m not a Java developer. But I was lucky enough to co-edit Cameron Lackpour’s first book Developing Essbase Applications. And in this book, Joe Aultman wrote a chapter on automating common Essbase activities with Groovy. Now, I’ve always had a lot of respect for Joe’s Essbase knowledge. But to be completely honest, I thought his love of Groovy was a bit . . . eccentric. After all, how many people are actually going to write Java API code to automate Essbase tasks? Well, it turns out I was wrong, because Groovy is really easy to use and very powerful when combined with the Essbase Java API. Groovy provided the “training wheels” I needed to build a Java-based Outline Comparison utility.

What is Groovy?

Groovy is a programming language that leverages the Java platform and takes a lot of the hard work out of developing Java apps. While I’m probably oversimplifying, it’s basically a “crutch” for Java. Rather than go into the specifics of Groovy, I’d encourage you to read Joe’s chapter in Developing Essbase Applications. I’m going to jump into the good stuff – let’s build something.

Download and Install Groovy

Click here to navigate to the Groovy distributions. For Windows, select the “Windows Installer”, circled in green below.


Download and launch the installer.

Select your preferred language.


Select OK.


Select Next.


Select I Agree.


Perform a Full installation. Select optional components. (I selected them all.)


Tell Groovy where to install. Select Next.


Select Install.

Groovy will begin installing.


Look for the word Completed. Select Next.


Configure Environment Variables and Path. Select Next.


Configure File Associations. Select Next.


Select Finish.

In your applications, find the shortcut to Start GroovyConsole. Launch Groovy with this shortcut. A blank GroovyConsole will be displayed.


The Groovy Console

The Groovy console is pretty straightforward. To make sure things are working after the install, we’ll write a simple program below that outputs the text “Is this thing on?”.
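That program is a single line:

println "Is this thing on?"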


Enter the text println "Is this thing on?" in the white script section. Press the Execute Script button. If the text “Is this thing on?” was printed in the yellow output section, you’re ready to start building your Outline Compare Utility.


Developing the Outline Compare Utility

The basic steps for building an Outline Compare Utility are as follows:

  1. Import the appropriate Essbase API files
  2. Read variables from a config file
  3. Define output file
  4. Connect to Cube #1 and Cube #2
  5. Open the Essbase Outlines
  6. Loop through all members in both cubes
    1. For Members in Cube #1 . . . Make sure they exist in Cube #2
      1. For Members in Both Cubes . . . Make sure properties are the same
      2. For Members in Both Cubes . . . Make sure member order is the same
    2. For Members in Cube #2 . . . Make sure they exist in Cube #1
  7. Close and disconnect

Importing the Essbase API Files

To take advantage of the Essbase Java API when using Groovy, we need to help Groovy find the appropriate jar files. These files will not exist on your computer unless you have installed the Essbase client software.

The location of the Essbase Java API jar files will vary depending on your installation options. On my laptop, they were located in C:\Oracle\Middleware\EPMSystem11R1\common\EssbaseJavaAPI\\lib.

Copy the files below into the C:\Program Files (x86)\Groovy\Groovy-2.4.5\lib directory.

  • ess_es_server.jar
  • ess_japi.jar
  • ess_maxl.jar

Please note – there are multiple ways to handle external jar files with Groovy. This is one of several methods.

Now we can import the jar files into our Groovy script with the code below:
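Typical imports look like this (package names come from the Essbase Java API; adjust if your version differs):

import com.essbase.api.base.*
import com.essbase.api.session.*
import com.essbase.api.datasource.*
import com.essbase.api.metadata.*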


Create a directory for your program, and save the file as OtlCompare.groovy.  (Place your properties file in the same directory.)

The Config File

I didn’t want to modify my code whenever I wanted to compare a new set of cubes. But I wasn’t ready to build a GUI to prompt users for login information, cube names, etc. This is something I may add at a later date. Instead, I decided to store this information in a config file and have the program read that file. Luckily, Groovy has a utility called ConfigSlurper() that reads config files. I simply created the file with the variables I wanted, defined the file and told ConfigSlurper() to go get my variables.

My Config file looks like this:
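Here’s a hypothetical version of the file – names and values are illustrative:

server1   = "essbase-dev.example.com"
user1     = "admin"
password1 = "password1"
app1      = "Sample"
db1       = "Basic"

server2   = "essbase-test.example.com"
user2     = "admin"
password2 = "password2"
app2      = "Sample"
db2       = "Basic"

logDir    = "C:\\Groovy\\OtlCompare"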


Please note that I placed all of the files related to this application in the logDir referenced above, including the Groovy source code, properties, and output. You will need to modify this location.

Next, I updated my Groovy script to read the config file as follows:
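A sketch of that code (the file names are assumptions):

// Define and parse the config file
def configFile = new File("C:\\Groovy\\OtlCompare\\OtlCompare.properties")
def config     = new ConfigSlurper().parse(configFile.toURI().toURL())

// Create the log file and write the start time
def logFile = new File(config.logDir + "\\OtlCompare.log")
logFile.write("")   // create (or clear) the log
logFile.append("Start time:\t" + new Date().toString() + "\r\n")
println "Start time: " + new Date().toString()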


In the code above, we define the config file and let ConfigSlurper parse it. After that, we can reference any of the variables in the config file by prefixing the variable name with “config.” – for example, config.logDir, the location for the log file that we defined in our config file. This really highlights the beauty of Groovy. With two lines of code, we processed an entire config file, storing 13 separate variables.

We then create the log file (we will append to it in future steps) and write the current time to it.

Note – appending “\t” inserts a tab into a text file.

Note – appending “\r\n” inserts a carriage return and line feed (a new line) into a text file.

Running this script should produce output similar to the image below. It should create a new log file whose location is defined by your config file:


The log file should look like this:


Note that we are sending the output to both the console (using println statements) and the log file (using logFile.append statements).

Connect to Essbase and Open Outlines

The lines below initialize the Essbase Java API and connect to Essbase. Note the use of variables defined in the config file. Any variable prefixed with “config.” is defined in the config file.
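A sketch of the sign-on sequence – the provider URL is an assumption (embedded mode or a different APS port would change it):

// Initialize the Java API and sign on to both servers
IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION)
def olapSvr1 = ess.signOn(config.user1, config.password1, false, null,
        "http://" + config.server1 + ":13080/aps/JAPI", config.server1)
def olapSvr2 = ess.signOn(config.user2, config.password2, false, null,
        "http://" + config.server2 + ":13080/aps/JAPI", config.server2)

// Open the two outlines
def outline1 = olapSvr1.getApplication(config.app1).getCube(config.db1).openOutline()
def outline2 = olapSvr2.getApplication(config.app2).getCube(config.db2).openOutline()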


Now that the two outlines are open, we can begin looping through the member names comparing properties.

Looping Through Members

To loop through the members in the outlines, we will create a function called checkMembers. What makes this function interesting is that it is recursive – it calls itself. The function starts by looping through the dimensions. It then loops through the children of each member until there are no more members to process, working its way through the entire tree.

We first loop through Outline #1, checking to see if each member exists in Outline #2. If the member exists in both outlines, we check to make sure the properties are the same. Anytime we run across a mismatch or missing member, we make a note in the log.

Next we loop through the members in Outline #2, checking to see if each member exists in Outline #1. But this time, if the member exists in both outlines, we don’t bother checking the properties, because we would have already done this in the first loop. Instead we’re simply looking for extra members that exist in Outline #2, but not in Outline #1. The function has a parameter that allows us to skip the member property check.

There is a significant amount of code in this section. Rather than list all of the code, readers can download the entire OtlCompare.groovy file here.

Close Outlines and Log Off

After we have looped through the members, we need to close the outlines and log off from Essbase.
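Following the naming in the sketch above:

outline1.close()
outline2.close()
olapSvr1.disconnect()
olapSvr2.disconnect()
ess.signOff()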


In this example, I also write the ending time to the log and calculate the elapsed time for the procedure.

Final Thoughts

I’m really happy with the results of this script. It simplifies the process of comparing outlines, making it faster and more accurate. With that said, there’s always room for improvement. The script works well against small- and medium-sized outlines. But when I ran it against a pair of outlines with 100,000+ members each, it took almost 30 minutes. I’m sure I can make it faster. I’d also like to add support for checking all alias tables, attribute dimensions, non-unique member outlines, etc. Lastly, when the script flags members as missing, it then flags other members as “out of order” when they really aren’t in the wrong order – their “previous member” or “next member” simply isn’t there. I should be able to clean this up.

In the end, it does exactly what I needed it to do – identify differences between outlines. And it does so in a tab-delimited file that can easily be sorted and filtered in Excel.


EPM Pro Tip: Build & Maintain Security in Excel

Agatha Christie once argued against the notion that necessity is the mother of invention. She said, “Invention, in my opinion, arises directly from idleness, possibly also from laziness – to save oneself trouble”. This is the guiding principle I follow when designing and building security models for Hyperion Planning and Essbase. So how can a bit of laziness take the “trouble” out of EPM security? By leveraging Excel and letting formulas do the heavy lifting. This is an approach I’ve been using for over a decade, and I’m certain I’m not the only one. Although some of the tools have changed (think LCM), the general approach has been the same.

Step 1: Create a user-friendly, table-like input worksheet where administrators can enter security information.

Step 2: Create separate “export” worksheets for each of the basic components of EPM security (Shared Services, Essbase, Planning, etc.).

Step 3: Take the content from the input worksheet(s) and link it to the “export” worksheets with formulas, placing everything into the correct format.

Step 4: Import security & check for errors.


Show & Tell

Let’s look at a simple example. Keep in mind that based upon your particular requirements, this approach can have a lot of variations. For example, you may not be able to create your own groups, because your organization prefers groups to be maintained in Active Directory. In our example, we’ll use native Shared Services users and groups. The same principles apply . . . you’ll just have to tweak things a bit.

In our example, our administrator has to maintain security in a Hyperion Planning application and a native Essbase reporting cube. The applications both have the same users, and they have similar security requirements. We’ll use my Starbucks Hyperion Planning demo. It contains all of the global Starbucks, Teavana and Seattle’s Best Coffee locations. (The locations are real, but everything else is made up . . .) We’ll create groups for the 12 main countries/regions in which Starbucks operates.


In the examples below, an Excel workbook will be highlighted.  The full workbook can be downloaded here.

The Simple Input Worksheet

In our “simple input worksheet”, we will need some basic information. We’ll need to know:

  • What are the groups with access to the EPM products?
  • What members can these groups access?
  • What is the level of access?
  • Who is in the groups?

Here is what our simple input worksheets look like. In our example, we will create 24 groups (one read and one write for each country/region). We will also add one test user to each group. We will assume that all groups have access to both Planning and Essbase.

What are my groups and what can they access?


Who is in my groups?


The Lazy Bit

Now that we have collected the basic information we need, we can begin to create our outputs. These are the files that we will import into Shared Services, Essbase and Planning to actually build our EPM security.

We’ll start by creating an LCM import file that creates our groups. However, we may not know how to format this file. How do we get around this? Follow these steps:

  • Manually create one native group in Shared Services.
  • Manually add a user to the group.
  • Manually provision the group.

I provisioned my sample group like this:


Next we need to export the security we just created manually in Shared Services. This will give us an idea of what the Shared Services import files should look like.

Here’s what we will export from Shared Services in LCM:


Note that even groups and users that were created in Active Directory will be exported when the “Native Directory” above is selected, assuming they have been provisioned.

Select the Export button. Once the export is complete, browse to the file under the File System. Right-click on the file and select Download. Save the file to your desktop and unzip it. The following file structure will be available:


If we edit the Groups.csv file, we can immediately see the format of the file we need to create that will ultimately be used to create the new native groups in Shared Services. In this file we see the group that was previously created manually and we see the test user that was assigned to the group.


** Note - Leave the internal_id above blank when creating the new import file.

** Note - If you’re using Active Directory users or groups, your providers will be slightly different.

Create the Shared Services Group Import File

Using the same basic format as our exported Groups.csv file, we can create a new file (within our EPM Security Workbook). This file will create our groups and associate our users with the groups.

It’s important to note that many of the cells below are populated with simple formulas that reference the Groups_and_Entities or the Groups_and_Users worksheet. Note the formula circled below.


This file can either be saved as the Groups.csv file (in the previously unzipped folder structure), or the administrator can copy and paste the results into the Groups.csv file (the one previously exported from Shared Services). Then, simply zip the files back up (starting at the top of the LCM file structure).


Upload and Import the Zipped Shared Services Files

From within Shared Services, right-click on the File System. Select Upload.


Browse to the zipped Shared Services files. Select Finish, then OK.

Next, expand the file within the Shared Services File System. Select the Groups.


Select Import. Ensure that the groups were created successfully and that the users were added as expected.


Provision the Groups

Now that the groups are imported and the users have been added to the groups, it’s time to provision the groups. Navigate to the Assigned Roles directory in the previously exported LCM file.

2015_09_29_SS_Security\HSS-Shared Services\resource\Native Directory\Assigned Roles\Planning

Modify the Planning application file to provision each of your groups, again, using formulas to reference the groups in your Groups_and_Entities tab. Pay special attention to the role numbers. (This is why we create a Group and User manually first, and then export the file. This way, we can copy items such as the role number – something we otherwise would probably not know.)


Re-zip the Shared Services LCM files, upload the file and import the Planning Assigned Roles.


At this point, all of the groups should be provisioned as Planners.


Repeat this process and provision the groups for Essbase and FR (based upon the files previously exported containing our manually created test group). Groups must be provisioned in Shared Services for Essbase and Planning before application-specific security can be applied.

Create Essbase Filters and Grant Them to Groups

Woo hoo! At this point, we’re finished with Shared Services and LCM. All of the groups have been created. Users have been added to the groups. All of the groups have been provisioned. And all of this has been done by importing files built based upon simple formulas.

Next we need to create our Essbase filters. For this, we will use Excel formulas to craft MaxL statements. We can paste these statements directly from Excel into the MaxL command prompt.

Let’s start with the basic MaxL that we want in our Security workbook. The MaxL syntax can be found here.
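Here’s a representative statement – the app/db, filter, group and member names are illustrative:

create or replace filter SB_ESB.SB_ESB.f_Sales_Read
    no_access on '@IDESCENDANTS("Location")',
    read on '@IDESCENDANTS("United States")';

grant filter SB_ESB.SB_ESB.'f_Sales_Read' to 'Sales_Read';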


Note the common practice of locking down a dimension with “no_access” and then selectively opening it up with read or write lines.

If we paste the MaxL into a MaxL command line, we’ll see that the filter is created and granted to the specified group. But what if we want to create these commands in bulk in Excel? We simply use formulas with the concatenate function.

Using Excel formulas, we can craft the following file:


The MaxL above is broken up into four columns. These columns have the following formulas:

Column A: =CONCATENATE("create or replace filter SB_ESB.SB_ESB.f_", Groups_and_Users!A3)

Column B: =CONCATENATE(" no_access on '@IDESCENDANTS(""Location"")', ")

Column C: =IF(RIGHT(A3, 4) = "Read", CONCATENATE(" read on '@IDESCENDANTS(""", Groups_and_Entities!B3, """)'; "), CONCATENATE(" write on '@IDESCENDANTS(""", Groups_and_Entities!B3, """)'; "))

Column D: =CONCATENATE("grant filter SB_ESB.SB_ESB.'f_", Groups_and_Entities!A3, "' to '", Groups_and_Entities!A3, "';")

Note the use of spaces and quotes in the formulas above. Also note that by placing the words “read” or “write” in the actual group names, we can reference this in the formulas when building the filters, thus granting either read or write access.

Once our formulas are sorted, we can simply paste the MaxL directly from Excel into a command prompt. Our filters are immediately created and granted to all of our groups. Subsequent changes are as easy as a copy/paste into MaxL.


Create a Hyperion Planning Security Import File

The last step in our process is to create the file to import security into our Hyperion Planning application. Again, we will use Excel and basic formulas. As with LCM, it can be helpful to add a minimal amount of security to Hyperion Planning manually, and then export a sample secfile.txt file. This makes it easier to determine the format of the import file.

To export a Planning security file, navigate to the Planning directory on the server:


Execute the following command (update the app, user ID and file location):

ExportSecurity.cmd /A=SB_BSO,/U=admin,/TO_FILE=C:\\temp\\SecFile

Review the security export file.

Our test security export contains the one piece of security information we assigned manually in Hyperion Planning:


We can now craft our full import file using this same format.

In Excel, we can build a file like this:


Here is the Excel formula that concatenates this information from the Groups_and_Entities tab:

=CONCATENATE(Groups_and_Entities!A3, ",", Groups_and_Entities!B3, ",", Groups_and_Entities!D3, ",", Groups_and_Entities!C3)

We can then copy the contents of this file into the secfile.txt in the planning folder. Unfortunately, the ImportSecurity utility must generally be run from the server, in the following directory:


Upload the security with the following command:

ImportSecurity.cmd “SB_BSO,admin,SL_COMMA,0,”

Check the ImportSecurity.log in the following directory:


Look for something like this:


In Summary . . .

Building security manually is a tedious, time-wasting exercise. And in the end, administrators are left with a manual process to maintain. By centralizing security development in an Excel workbook, developers can create exports that feed the various EPM components. While these exports require different formats for LCM, Essbase and Planning, the mechanics that produce the files are similar. Developers can use basic Excel formulas to craft files into the correct import file formats. Then import the files.

This approach has several benefits:

  1. EPM Security Maintenance is centralized in a single workbook.
  2. EPM Security is self-documenting. (It’s all in the Excel workbook.)
  3. The EPM Security Workbook makes a good security backup.
  4. The EPM Security Workbook can be versioned and archived.
  5. Security migrations are simplified because the same files can be imported across multiple environments.
  6. Importing files is infinitely faster than creating security manually.

A copy of the workbook used in these examples is available here.

Essbase Pro Tip: Build & Maintain Partitions in a Text Editor

Partitions in Essbase are a bit like views in a relational database. They allow the results of a query in one cube to be available in another cube. This data may be “materialized”, or physically stored in both the source and target. These are referred to as replicated partitions. Or the partition may simply provide a “window” from one cube into another. These are referred to as transparent partitions. A third type of partition, linked, isn’t really a partition at all . . . it’s more of a navigational aid. It’s important to note that linked partitions are deprecated in recent Essbase releases. That’s a nice way for Oracle to say “stop using linked partitions even though the option is still available”.

Embrace the Command-Line

Most Essbase professionals (myself included) learned to build partitions in EAS. It has a nice tabbed GUI that walks users through the various steps:

  • Type (Replicated, Transparent or Linked)
  • Connection (Source & Target Connection Info)
  • Areas (Source & Target Details)
  • Mappings (How to Handle Specific Differences Between Source & Target)
  • Validation (Is everything correct?)

There’s nothing “wrong” with this GUI . . . in the same way that there’s nothing “wrong” with training wheels. But for those of us who are comfortable in a command-line environment, a text file with good ole MaxL is just more straightforward.

Why Ditch the GUI?

  1. MaxL is safer.
  2. MaxL is faster.
  3. MaxL is easier.
  4. MaxL is repeatable and migratable.

Have you ever “lost” a partition? Maybe it became orphaned and was no longer editable or repairable in EAS? It’s a frustrating experience, and unfortunately there are a variety of reasons partitions can head south. Building and maintaining partitions in a text editor using MaxL creates a tidy little backup. Re-creating the partition or migrating it to another environment is a snap. Simply paste some text into the MaxL command-line and you’ve got a partition!

And speaking of migrations, it’s important to note that LCM cannot currently migrate partitions. The LCM Guide recommends using MaxL for partition migrations. I recommend taking this a step further: create and maintain your partitions in MaxL in the first place.

Show me the MaxL!

The typical MaxL statement for creating a partition can be broken down into seven sections:

  1. Login (Source)
  2. Create Command
  3. Source Area Definition
  4. Login (Target)
  5. Target Area Definition
  6. Mappings
  7. Logout

In the example below, I’m partitioning data from a Planning application (BSO) called “SrcApp”, into an ASO reporting cube called “TgtApp”. The Planning app (source), has a Version dimension, but the target does not. The target has a “Profit Center” and “Cost Center” dimension, however the source does not.
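Here’s a sketch of what that statement might look like. The server, database, account and member names are illustrative, and the comments map each piece to the seven sections above:

/* 1. Login (source) */
login 'svc_account' 'password' on 'SourceServer';

/* 2. Create command and 3. source area definition */
create or replace transparent partition SrcApp.Plan1
    area '@IDESCENDANTS("Entity"), "Final Version"'

/* 4. Login (target) and 5. target area definition */
to TgtApp.Rpt1 at 'TargetServer'
    as 'svc_account' identified by 'password'
    area '@IDESCENDANTS("Entity")'

/* 6. Mappings (no commas between mapping lines) */
    mapped globally ('"Final Version"') to ('')
    mapped globally ('') to ('"No Profit Center"')
    mapped globally ('') to ('"No Cost Center"');

/* 7. Logout */
logout;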


You may have noticed that there are no area alias “nicknames” in the statement above, and there is only a single area definition on the source and target. Multiple areas are allowed, and mappings at the bottom of the statement can be area-specific. But I usually find that one area definition (and global mappings) gets the job done.

Here’s what you should expect to see when the partition is pasted into MaxL:


Review your results, looking for errors, warnings and in particular, cell-count mismatches.

Final Pointers

Here are some tips that may help when creating your partitions in MaxL:

  • Enclose area definitions in single quotes.
  • Separate members and functions within area definitions with commas.
  • Do not separate mapping lines with commas.
  • The first set of parentheses in a mapping statement represents the source.
  • The second set of parentheses in a mapping statement represents the target.
  • Map a member to or from a null value when a dimension in the source does not exist in the target (or vice-versa). In the example above, “Final Version” is mapped to “”, because there is no Version dimension in the target cube.
  • End the entire statement with a semi-colon.
  • Do not store passwords in MaxL files unless they’re encrypted.
  • If you’re using Essbase clustering, make sure you login to the cluster name when you create your partition, and not the server name.
  • Create partitions with service accounts, preferably ones with passwords that do not expire.

Hyperion Planning Patch

I’m on a team that recently decided to patch a client environment, upgrading Hyperion Planning from the .501 patch to the .700 patch.  We quickly started having deployment issues related to unknown members in member formulas.  This was with a “classic” Hyperion Planning application, not an EPMA app.

We soon realized that we had issues with member formulas that referenced members with names that included a “%” sign.  An example is below:


The screenshot above represents a brand-new formula.  It hadn’t been saved, and it hadn’t been validated.  (As you have probably guessed, these two actions led to issues.)

Upon validating (or saving) the formula, an error was introduced.  Interestingly, the validation returned with a “successful” message.


The number 25 was appended to the “%”.  (“%25” happens to be the URL-encoded form of “%”, which suggests the formula was being URL-encoded when saved.)  Subsequent validations added another “25”.  The application repository was clean – formulas there were correct.  LCM exports also looked good.  But saving formulas suddenly got risky.  This application had some very, very complex member formulas.

My immediate reaction was to start updating member names and formulas, replacing “%” with “Pct”.  This would normally get the job done, however this particular application had a LOT of members with “%” in the name.  Some of these members were referenced in FDMEE maps . . . and reports . . . and business rules . . . and partitions.  Renaming was going to be a pain.  Luckily we had another unpatched environment where development could move forward while we worked the SR.  This leads us to a conversation regarding patch best practices:

  • Read the Readme file!  It details known issues and fixes addressed by the patch.
  • Patch a Development or Sandbox environment first.
  • Have a back-out plan.
  • If you have apps in Production, make sure you plan for a thorough regression test.
  • Schedule your patches for “low-disruption” time periods.  (Don’t patch right before year-end close or go-live.)
  • If you can, wait for others to patch first.  (The best testers are strangers who don’t bill your project.)


Excuses . . . Excuses . . .

I do my best to publish at least one meaty, technical EPM blog post per month. But that’s just not going to happen this month. I have three excuses of varying quality:

  • Excuse #1:  My current project is in “sprint” mode with some aggressive development timelines.
  • Excuse #2:  I’m working on my presentation for Top Gun 2015 US.
  • Excuse #3:  I have a new toy.

I wouldn’t normally blog about that last excuse (the toy), but I’m going to try to tie it in with Excuse #2.

On September 17th, I’ll be speaking at Infratects’ Top Gun US 2015 conference. I’ll be presenting on Essbase Hybrid Aggregation Mode, and I’m very excited about the opportunity.

I have two rules for these types of presentations:

  • Rule #1: Show a live demo of something really interesting.
  • Rule #2: Have some “giveaways” for people who ask questions.

Because I needed some “giveaways”, and because Essbase databases are typically referred to as “cubes”, I obviously had to go and buy a MakerGear M2 3D Printer kit and make some 3D printed “Gear Cubes”. I had no choice, really . . . so if you attend my presentation and ask an engaging question, you could be the proud new owner of one of these:

Top Gun US 2015 has some amazing speakers and should be a great opportunity to network with Oracle product management and your EPM peers. I hope to see you there!



Essbase Hybrid Aggregation Mode & BSO Limits

There is currently a lot of excitement around the new Essbase Hybrid Aggregation Mode. As this new feature matures, clients are starting to ask about the types of cubes that are best suited for conversion to the new calculation engine. While Hybrid Aggregation Mode is pretty amazing, a side project of mine recently reminded me of a specific class of cubes that are not yet appropriate for conversion.

First, a bit about the side project . . . I’m working on building a set of “reference” cubes that I can easily deploy in various environments to benchmark physical (and virtual) infrastructure. Imagine a set of cubes that run through a standard series of data loads, calculations and retrieves where the performance is recorded for comparison against other environments. The idea is to arrive at a “score”, so I know early in an implementation when I’m working on suboptimal hardware. For this to work, I need some really big cubes (BSO, ASO and Hybrid). And when one builds really big cubes, they are sometimes reminded of Essbase’s size limitations, because when reached . . . dimension builds fail. These limits are very nicely documented here.

The basic Essbase size limitations are pretty straight-forward:

  • BSO cubes can have a maximum of roughly 1,000,000 members.
  • ASO cubes can have a maximum of roughly 10,000,000 – 20,000,000 members.

Hybrid cube limitations are not specifically called out in the documentation (yet), but we can make some assumptions:

  • We know that Hybrid cubes start out life as normal BSO cubes.
  • We assume that BSO limits apply to Hybrid cubes. (My limited testing appears to confirm this.)
  • We know that BSO cubes have lower size limitations than ASO cubes.
  • Therefore, certain large ASO cubes can not be converted to Hybrid.

In addition to the basic size limitations above, there is a slightly different limit that developers are more likely to encounter:

  • BSO cubes can have a maximum of 2^104 stored sparse member combinations.

Here’s what makes this limit so interesting . . . the documentation (which is very, very good overall) is incorrect. BSO cubes are not limited to 2^104 stored sparse member combinations. Instead, they are limited to “Two Groups of 2^52” stored sparse member combinations. What does that mean? How do we know the documentation is wrong? Let’s dig a little deeper.

Over 20 Nonillion Stored Sparse Member Combinations!

2^104 is a very large number. It’s a smidge over 20 nonillion. Don’t know what a “nonillion” is? I didn’t either.

2^104 ≈ 20,282,409,603,651,700,000,000,000,000,000

To understand this limit, we must first understand how to calculate the number of potential stored sparse member combinations. (These are “potential” combinations until there is data at a particular intersection of members . . . then they become “actual” combinations.) To arrive at the number of potential stored sparse member combinations, simply multiply the number of stored members from each sparse dimension together.

For example:

  • Sparse Dimension #1: 10 Stored Members
  • Sparse Dimension #2: 30 Stored Members
  • Sparse Dimension #3: 100 Stored Members
  • Sparse Dimension #4: 1000 Stored Members

10 * 30 * 100 * 1000 = 30,000,000 Potential Stored Sparse Member Combinations

In other words, there are 30 million unique possible combinations of sparse members if we take one member from each of the above four dimensions.

If we were in fact limited to 2^104 potential stored sparse member combinations in a BSO cube, it is unlikely that anyone would hit this limit. This is because another limit would most likely kick in first. Remember, developers can have a maximum of roughly 1,000,000 members in a BSO cube. Try arriving at 2^104 potential stored sparse member combinations when you only have a million total members to work with . . . it’s possible, however it requires an unusual number of sparse dimensions.

Will the Real BSO Limit Please Stand Up?

If the 2104 limit is incorrect, what is the real limit? Luckily, Essbase returns the correct error message during a dimension build . . . it’s only the documentation that is incorrect. Here is what shows up in the Essbase application log after the dimension build fails:


It’s easy to see how the 2^104 limit was incorrectly derived. 2 * 2^52 = 2^104, right??? Wrong.

2^52 = 4,503,599,627,370,496

2 * 2^52 = 9,007,199,254,740,992   (Note that this number is much less than 20+ nonillion, or 2^104.)

However, don’t be fooled into thinking that the real limit for potential stored sparse member combinations in a BSO cube is 9,007,199,254,740,992. That’s not correct either. Remember, we get two groups of 2^52 stored sparse member combinations. How the dimensions fall into these two groups is very important.

Show Me the Groups!

Here are the basic steps for determining whether or not you will exceed the “Two Groups of 2^52” limit:

  1. Locate the first sparse dimension (closest to the top of the outline).
  2. Multiply the number of stored members in this dimension with the number of stored members in the next dimension.
  3. Repeat until the product of stored sparse members exceeds 2^52 (4,503,599,627,370,496).
  4. Back up one dimension. The 1st sparse dimension down to this dimension makes up the first “group”. The idea is that a group’s stored member product cannot exceed 2^52.
  5. Start multiplying the stored members from each subsequent sparse dimension together. These dimensions represent the 2nd group. If the product exceeds 2^52 on the second group, the limit has been reached and the dimension build will fail.

Here is an example of two groups of sparse dimension members:


When building the dimensions in the BSO cube described above, Group 1 ends after the 6th sparse dimension (“Sparse_06”). This is because including the next dimension (“Sparse_07”) would cause the sparse member combinations in that group to exceed 2^52. As soon as Group 1 is as full as possible without exceeding this limit, Group 2 begins. Unfortunately, we can see that the dimension build fails at the 10th sparse dimension (“Sparse_10”), because the second group exceeds 2^52 and we are only allowed a maximum of two groups of 2^52.
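To make the grouping concrete, here’s a little Groovy sketch that walks a list of hypothetical stored-member counts and assigns each dimension to a group using the steps above:

// Illustrative stored-member counts per sparse dimension (made-up numbers)
def storedMembers = [100, 200, 300, 400, 500, 600, 50000, 60000, 70000, 80000]
def LIMIT = 2G ** 52          // 4,503,599,627,370,496
def group = 1
def product = 1G

for (i in 0..<storedMembers.size()) {
    def count = storedMembers[i]
    if (product * count > LIMIT) {   // this dimension won't fit in the current group
        group++
        product = 1G
        if (group > 2) {
            println "Sparse_${i + 1}: exceeds the second group of 2^52 - dimension build fails"
            break
        }
    }
    product *= count
    println "Sparse_${i + 1} (${count} stored members) -> group ${group}"
}

With these made-up counts, the build fails at the tenth dimension, mirroring the example above.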

Hybrid and Dynamic Sparse Members

Readers who are familiar with Essbase Hybrid Aggregation Mode may recall that one of the key design elements in a Hybrid cube involves leveraging dynamically calculated sparse parents (a general no-no in BSO, but required in Hybrid). However, the “Two Groups of 2^52” limit is all based upon stored sparse member combinations. So could a BSO cube that failed due to this limit potentially work using Hybrid? Maybe. It all depends on how many sparse members are changed from “stored” to “dynamic” during the conversion to Hybrid Aggregation Mode.


If you’re thinking about taking the new Essbase Hybrid Aggregation Mode for a spin (and you should!), remember that size does matter. Some ASO cubes may not be suitable for conversion to Hybrid due to BSO size limitations. But remember that a key feature of Hybrid cubes involves setting upper-level sparse members to “Dynamic”, thus reducing the number of “Stored” members in a sparse dimension. This change may create some additional “headroom” before you actually hit the “Two Groups of 2^52” limit with a Hybrid cube.

If you would like to hear more about this topic, please plan on attending Infratects’ Top Gun US conference on September 17-18. I’ll be presenting on Hybrid Aggregation Mode and will specifically address questions around conversions. As always, I will have a live demo.

Pop Quiz!

In the example BSO cube above, the dimension build fails at the 10th dimension. Assume the following:

  • If the cube remains a BSO cube . . .
  • If no members are deleted . . .
  • If all dense/sparse settings remain the same . . .
  • If all data storage settings remain the same . . .

What could be done to this cube to make the dimension build successful?

Tweet me the answer at @HyperionNerd.

The Ultimate EPM Demo Laptop

As part of my job, I’m often running demos for clients or presenting at conferences and user groups. As such, I need to run a good portion of the EPM stack on my laptop (running virtual machines). Because this requires a fairly beefy laptop, I typically gravitate towards “workstation replacement” type notebooks. I’ve had a Lenovo W520 for several years, and it has served me very well, but with the rapid pace of hardware improvements and my increasing needs, it was time to upgrade. Websites like Tom’s Hardware and AnandTech do a great job of highlighting the newest and fastest bits of hardware, and some are so new that you need an entirely new laptop to run them. I spent a fair amount of time selecting and configuring my demo laptop, and I thought others might benefit from my experience.

Calling a laptop the “ultimate” laptop is a risky proposition. Different people use laptops in different ways. Hardware that represents the epitome of high performance today can look embarrassingly low-tech tomorrow. The items below represent the values that guided my selection:

  • I need to run complex VM’s during demos and presentations.
  • I need a machine that’s portable, but just barely.
  • I need a machine that supports 32 GB of RAM.
  • I need a machine that will support the latest hard drives.
  • I value overall performance more than finding a good deal.

The search for a workstation-class laptop usually starts with either Lenovo’s “W” line of laptops or Dell’s “Precision” line. I’ve had both in the past, and I’ve been happy in each case. But neither line currently pushes the performance envelope. As of May 2015, these lines have two big limitations:

  1. They are limited to mobile CPUs (currently the Intel Core i7-4940MX, up to 4.0 GHz).
  2. They do not support the latest PCIe 80mm M.2 SSDs (very fast hard drives).

Toyota vs Ferrari

Mainstream manufacturers need to supply laptops that can satisfy corporate purchasing departments that want reliable and affordable hardware . . . something akin to a Toyota. But what if you’re looking for more of a Ferrari? What if you want bleeding edge technology? The answer can be found in video games. Custom “gaming” laptop manufacturers often offer the latest, greatest components long before the mainstream manufacturers. And they also offer premium services (think monitor calibration, or custom paint jobs). If the idea of selecting the thermal paste used to mount your CPU to the motherboard excites you, this is an avenue you should explore.

As a khaki-pants-wearing, corporate type of guy, ordering a custom gaming laptop for work did not seem like a mature or even sensible idea. However, I found that most of these builders offer options that look fairly tame on the outside. These are laptops that won’t stand out in the corporate world. I ended up looking at laptops from companies like XOTIC PC, Sager, Eurocom and Digital Storm. Several of these companies leverage a chassis and other components from Clevo and, to varying degrees, offer their own special software and internals. In other words, don’t be surprised if you see laptops from several of these companies that look similar on the outside. Pay attention to what’s being offered on the inside, because it can vary, and the internal components are what’s really important. Let’s discuss a couple of the more important components.

Go Big or Go Mobile?

The natural assumption is that laptops are limited to mobile processors, but this does not have to be the case. Several of the aforementioned companies will drop a desktop or even server CPU into a laptop. For example:

  • Sager NP9772-S (Desktop Processor: Intel Core i7-4790K, up to 4.4 GHz)
  • Eurocom Panther 5 SE (Server Processor: Intel Xeon with 12 cores)

These processors can outperform even the highest-spec mobile processors, whether through higher clock speeds or more cores. Of course, the trade-off is usually shorter battery life. But then again, no one who drives a Ferrari is worried about gas mileage, and the same should be true for a high-performance laptop.

Good Things Come in Small Packages

Most of us are familiar with the old 2.5” laptop hard drives (HDDs). These are quite slow by modern standards. If you still have one of these, you are overdue for an upgrade.


More recent drives look like the SSD below. Unlike the HDD above, these do not have physical disks that spin within the drive. Instead, they leverage flash memory for dramatically improved performance.


Unfortunately, both of the drives above leverage the SATA standard for connecting storage devices within a computer. The SATA standard works with SSDs, but it can’t fully exploit the potential speeds of newer flash-based drives. In other words, as drives have become faster, the interface that connects those drives to the computer has become the limiting factor. Most current laptops only support SATA drives, but certain newer ones support PCIe, which offers much higher throughput than SATA.

The PCIe drive below is a 512GB Samsung SM951. Technically, this is also an SSD because it uses flash-based memory to store data, but it does not leverage the throughput-limited SATA interface like the Samsung 840 EVO above. It’s difficult to tell from the picture, but it’s about the size of a stick of Wrigley’s chewing gum. And it is very, very fast.


Note the dramatically higher sequential read/write speeds of the PCIe drive (right), compared to a SATA SSD (left).


Current PCIe laptop drives leverage the AHCI standard. AHCI was originally designed to work with older spinning HDDs (as opposed to flash-memory based drives). A newer standard called NVMe has been developed specifically for PCIe drives; however, PCIe/NVMe drives are not currently available for laptops. Several have been announced and should be available within months. These newer PCIe drives that leverage NVMe should be even faster than the PCIe drive above, especially with regard to random input/output operations per second (IOPS).

The key takeaway is this . . . laptop hard drive performance has recently improved dramatically, but few laptops currently have the PCIe slots required for the new drives. In the next several months, PCIe drives that leverage the NVMe standard will further improve laptop hard drive performance. All of this will translate into better performing demos.

In summary . . .


The “Ultimate” Laptop

After a significant amount of research, I decided on the following specifications for my “Ultimate” EPM Demo Laptop:

  • Sager NP9772-S (17.3” IPS LED-Backlit Matt Finish Display)
  • Intel Core i7-4790K, up to 4.4 GHz (Desktop Processor)
  • 32GB Kingston HyperX RAM
  • Nvidia GeForce GTX 980M GPU w/ 8GB Video Memory
  • 330W Power Adapters (2)
  • Hard Drives:
    • C:\ (OS) Micron M600 512GB M.2 SSD (SATA)
    • D:\ (Data) Samsung 840 Evo 1TB SSD (SATA . . . from old laptop)
    • E:\ (VMs) Samsung SM951 80mm M.2 512GB SSD (PCIe)

To say this laptop is fast is an understatement. It screams. It’s an upgrade from my old laptop in so many ways that it’s difficult to tell exactly what components are contributing to the improved performance. Everything is faster.

Of course, there are a few downsides . . .

  • It’s big and heavy. I wasn’t worried about that . . . I have a roller bag for my laptop.
  • The keyboard is just “ok”. Sadly, nobody makes keyboards like the old Lenovos.
  • The AC Adapter is so big and heavy that I bought two, just so I wouldn’t have to lug it around. On a positive note, it can double as ship ballast or a blunt weapon.
  • The latest hard drives aren’t always available directly from computer manufacturers. You will need to be comfortable opening up your laptop and swapping some components.
  • This laptop will definitely not improve your carbon footprint.


Putting it All Together

At the time I ordered my laptop, the Samsung SM951 wasn’t available from Sager, nor was it available from regular retail outlets. In fact, it’s currently an “OEM only” component, meaning it is only sold directly to computer manufacturers like Dell, Lenovo, etc. for use in a select few of their laptops. That meant I had to order it from Ram City . . . in Australia. On a positive note, the exchange rate is pretty good right now, and Ram City’s customer service is excellent. Everything else was purchased as part of a complete laptop directly from Sager Notebooks.



If you’re looking for a laptop that will present your EPM demos in a positive light, it’s worth it to color outside the lines, and to step away from the mainstream manufacturers. Purchasing a gaming laptop will allow you to use the fastest processors and the latest components in your pursuit of ultimate speed.


I have not been compensated by the companies discussed in this post in any way. All opinions expressed are my own and are based upon my own research and experience.