Author: jturrell

Technical Blog Posts

Excuses . . . Excuses . . .

I do my best to publish at least one meaty, technical EPM blog post per month. But that’s just not going to happen this month. I have three excuses of varying quality:

  • Excuse #1:  My current project is in “sprint” mode with some aggressive development timelines.
  • Excuse #2:  I’m working on my presentation for Top Gun 2015 US.
  • Excuse #3:  I have a new toy.

I wouldn’t normally blog about that last excuse (the toy), but I’m going to try to tie it in with Excuse #2.

On September 17th, I’ll be speaking at Infratects’ Top Gun US 2015 conference. I’ll be presenting on Essbase Hybrid Aggregation Mode, and I’m very excited about the opportunity.

I have two rules for these types of presentations:

  • Rule #1: Show a live demo of something really interesting.
  • Rule #2: Have some “giveaways” for people who ask questions.

Because I needed some “giveaways”, and because Essbase databases are typically referred to as “cubes”, I obviously had to go and buy a MakerGear M2 3D Printer kit and make some 3D printed “Gear Cubes”. I had no choice, really . . . so if you attend my presentation and ask an engaging question, you could be the proud new owner of one of these:

Top Gun US 2015 has some amazing speakers and should be a great opportunity to network with Oracle product management and your EPM peers. I hope to see you there!



Essbase Hybrid Aggregation Mode & BSO Limits

There is currently a lot of excitement around the new Essbase Hybrid Aggregation Mode. As this new feature matures, clients are starting to ask about the types of cubes that are best suited for conversion to the new calculation engine. While Hybrid Aggregation Mode is pretty amazing, a side project of mine recently reminded me of a specific class of cubes that are not yet appropriate for conversion.

First, a bit about the side project . . . I’m working on building a set of “reference” cubes that I can easily deploy in various environments to benchmark physical (and virtual) infrastructure. Imagine a set of cubes that run through a standard series of data loads, calculations and retrieves where the performance is recorded for comparison against other environments. The idea is to arrive at a “score”, so I know early in an implementation when I’m working on suboptimal hardware. For this to work, I need some really big cubes (BSO, ASO and Hybrid). And when you build really big cubes, you are sometimes reminded of Essbase’s size limitations, because when reached . . . dimension builds fail. These limits are very nicely documented here.

The basic Essbase size limitations are pretty straightforward:

  • BSO cubes can have a maximum of roughly 1,000,000 members.
  • ASO cubes can have a maximum of roughly 10,000,000 – 20,000,000 members.

Hybrid cube limitations are not specifically called out in the documentation (yet), but we can make some assumptions:

  • We know that Hybrid cubes start out life as normal BSO cubes.
  • We assume that BSO limits apply to Hybrid cubes. (My limited testing appears to confirm this.)
  • We know that BSO cubes have lower size limitations than ASO cubes.
  • Therefore, certain large ASO cubes cannot be converted to Hybrid.

In addition to the basic size limitations above, there is a slightly different limit that developers are more likely to encounter:

  • BSO cubes can have a maximum of 2^104 stored sparse member combinations.

Here’s what makes this limit so interesting . . . the documentation (which is very, very good overall) is incorrect. BSO cubes are not limited to 2^104 stored sparse member combinations. Instead, they are limited to “Two Groups of 2^52” stored sparse member combinations. What does that mean? How do we know the documentation is wrong? Let’s dig a little deeper.

Over 20 Nonillion Stored Sparse Member Combinations!

2^104 is a very large number. It’s a smidge over 20 nonillion. Don’t know what a “nonillion” is? I didn’t either.

2^104 = 20,282,409,603,651,670,423,947,251,286,016

To understand this limit, we must first understand how to calculate the number of potential stored sparse member combinations. (These are “potential” combinations until there is data at a particular intersection of members . . . then they become “actual” combinations.) To arrive at the number of potential stored sparse member combinations, simply multiply the number of stored members from each sparse dimension together.

For example:

  • Sparse Dimension #1: 10 Stored Members
  • Sparse Dimension #2: 30 Stored Members
  • Sparse Dimension #3: 100 Stored Members
  • Sparse Dimension #4: 1000 Stored Members

10 * 30 * 100 * 1000 = 30,000,000 Potential Stored Sparse Member Combinations

In other words, there are 30 million unique possible combinations of sparse members if we take one member from each of the above four dimensions.
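The arithmetic is simple enough to sketch in a few lines of Python (the dimension sizes below are just the example numbers above):

```python
from math import prod

# Stored member counts for the four example sparse dimensions above
stored_members = [10, 30, 100, 1000]

# Potential stored sparse member combinations = the product of the
# stored member counts across all sparse dimensions
potential_combinations = prod(stored_members)

print(f"{potential_combinations:,}")  # 30,000,000
```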

If we were in fact limited to 2^104 potential stored sparse member combinations in a BSO cube, it is unlikely that anyone would hit this limit. This is because another limit would most likely kick in first. Remember, developers can have a maximum of roughly 1,000,000 members in a BSO cube. Try arriving at 2^104 potential stored sparse member combinations when you only have a million total members to work with . . . it’s possible, but it requires an unusual number of sparse dimensions.
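To see just how unusual, here’s a quick back-of-the-envelope check in Python. The outline below is completely hypothetical: 35 sparse dimensions with only 8 stored members apiece blows past 2^104 while using fewer than 300 total members:

```python
from math import prod

# A completely hypothetical outline: 35 sparse dimensions,
# each with only 8 stored members
dims = [8] * 35

total_members = sum(dims)   # 280 members -- nowhere near the 1,000,000 cap
combinations = prod(dims)   # 8**35 == 2**105, which exceeds 2**104

print(total_members)          # 280
print(combinations > 2**104)  # True
```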

Will the Real BSO Limit Please Stand Up?

If the 2104 limit is incorrect, what is the real limit? Luckily, Essbase returns the correct error message during a dimension build . . . it’s only the documentation that is incorrect. Here is what shows up in the Essbase application log after the dimension build fails:


It’s easy to see how the 2^104 limit was incorrectly derived. 2 * 2^52 = 2^104, right??? Wrong.

2^52 = 4,503,599,627,370,496

2 * 2^52 = 9,007,199,254,740,992   (Note that this number is much less than 20+ nonillion or 2^104.)

However, don’t be fooled into thinking that the real limit for potential stored sparse member combinations in a BSO cube is 9,007,199,254,740,992. That’s not correct either. Remember, we get two groups of 2^52 stored sparse member combinations. How the dimensions fall into these two groups is very important.
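A quick sanity check in Python shows the difference between multiplying the two group limits together and actually having two groups of 2^52:

```python
# Where the 2^104 figure likely came from: multiplying the two group
# limits together, i.e. (2^52)^2, rather than having a second group
assert 2**52 * 2**52 == 2**104

# Two groups of 2^52 is a far smaller number: 2 * 2^52 == 2^53
print(2 * 2**52)  # 9007199254740992
```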

Show Me the Groups!

Here are the basic steps for determining whether or not you will exceed the “Two Groups of 2^52” limit:

  1. Locate the first sparse dimension (closest to the top of the outline).
  2. Multiply the number of stored members in this dimension with the number of stored members in the next dimension.
  3. Repeat until the product of stored sparse members exceeds 2^52 (4,503,599,627,370,496).
  4. Back up one dimension. The 1st sparse dimension down to this dimension makes up the first “group”. The idea is that a group’s stored member product cannot exceed 2^52.
  5. Start multiplying the stored members from each subsequent sparse dimension together. These dimensions represent the 2nd group. If the product exceeds 2^52 on the second group, the limit has been reached and the dimension build will fail.
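The steps above can be sketched in Python. To be clear, this is my own illustration of the grouping logic, not Essbase’s actual implementation, and the dimension sizes in the usage example are invented:

```python
from math import prod

GROUP_LIMIT = 2**52  # max stored sparse member combinations per group


def fits_two_groups(stored_counts):
    """Pack sparse dimensions (in outline order) into groups whose
    stored-member product may not exceed 2^52. Returns True when the
    outline fits in two groups, False when a dimension build would fail."""
    groups = [[]]
    for count in stored_counts:
        current = groups[-1]
        if prod(current) * count > GROUP_LIMIT:
            groups.append([count])  # current group is full; start the next
        else:
            current.append(count)
    return len(groups) <= 2


# Hypothetical outline: each sparse dimension has 50,000 stored members.
# 50,000^3 fits under 2^52, but 50,000^4 does not, so each group holds
# at most three of these dimensions.
print(fits_two_groups([50_000] * 6))  # True  -- fits in two groups
print(fits_two_groups([50_000] * 7))  # False -- would need a third group
```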

Here is an example of two groups of sparse dimension members:


When building the dimensions in the BSO cube described above, Group 1 ends after the 6th sparse dimension (“Sparse_06”). This is because including the next dimension (“Sparse_07”) would cause the sparse member combinations in that group to exceed 2^52. As soon as Group 1 is as full as possible without exceeding this limit, Group 2 begins. Unfortunately, we can see that the dimension build fails at the 10th sparse dimension (“Sparse_10”), because the second group exceeds 2^52 and we are only allowed a maximum of two groups of 2^52.

Hybrid and Dynamic Sparse Members

Readers who are familiar with Essbase Hybrid Aggregation Mode may recall that one of the key design elements in a Hybrid cube involves leveraging dynamically calculated sparse parents (a general no-no in BSO, but required in Hybrid). However, the “Two Groups of 2^52” limit is based entirely upon stored sparse member combinations. So could a BSO cube that failed due to this limit potentially work using Hybrid? Maybe. It all depends on how many sparse members are changed from “stored” to “dynamic” during the conversion to Hybrid Aggregation Mode.
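To put some numbers on that idea, here’s a quick Python check. The dimension sizes and the stored/dynamic split below are hypothetical, not from a real outline:

```python
from math import prod

# Hypothetical group of three sparse dimensions, all members stored (BSO)
bso_stored = [200_000, 200_000, 200_000]

# After a Hybrid conversion, suppose upper-level parents are tagged
# "Dynamic" and only 150,000 stored members remain in each dimension
hybrid_stored = [150_000, 150_000, 150_000]

print(prod(bso_stored) > 2**52)     # True  -- this group busts the limit
print(prod(hybrid_stored) > 2**52)  # False -- headroom after the conversion
```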


If you’re thinking about taking the new Essbase Hybrid Aggregation Mode for a spin (and you should!), remember that size does matter. Some ASO cubes may not be suitable for conversion to Hybrid due to BSO size limitations. But remember that a key feature of Hybrid cubes involves setting upper-level sparse members to “Dynamic”, thus reducing the number of “Stored” members in a sparse dimension. This change may create some additional “headroom” before you actually hit the “Two Groups of 2^52” limit with a Hybrid cube.

If you would like to hear more about this topic, please plan on attending Infratects’ Top Gun US conference on September 17-18. I’ll be presenting on Hybrid Aggregation Mode and will specifically address questions around conversions. As always, I will have a live demo.

Pop Quiz!

In the example BSO cube above, the dimension build fails at the 10th dimension. Assume the following:

  • If the cube remains a BSO cube . . .
  • If no members are deleted . . .
  • If all dense/sparse settings remain the same . . .
  • If all data storage settings remain the same . . .

What could be done to this cube to make the dimension build successful?

Tweet me the answer at @HyperionNerd.

The Ultimate EPM Demo Laptop

As part of my job, I’m often running demos for clients or presenting at conferences and user groups. As such, I need to run a good portion of the EPM stack on my laptop (running virtual machines). Because this requires a fairly beefy laptop, I typically gravitate towards “workstation replacement” type notebooks. I’ve had a Lenovo W520 for several years, and it has served me very well, but with the rapid pace of hardware improvements and my increasing needs, it was time to upgrade. Websites like Tom’s Hardware and AnandTech do a great job of highlighting the newest and fastest bits of hardware, and some are so new that you need an entirely new laptop to run them. I spent a fair amount of time selecting and configuring my demo laptop, and I thought others might benefit from my experience.

Calling a laptop the “ultimate” laptop is a risky proposition. Different people use laptops in different ways. Hardware that represents the epitome of high performance today can look embarrassingly low-tech tomorrow. The items below represent the values that guided my selection:

  • I need to run complex VMs during demos and presentations.
  • I need a machine that’s portable, but just barely.
  • I need a machine that supports 32 GB of RAM.
  • I need a machine that will support the latest hard drives.
  • I value overall performance more than finding a good deal.

The search for a workstation-class laptop usually starts with either Lenovo’s “W” line of laptops or Dell’s “Precision” line. I’ve had both in the past, and I’ve been happy in each case. But neither line currently pushes the performance envelope. There are currently (as of May 2015) two big limitations with these lines:

  1. They are limited to mobile CPUs (currently the Intel Core i7 4940MX, up to 4.0 GHz).
  2. They do not support the latest PCIe 80mm M.2 SSDs (very fast hard drives).

Toyota vs Ferrari

Mainstream manufacturers need to supply laptops that can satisfy corporate purchasing departments that want reliable and affordable hardware . . . something akin to a Toyota. But what if you’re looking for more of a Ferrari? What if you want bleeding edge technology? The answer can be found in video games. Custom “gaming” laptop manufacturers often offer the latest, greatest components long before the mainstream manufacturers. And they also offer premium services (think monitor calibration, or custom paint jobs). If the idea of selecting the thermal paste used to mount your CPU to the motherboard excites you, this is an avenue you should explore.

As a khaki pants wearing, corporate type of guy, ordering a custom gaming laptop for work did not seem like a mature or even sensible idea. However, I found that most of these builders have options that look fairly tame on the outside. These are laptops that won’t stand out in the corporate world. I ended up looking at laptops from companies like XOTIC PC, Sager, Eurocom and Digital Storm. Several of these companies leverage a chassis and other components from Clevo, and to varying degrees, offer their own special software and internals. In other words, don’t be surprised if you see laptops that look similar on the outside from several of these companies. Pay attention to what’s being offered on the inside, because it can vary, and the internal components are what’s really important. Let’s discuss a couple of the more important components.

Go Big or Go Mobile?

The natural assumption is that laptops are limited to mobile processors, but this does not have to be the case. Several of the aforementioned companies will drop a desktop or even server CPU into a laptop. For example:

  • Sager NP9772-S (Desktop Processor: Intel Core i7-4790K, up to 4.4 GHz)
  • Eurocom Panther 5 SE (Server Processor: Intel Xeon with 12 cores)

These processors can offer better performance than the highest-spec mobile processors – sometimes faster, sometimes with more cores. Of course, the consequence is usually a battery that doesn’t last as long. But then again, no one who drives a Ferrari is worried about gas mileage, and the same should be true for a high performance laptop.

Good Things Come in Small Packages

Most of us are familiar with the old 2.5” laptop hard drives (HDDs). These are quite slow by modern standards. If you still have one of these, you are overdue for an upgrade.


More recent drives look like the SSD below. Unlike the HDD above, these do not have physical disks that spin within the drive. Instead, they leverage flash memory for dramatically improved performance.


Unfortunately, the drives above both leverage the SATA standard for connecting storage devices within a computer. The SATA standard works with SSDs, but can’t fully exploit the potential speeds of newer drives. In other words, as drives have become faster, the interface that connects those drives to the computer has become the limiting factor affecting performance. Most current laptops only support SATA drives. But certain newer ones support PCIe, which offers higher throughput than SATA.

The PCIe drive below is a 512GB Samsung SM951. Technically, this is also an SSD because it uses flash-based memory to store data, but it does not leverage the throughput-limited SATA interface like the Samsung 840 EVO above. It’s difficult to tell from the picture, but it’s about the size of a stick of Wrigley’s chewing gum. And it is very, very fast.


Note the dramatically higher sequential read/write speeds of the PCIe drive (right), compared to a SATA SSD (left).


Current PCIe laptop drives leverage the AHCI standard. AHCI was originally designed to work with older HDDs (as opposed to flash-memory based drives). A newer standard called NVMe has been developed specifically for PCIe drives, but PCIe/NVMe drives are not currently available for laptops. Several have been announced, and should be available within months. These newer PCIe drives that leverage the NVMe standard should be even faster than the PCIe drive above, especially with regard to random input/output operations per second (IOPS).

The key takeaway is this . . . laptop hard drive performance has recently improved dramatically, but few laptops currently have the PCIe slots required for the new drives. In the next several months, PCIe drives that leverage the NVMe standard will further improve laptop hard drive performance. All of this will translate into better performing demos.

In summary . . .


The “Ultimate” Laptop

After a significant amount of research, I decided on the following specifications for my “Ultimate” EPM Demo Laptop:

  • Sager NP9772-S (17.3” IPS LED-Backlit Matte Finish Display)
  • Intel Core i7-4790K, up to 4.4 GHz (Desktop Processor)
  • 32GB Kingston HyperX RAM
  • Nvidia GeForce GTX 980M GPU w/ 8GB Video Memory
  • 330W Power Adapters (2)
  • Hard Drives:
    • C:\ (OS) Micron M600 512GB M.2 SSD (SATA)
    • D:\ (Data) Samsung 840 Evo 1TB SSD (SATA . . . from old laptop)
    • E:\ (VMs) Samsung SM951 80mm M.2 512GB SSD (PCIe)

To say this laptop is fast is an understatement. It screams. It’s an upgrade from my old laptop in so many ways that it’s difficult to tell exactly what components are contributing to the improved performance. Everything is faster.

Of course, there are a few downsides . . .

  • It’s big and heavy. I wasn’t worried about that . . . I have a roller bag for my laptop.
  • The keyboard is just “ok”. Sadly, nobody makes keyboards like the old Lenovos.
  • The AC Adapter is so big and heavy that I bought two, just so I wouldn’t have to lug it around. On a positive note, it can double as ship ballast or a blunt weapon.
  • The latest hard drives aren’t always available directly from computer manufacturers. You will need to be comfortable opening up your laptop and swapping some components.
  • This laptop will definitely not improve your carbon footprint.


Putting it All Together

At the time I ordered my laptop, the Samsung SM951 wasn’t available from Sager, nor was it available from regular retail outlets. In fact, it’s currently an “OEM only” component, meaning it is only sold directly to computer manufacturers like Dell, Lenovo, etc. for use in a select few of their laptops. That meant I had to order it from Ram City . . . in Australia. On a positive note, the exchange rate is pretty good right now, and Ram City’s customer service is excellent. Everything else was purchased as part of a complete laptop directly from Sager Notebooks.



If you’re looking for a laptop that will present your EPM demos in a positive light, it’s worth it to color outside the lines, and to step away from the mainstream manufacturers. Purchasing a gaming laptop will allow you to use the fastest processors and the latest components in your pursuit of ultimate speed.


I have not been compensated by the companies discussed in this post in any way. All opinions expressed are my own and are based upon my own research and experience.