Implementing Polaris: Lessons learned
Author: Philipp Erkinger, Certified Master Anaplanner and Principal Solution Architect at Bedford Consulting.
There are days in your professional career when you get to observe something special. October 20, 2022 was such a day, when I received access to a Polaris workspace to build a proof-of-concept model. Having worked with Anaplan for 8 years, I thought I knew it inside out. A few hours into the adventure, I realized that the world of Polaris is very different from Classic, and that I was about to experience something exciting and fascinating. I felt like an explorer discovering the Amazon for the first time. This article is a summary of lessons learned from the unique experience of implementing Polaris at one of the early-adopter customers.
Let’s start with the basics. How did I even get here? A Swedish Anaplan customer that I had worked with for a long time was actively investigating different solutions to improve their existing Anaplan model for reporting and analysis. Their key challenges:
- More than eight operational lists required in operational reporting (Accounts / P&L, Cost Center, Business Region, Legal Entity, Production Entity, Products, Currency, Version).
- Highly sparse data.
- Many combinations needed to analyze data from different angles.
- Complex consolidation logic for financial plan / actuals including eliminations, allocations, currency translations and much more.
- Many reporting and analysis add-ons that have been built over the years.
The choice ultimately fell on Polaris, due to its unique calculation engine built to deal with highly sparse data sets and large dimensionality. The value expectations are:
- Better user experience with more dimensions available to the end users.
- Simplification of the calculation logic to ease data reviews.
- Leaner model administration processes.
- Springboard for new use cases.
We came up with a solution architecture that would split the heavy calculation logic and admin processes from the end user reports by building two Polaris models instead of one.
- The Reporting Hub: A model designed to digest actuals and plan data through a complex consolidation logic that includes cost allocations, eliminations of inter-company accounts and a currency translation from all local currencies into the three main corporate currencies. All of course across hundreds of accounts, cost centers, products and much more.
- Reporting and analysis: A simple model that can be completely refreshed at any moment, containing only reporting modules designed to bring an exceptional experience to the end users. Calculation is limited to aggregation, variance reporting and KPIs.
The project focused only on replacing the existing reporting model and didn’t include any major changes to other models. However, the Reporting Hub is designed as a platform to replace the Data Hub in the future and to empower several new use cases currently under discussion, like workforce planning and investment planning.
Learnings from the Implementation
We had a new experience with Polaris almost every day during the implementation. The following points are a summary of learnings that I consider critical knowledge, because Polaris will demand a whole new set of skills from model builders.
- It’s all about the ‘Fan-Out’.
This is probably the most important concept to understand when building an Anaplan model in Polaris. As a basic principle, the Polaris engine calculates only when it “sees” data: a “0” or “BLANK” doesn’t consume any calculation performance. Yet changing the value of a single cell from 0 to 1 can cause an explosion of real-time calculations. The “fan-out” describes how many cells are impacted by a single value change. A large fan-out can grow a tiny 10 MB model into a 10 GB model through a single cell change.
New skill: Learn to predict how end users in combination with your formula and module design can impact the performance of the entire model with a single cell change.
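A hypothetical sketch of a large fan-out (module, list and line item names are invented for illustration): a sparsely entered driver feeds a densely dimensioned result.

```
Growth Rate          applies to: Products
Projected Revenue    applies to: Products, Cost Centers, Regions, Versions, Time

Projected Revenue = Base Revenue * Growth Rate
```

Entering a single Growth Rate value for one product makes every non-default Base Revenue cell for that product, across all cost centers, regions, versions and periods, eligible for calculation at once; that multiplication count is the fan-out.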
- Understanding calculation complexity.
A new feature in Polaris is “calculation complexity”. It is shown in the blueprint for each line item and expresses the fan-out ratio. “One-to-one” is the simplest complexity: one cell change in the source line item impacts one cell in the calculation line item. The higher the number, the more complex the calculation logic becomes, with a direct impact on performance. It’s recommended to keep calculations as close to one-to-one as possible. Anything above that should be reviewed and potentially remodeled by splitting line items into several steps.
New skill: Learn to read the calculation complexity of each line item, review regularly and optimize towards “one-to-one”.
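As an illustration of remodeling towards one-to-one (all module and line item names here are hypothetical), a line item that combines a currency lookup and an allocation can be split into intermediate steps:

```
Before (one line item, higher complexity):
Cost EUR = Cost LC * 'FX Rates'.Rate[LOOKUP: Currency Code] * Allocation %

After (two steps, each closer to one-to-one):
Cost Converted = Cost LC * 'FX Rates'.Rate[LOOKUP: Currency Code]
Cost EUR       = Cost Converted * Allocation %
```

Each intermediate line item now has a simpler impact relation, which shows up directly in the blueprint’s calculation complexity.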
- Flat lists — A new beginning.
The hierarchical depth of a list stands in direct relation to the fan-out ratio. Calculation modules dimensioned by many hierarchical lists can therefore become very performance-heavy when aggregation is turned on (SUM, FORMULA, etc.). This can be solved by creating flat replicates, used purely for calculations, whose items all roll up to a single “Total” parent. The great advantage is that turning on SUM then adds no significant calculation overhead, and it becomes easy to move between modules with many dimensions and modules with far fewer.
New skill: Build flexible calculation modules that easily allow moving between flat lists and hierarchies.
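A minimal sketch of the pattern (list, module and line item names invented): the heavy calculation runs on a flat replica, and only the reporting module maps results back into the hierarchy.

```
Hierarchy list:   Region > Country > Cost Center     (aggregates at every level)
Flat list:        CC Flat                            (all cost centers, parent: Total)

Calc module:      CC Flat x Accounts x Time          (SUM only rolls up to Total)
Report module:    Cost Center hierarchy x Accounts x Time

Report.Value = Calc.Value[SUM: Calc.Cost Center Map]
```

Here, Cost Center Map is assumed to be a list-formatted line item on the flat list that points each flat item to its counterpart in the hierarchy.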
- The Master Switch-Board: Utilizing dynamic calculation performance.
The unique calculation method behind Polaris allows for something very special: calculation performance can simply be turned off. Imagine you have a year of actuals that is not actively used in the model and consumes unnecessary calculation performance (workspace size). In Classic, you would have to remove it from all modules by implementing time ranges, updating the model calendar, and so on. In Polaris, this can be done by changing a single Boolean from TRUE to FALSE. Even better, you can bring the data back at any time without updating the core model settings. The associated calculation performance can be managed dynamically. This can be very helpful in scenarios like:
- Optimizing model-to-model imports and exports.
- Testing new features with a limited subset of data.
- Turning individual areas of the model on or off, like heavy reports that are only used during the month-end close.
New skill: Learn to optimize calculation performance by administrating data utilization throughout the model. Test the data impact well!
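A sketch of such a switchboard (module and line item names are hypothetical):

```
'Switchboard'.Year Active?    Boolean, dimensioned by Time (Year), set by an admin

'Calc'.Actuals Used = IF 'Switchboard'.Year Active? THEN 'Calc'.Actuals Raw ELSE 0
```

When the Boolean is FALSE, the affected cells resolve to the default 0, so Polaris stores and calculates nothing downstream of them; flipping it back to TRUE restores both the data and the calculations without touching the model calendar or time ranges.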
- IF THEN ELSE… So different?
In Classic, it has always been essential to optimize IF THEN ELSE statements towards early exits, meaning the most common condition is evaluated first. Polaris can now partially detect this by itself. More important is to optimize towards sparsity and default values (ZERO, BLANK) to reduce the calculation load.
New skill: Optimize your IF THEN ELSE statements for data utilization instead of early exits.
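An example of the shift (line item names hypothetical): in Classic the most frequently true condition comes first, while in Polaris you want as many cells as possible to resolve to the default value.

```
Classic habit (early exit on the common case):
IF Region Code = "EMEA" THEN Value * EMEA Factor ELSE Value

Polaris habit (the sparse branch returns the default):
IF Volume <> 0 THEN Volume * Price ELSE 0
```

Because most Volume cells are 0 in a sparse model, most result cells stay at the default and never consume calculation performance.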
- Impacting user experience with over-dimensioning.
Polaris allows for a great number of dimensions in a single module / line item. This is fantastic for simplifying key calculations and providing more information to the end user. The new Zero Suppression feature also makes it easy to filter the visible grid down to only the columns and rows that have data, even when multiple dimensions are used. However, a module with many dimensions can become overwhelming and difficult for an end user to navigate. That’s why it is even more important in Polaris to assess the implications of dimensionality for the end-user experience.
New skill: Pay attention to end-user requirements, limit modules with many dimensions, and optimize (limit) views to their needs.
- Rethink filtering. Cancel your calculation.
Polaris is not just a new engine; it also comes with a couple of new features that provide key improvements for model builders and end users. One of them is “Zero Suppression”, which allows the end user to filter the visible grid down to only active values. Another is the magical “Cancel” button, which stops an ongoing model calculation mid-flight.
New skill: Train end-users in applying Zero Suppression to avoid filtering rows and columns by hand.
- Learn about the technical differences.
Ultimately, Polaris is a fantastic product, but it works differently from the ground up compared to the Classic calculation engine. There are best practices in Classic that can get a model builder into serious trouble in Polaris. Therefore, it’s essential to familiarize yourself with the fundamental technical differences between the two engines before embarking on a Polaris endeavor. Here are some important ones highlighted.
It has been an incredible learning experience so far with Polaris. I hope that our knowledge build-up during the project will help to accelerate future implementations and guide model builders in the transition from Classic to Polaris.
Finally, I would also like to emphasize the great work of Niko Vilkko (@nvilkko) during this implementation. He has built a fantastic Polaris model and contributed a lot to the above learnings.
Questions? Leave a comment!