Moving a model from Classic to Polaris

Author: Mike Henderson, Principal Solution Architect at Anaplan.

There is growing interest in moving existing models that run on the classic (Hyperblock) engine to Polaris. A variety of reasons motivate the move, such as workspace optimization, accommodating natural model growth over time, and futureproofing by adopting the next-generation engine. I am asked "how long does it take to move my model to Polaris?" with increasing frequency, so I thought I would share my thoughts on the matter with the Community.

If your model is working in Classic and you are happy with it, there's little reason to move it to Polaris until you need to change it. That said, Anaplan has communicated a statement of direction concerning the two engines: the future is the Polaris engine.

In this article, I am referring to large, complex application models, not the smaller and simpler models such as the one you built for Level 1 model builder training. A comprehensive Anaplan model is built by a team over months, not days, and contains several thousand configured objects: lists, modules, line items, views, imports/exports/processes and other actions, UX boards, roles, line item subsets, and so on. Can you, and should you, leverage that earlier investment with a "copy-and-modify" approach?

Migrate vs rebuild

Suppose a model has 6,000 configured objects. In my experience, a model builder configures about 30 objects per day. So, a clean sheet model build requires on the order of two hundred person-days of effort. This is a very rough ballpark estimate, and it does not include work efforts of subject matter experts or the teams that manage the systems that will integrate with the Anaplan application. This reality is one of the compelling reasons why Anaplan offers easily configured, standardized solutions for Supply Chain, Integrated Financial Planning, and Sales Performance Management use cases.
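
As a rough sketch of that arithmetic (the object count and daily build rate are illustrative assumptions; substitute your own figures):

```python
# Back-of-the-envelope rebuild effort estimate (illustrative figures only).
configured_objects = 6_000      # lists, modules, line items, actions, boards, ...
objects_per_builder_day = 30    # rough personal observation; your rate will vary

person_days = configured_objects / objects_per_builder_day
print(f"Clean-sheet build effort: ~{person_days:.0f} person-days")  # ~200
```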

For migration of an existing application, we can (perhaps) save a lot of labor and reduce project risk by leveraging the existing body of work. By importing and modifying the model, many configured objects can be reused as-is, with lower risk of misinterpreting requirements.

Another consideration is that the user base already knows the existing model. If the Polaris application is outwardly similar (if not identical) to the existing application, then the upgrade should be nearly transparent to the user.

But be aware that migrating a poor model to a new engine is not a panacea for underlying issues — in fact, it might exacerbate them.

Look before you leap

Moving your existing model to Polaris may not be the appropriate course of action depending upon your circumstances. A vital first step is to evaluate the model you wish to migrate to Polaris. Is it worth migrating? Cast a critical eye and ask these questions:

Does the model meet primary business requirements?

Evaluate whether the existing model continues to deliver the necessary functionality to support the organization’s objectives. Outdated logic or unmet requirements may signal the need for significant changes.

Is the model in compliance with best practices?

Assess adherence to established frameworks such as:

  • PLANS: Performance, Logical, Auditable, Necessary, and Sustainable.
  • DISCO: Data, Inputs, System, Calculations, and Outputs.
  • Planual rules: Anaplan’s guide to standardized rules for organization, performance, and user interfaces.

Are the model’s dimensions and data granularity appropriate?

Review whether the model’s dimensional structure aligns with the organization’s reporting and analysis needs. Adding a missing dimension or expanding the depth of an insufficiently granular dimension will require rethinking during migration. The move to Polaris is often driven by the need for more detail and dimensionality. Probe to determine whether the underlying data supports this objective and whether it will be practical to implement as a model modification.

Does the model rely on functions or features not supported in Polaris?

Identify any reliance on capabilities that are not yet available in Polaris, such as certain functions or complex modeling workarounds (see this page for details). These limitations could affect the feasibility of migration. At the time of this writing, Polaris does not support Optimizer or any of the finance or call center functions. The list of unsupported functions and features is actively being reduced by the Anaplan software engineering team.

Can compromises made for the Classic engine be addressed?

Determine whether classic engine workarounds, such as concatenated lists or multiple reporting modules, can be eliminated or restructured using Polaris’ capabilities. To stay within workspace size constraints, a classic model often relies on several compromises: highly dimensional reporting in a separate system (not real time), reduced granularity, a reduced time span, a flattened data hub, fewer variances between versions, fewer what-if scenarios, and so on. The Hypermodel option expands workspaces to as much as 720 GB, which gave us a "times 5" multiplier in space. A five-fold increase is nice, but the challenge in scaling is often "times 100,000", and Polaris delivers exactly that realm of capacity with the performance to back it up. Polaris offers the opportunity to go back and question those compromises. Your objective is to identify them and determine a path forward: can they be remediated by retrofitting the existing model, or should you rebuild from a clean slate?

Are there structural or architectural challenges inherent in the model?

Evaluate whether the model contains inherent complexities, redundancies, or inefficiencies that make it difficult to migrate. If a model performs poorly in the classic engine due to inefficient formulas or over-dimensioned line items (i.e., not simply because it is a large, sparsely populated multi-dimensional space), those problems may be amplified by moving to Polaris and expanding its cell count by several orders of magnitude.

Are you prepared to shift your team’s mindset from “workspace is king” to “performance is the new boss”?

The move from billions (10⁹) of addressable cells to quadrillions (10¹⁵) puts you into a new universe. The landscape will seem remarkably familiar, yet that familiarity can be deceptive. You need to be ready to change your style. I strongly advise the build team to use a slim Dev model for syntax and a full Test model for size and performance feedback, and yet this advice is too often ignored, with painful consequences. Understand what the new blueprint fields of Populated Cell Count, Calculation Type, and Calculation Effort % are trying to tell you. Abandon any hope of predicting how much workspace your model will require. Memory use is no longer the constraint it once was; it depends upon the sparsity and "shape" of the data and your ability to avoid populating ultra-high cell count line items unnecessarily.
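
To make that shift in scale concrete, here is a small illustrative calculation; the dimension sizes and sparsity figure are invented, so substitute your own:

```python
# Illustrative only: the dense (theoretical) cell count explodes with
# dimensionality, but what matters in Polaris is the populated cell count.
dimensions = {
    "SKU": 50_000,
    "Customer": 20_000,
    "Location": 200,
    "Month": 36,
}

dense_cells = 1
for size in dimensions.values():
    dense_cells *= size

populated_fraction = 0.0001   # assumed: most SKU/customer/location combos never occur
populated_cells = dense_cells * populated_fraction

print(f"Theoretical (dense) cells: {dense_cells:.2e}")      # ~7.20e+12
print(f"Populated cells:           {populated_cells:.2e}")  # ~7.20e+08
```

A space like this is far beyond what a classic workspace could hold densely, but if only a small fraction of cells are ever populated, it sits comfortably within Polaris' reach.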

Make your move

Moving a model from the classic Hyperblock engine to Polaris with minimal changes to the model is relatively easy; I have migrated highly complex models in as little as two to three weeks.

The steps are as follows:

  1. Make a copy of your Classic model in a classic workspace.
    The new copy of the model will be slimmed and cleared of formula and summary logic to enable import to Polaris. The intention is to have a copy of the model in the classic engine that will import into Polaris without any error messages. Such errors occur either because the model is too large to process or because it contains incompatible formulas or summary configurations.
  2. Cut the model size down by deleting most items from large lists.
    We only need the essential structure, not the full lists of employees, SKUs, cost centers, etc. These will be reloaded in Polaris after the core structure and logic are moved.
  3. Remove all summary methods, then remove all line item formulas.
    This is easily done on the Modules > Line Items tab. Summary methods must be removed first because Summary: Formula will throw an error if there is no formula.
  4. Import the model into Polaris using the Model Management feature.
    Go to Manage Models in the Polaris workspace, select Import, enter the new model’s name, and choose “Anaplan/Polaris” as the Source. In the Import Model dialog box, choose the source workspace and the model copy that was prepared for migration. The model should import cleanly. There could be issues with formulas on list properties, but that is unlikely. Note that the import from classic to Polaris does not include any data values or model history, so at this point the copy of the classic model may be archived.
  5. Re-add the removed formulas first, then the summary methods.
    In my experience, well over 95% of the model’s formulas and summary methods will copy-paste cleanly from classic to Polaris. The small fraction that are incompatible (see this page on Anapedia) commonly fail because of differences in time functionality, formulas that are too long (Polaris enforces a maximum of 2,000 characters), certain summary configurations, and certain aggregation operations. This is the most time-consuming part of the conversion, as the alternative approaches will require some creative thinking. A sketch for spotting over-length formulas before you paste them appears after this list.
    Tip: Do not rename any modules, lists, line items or subsets until this step is done. Why? Once you change a name, any subsequent formulas that reference that name won’t match. Avoid the temptation to clean up naming until this activity is completed.
  6. Set up ALM and make a “full test” deployed copy.
    In Polaris, it is recommended that you keep your dev model as light as possible ("slim dev") for formula/syntax development, then migrate frequently to a fuller ("full test") model to evaluate scaling and performance with full lists and data. If your model's cell count grows with additional scope over time, you will find that developer productivity is reduced in a "fat dev" model.
  7. Get the data.
    The model import step brought only the model’s structure and none of its data in modules or list properties. Identify the line items and list properties with no formulas and move their values across by copy/paste or by import.
  8. Re-inflate the model's lists and load transactional data sets.
    Verify that actions to build lists, import data, etc. are mapped and working as expected. Run all integration processes to build lists, import properties, and load transactional data.
  9. Set up the UX application.
    Copy the existing UX app and redirect all boards to the new model. If all structural items (modules, line items, actions, lists, ...) are identical to the original model, then the UX boards can be re-pointed to use the Polaris model as a source. Validate that all is in order.
  10. Verify security.
    User security (roles, selective access, DCA) should be the same.
  11. Tune for performance and size.
    It is strongly recommended that you identify the line items in your model that consume the highest Calculation Effort % during model open. Also, identify the line items that consume the most memory. This is easily achieved by exporting the Modules > Line Items tab within 10 minutes after model open. The Calc Effort % field dynamically updates as the model is used, and during model open every line item is fully recalculated, so export the line item inventory immediately upon opening to get a valid snapshot of performance. Once identified, use the general principles spelled out in the Planual to tune your model. A sketch for ranking line items from that export appears after this list.
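
For step 5, it can help to flag formulas that exceed the 2,000-character limit before you start pasting. A minimal sketch in Python/pandas, assuming you exported the classic model's Modules > Line Items grid to CSV; the file name and column headers ("Module", "Line Item", "Formula") are assumptions, so adjust them to your export:

```python
import pandas as pd

POLARIS_FORMULA_LIMIT = 2_000  # character limit mentioned in step 5

# Assumed export of the Modules > Line Items grid; adjust file/column names.
df = pd.read_csv("classic_line_items_export.csv")
df["Formula"] = df["Formula"].fillna("")
df["Formula Length"] = df["Formula"].str.len()

too_long = df[df["Formula Length"] > POLARIS_FORMULA_LIMIT]
print(too_long[["Module", "Line Item", "Formula Length"]]
      .sort_values("Formula Length", ascending=False)
      .to_string(index=False))
```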
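
For step 11, a similar sketch ranks line items by Calculation Effort % and Populated Cell Count from an export taken shortly after model open. Again, the file name and column headers are assumptions; match them to your actual export:

```python
import pandas as pd

# Assumed CSV export of Modules > Line Items taken within ~10 minutes of model open.
df = pd.read_csv("polaris_line_items_export.csv")

# Exports may store these as text (e.g., "12.3%" or "1,234,567"); coerce to numbers.
df["Calculation Effort %"] = pd.to_numeric(
    df["Calculation Effort %"].astype(str).str.rstrip("%"), errors="coerce")
df["Populated Cell Count"] = pd.to_numeric(
    df["Populated Cell Count"].astype(str).str.replace(",", ""), errors="coerce")

for metric in ["Calculation Effort %", "Populated Cell Count"]:
    top = (df[["Module", "Line Item", metric]]
           .sort_values(metric, ascending=False)
           .head(20))
    print(f"\nTop 20 line items by {metric}:")
    print(top.to_string(index=False))
```

The top of each ranking is where the Planual-based tuning effort will pay off first.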

Now the real work begins

During the pre-move analysis, you identified the modifications needed to best take advantage of the Polaris engine’s ability to handle sparse data sets. Now that you have an apples-to-apples copy of your model in the new engine, you can make the changes you identified. But that is a topic for another day.

Questions? Leave a comment!


Comments

  • This is excellent, Mike! Thank you for sharing.

  • Great article @hendersonmj !

  • Thanks for sharing this Mike. Great to add your experience to the growing pool of Polaris knowledge

    Seb McMillan

    Principal Platform Adoption Specialist

  • Excellent write-up. Gives an idea for our upcoming Polaris initiatives

  • Excellent article, thanks for sharing, Mike!
