How to navigate a list index reset while preserving your planning data
Author: Evan Groetch is a Certified Master Anaplanner and Business Intelligence Manager at Fresenius Medical Care.
Seeing that your list import has failed due to the index hitting its maximum value can be a frustrating experience. Cursory research on Anapedia will show you there is a “Reset” button at the bottom of the Configure tab of each list. “Great!” However, you are soon dismayed to see that the button is grayed out and won’t be clickable until all the list members have been deleted.
“How can I delete the members of this list? Doing so will wipe out all the associated planning data!”
This article will outline the process used by the Fresenius Medical Care Center of Excellence (CoE) team to navigate this problem without permanently impacting planning data.
Why does the list index need to be reset?
The maximum value of a list index is 999,999,999. In most cases, the number of list members is nowhere near this amount. The reason the index is so much higher than the number of list members is that the index only increases when list members are added. It is not reduced when list members are deleted. For this reason, lists that are routinely cleared and repopulated are liable to require an index reset at some point.
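The one-way behavior of the index can be sketched in a few lines of Python. This is a toy simulation to illustrate the mechanics described above, not a representation of Anaplan internals:

```python
MAX_INDEX = 999_999_999

class SimpleList:
    """Toy model of a list whose internal index only ever increases."""
    def __init__(self):
        self.next_index = 0   # mirrors the index value shown on the Configure tab
        self.members = {}

    def add(self, name):
        if self.next_index >= MAX_INDEX:
            raise RuntimeError("Import fails: list index at maximum")
        self.members[self.next_index] = name
        self.next_index += 1  # adding a member consumes an index value...

    def clear(self):
        self.members.clear()  # ...but deleting members never gives any back

lst = SimpleList()
for cycle in range(3):        # a list that is routinely cleared and repopulated
    for i in range(4):
        lst.add(f"member {i}")
    lst.clear()

print(len(lst.members))  # 0  -- the list ends up empty...
print(lst.next_index)    # 12 -- ...yet the index kept climbing the whole time
```

Scale the clear-and-reload cycle up to millions of members per load and the 999,999,999 ceiling stops being theoretical.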
How do I perform the list index reset and preserve any associated planning data?
Note: the following instructions assume ALM is established (Development/Standard model with sync relationship to Production/Deployed model).
Before doing anything else, block out a timeframe during which you will perform the process. Send any necessary communications to end-users to notify them of some brief model downtime. This step is to avoid any conflicts with users planning in the model while completing this sensitive process.
In the Development model:
1. ‘Copy & Archive’ the model to create a backup before making any changes.
2. Create a dummy list and populate it with all the list members of the original list requiring the index reset.
    a. Create an import action to populate Name and Code (plus any list subsets and properties) in the dummy list with values from the original list. Rename this import action so you can recognize and use it later in this process.
    b. Ensure the dummy list is set as a Production list (this setting can be found on the ‘Configure’ tab).
3. Identify any module(s) that are both dimensioned by the original list and contain input line items. Any module containing only formula-based line items can be omitted from this step.
    a. Create the copy module(s) and modify the ‘Applies To’ so that all dimensions match the original modules, except for the original list, which should be replaced with the dummy list.
    b. Populate the copy module(s) via import action from the original module(s). The only difference between the source and target should be the dummy list vs. the original list, and this mapping can be done automatically based on code within the import mapping settings.
    c. Rename this import action so you can recognize and use it later in this process.
4. Create a new SYS module with the original list as a dimension.
    a. Include in this module a Boolean named “Delete?” and set the formula to TRUE. You will use this in a later step to create an action that deletes all the list members.
    b. Under Actions, click the ‘New Action’ dropdown and select ‘Delete from list using Selection’. Title this action appropriately and set it to delete the original list members using the Boolean line item you just created.
    c. Do not run this new Delete action yet.
5. Create a new SYS module with the dummy list as a dimension.
    a. Include in this module a Boolean named “Delete?” and set the formula to TRUE. You will use this in a later step to create an action that deletes all the list members.
    b. Under Actions, click the ‘New Action’ dropdown and select ‘Delete from list using Selection’. Title this action appropriately and set it to delete the dummy list members using the Boolean line item you just created.
    c. Do not run this new Delete action yet.
6. Now that you have your data preserved in the copy module(s), perform data validation. Ensure your copy module(s) tie exactly to the original module(s).
7. Now we’re ready to reset the index.
    Note: if the list that requires the index reset is not set up as a ‘Production’ list, you need to sync the above changes to the Production model before completing the following steps in the Development model. Otherwise, you run the risk of deleting the data in the Production model without having the structures in place to restore it afterwards. If this is the case, skip to step 8.
    a. Run the ‘Delete’ action you created in step 4b.
    b. Open the original list and go to the ‘Configure’ tab. Click the ‘Reset’ button (it should no longer be grayed out).
    c. Confirm the list index has been reset to 0.
8. Now we need to restore our original list and module(s).
    a. Create an import action from the dummy list to the original list. Be sure to include all list properties and subsets in the import, as these will have been cleared when the list members were deleted.
    b. Create import action(s) to load data from the copy module(s) created in step 3a back to the original modules.
    c. Perform data validation to ensure data matches exactly between the copy and original modules.
9. Create a revision tag.
In the Prod model:
10. ‘Copy & Archive’ the model to keep a backup before making any changes.
11. Compare & Sync the revision tag you created in step 9 from the Development model.
12. You should now have everything you need to perform the list index reset and restore planning data:
    a. Run the import to populate the dummy list.
    b. Run the import(s) to populate the copy module(s).
    c. Validate data between the original and copy modules.
    d. Run the delete action to delete the original list members.
    e. Perform the list index reset in the original list.
    f. Run the import to re-populate the original list (using the dummy list as the source).
    g. Run the import to re-populate the original module(s) (using the copy module(s) as the source).
    h. Run the delete action to delete the dummy list members.
How to build index reset into routine model administration
The model now has all the required structures in place to handle index resets in the future. By running the delete action to clear the dummy list at the end of the process, we ensure no additional workspace is wasted on the copy module(s). The next time a reset is required, we can run through the actions outlined in Step 12 above for a streamlined process. This process should always be bookended with a model ‘Copy & Archive’ as well as communications to model owners and relevant end users. It can be completed at regular intervals (e.g. as part of annual model administration to update the model calendar) or on an as-needed basis.
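For teams that script their routine model administration, the Step 12 action sequence lends itself to orchestration through the Anaplan Integration API. The sketch below only builds the task-creation URLs and payload; treat every path, field name, and placeholder ID as an assumption to verify against the API documentation on Anapedia, and note that the index reset itself (the ‘Reset’ button) has, to my knowledge, no API equivalent and remains a manual step in the UI:

```python
# Hypothetical sketch of scripting the routine reset sequence via the
# Anaplan Integration API (v2). Verify endpoint shapes against Anapedia;
# all IDs below are placeholders, not real action IDs.
API_BASE = "https://api.anaplan.com/2/0"

def task_url(workspace_id: str, model_id: str, action_type: str, action_id: str) -> str:
    """Build the URL used to POST a new task for an import or delete action."""
    return (f"{API_BASE}/workspaces/{workspace_id}/models/{model_id}"
            f"/{action_type}/{action_id}/tasks")

def task_payload() -> dict:
    """Request body for the task-creation POST."""
    return {"localeName": "en_US"}

# The Step 12 sequence as (endpoint type, placeholder ID) pairs. POST each
# with your HTTP client of choice, polling the returned task to completion
# before starting the next. The manual Reset click happens between the
# delete of the original members and the re-population imports.
RESET_SEQUENCE = [
    ("imports", "POPULATE_DUMMY_LIST_ID"),
    ("imports", "POPULATE_COPY_MODULES_ID"),
    ("actions", "DELETE_ORIGINAL_MEMBERS_ID"),
    ("imports", "REPOPULATE_ORIGINAL_LIST_ID"),
    ("imports", "REPOPULATE_ORIGINAL_MODULES_ID"),
    ("actions", "DELETE_DUMMY_MEMBERS_ID"),
]

print(task_url("WS_ID", "MODEL_ID", "imports", "IMPORT_ID"))
# https://api.anaplan.com/2/0/workspaces/WS_ID/models/MODEL_ID/imports/IMPORT_ID/tasks
```

Bundling the actions into a single Anaplan process and running that one process is an equally valid (and simpler) alternative to sequencing them client-side.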
Questions? Leave a comment!
Moving a model from Classic to Polaris
Author: Mike Henderson is a Principal Solution Architect at Anaplan.
There is growing interest in moving existing models that run on the classic (Hyperblock) engine to Polaris. There are a variety of reasons motivating the move, such as workspace optimization, accommodating natural model growth over time, and futureproofing by adopting the next-generation engine. I am asked "how long does it take to move my model to Polaris?" with increasing frequency, so I thought I would share my thoughts on the matter with the Community.
If your model is working in Classic, and you are happy with it, there's little reason to move it to Polaris until you need to change it. Anaplan has communicated a statement of direction concerning the two engines. The future is the Polaris engine.
In this article, I am referring to large, complex, comprehensive application models, not the smaller and simpler models such as the one you built for Level 1 model builder training. A comprehensive Anaplan model is typically built by a team over months, not days, and contains several thousand configured objects: lists, modules, line items, views, imports / exports / processes / other actions, UX boards, roles, line item subsets, etc. Can you — and should you — leverage that earlier investment by a “copy-and-modify" approach?
Migrate vs rebuild
Suppose a model has 6,000 configured objects. In my experience, a model builder configures about 30 objects per day. So, a clean sheet model build requires on the order of two hundred person-days of effort. This is a very rough ballpark estimate, and it does not include work efforts of subject matter experts or the teams that manage the systems that will integrate with the Anaplan application. This reality is one of the compelling reasons why Anaplan offers easily configured, standardized solutions for Supply Chain, Integrated Financial Planning, and Sales Performance Management use cases.
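The ballpark above is just linear arithmetic, shown here so the assumptions are explicit (the ~30 objects per builder-day rate is the figure from my experience, not a universal constant):

```python
# Back-of-the-envelope rebuild estimate: effort scales linearly with
# object count at roughly 30 objects configured per builder-day.
def rebuild_person_days(object_count: int, objects_per_day: int = 30) -> float:
    """Rough clean-sheet build effort, excluding SME and integration-team time."""
    return object_count / objects_per_day

print(rebuild_person_days(6_000))  # 200.0 -- "on the order of two hundred person-days"
```

Adjust `objects_per_day` to your own team's observed velocity; the point is the order of magnitude, not the third digit.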
For migration of an existing application, we can (perhaps) save a lot of labor and reduce project risk by leveraging the existing body of work. By importing and modifying the model, many configured objects may be used as-is and with lower risk of misinterpretation of requirements.
Another consideration is that the user base already knows the existing model. If the Polaris application is outwardly similar (if not identical) to the existing application, then the upgrade should be nearly transparent to the user.
But be aware that migrating a poor model to a new engine is not a panacea for underlying issues — in fact, it might exacerbate them.
Look before you leap
Moving your existing model to Polaris may not be the appropriate course of action depending upon your circumstances. A vital first step is to evaluate the model you wish to migrate to Polaris. Is it worth migrating? Cast a critical eye and ask these questions:
Does the model meet primary business requirements?
Evaluate whether the existing model continues to deliver the necessary functionality to support the organization’s objectives. Outdated logic or unmet requirements may signal the need for significant changes.
Is the model in compliance with best practices?
Assess adherence to established frameworks such as:
* PLANS: Performance, Logical Integrity, Adaptability, Necessary Complexity, and Sustainable Maintainability.
* DISCO: Data, Inputs, System, Calculations, and Outputs.
* Planual rules: Anaplan’s guide to standardized rules for organization, performance, and user interfaces.
Are the model’s dimensions and data granularity appropriate?
Review whether the model’s dimensional structure aligns with the organization’s reporting and analysis needs. Adding a missing dimension or expanding the depth of an insufficiently granular dimension will require rethinking during migration. The move to Polaris is often driven by the need for more detail and dimensionality. Probe to determine that the underlying data supports this objective and whether it will be practical to implement as a model modification.
Does the model rely on functions or features not supported in Polaris?
Identify any reliance on capabilities that are not yet available in Polaris, such as certain functions or complex modeling workarounds (see this page for details). These limitations could affect the feasibility of migration. At the time of this writing, Polaris does not support Optimizer or any of the finance or call center functions. The list of unsupported functions and features is actively being reduced by the Anaplan software engineering team.
Can compromises made for the Classic engine be addressed?
Determine whether classic engine workarounds — such as concatenated lists or multiple reporting modules — can be eliminated or restructured using Polaris’ capabilities. In order to remain within the constraint of workspace size, a classic model often takes advantage of several compromises: highly dimensional reporting in a separate system (not real time), reduced granularity, reduced time span, flattened data hub, fewer variances between versions, fewer what-if scenarios, etc. The Hypermodel option enables expanded workspaces up to 720 GB, and this capability gave us a "times 5" multiplier in space. A five-fold increase is nice, but the challenge in scaling is often "times 100,000". Polaris delivers exactly that realm of capacity and the performance to back it up. Polaris offers the opportunity to go back and question the compromises. Your objective is to identify those compromises and to determine a path forward. Can they be remediated by retrofitting an existing model, or should you rebuild from a clean slate?
Are there structural or architectural challenges inherent in the model?
Evaluate whether the model contains inherent complexities, redundancies, or inefficiencies that make it difficult to migrate. If a model performs poorly in the classic engine due to inefficient formulas or over-dimensioned line items (i.e. not simply because it is a large, sparsely populated multi-dimensional space), those problems may be amplified by moving to Polaris and expanding its cell count by several orders of magnitude.
Are you prepared to shift your team’s mindset from “workspace is king” to “performance is the new boss”?
The move from billions (10⁹) of addressable cells to quadrillions (10¹⁵) puts you into a new universe. The landscape will seem remarkably familiar, and yet that familiarity can be deceptive. You need to be ready to change your style. I strongly advise the build team to use a slim Dev model for syntax and a full Test model for size and performance feedback — and yet this advice is too often ignored, with painful consequences. Understand what the new blueprint fields of Populated Cell Count, Calculation Type, and Calculation Effort % are trying to tell you. Abandon any hope of predicting how much workspace your model will require: memory use is no longer the constraint it once was, and it depends upon the sparsity and "shape" of the data and your ability to avoid populating ultra-high cell count line items unnecessarily.
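The gap between addressable and populated cells is worth making concrete. The dimension sizes below are invented for illustration; only the arithmetic is the point:

```python
from math import prod

# Addressable space explodes multiplicatively with dimensionality,
# while the populated data typically stays sparse.
def addressable_cells(dim_sizes) -> int:
    """Dense cell count of a line item: the product of its dimension sizes."""
    return prod(dim_sizes)

# Hypothetical line item: SKU x Customer x Location x Week x Version
dims = {"SKU": 50_000, "Customer": 20_000, "Location": 500, "Week": 104, "Version": 3}
total = addressable_cells(dims.values())
print(f"{total:.2e}")   # 1.56e+14 addressable cells -- far beyond classic's reach

# If only one combination in a million is ever populated,
# the data the engine actually has to hold is comparatively tiny:
populated = total // 1_000_000
print(populated)        # 156000000
```

This is why the blueprint's Populated Cell Count matters more in Polaris than any attempt to predict workspace from dimensions alone.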
Make your move
Moving a model from the classic Hyperblock engine to Polaris with minimal changes to the model is relatively easy; I have migrated highly complex models in as little as two to three weeks.
The steps are as follows:
* Make a copy of your Classic model in a classic workspace.
The new copy of the model will be slimmed down and cleared of formula and summary logic to enable import into Polaris. The intention here is to have a copy of the model in the classic engine that will import into Polaris without any error messages. Such errors occur either because the model is too large to process or because it contains incompatible formulas or summary configurations.
* Cut the model size down by deleting most items from large lists.
We only need the essential structure, not the full lists of employees, SKUs, cost centers, etc. These will be reloaded in Polaris after the core structure and logic are moved.
* Remove all summary methods, then remove all line item formulas.
This is easily done on the Modules > Line Items tab. Summary methods must be removed first because Summary: Formula will throw an error if there is no formula.
* Import the model into Polaris using the Model Management feature.
Go to Manage Models in the Polaris workspace, click Import, enter the new model’s name, and choose “Anaplan/Polaris” as the Source. In the Import Model dialog box, choose the source workspace and the model copy that was prepared for migration. The model should import cleanly; there could be issues with formulas on list properties, but that is unlikely. Note that the import from classic to Polaris does not include any data values or model history. At this point, the copy of the classic model may be archived.
* Re-add the formulas that were removed first, then the summary methods.
In my experience, well over 95% of the model’s formulas and summary methods will copy-paste cleanly from classic to Polaris. The small fraction that are incompatible (see this page on Anapedia) commonly fail because of differences in time functionality, formulas that are too long (Polaris enforces a maximum of 2,000 characters), use of certain summary configurations, and certain aggregation operations. This is the most time-consuming part of the conversion as the alternative approaches will require some creative thinking.
Tip: Do not rename any modules, lists, line items or subsets until this step is done. Why? Once you change a name, any subsequent formulas that reference that name won’t match. Avoid the temptation to clean up naming until this activity is completed.
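The incompatible fraction can be triaged mechanically before you start pasting. The sketch below scans an export of the Modules > Line Items tab for formulas that break Polaris's 2,000-character limit; the column headers are assumptions, so match them to your actual export:

```python
import csv
import io

# Polaris enforces a maximum formula length of 2,000 characters.
POLARIS_FORMULA_LIMIT = 2000

def too_long_formulas(csv_text: str,
                      formula_col: str = "Formula",
                      name_col: str = "Line Item") -> list[str]:
    """Return line items whose formulas exceed the Polaris length limit.

    Column names default to hypothetical headers -- adjust to your export.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row[name_col] for row in reader
            if len(row.get(formula_col) or "") > POLARIS_FORMULA_LIMIT]

# Tiny inline sample standing in for a real Line Items export:
sample = ("Line Item,Formula\n"
          "Revenue,Price * Volume\n"
          "Monster," + "x" * 2100 + "\n")
print(too_long_formulas(sample))  # ['Monster']
```

A similar pass can grep the formula column for function names from the unsupported-features list, so the creative-rework backlog is known on day one.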
* Set up ALM and make a “full test” deployed copy.
In Polaris, it is recommended that you keep your dev model as light as possible ("slim dev") for formula/syntax development, then migrate frequently to a fuller ("full test") model to evaluate scaling and performance with full lists and data. If your model's cell count grows with additional scope over time, you will find that developer productivity is reduced in a "fat dev" model.
* Get the data.
The model import step brought only the model's structure and none of its data in modules or list properties. Identify the line items and list properties with no formulas and go to work on moving values. You need to copy/paste or import the data values.
* Re-inflate the model's lists and load transactional data sets.
Verify that actions to build lists, import data, etc. are mapped and working as expected. Run all integration processes to build lists, import properties, and load transactional data.
* Set up the UX application.
Copy the existing UX app and redirect all boards to the new model. If all structural items (modules, line items, actions, lists, ...) are identical to the original model, then the UX boards can be re-pointed to use the Polaris model as a source. Validate that all is in order.
* Verify security.
User security (roles, selective access, DCA) should be the same.
* Tune for performance and size.
It is strongly recommended that you identify the line items in your model that consume the highest Calculation Effort % during model open, and also identify the line items that consume the most memory. This is easily achieved by exporting the Modules > Line Items tab within 10 minutes after model open. The Calculation Effort % field updates dynamically as the model is used, and during model open every line item is fully recalculated, so you need to export the line item inventory immediately upon opening to get a valid snapshot of performance. Once identified, use the general principles spelled out in the Planual to tune your model.
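Ranking that export takes only a few lines. As with the earlier sketch, the column headers below are assumptions to be matched to the real export:

```python
import csv
import io

# Sketch: rank line items by Calculation Effort % from a Modules > Line Items
# export taken just after model open. Column names are hypothetical.
def top_effort(csv_text: str, n: int = 3,
               col: str = "Calculation Effort %") -> list[tuple[str, float]]:
    """Return the n line items with the highest calculation effort."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: float(r[col]), reverse=True)
    return [(r["Line Item"], float(r[col])) for r in rows[:n]]

# Tiny inline sample standing in for a real export:
sample = ("Line Item,Calculation Effort %\n"
          "Margin,1.2\n"
          "Allocated Cost,34.5\n"
          "FX Rate,0.1\n"
          "Demand Signal,22.8\n")
print(top_effort(sample, n=2))  # [('Allocated Cost', 34.5), ('Demand Signal', 22.8)]
```

The top handful of offenders is usually where Planual-guided tuning pays off first.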
Now the real work begins
During the pre-move analysis, you identified the modifications to best take advantage of the Polaris engine’s ability to handle sparse data sets. Now that you have an apples-to-apples copy of your model in the new engine, you can make the changes you identified. But that is a topic for another day.
Questions? Leave a comment!