Pre-Allocation in Lists (and Impacts to Model Performance)
What is Pre-Allocation in Lists?
Pre-allocation in lists is a mechanism in Anaplan that adds a buffer of spare slots to a list. It is not enabled by default; it is switched on when a role is set on the list.
Please follow 1.03-01, though: only add roles when needed.
When it is enabled, a 2 percent buffer is added to the list and to every line item that uses the list. This means extra space is created in memory for each line item, so that when a new list item is added, the line item does not need to be expanded or restructured.
When the buffer is used up (the list has run out of free slots), another 2 percent buffer is created and any line items using the list are restructured.
This buffer is not shown in the list settings in Anaplan: a list with 1,000 items would show a size of 1,000, but in the background that list has an extra 20 hidden, unused items.
Pre-allocation also applies to list deletions but allows for 10 percent of the list to be deleted before any line items using the list get restructured.
The purpose of pre-allocation in lists is to avoid restructuring line items that use frequently updated lists.
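The mechanics above can be sketched as a simple counter model. This is an illustration of the behaviour the article describes (a 2 percent growth buffer and a 10 percent deletion allowance), not Anaplan's actual implementation; the rounding and exact trigger points are assumptions.

```python
# Illustrative model of pre-allocation: NOT Anaplan internals.
# The 2% growth buffer and 10% deletion allowance come from the
# article; the rounding and trigger points are assumptions.

class PreAllocatedList:
    GROWTH_BUFFER = 0.02      # fresh free slots after each restructure
    DELETE_ALLOWANCE = 0.10   # fraction deletable before a restructure

    def __init__(self, size: int) -> None:
        self.size = size
        self.free_slots = int(size * self.GROWTH_BUFFER)
        self.deleted = 0
        self.restructures = 0

    def add_item(self) -> None:
        if self.free_slots > 0:
            # Fast path: the new item lands in a pre-allocated slot.
            self.free_slots -= 1
            self.size += 1
        else:
            # Buffer exhausted: rebuild every line item using the list,
            # then allocate a fresh 2% buffer.
            self.restructures += 1
            self.size += 1
            self.free_slots = int(self.size * self.GROWTH_BUFFER)

    def delete_item(self) -> None:
        self.size -= 1
        self.deleted += 1
        if self.deleted > self.size * self.DELETE_ALLOWANCE:
            # More than ~10% deleted: restructure and reset the allowance.
            self.restructures += 1
            self.deleted = 0


small = PreAllocatedList(100)       # 2% buffer = only 2 free slots
for _ in range(30):
    small.add_item()
print(small.restructures)           # every third addition restructures -> 10

big = PreAllocatedList(1_000_000)   # 2% buffer = 20,000 free slots
for _ in range(30):
    big.add_item()
print(big.restructures)             # all additions fit in the buffer -> 0
```

Running the same 30 additions against both lists shows why list size matters: the small list restructures ten times while the large list never does.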
What Happens When We Restructure?
Restructuring the model is an expensive task in terms of performance and time. The Anaplan Hyperblock gets its efficiency by holding your data and multi-dimensional structures in memory—memory being the fastest storage space for a computer. Creating the model structures in memory—building the Hyperblock—does take a significant time to complete. But once it's in memory, access is quick.
The initial model opening is when we first build those structures in memory. Once in memory, any further model opens (by other users, for example) are quick.
Restructuring is the process of having to rebuild some parts of the model in memory. In the case of adding an item to a list, that means rebuilding any line item that uses the list as a dimension.
When a line item is restructured it must also be recalculated, and this is often where we see the performance hit: line items reference one another, so a calculation chain fans out from any line item changed by the restructuring.
Pre-allocation is there to reduce this extra calculation caused by restructuring.
An example of this was seen in a model that added items to a list of trial products. These products would then have forecast data calculated from the historical data of real products. The list was small (around 100 items) and changed reasonably frequently. Adding an item took around two seconds, except that every third addition took around two minutes.
The difference came from additions that fit in the pre-allocated buffer versus those that triggered a full restructure and recalculation (and re-created the buffer). Without pre-allocation, every list addition would have taken two minutes.
Fortunately, we managed to optimize that calculation down from two minutes to several seconds, so the difference between adding to the pre-allocation buffer and the full calculation was around five seconds, a much more acceptable difference.
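The figures in this example can be sanity-checked with some back-of-envelope arithmetic. The sketch below assumes a three-addition cycle (two buffered additions, then one that restructures) and uses the timings quoted above; the averages are illustrative only.

```python
# Amortized cost per addition over one buffer cycle: in this example,
# two buffered additions followed by one restructuring addition.
def average_addition_seconds(buffered_s: float, restructure_s: float,
                             cycle: int = 3) -> float:
    return (buffered_s * (cycle - 1) + restructure_s) / cycle

# Before optimization: buffered adds ~2 s, restructure ~2 minutes.
before = average_addition_seconds(2, 120)
# After optimization: restructure down to ~7 s (2 s + ~5 s difference).
after = average_addition_seconds(2, 7)
print(round(before, 1), round(after, 1))   # 41.3 3.7
```

Even though only one addition in three restructures, the unoptimized restructure dominates the average cost, which is why optimizing the calculation made such a difference.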
In summary, pre-allocation on lists can give us a great performance boost, but it works better with larger lists than small lists.
Small, Frequently Updated Lists
As we’ve seen, the pre-allocation buffer size is 2 percent, so on a large list (say, one million items) we have a 20,000-item buffer and can add many items before any restructuring.
When a small list is frequently updated, calculation times will swing between fast and slow as the buffer fills and refills, especially if the list is heavily used throughout the model. A list with 100 items has a buffer of only two items, so it restructures and recalculates on every third addition. This remains noticeable even as the list grows: doubling it to 200 items still gives a buffer of only four items. In cases like this, it is very important to reduce and optimize the calculations as much as possible.
What Can Be Done?
There are a few options. You could make the list bigger, increasing the buffer so that it restructures less often. How?
Option 1: Create a subset of “active” items, ignoring the additional list items used to bulk out the list.
The problem with this is that the size of any line items using that list would increase, and so would their calculation times. Growing a 100-item list to 1,000 or even 10,000 items (enough to give us a bigger buffer) could greatly increase the model size.
Option 2: Create a new list that is not used in any modules so we avoid any restructuring costs.
This would work, but it adds a lot of extra manual steps. The new list would be used in a single data entry module, which means its data is unconnected from the rest of the model, and being connected is what gives us value. You would then need a manual process to push data from this unconnected module into one that is connected to the rest of the model (so that all the changes happen at once). We do lose the real-time updates and benefits of Connected Planning, though.
Option 3: Reduce the impact of restructuring by optimizing the model and the formulas.
Our best option is optimizing calculations. If we have quick calculations, the difference between buffered and unbuffered list additions could be small.
The best way to achieve this would be through a Model Optimization Success Accelerator. This is a tailored service delivered by Anaplan Professional Services experts who aim to improve model performance through calculation optimizations. Please discuss this service with your Anaplan Business Partner.
You can also follow our best practice advice and reference the Planual to find ways you can optimize your own models.
The content in this article has not been evaluated for all Anaplan implementations and may not be recommended for your specific situation.
Please consult your internal administrators prior to applying any of the ideas or steps in this article.