OEG Best Practice: Pre-allocation in lists (and impacts to model performance)

AnaplanOEG
edited February 2023 in Best Practices

What is pre-allocation in lists?

Pre-allocation in lists is a mechanism in Anaplan that adds a buffer to list lengths. It is not added by default for lists; it becomes enabled when a role is set on a list.

Please follow Planual rule 1.03-01, though: only add roles when needed.

When it is enabled, a 2 percent buffer is added to the list, and this applies to all line items where the list or a subset of the list (regardless of the subset's size) is used. This means we create extra space (in memory) for each line item so that when a new item is added to the list or to a subset of the list, the line item does not need to be expanded or restructured.

When the buffer is used up (the list has run out of free slots), another 2 percent buffer will be created and any line items using the list will be restructured.

This buffer is not shown in the list settings in Anaplan, meaning if we had a list with 1,000 items, that’s what Anaplan would show as the size. But in the background, that list has an extra 20 hidden and unused items.

Pre-allocation also applies to list deletions but allows for 10 percent of the list to be deleted before any line items using the list get restructured.

The purpose of pre-allocation in lists is to avoid restructuring line items that use frequently updated lists.
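
To make the arithmetic concrete, below is a minimal Python sketch of this behaviour. It is a toy model built only from the figures above (a 2 percent growth buffer and a 10 percent deletion allowance); the class name, the rounding, and whether the deletion allowance is measured against the current or original list size are all assumptions for illustration, not Anaplan's internal implementation.

```python
class PreallocatedList:
    """Toy model of a list with a 2% growth buffer and a 10% deletion allowance.
    Illustration only; Anaplan's internal rounding and re-buffering may differ."""

    GROWTH_BUFFER = 0.02      # hidden spare slots kept at the end of the list
    DELETE_THRESHOLD = 0.10   # fraction of deletions tolerated before a rebuild

    def __init__(self, size):
        self.size = size                                        # items shown in list settings
        self.capacity = size + int(size * self.GROWTH_BUFFER)   # includes hidden slots
        self.deleted = 0
        self.restructures = 0   # how often dependent line items would be rebuilt

    def add_item(self):
        self.size += 1
        if self.size > self.capacity:       # buffer exhausted: rebuild and re-buffer
            self.restructures += 1
            self.capacity = self.size + int(self.size * self.GROWTH_BUFFER)

    def delete_item(self):
        self.size -= 1
        self.deleted += 1
        if self.deleted > int(self.size * self.DELETE_THRESHOLD):  # >10% deleted: rebuild
            self.restructures += 1
            self.deleted = 0


# A 1,000-item list carries roughly 20 hidden, unused slots.
big = PreallocatedList(1_000)
print(big.capacity - big.size)   # -> 20
```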

What happens when we restructure?

Restructuring the model is an expensive task in terms of performance and time. The Anaplan Hyperblock gets its efficiency by holding your data and multi-dimensional structures in memory, memory being the fastest storage space for a computer. Creating the model structures in memory (building the Hyperblock) does take a significant time to complete. But once it's in memory, access is quick.

The initial model opening is when we first build those structures in memory. Once in memory, any further model opens (by other users, for example) are quick.

Restructuring is the process of having to rebuild some parts of the model in memory. In the case of adding an item to a list, that means any line item that uses that list as a dimension.

When restructuring a line item, we have to recalculate it, and this is often where we see the performance hit. This is because line items have references, so there is a calculation chain from any line item changed by that restructuring.

Pre-allocation is there to reduce this extra calculation caused by restructuring.

An example of this was seen in a model that was adding to a list that contained trial products. These products would then have a set of forecasted data calculated from historical data from real products. The list of these new products was small and changed reasonably frequently; it contained around 100 items. Adding an item took around two seconds (except every third item took two minutes).

This happened because of the difference between adding an item within the pre-allocated buffer and having to do the full recalculation (and re-create the buffer). Without pre-allocation, every list addition would have taken two minutes.

Fortunately, we managed to optimize that calculation down from two minutes to several seconds, so the difference between adding to the pre-allocation buffer and the full calculation was around five seconds, a much more acceptable difference.
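
As a rough, back-of-the-envelope illustration of why that optimization mattered, the sketch below totals the time to add 100 items using the figures quoted above (about two seconds for a buffered addition, about two minutes for a restructure on every third addition, and an optimized restructure roughly five seconds slower than a buffered one). The arithmetic is illustrative only, not measured data.

```python
# Approximate totals for adding 100 items, using the timings quoted above
# (illustrative arithmetic only, not measured results).
additions = 100
buffered_s = 2                               # ~2 s for an addition inside the buffer
restructure_s = 120                          # ~2 min when a restructure is triggered
optimized_restructure_s = buffered_s + 5     # after optimization: ~5 s slower than buffered

restructures = additions // 3                # roughly every third addition restructures
buffered = additions - restructures

no_prealloc = additions * restructure_s
with_prealloc = buffered * buffered_s + restructures * restructure_s
with_tuning = buffered * buffered_s + restructures * optimized_restructure_s

print(f"No pre-allocation:       ~{no_prealloc / 60:.0f} min")   # ~200 min
print(f"Pre-allocation only:     ~{with_prealloc / 60:.0f} min") # ~68 min
print(f"Pre-allocation + tuning: ~{with_tuning / 60:.0f} min")   # ~6 min
```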

In summary, pre-allocation on lists can give us a great performance boost, but it works better with larger lists than small lists.

Small, frequently updated lists

As we’ve seen, the pre-allocation buffer size is 2 percent, so on a large list, say one million items, we have a decent-sized buffer and can add many items.

When we have a small list that is frequently updated, the performance characteristic you will see is calculation times that keep changing. This is especially the case if that list is heavily used throughout the model. A list with 100 items will restructure and recalculate on every third addition.

This will continue to be noticeable for quite some time: doubling the list size to 200 still only adds four unused items (2 percent of 200). With a small, frequently updated list, you will see calculation times swing from fast to slow as the buffer is repeatedly filled and re-created. In cases like this, it is very important to reduce and optimize the calculations as much as possible.
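
To put numbers on this, here is a quick sketch of how many additions fit in the 2 percent buffer at different list sizes (simple truncation of the 2 percent figure is assumed; the exact rounding may differ).

```python
# Buffered additions available before a restructure, assuming a 2% buffer
# with simple truncation (the exact rounding Anaplan applies may differ).
for list_size in (100, 200, 1_000, 1_000_000):
    buffer_slots = int(list_size * 0.02)
    print(f"{list_size:>9,} items -> {buffer_slots:>6,} buffered additions")
```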

What can be done?

There are a few options. You could always make the list bigger and increase the buffer so that it restructures less. How?

Option 1: Create a subset of “active” items, ignoring the additional list items used to bulk out the list.

The problem with this is that the size of any line items using that list would increase, and so would the cost of their calculations. Changing from a 100-item list to a 1,000- or even 10,000-item list (enough to give us a bigger buffer) could greatly increase the model size.
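
For a sense of that trade-off, the sketch below assumes a hypothetical line item dimensioned only by this list and a 12-month time dimension (the module shape is made up for illustration): the buffer grows with the list, but so does every line item that uses it.

```python
# Bulking out the list buys a bigger buffer, but every line item dimensioned
# by it grows in proportion (hypothetical module: list x 12 months).
months = 12
for list_size in (100, 1_000, 10_000):
    buffer_slots = int(list_size * 0.02)
    cells = list_size * months
    print(f"{list_size:>6,} items: buffer {buffer_slots:>3,}, "
          f"cells per line item {cells:>8,}")
```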

Option 2: Create a new list that is not used in any modules so we avoid any restructuring costs.

This would work, but it adds a lot of extra manual steps. You would have this new list used in a single data entry module, which means its data is unconnected from the rest of the model. Being connected is what gives us value. You would then need a manual process to push data from this unconnected module to one that is connected to the rest of the model (this way, all the changes happen at once). We do lose the real-time updates and benefits of Connected Planning, though.

Option 3: Reduce the impact of restructuring by optimizing the model and the formulas.

Our best option is optimizing calculations. If we have quick calculations, the difference between buffered and unbuffered list additions could be small.

The best way to achieve this would be through a Model Optimization Success Accelerator. This is a tailored service delivered by Anaplan Professional Services experts who aim to improve model performance through calculation optimizations. Please discuss this service with your Anaplan Business Partner.

You can also follow our best practice advice and reference the Planual to find ways you can optimize your own models.

Author: Mark Warren.

Comments

  • Great article and an important thing to know, thanks!

     

    I have a question related to option 1. Suppose we have two line items: one of them applies to a small subset of a product list, the other applies to the full product list. Does that mean these two line items will have the same performance, i.e. the same calculation time?

     

    Thanks,

    Haik

  • @MarkWarren 

     

    Does it mean using this feature can alleviate concurrency issues when a large number of users need to create new items in a numbered list?

     

    Really appreciate the solution here. 

  • @LilyLiuAnaplan It certainly helps; it lessens the impact of structural changes, but the buffer is only 2% of the list size.

    So if a large number of users are creating new items, that buffer will be used up very quickly and the next change will see a larger impact; the buffer does get re-added by that change.


    A lot of list additions will have an impact on user concurrency. The changes are larger and have more impact (compared to cell changes). One way to avoid this, if it's going to be needed, is to pre-create a number of empty list items ahead of time, before the users need them, potentially scheduled out of hours.
    The users can then add the details to those list items as needed.

  • Thank you so much for the suggestions! 

  • rohan.deo
    edited July 7

    Thank you for sharing this article.

    I am confused about why list pre-allocation is done on the Roles → Lists tab in the Users section. My understanding was that this tab was more about list security than performance. If I wanted to control which users could access lists, this tab is how I would do it. I do not understand how the model role functionality ties into the list pre-allocation concept. Thank you!

  • @rohan.deo It's not an obvious way to set this feature. It was done so that it would be transparent for most people; they don't need to set it. The thinking was that you would normally set a role that has write access to a list, and therefore the list may benefit from pre-allocation if it is going to be written to.

  • @MarkWarren - That logic makes sense! I appreciate the clarification.