Anaplan deployment at enterprise scale


Anaplan design decisions always revolve around optimizing and balancing four factors:

  1. Performance
  2. User interface
  3. Simplicity
  4. Administrative burden

All else being equal, these four factors should be weighted roughly equally. When working with global deployments, large data sets, and diverse user groups, a vital mechanism to leverage is Application Lifecycle Management (ALM).

How ALM drives scale

While ALM is ubiquitously used to manage a standard development cycle of Dev > Test > Prod, it has lesser-known features that allow a powerful degree of customization and balance while maintaining standardization. We will explore why ALM helps in these situations, the factors to consider, and exactly which pieces allow us to move at scale.

The fundamental challenge, shared by standard and enterprise deployments alike, is how to provide the right users with the right data, the right functionality, and the right interface with the right performance. The challenge unique to enterprise deployments is the proportionally larger volume of data and users, and often the broader set of functions that need to be performed. While the standard design process dictates that functions are split at a macro level (e.g. financial planning and analysis happens in a different model than territory and quota management), this is often insufficient to achieve the desired user experience and performance level. This is where ALM comes in.

ALM allows for scale through one simple characteristic: a single development model can support multiple deployed models. This immediately reduces administrative burden, as admins support one development model instead of one instance per deployed model. While this is commonly done to support test and production models, it also allows different data sets to be used within the same 'test' or 'production' workspaces: structurally identical models can simply be populated with different data sets.
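To make the topology concrete, here is a purely conceptual sketch in Python (Anaplan exposes no such objects; every name below is invented for illustration): one development model carries the structure, and each deployed model syncs that structure while holding its own data set.

```python
from dataclasses import dataclass

@dataclass
class DevelopmentModel:
    """Single source of structural truth; changes ship as revision tags."""
    name: str
    revision: str  # latest revision tag, e.g. "R42: add approval module"

@dataclass
class DeployedModel:
    """Structurally identical to its dev model, populated with its own data."""
    name: str
    source: DevelopmentModel
    data_set: str
    synced_revision: str = ""

    def sync(self) -> None:
        # In Anaplan this is an ALM sync; here it just copies the tag.
        self.synced_revision = self.source.revision

dev = DevelopmentModel("Sales Planning DEV", "R42")
fleet = [DeployedModel(f"Sales Planning PROD {region}", dev, data_set=region)
         for region in ("Region 1", "Region 2", "Region 3")]
for model in fleet:
    model.sync()  # one dev model keeps every deployed instance in lock-step
    print(model.name, "->", model.synced_revision)
```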


Determining what is needed

Before we discuss what this looks like in practice, it is important to evaluate whether a split is needed at all. The fewer models needed, the better; every additional model adds maintenance burden. Our objective is to minimize the number of models while maximizing user experience and performance. There are hard factors and soft factors to consider. Hard factors are things like data size; this alone may make the decision for you if all the relevant data for the functionality cannot fit into a single model (see the sketch below). Softer factors include the level of concurrency and the geographic distribution of your users, the frequency and size of data loads, and anticipated growth in data and functionality.
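To make the data-size hard factor concrete, a rough first check is to multiply list sizes across each module's dimensions and compare the result against your workspace allowance. The sketch below is illustrative only: the list sizes, modules, bytes-per-cell heuristic, and capacity threshold are all assumptions you should replace with your own numbers.

```python
from math import prod

# Hypothetical list sizes (number of items in each list).
LIST_SIZES = {"Accounts": 2_000_000, "Products": 5_000, "Months": 36, "Versions": 3}

# Hypothetical modules, each defined by the lists it is dimensioned by.
MODULES = {
    "Account Detail": ["Accounts", "Months"],
    "Revenue Plan": ["Accounts", "Products", "Months", "Versions"],
}

BYTES_PER_CELL = 8              # rough sizing heuristic; verify for your workspace
CAPACITY_BYTES = 130 * 1024**3  # assumed workspace allowance; use your own limit

total_cells = sum(prod(LIST_SIZES[d] for d in dims) for dims in MODULES.values())
total_bytes = total_cells * BYTES_PER_CELL

print(f"Estimated cells: {total_cells:,}")
print(f"Estimated size:  {total_bytes / 1024**3:,.1f} GB")
if total_bytes > CAPACITY_BYTES:
    print("Data alone exceeds the single-model allowance: a split is forced.")
```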

Once you determine that a model split is needed, the next step is to determine how many models are needed and what comprises each one. The vast majority of model splits will be by data set and/or user base. Before venturing too far down the split-model path, it is important to confirm that this approach will work. There are situations where it makes more sense to maintain separate development models because the business processes differ too greatly. While differences in business processes commonly arise between business units and regions of the same company, most of the time these can be reconciled within a single model. We will not explore the full extent of what is needed here, but will summarize by saying that you should do everything you can to avoid having a model within a model (i.e. trying to maintain two conceptual models within the same Anaplan model).

Put it into practice

Once you have decided on the right determinant for splitting the model (by BU, region, etc.), the next step is to operationalize it. There are three main pieces that will drive this setup: utilizing an Import Base Model, production data sources, and toggle lists.

There are two ways Anaplan tracks and maps sources for data imports: at the model level (image i) and the data source level (image ii).

image i

image ii

In reality, they are both at the data source level; the model level simply allows mapping to be done en masse. Data sources are anything that drives imports: files, saved views, modules, and lists. Here we are working only with saved views.
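If you want to inspect these mappings outside the UI, a model's import definitions (including their data sources) can be pulled through Anaplan's Integration API. A minimal Python sketch follows; the authentication flow and endpoint paths reflect the public v2 API as I understand it, so verify them against current Anaplan documentation, and note that the credentials and IDs are placeholders.

```python
import requests

AUTH_URL = "https://auth.anaplan.com/token/authenticate"
API_BASE = "https://api.anaplan.com/2/0"

# Placeholder credentials and IDs -- substitute your own.
USER, PASSWORD = "user@example.com", "********"
WORKSPACE_ID, MODEL_ID = "your-workspace-id", "your-model-id"

def get_token() -> str:
    """Exchange basic-auth credentials for an Anaplan auth token."""
    resp = requests.post(AUTH_URL, auth=(USER, PASSWORD))
    resp.raise_for_status()
    return resp.json()["tokenInfo"]["tokenValue"]

def list_imports(token: str) -> list:
    """Return the model's import definitions, including source metadata."""
    headers = {"Authorization": f"AnaplanAuthToken {token}"}
    url = f"{API_BASE}/workspaces/{WORKSPACE_ID}/models/{MODEL_ID}/imports"
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    return resp.json().get("imports", [])

if __name__ == "__main__":
    token = get_token()
    for imp in list_imports(token):
        print(imp)  # raw metadata for each import and its data source
```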

The most straightforward path is to simply mark import data sources as 'production,' which allows the same import to be mapped to different sources in the deployed models. This means you could have a model that supports Region 1 and another that supports Region 2, with the first model pointing to a saved view in your data hub that is filtered to Region 1 data, while the second model is mapped to a different (but structurally identical) saved view that is filtered to Region 2 data. If only a small number of imports require this, this is the best path to follow.
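In practice this means one script can trigger the same import in every deployed model, with each model's production data source already pointed at its own region-filtered view. The hedged sketch below assumes the import keeps the same ID across structurally identical deployed models (ALM sync preserves structural objects); as before, the IDs are placeholders and the endpoints should be verified against current Anaplan documentation.

```python
import requests

AUTH_URL = "https://auth.anaplan.com/token/authenticate"
API_BASE = "https://api.anaplan.com/2/0"
WORKSPACE_ID = "your-workspace-id"
IMPORT_ID = "your-import-id"  # assumed identical in each deployed model

# Each deployed model's production data source is already pointed at its
# own region-filtered saved view in the data hub.
DEPLOYED_MODELS = {"Region 1": "model-id-1", "Region 2": "model-id-2"}

def run_import(token: str, model_id: str) -> dict:
    """Kick off the same import task in one deployed model."""
    headers = {"Authorization": f"AnaplanAuthToken {token}"}
    url = (f"{API_BASE}/workspaces/{WORKSPACE_ID}/models/{model_id}"
           f"/imports/{IMPORT_ID}/tasks")
    resp = requests.post(url, headers=headers, json={"localeName": "en_US"})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = requests.post(AUTH_URL, auth=("user@example.com", "********"))\
        .json()["tokenInfo"]["tokenValue"]
    for region, model_id in DEPLOYED_MODELS.items():
        print(region, run_import(token, model_id))
```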

However, if you have a greater number of models to support that require data transfer between them, a different strategy allows for more rapid updates: utilizing an 'Import Base Model' in the development environment (as seen in image i). The 'Source Models' setting allows for quick mapping updates when there are different development models (e.g. importing into a planning model from the data hub), but there is no provision for re-mapping data sources that originate from the same development model. There are several reasons this could be needed; a simple example is transferring customer accounts between models for assignment to sales representatives. Both models perform the same function (assigning accounts to sales reps) and thus share the same development model, but there are multiple deployed instances of the model.
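Conceptually, the Import Base Model stands in as a generic source during development, and each deployed instance re-points that source to its real peer. The hypothetical mapping below (all names invented) illustrates the bookkeeping this pattern requires:

```python
# Hypothetical mapping (all names invented): each deployed assignment
# model imports account transfers from its regional peer, even though
# every instance shares one development model, where a generic
# 'Import Base Model' stands in as the source.
SOURCE_MAP = {
    # deployed model          <- deployed source model
    "Account Assignment R1": "Account Assignment R2",
    "Account Assignment R2": "Account Assignment R1",
}

def resolve_source(deployed_model: str) -> str:
    """Replace the dev-time 'Import Base Model' with the real peer source."""
    return SOURCE_MAP[deployed_model]

for target in SOURCE_MAP:
    print(f"{target} imports from {resolve_source(target)}")
```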

Separately, there may be some instances where there are truly different functions that need to be performed in the separate deployed models; for example, one region may have an approval process where other regions do not. If this were the only difference, it would not make much sense to have a completely separate development model to support it, because then the administrators need to maintain all of the other modules, lists, imports, and so on in parallel. The primary way to achieve this is to be highly precise with the population of lists in the model. This will allow for the creation of functional modules that can be turned on and off based on the needs of the region.

This strategy can be used for granular fine-tuning of functionality where needed, including turning it 'off' wherever the lists are not populated, as sketched below.
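The mechanics rest on the fact that a module dimensioned by an empty list has zero cells, so the functionality simply never surfaces. The sketch below mimics that behavior; the list contents and model names are invented for illustration.

```python
# Hypothetical per-deployed-model list populations. The module exists
# everywhere (the structure is identical), but only populated lists
# produce any cells -- and therefore any visible functionality.
APPROVAL_STEPS = {
    "EMEA PROD": ["Manager Review", "Finance Sign-off"],  # approvals 'on'
    "APAC PROD": [],                                      # approvals 'off'
}

def approval_cells(model: str, line_items: int = 4) -> int:
    """Cell count of the approval module in one deployed model."""
    return len(APPROVAL_STEPS[model]) * line_items

for model in APPROVAL_STEPS:
    cells = approval_cells(model)
    state = "active" if cells else "dormant (empty toggle list)"
    print(f"{model}: {cells} cells -> approval process {state}")
```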

Woven throughout all of this is the criticality of a sound governance model. The above is only sustainable if it is well organized and clearly communicated. Each of these strategies demands greater rigor in the model update process, but implemented successfully, they give your organization greater capacity to scale without over-burdening your team with model administration.


Spaulding Ridge has deployed Anaplan across sales, finance, supply chain, and more, in several industries and at the largest scale. Reach out to Aaron Overfors (aoverfors@spauldingridge.com) for more information.

Comments

  • The trick is getting BUs to agree that their processes really are close enough, or that the differences don't matter.

    How many different answers does 1+1 give you, after all?