-
How to maintain complex Anaplan environments — Part 2: Technical foundations for scale
Author: Piotr Weremczuk is a Certified Master Anaplanner and FinSys Application Specialist at EQT.
In the first part of this two-part article, I explored the non-technical foundations of maintaining complex Anaplan environments: leadership, governance, accountability, and the importance of building the right team. All of that came from my ten years of working with Anaplan.
Now, in this second part, I want to shift focus to the technical side: the tools, practices, and architectural decisions that make day-to-day maintenance smoother, more predictable, and far more scalable.
If the first part was about laying a stable foundation, this part is about the practical mechanics that solution architects and model builders rely on every day. These are the elements that turn a theoretically strong setup into a reliably functioning ecosystem.
Architecture starts early, and it starts from above
Even on the technical front, everything begins surprisingly early.
In Part 1, I wrote about the importance of having a leader with vision — someone who pushes the organization to evolve and sees beyond the first model. The same applies technically.
A skilled solution architect (or even better, a “Master Architect”) must look at the environment from above, not from within. Someone needs to own the blueprint: the data landscape, the model interconnections, the integration patterns, and the tools wrapped around Anaplan.
Personally, I’ve always found clarity through drawing.
Whether it's Lucidchart, Draw.io, or anything that lets you sketch system architecture, having a visual representation of your full ecosystem is invaluable. When you lay it all out — the current structure, the desired future state, and everything in between — gaps reveal themselves. Dependencies become clearer. Priorities almost arrange themselves.
My thinking shifted dramatically when I was first exposed to architectural frameworks, like TOGAF for example. You don’t need to become an enterprise architect, but a basic understanding of these methodologies teaches you to think differently: in layers, in transitions, in future states.
And in a complex Anaplan landscape, that “bird’s-eye view” is what keeps everything coherent and ensures the platform adheres to connected planning principles.
Automation: The great multiplier
If there is one technical topic I would emphasize above all else, it is automation.
Today’s Anaplan ecosystem is rich with tools that simplify orchestration, but it wasn’t always that way. I still remember the days before CloudWorks, before ADO integrations, even before ALM. We spent countless hours running manual imports, deploying changes manually, and tracking errors after the fact instead of as they happened.
Thankfully, those days are behind us.
ALM: the non-negotiable
If you follow the recommended Dev → Test → Prod setup, ALM is already at the heart of your process. If it isn’t — that’s your homework. Proper ALM structures are what make controlled development possible, especially in large environments with multiple parallel workstreams.
CloudWorks: simple, native
CloudWorks has become one of those indispensable tools even for organizations that don’t use AWS, Azure, or GCP.
Its value is in its simplicity: native scheduling, easy configuration, built-in monitoring, and the ability to push alerts through email or even a Slack channel. It immediately adds value and effortlessly enables automation in Anaplan.
External ETLs: essential for scale
Then there are the heavy-duty engines: native ADO or external ETLs, whatever the organization already owns.
A proper ETL layer is not a luxury; it is a necessity.
Yes, you can survive with Anaplan Connect or manual imports. But you will never scale with them.
Most delays and failures I have seen in Anaplan projects were rooted in data issues. A robust ETL not only moves data; it monitors, cleans, transforms, and audits it. That reliability is what allows Anaplan environments to grow without collapsing under their own weight.
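To make the "not just moving data" point concrete, here is a minimal, hypothetical sketch of the kind of pre-load audit step a proper ETL layer adds before anything reaches an Anaplan import action. The field names are illustrative, not from any specific tool:

```python
def audit_extract(rows, required_fields):
    """Pre-load audit: split rows into clean and rejected based on required fields.

    Rejected rows can be logged and investigated up front, instead of
    surfacing as a failed or silently wrong Anaplan import after the fact.
    """
    clean, rejected = [], []
    for row in rows:
        if all(row.get(f) not in (None, "") for f in required_fields):
            clean.append(row)
        else:
            rejected.append(row)
    return clean, rejected

rows = [
    {"sku": "A-100", "qty": 5},
    {"sku": "", "qty": 2},  # missing SKU: rejected, never imported
]
clean, rejected = audit_extract(rows, required_fields=["sku", "qty"])
# → 1 clean row, 1 rejected row
```

Real ETL platforms bundle this kind of validation, plus transformation and audit trails, out of the box; the point is simply that data gets checked before Anaplan ever sees it.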
Automation beyond imports
One recurring challenge I’ve encountered is giving business users the ability to trigger external processes without relying on IT each time. Exposing a simple webhook on a dashboard can fundamentally change how teams interact with the wider architecture. Suddenly, users can launch complex, multi-system workflows with a single click. Once this foundation is in place, integrations become far more accessible: connecting Anaplan to tools like Workato, for example, turns into a straightforward exercise. And from there, the automation possibilities across your tech stack expand rapidly.
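To illustrate the pattern, here is a minimal sketch of the receiving side of such a webhook. The workflow names and payload shape are hypothetical; in practice the endpoint sits behind your iPaaS or a small service, and the dashboard button simply POSTs to it:

```python
import json

# Hypothetical registry of downstream workflows a dashboard button can trigger.
WORKFLOWS = {
    "refresh_actuals": lambda params: f"refresh_actuals started for {params.get('model', 'all models')}",
    "push_to_erp": lambda params: "push_to_erp queued",
}

def handle_webhook(raw_body: str) -> dict:
    """Parse a webhook payload and dispatch the requested workflow."""
    payload = json.loads(raw_body)
    name = payload.get("workflow")
    if name not in WORKFLOWS:
        return {"status": "error", "message": f"unknown workflow: {name}"}
    return {"status": "ok", "message": WORKFLOWS[name](payload.get("params", {}))}

handle_webhook('{"workflow": "refresh_actuals", "params": {"model": "FP&A"}}')
# → {'status': 'ok', 'message': 'refresh_actuals started for FP&A'}
```

The dispatcher itself is trivial; the value is that business users get a single, safe entry point into multi-system workflows without touching anything behind it.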
When you integrate Anaplan this way, it stops being a standalone application and becomes the orchestrating center of your organization’s planning architecture.
Scaling access management through automation
As environments grow, so does the complexity of user management.
Hundreds, or even thousands, of users across multiple workspaces quickly turn into a labyrinth of manual checks, outdated permissions, and forgotten roles.
Automation is, once again, the solution.
Using the SCIM API to synchronize users, or creating a custom tool that consolidates exported user lists, makes license oversight far more manageable. Automated reporting of roles, workspace activity, and last login dates is essential.
Without these controls, organizations inevitably pay for unnecessary licenses or maintain old access assignments long after the users have stopped participating in the processes.
Well-designed access management automation not only protects the budget; it also safeguards security, compliance, and operational clarity.
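As an illustration of the consolidation-tool idea, here is a minimal sketch that flags stale access from exported user lists. The field names and the 90-day threshold are assumptions for the example, not Anaplan defaults:

```python
from datetime import date, timedelta

def flag_stale_users(users, today, max_idle_days=90):
    """Flag users whose last login is missing or older than the idle threshold.

    `users` is assumed to be a consolidated list built from workspace user
    exports, e.g. [{"email": ..., "last_login": date or None}, ...].
    """
    cutoff = today - timedelta(days=max_idle_days)
    return [
        u["email"]
        for u in users
        if u["last_login"] is None or u["last_login"] < cutoff
    ]

users = [
    {"email": "active@corp.com", "last_login": date(2025, 11, 1)},
    {"email": "idle@corp.com", "last_login": date(2025, 1, 15)},
    {"email": "never@corp.com", "last_login": None},
]
flag_stale_users(users, today=date(2025, 11, 15))
# → ['idle@corp.com', 'never@corp.com']
```

The same report, scheduled daily, turns license cleanup from a quarterly scramble into routine housekeeping.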
Best practices: Small habits with massive long-term impact
This chapter could easily be its own standalone guide. Best practices are often framed as something for junior Anaplanners, but the truth is that they protect senior teams just as much. They are the invisible scaffolding that keeps models maintainable years after they are built.
Clean builds, consistent naming conventions, and a logical DISCO structure all contribute to clarity. But there is something even more important: discipline.
Discipline to remove testing line items when you’re done.
Discipline to delete unused imports.
Discipline to keep your structure understandable not just today, but years from now.
Over time I’ve collected small tricks that may not appear in official documentation but make a huge difference in practice:
* Creating dummy actions to act as separators in the Actions tab
* Naming data sources to reflect actual integration names
* Using notes to document unexpected logic or hidden dependencies
* Marking certain backend elements (DCA, conditional formatting, filters, deletion logic) with subtle emoji identifiers
(Despite Anaplan’s caution about emojis, I’ve never found them problematic for backend work.)
These are small touches, but across dozens of models, they create an ecosystem that is intuitive, self-explanatory, and easy for new team members to adopt. And for leaders, enforcing these practices is one of the simplest ways to reduce long-term maintenance risks.
Bringing it all together
There is no single trick that magically makes an Anaplan environment easy to maintain. Instead, it is a combination of structural thinking, strategic automation, disciplined development, and architectural clarity.
The list in this article is not exhaustive — Anaplan evolves too quickly for any list to stay complete for long — but these are the elements I’ve consistently found to have the greatest impact.
And they work.
Today, my colleague and I maintain an environment with seven workspaces, more than ten use cases, dozens of active models, and hundreds of users. Not only do we keep it stable, we also have enough capacity to expand into new processes at the same time.
That is the power of thoughtful setup, automation, and discipline.
Thank you for reading.
And if you missed it, Part 1 explores the organizational and human aspects of maintaining complex Anaplan environments; the foundations that make all of this technical work truly effective.
Questions or comments?
-
How I Built It: Number format converter (thousands and millions)
Hello Anaplanner Community! I’m excited to participate in another ‘How I Built It’ video with a Number Format Converter (thousands and millions) tutorial.
This video walks you through how to dynamically update UX grid number formats on all tables and charts, letting users apply a different number format to each line item.
Key features:
* Users can choose which line items they would like to see numbers in thousands or millions.
* Allow users to easily see larger values in small Anaplan UX grids.
* You can apply this design to any line item you would like.
* Check out my idea in the Idea Exchange and upvote: Number scale line item format.
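In Anaplan this is typically achieved with text-formatted line items driven by a user selection; purely to illustrate the scaling logic itself, here is a hypothetical sketch in Python:

```python
def scale_value(value: float, scale: str) -> str:
    """Render a number in thousands or millions per a user-selected scale."""
    if scale == "Millions":
        return f"{value / 1_000_000:,.1f}M"
    if scale == "Thousands":
        return f"{value / 1_000:,.1f}K"
    return f"{value:,.0f}"

scale_value(12_345_678, "Millions")  # → '12.3M'
scale_value(9_500, "Thousands")      # → '9.5K'
```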
https://play.vidyard.com/dDXJDFqms1HSkfNC2aG7Jc
I have another ‘How I Built It’ tutorial on dynamic month, quarter, and year filters here.
All the 'How I Built It' tutorials can be found here.
About the Author: Arjun Gandhi is a Co-Founder and Certified Master Anaplanner at Tekplanit and has been in the Anaplan ecosystem for 8+ years. He has deployed hundreds of applications across 16+ industries for finance, supply chain, and sales use cases.
-
Interviewing and onboarding Anaplanners
Author: Andrew Barnett is a Certified Master Anaplanner and Vice President at PJT Partners.
Having worked at several firms in the Anaplan ecosystem, both on the partner side and as a customer, I’ve seen firsthand how critical it is to hire and develop the right Anaplan talent. Bringing an experienced Anaplanner onto your team and successfully onboarding new model builders are crucial steps in growing an Anaplan capability. In this post, I’ll share personal insights on what I’ve seen work (and not) in interviewing experienced Anaplanners and in training up new ones from scratch.
When interviewing candidates with Anaplan experience, I focus on three key areas: technical skills, relevant experience, and personality/culture fit. Covering all three gives a more accurate view of the candidate’s suitability for the role and the team.
Technical assessment: In my experience, technical interviews for Anaplan roles usually take one of three forms: a take-home modeling exercise, a knowledge test (written or verbal Q&A), or a live problem-solving session. Each has pros and cons, but the live exercise tends to be the most revealing.
Experience: Beyond technical ability, I ask about the candidate’s Anaplan project experience. What types of models have they built, and in what business areas? What was their role in those projects? This helps me gauge depth of practical knowledge and whether their background aligns with our needs.
Personality/team fit: Anaplan modeling is collaborative — model builders work closely with end users, stakeholders, and other Anaplanners. I look for strong communication skills, a problem-solving mindset, and a constructive, low-ego approach. A few targeted behavioral questions often provide a clear signal on how they’ll show up day-to-day.
Of the technical assessment methods listed above, the live problem-solving exercise has given me the best insight into a candidate’s capabilities. There’s nothing like watching someone tackle an Anaplan problem in real time to reveal their true skill level.
For this, I’ll prepare a simplified real-world scenario and ask the candidate to troubleshoot it with me live. As they work through it, I observe how they navigate the model, isolate the issue, and explain the reasoning behind each step.
This approach shows how a candidate thinks on their feet. Strong candidates will methodically identify assumptions, test hypotheses quickly, and keep the end-user outcome in mind. I’ve seen highly certified candidates struggle in a hands-on test, while others with fewer credentials excel, reinforcing my belief that performance in a live exercise matters more than badges alone. If you can include a live exercise in your hiring process, I highly recommend it; it’s the closest proxy for real work you’ll find in an interview.
Skilled Anaplanners are in high demand, so many teams will need to grow their own talent. Whether you’re upskilling an internal employee or hiring someone new to Anaplan, a structured onboarding program is critical. The best approaches I’ve seen combine Anaplan’s learning resources with realistic internal simulations.
I’ve seen two firms handle it particularly well:
* “Basics + Project” approach (Akili): Early in my career, before today’s structured training ecosystem existed, new model builders started with foundational Anaplan training to cover the essentials, followed by a sample project. In this sample project, new hires received data files and business requirements that resembled a client use case and were asked to build a simple model to meet those needs. After a short build period, they presented their solution to the team, walking through why they made the design decisions they did. This was an incredibly effective way to accelerate learning and build confidence. It also gave managers a practical view of who was ready for more complex work and who needed additional support.
* Comprehensive blended program (Allitix): Years later, I saw an even stronger approach that intentionally fused Anaplan’s structured learning path with internal simulations. The agenda included the formal Anaplan certification track alongside other important Anaplan courses, followed by a sample project. What I appreciated most was that this program wasn’t just for entry-level model builders. It also included more advanced sample projects for experienced hires and people looking to move into more senior roles. That type of tiered development is rare, and it’s a powerful way to create a consistent bar for progression while keeping high performers engaged.
The common thread between these successful programs is the marriage of theory and practice. Formal training gives you the vocabulary, patterns, and best practices. Hands-on simulations make you apply that knowledge.
This mirrors how people learn to code: the fastest growth happens when you build something real that matters. The same is true in Anaplan. You can understand model design principles conceptually, but you only internalize them when you wrestle with real data, tradeoffs, and stakeholder expectations.
Investing in thoughtful interviewing and onboarding for Anaplanners pays off. When hiring experienced talent, go beyond standard Q&A and check how they solve problems in the moment. When building new talent, pair Anaplan’s learning resources with structured, real-world simulations that reflect the work your team actually does.
In my experience, teams that get these two processes right build stronger models, earn trust faster, and scale their Anaplan capabilities with far less friction.
Good luck and happy planning!
-
3.4.3 Activity: Create Country Summary Module
Hi,
I have a doubt about Level 2 Sprint 3, 3.4.3 Activity: Create Country Summary Module.
I did the following:
Step 1 → Created a new column "Country Made in" in SYS08 SKU details
Step 2 → Wrote the formula PARENT('Supplied By') to bring the parent of the location (i.e. the country) into the "Country Made in" column
Step 3 → I have written the formula for safety stock flag count as 'INV01 Inventory Ordering'.Safety Stock Exception Count[SUM: 'SYS08 SKU Details'.Product Family, SUM: 'SYS08 SKU Details'.Country Made in]
My doubt is that in the image below, P3 SKU → P1 Product Family shows auto aggregation, so should I use SUM to aggregate in my formula or not?
If SUM is not to be used, I am confused, because I have learnt that if there is a many-to-one relationship and the source and target dimensions are not the same, then SUM should be used.
Is there a problem in my understanding somewhere? Please guide.
Thanks in advance.
-
How I Built It: User filters with variable hierarchy properties
Author: Erik Svensson is a Certified Master Anaplanner and a Principal Solution Architect at Anaplan.
Hello Anaplan Community!
Thank you for checking out my ‘How I Built It’ tutorial. In this video, I demonstrate a powerful technique for creating dynamic user filters. This solution gives your end-users the flexibility to filter a dimension by different attributes on the fly.
A great example is fashion assortment planning. In the “Tops” category a planner needs to filter by Neckline, while in “Footwear” a planner needs to filter by Upper Material. This model allows each user to select the specific attributes they want to filter by, providing a customized and highly flexible experience.
This is especially important in fast-moving industries where trends change quickly, and a one-size-fits-all filtering approach is too restrictive.
Key features:
* User-specific dynamic filtering
* Flexible attribute selection per user
Check it out and drop in a comment if you have any questions!
https://play.vidyard.com/ErQqG4YvqLwZTVhXm1uHYS
-
Modulation in Anaplan
Author: Arun Thakar, Vice President in the banking industry.
In cases where you have a single DEV model and multiple TEST and PROD models, the situation may arise where one user group asks for a feature that is not required by other users in different models. All too often the answer is to build modules, lists, and logic to support the requesting group, which wastes memory in models not using the feature. What if I told you that there is a way to turn modules on or off, saving space and preventing your model from turning into a Frankenstein?
That method is called “Modulation”.
The premise of modulation is that the cell count of the modules behind a feature can collapse to zero when the feature is not in use, while features in use can be enabled to calculate in a model. This article shows how to set up modulation in your Anaplan models.
How does Modulation work?
Modulation uses production lists to manage cell count in an Anaplan model. The group of modules that makes up a feature would all be dimensioned by an additional production list (Modulator List A in the example below). If there is a second feature that the architect of a model would like turned off, the modules associated with Feature B would all be dimensioned by Modulator List B, and there would be zero list items in this list, which would cause the cell count of the Feature B modules to be zero.
Because the modules in Feature A or B all have an additional dimension, a simple data transformation using a LOOKUP formula can be leveraged to pull data out of the enabled feature and feed downstream modules.
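The cell-count arithmetic behind this is worth seeing explicitly. A quick sketch (the dimension sizes are purely illustrative):

```python
def module_cell_count(dimension_sizes):
    """A module's cell count is the product of its dimension sizes."""
    count = 1
    for size in dimension_sizes:
        count *= size
    return count

# Feature modules dimensioned by Products (1,000) x Months (24) x Versions (10)
module_cell_count([1000, 24, 10])     # → 240000
# Add a modulator list with one item: feature enabled, size unchanged
module_cell_count([1000, 24, 10, 1])  # → 240000
# Empty the modulator list: feature disabled, module collapses to zero cells
module_cell_count([1000, 24, 10, 0])  # → 0
```

Because the modulator is a production list, emptying or filling it is a data action in the deployed model, not a structural change requiring an ALM sync.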
Using a UX wizard to enable or disable features
To set up this architecture it may make sense to build a quick UX where an administrator enables the feature for the first time. On a UX page an admin can select which features they wish to turn on or off and create a process which imports unique values into one or more production lists.
Now that you have an idea of how modulation works, feel free to give it a try in your model. The use case of one cluttered DEV model that serves multiple TEST and PROD models is a great place to start. Also, please remember that if you employ this in an established deployed model, there may be some data loss, because you are changing the dimensionality of modules.
Questions? Leave a comment!
-
How to be frugal when building inventory rollover
Author: Vinay Varadaraj Mirajkar is a Certified Master Anaplanner and Senior Solution Architect at Anaplan.
Imagine you are modeling inventory projection in the classic engine with the assumptions below:
* Large number of products and locations
* Planning horizon with daily buckets (let’s say 60 days)
This is very common in use cases such as production planning where planning at daily granularity is necessary, albeit for a short planning horizon.
Let’s consider a simple example to illustrate the use case:
* There would be ‘On Hand’ inventory for a certain combination of Product x Location (this could be raw material, finished goods, etc.). This would be fetched into Anaplan as part of the source data.
* Consumption would be projected usage of that item for the future planning horizon. This would come from a previous planning step in the overall process.
Now, our goal is to project the inventory into the future periods by considering the On Hand inventory and consumption of each day.
Easy, quick, but dirty solution
A very easy approach that could be taken is shown below:
Here, the opening inventory for 1st Nov comes from source data (On Hand inventory) as shown below:
Closing inventory = Opening Inventory – Consumption
Opening inventory for 2nd Nov = PREVIOUS(Closing Inventory)
However, given the large number of products and locations at daily granularity, the model size could become very large, because this construct needs at least two years of timescale in daily buckets (imagine you are standing on 31st Dec 2025 and need to project into 1st Jan 2026; a native timescale covering 2025 and 2026 amounts to 730 days, excluding any summaries).
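To make the recurrence itself concrete, here is the same logic sketched outside Anaplan (the figures are illustrative):

```python
def project_inventory(on_hand, consumption):
    """Roll inventory forward, one bucket at a time.

    opening(t) = on_hand for the first bucket, else closing(t - 1)
    closing(t) = opening(t) - consumption(t)
    """
    opening, closing = [], []
    for t, used in enumerate(consumption):
        opening.append(on_hand if t == 0 else closing[t - 1])
        closing.append(opening[t] - used)
    return opening, closing

opening, closing = project_inventory(on_hand=500, consumption=[50, 80, 40])
# opening → [500, 450, 370]
# closing → [450, 370, 330]
```

The logic is trivial; the expensive part in Anaplan is the 730-day native timescale it forces onto every Product x Location combination.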
The alternative
We can achieve the same results using a custom timescale (with some intermediate transformation with native time) using just the number of planning buckets needed, which in this case is 60 days.
Now, let’s explore the steps needed to build it.
Step 1: As a first step, we need a system module which maintains current date, which should be updated on a daily basis:
Step 2: In this step, we build a custom time list with below configuration:
* Numbered list with just the number of periods needed for planning (in this case 60)
* Display Name as a date, which is connected with current date so it is dynamic
This list will represent our planning horizon required for the use case:
Step 3: The next step is to create a time range with 60 periods (5 years x 12 months), as shown below. The purpose of this time range is to use it in an intermediate module where we can apply the PREVIOUS function.
Step 4: We then need to create mappings between custom time and Anaplan months so we can use these to perform LOOKUP operations when necessary:
Mapping 1: Anaplan months to custom time
Mapping 2: Custom time to Anaplan months
Step 5: Now, let’s create the Inventory rollover module using custom time, with the respective line items as shown below:
Note:
* We import the consumption data directly into this module
* Also, opening inventory and closing inventory line-items are blank at this stage
Step 6: We then create another similar-looking module, but with the time range ‘Anaplan months’. This is where we do the magic:
* Consumption: We fetch this from CAL06 using the time mapping we created.
* Next, we know the starting period of this timescale, which helps us bring the On Hand data into the Opening inventory line item using the DAT04 module.
* We then subtract the consumption to get closing inventory.
* Closing inventory then becomes ‘opening inventory’ of the next day.
* In order to do this, we use the PREVIOUS function as this module has a native timescale.
The blueprint for this module is as shown below:
Note: Observe the Opening Inventory formula.
And below is the rolling inventory we were looking for:
Step 7: The last step is to take these results back to the initial inventory module (CAL06) using the mappings as follows…
Below is the blueprint for reference:
And below is the result we had been looking for:
In this way, we can build an inventory rollover model that avoids a large number of unnecessary cells and does not consume a huge amount of workspace.
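The savings are easy to quantify. With hypothetical volumes (5,000 products, 50 locations, 3 line items), comparing a 730-day native timescale against the 60-period custom list:

```python
def cells(products, locations, periods, line_items):
    """Cell count of a Product x Location x Time module."""
    return products * locations * periods * line_items

native = cells(5000, 50, 730, 3)  # two full years of daily buckets
custom = cells(5000, 50, 60, 3)   # 60-period custom planning horizon
savings = 1 - custom / native
# native → 547500000, custom → 45000000, savings ≈ 0.92
```

The intermediate monthly module and the two mapping modules add back a small amount of space, but nothing close to what the daily native timescale would cost.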
Questions? Leave a comment!
-
How I Built It: Dynamic driver-based planning
Author: Chris Allen is a Certified Master Anaplanner and Manager at Allitix part of Accenture.
Hi Anaplan Community!
This ‘How I Built It’ video shares a dynamic feature to toggle between different planning methodologies corresponding to different lines on the P&L for driver-based planning.
It's a feature that can be applied to many different scenarios where a list item has line items unique to it, relative to its peers in the same list, so that only the related line items are shown for the selected list item. The upside is a decluttered dashboard that fits well on one screen without much searching or scrolling.
Check it out and leave a comment with questions!
https://play.vidyard.com/qoByxyE7BZKSVJJzr62yEN
Check out my other ‘How I Built It’ videos:
* How I Built It: Replacing list items
* How I Built It: Flagging new list items