-
How to maintain complex Anaplan environments — Part 2: Technical foundations for scale
Author: Piotr Weremczuk is a Certified Master Anaplanner and FinSys Application Specialist at EQT.
In the first part of this two-part article, I explored the non-technical foundations of maintaining complex Anaplan environments: leadership, governance, accountability, and the importance of building the right team. All of that came from my ten years of working with Anaplan.
Now, in this second part, I want to shift focus to the technical side: the tools, practices, and architectural decisions that make day-to-day maintenance smoother, more predictable, and far more scalable.
If the first part was about laying a stable foundation, this part is about the practical mechanics that solution architects and model builders rely on every day. These are the elements that turn a theoretically strong setup into a reliably functioning ecosystem.
Architecture starts early, and it starts from above
Even on the technical front, everything begins surprisingly early.
In Part 1, I wrote about the importance of having a leader with vision — someone who pushes the organization to evolve and sees beyond the first model. The same applies technically.
A skilled solution architect (or even better, a “Master Architect”) must look at the environment from above, not from within. Someone needs to own the blueprint: the data landscape, the model interconnections, the integration patterns, and the tools wrapped around Anaplan.
Personally, I’ve always found clarity through drawing.
Whether it's Lucidchart, Draw.io, or anything that lets you sketch system architecture, having a visual representation of your full ecosystem is invaluable. When you lay it all out — the current structure, the desired future state, and everything in between — gaps reveal themselves. Dependencies become clearer. Priorities almost arrange themselves.
My thinking shifted dramatically when I was first exposed to architectural frameworks, like TOGAF for example. You don’t need to become an enterprise architect, but a basic understanding of these methodologies teaches you to think differently: in layers, in transitions, in future states.
And in a complex Anaplan landscape, that “bird’s-eye view” is what keeps everything coherent and ensures the platform adheres to connected planning principles.
Automation: The great multiplier
If there is one technical topic I would emphasize above all else, it is automation.
Today’s Anaplan ecosystem is rich with tools that simplify orchestration, but it wasn’t always that way. I still remember the days before CloudWorks, before ADO integrations, even before ALM. We spent countless hours running manual imports, deploying changes manually, and tracking errors after the fact instead of as they happened.
Thankfully, those days are behind us.
ALM: the non-negotiable
If you follow the recommended Dev → Test → Prod setup, ALM is already at the heart of your process. If it isn’t — that’s your homework. Proper ALM structures are what make controlled development possible, especially in large environments with multiple parallel workstreams.
CloudWorks: simple, native
CloudWorks has become one of those indispensable tools even for organizations that don’t use AWS, Azure, or GCP.
Its value is in its simplicity: native scheduling, easy configuration, built-in monitoring, and the ability to push alerts through email or even a Slack channel. It immediately adds value and effortlessly enables automation in Anaplan.
External ETLs: essential for scale
Then there are the heavy-duty engines: native ADO integrations or external ETLs, whatever the organization already owns.
A proper ETL layer is not a luxury; it is a necessity.
Yes, you can survive with Anaplan Connect or manual imports. But you will never scale with them.
Most delays and failures I have seen in Anaplan projects were rooted in data issues. A robust ETL not only moves data; it monitors, cleans, transforms, and audits it. That reliability is what allows Anaplan environments to grow without collapsing under their own weight.
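As a minimal illustration of the “clean and audit” idea, the Python sketch below normalizes a batch of raw records, rejects rows it cannot coerce, and produces a small audit summary. The field names and rules are invented for illustration; a real ETL layer would do far more.

```python
# Sketch of the "clean + audit" part of an ETL step: normalize raw rows,
# reject ones that can't be coerced, and report an audit summary.
# Field names ("account", "amount") are hypothetical.

def clean_rows(raw_rows):
    """Return (clean, rejected, audit) for a batch of raw records."""
    clean, rejected = [], []
    for row in raw_rows:
        try:
            clean.append({
                "account": row["account"].strip().upper(),
                "amount": float(row["amount"]),
            })
        except (KeyError, ValueError, AttributeError):
            rejected.append(row)
    audit = {"input": len(raw_rows), "loaded": len(clean), "rejected": len(rejected)}
    return clean, rejected, audit

raw = [
    {"account": " rev100 ", "amount": "1250.50"},
    {"account": "rev200", "amount": "n/a"},   # bad amount -> rejected
]
clean, rejected, audit = clean_rows(raw)
print(audit)  # {'input': 2, 'loaded': 1, 'rejected': 1}
```

The point is not the cleaning itself but the audit trail: every batch leaves behind counts you can monitor, which is what catches data issues as they happen rather than after the fact.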
Automation beyond imports
One recurring challenge I’ve encountered is giving business users the ability to trigger external processes without relying on IT each time. Exposing a simple webhook on a dashboard can fundamentally change how teams interact with the wider architecture. Suddenly, users can launch complex, multi-system workflows with a single click. Once this foundation is in place, integrations become far more accessible, connecting Anaplan to tools like Workato, for example, turns into a straightforward exercise. And from there, the automation possibilities across your tech stack expand rapidly.
When you integrate Anaplan this way, it stops being a standalone application and becomes the orchestrating center of your organization’s planning architecture.
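As a rough sketch of the webhook idea, the Python snippet below shows the dispatch logic a dashboard button could POST to: it checks a shared secret and routes an “action” field to a registered workflow. Every name here (the secret, the workflow functions) is a hypothetical placeholder, not a real integration.

```python
# Sketch of a webhook dispatcher: a dashboard button POSTs a JSON payload,
# and this handler routes it to the right downstream workflow.
# All names below are illustrative placeholders.

SHARED_SECRET = "replace-with-a-real-secret"

def refresh_actuals():
    # Placeholder for a real integration step (e.g. kicking off an ETL job).
    return "actuals refresh started"

def sync_headcount():
    return "headcount sync started"

# Registry mapping the webhook's "action" field to a callable.
WORKFLOWS = {
    "refresh_actuals": refresh_actuals,
    "sync_headcount": sync_headcount,
}

def handle_webhook(payload: dict):
    """Validate the payload and trigger the requested workflow.

    Returns an (http_status, message) pair so a web server can respond.
    """
    if payload.get("secret") != SHARED_SECRET:
        return 403, "invalid secret"
    workflow = WORKFLOWS.get(payload.get("action"))
    if workflow is None:
        return 400, "unknown action"
    return 200, workflow()
```

In practice you would sit this behind a small web endpoint (or a tool like Workato would play this role entirely); the sketch only shows the dispatch pattern that makes one click fan out into a multi-system workflow.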
Scaling access management through automation
As environments grow, so does the complexity of user management.
Hundreds, or even thousands, of users across multiple workspaces quickly turn into a labyrinth of manual checks, outdated permissions, and forgotten roles.
Automation is, once again, the solution.
Using the SCIM API to synchronize users, or creating a custom tool that consolidates exported user lists, makes license oversight far more manageable. Automated reporting of roles, workspace activity, and last login dates is essential.
Without these controls, organizations inevitably pay for unnecessary licenses or maintain old access assignments long after the users have stopped participating in the processes.
Well-designed access management automation not only protects the budget; it also safeguards security, compliance, and operational clarity.
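As one hedged example of what such a custom tool might do, the sketch below scans an exported user list and flags accounts whose last login is older than a cutoff. The column names and CSV layout are assumptions; adapt them to your actual workspace export.

```python
import csv
import io
from datetime import date, datetime

# Sketch: consolidate an exported user list and flag users whose last login
# is older than a cutoff. Column names ("email", "last_login") are assumed.

STALE_AFTER_DAYS = 90

def stale_users(csv_text: str, today: date):
    """Return emails of users who haven't logged in within the cutoff."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        last_login = datetime.strptime(row["last_login"], "%Y-%m-%d").date()
        if (today - last_login).days > STALE_AFTER_DAYS:
            flagged.append(row["email"])
    return flagged

export = """email,role,last_login
alice@example.com,Workspace Admin,2024-05-01
bob@example.com,Planner,2024-01-15
"""
print(stale_users(export, today=date(2024, 6, 1)))  # ['bob@example.com']
```

Run on a schedule, a report like this is usually enough to reclaim licenses and retire stale roles before they become an audit finding.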
Best practices: Small habits with massive long-term impact
This chapter could easily be its own standalone guide. Best practices are often framed as something for junior Anaplanners, but the truth is that they protect senior teams just as much. They are the invisible scaffolding that keeps models maintainable years after they are built.
Clean builds, consistent naming conventions, and a logical DISCO structure all contribute to clarity. But there is something even more important: discipline.
Discipline to remove testing line items when you’re done.
Discipline to delete unused imports.
Discipline to keep your structure understandable not just today, but years from now.
Over time I’ve collected small tricks that may not appear in official documentation but make a huge difference in practice:
* Creating dummy actions to act as separators in the Actions tab
* Naming data sources to reflect actual integration names
* Using notes to document unexpected logic or hidden dependencies
* Marking certain backend elements (DCA, conditional formatting, filters, deletion logic) with subtle emoji identifiers
(Despite Anaplan’s caution about emojis, I’ve never found them problematic for backend work.)
These are small touches, but across dozens of models, they create an ecosystem that is intuitive, self-explanatory, and easy for new team members to adopt. And for leaders, enforcing these practices is one of the simplest ways to reduce long-term maintenance risks.
Bringing it all together
There is no single trick that magically makes an Anaplan environment easy to maintain. Instead, it is a combination of structural thinking, strategic automation, disciplined development, and architectural clarity.
The list in this article is not exhaustive — Anaplan evolves too quickly for any list to stay complete for long — but these are the elements I’ve consistently found to have the greatest impact.
And they work.
Today, my colleague and I maintain an environment with seven workspaces, more than ten use cases, dozens of active models, and hundreds of users. Not only do we keep it stable — we have enough capacity to expand into new processes at the same time.
That is the power of thoughtful setup, automation, and discipline.
Thank you for reading.
And if you missed it, Part 1 explores the organizational and human aspects of maintaining complex Anaplan environments; the foundations that make all of this technical work truly effective.
Questions or comments?
-
How I Built It: Dynamic driver-based planning
Author: Chris Allen is a Certified Master Anaplanner and Manager at Allitix, part of Accenture.
Hi Anaplan Community!
This ‘How I Built It’ video shares a dynamic feature to toggle between different planning methodologies corresponding to different lines on the P&L for driver-based planning.
It's a feature that can be applied to many different scenarios where a list item has a different set of related line items than its peers in the same list — showing only the line items relevant to the selected list item. The upside is decluttering a dashboard and allowing a good fit on one screen without much searching or scrolling.
Check it out and leave a comment with questions!
https://play.vidyard.com/qoByxyE7BZKSVJJzr62yEN
……………
Check out my other ‘How I Built It’ videos:
* How I Built It: Replacing list items
* How I Built It: Flagging new list items
-
Interviewing and onboarding Anaplanners
Author: Andrew Barnett is a Certified Master Anaplanner and Vice President at PJT Partners.
Having worked at several firms in the Anaplan ecosystem, both on the partner side and as a customer, I’ve seen firsthand how critical it is to hire and develop the right Anaplan talent. Bringing an experienced Anaplanner onto your team and successfully onboarding new model builders are crucial steps in growing an Anaplan capability. In this post, I’ll share personal insights on what I’ve seen work (and not) in interviewing experienced Anaplanners and in training up new ones from scratch.
When interviewing candidates with Anaplan experience, I focus on three key areas: technical skills, relevant experience, and personality/culture fit. Covering all three gives a more accurate view of the candidate’s suitability for the role and the team.
Technical assessment: In my experience, technical interviews for Anaplan roles usually take one of three forms: a take-home modeling exercise, a knowledge test (written or verbal Q&A), or a live problem-solving session. Each has pros and cons, but the live exercise tends to be the most revealing.
Experience: Beyond technical ability, I ask about the candidate’s Anaplan project experience. What types of models have they built, and in what business areas? What was their role in those projects? This helps me gauge depth of practical knowledge and whether their background aligns with our needs.
Personality/team fit: Anaplan modeling is collaborative — model builders work closely with end users, stakeholders, and other Anaplanners. I look for strong communication skills, a problem-solving mindset, and a constructive, low-ego approach. A few targeted behavioral questions often provide a clear signal on how they’ll show up day-to-day.
Of the technical assessment methods listed above, the live problem-solving exercise has given me the best insight into a candidate’s capabilities. There’s nothing like watching someone tackle an Anaplan problem in real time to reveal their true skill level.
For this, I’ll prepare a simplified real-world scenario and ask the candidate to troubleshoot it with me live. As they work through it, I observe how they navigate the model, isolate the issue, and explain the reasoning behind each step.
This approach shows how a candidate thinks on their feet. Strong candidates will methodically identify assumptions, test hypotheses quickly, and keep the end-user outcome in mind. I’ve seen highly certified candidates struggle in a hands-on test, while others with fewer credentials excel, reinforcing my belief that performance in a live exercise matters more than badges alone. If you can include a live exercise in your hiring process, I highly recommend it; it’s the closest proxy for real work you’ll find in an interview.
Skilled Anaplanners are in high demand, so many teams will need to grow their own talent. Whether you’re upskilling an internal employee or hiring someone new to Anaplan, a structured onboarding program is critical. The best approaches I’ve seen combine Anaplan’s learning resources with realistic internal simulations.
I’ve seen two firms handle it particularly well:
* “Basics + Project” approach (Akili): Early in my career, before today’s structured training ecosystem existed, new model builders started with foundational Anaplan training to cover the essentials followed by a sample project. In this sample project, new hires received data files and business requirements that resembled a client use case and were asked to build a simple model to meet those needs. After a short build period, they presented their solution to the team, walking through why they made the design decisions they did. This was an incredibly effective way to accelerate learning and build confidence. It also gave managers a practical view of who was ready for more complex work and who needed additional support.
* Comprehensive blended program (Allitix): Years later, I saw an even stronger approach that intentionally fused Anaplan’s structured learning path with internal simulations. The agenda included the formal Anaplan certification track alongside other important Anaplan courses, followed by a sample project. What I appreciated most was that this program wasn’t just for entry-level model builders. It also included more advanced sample projects for experienced hires and people looking to move into more senior roles. That type of tiered development is rare, and it’s a powerful way to create a consistent bar for progression while keeping high performers engaged.
The common thread between these successful programs is the marriage of theory and practice. Formal training gives you the vocabulary, patterns, and best practices. Hands-on simulations make you apply that knowledge.
This mirrors how people learn to code: the fastest growth happens when you build something real that matters. The same is true in Anaplan. You can understand model design principles conceptually, but you only internalize them when you wrestle with real data, tradeoffs, and stakeholder expectations.
Investing in thoughtful interviewing and onboarding for Anaplanners pays off. When hiring experienced talent, go beyond standard Q&A and check how they solve problems in the moment. When building new talent, pair Anaplan’s learning resources with structured, real-world simulations that reflect the work your team actually does.
In my experience, teams that get these two processes right build stronger models, earn trust faster, and scale their Anaplan capabilities with far less friction.
Good luck and happy planning!
-
Best Finding for Truly Understanding Optimizer (Conceptually & Technically)
One of the most effective ways I’ve found to deeply understand Anaplan Optimizer, both conceptually and technically, is by working hands‑on with Excel Solver.
Why? Because Solver forces clarity.
It makes you explicitly define:
🔹 Decision variables (what can change)
🔹 Objective function (what you want to optimize)
🔹 Constraints (real-world limits like capacity, demand, cost, or service levels)
This disciplined structure is exactly how Anaplan Optimizer is designed and implemented.
💡 Key realization:
Optimization is tool-agnostic.
The mathematics and mindset remain the same; only the scale and integration change.
📌 Excel Solver → Anaplan Optimizer
Excel Solver helps you learn and internalize optimization logic
Anaplan Optimizer applies the same logic across large, connected planning models
Both rely on Linear Programming & Mixed-Integer Optimization
Both convert planning from “what-if” to “what’s optimal”
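To make the three building blocks concrete, here is a toy production-mix problem expressed with explicit decision variables, an objective, and constraints. The numbers are invented, and brute-force enumeration stands in for a real LP/MIP solver purely to keep the sketch dependency-free — Solver and Optimizer would solve the same formulation properly.

```python
from itertools import product

# Toy optimization in the Solver spirit: decide how many units of two
# products to make (decision variables), maximize profit (objective),
# subject to machine-hour and material limits (constraints).
# All numbers are invented for illustration.

PROFIT = {"A": 40, "B": 30}      # profit per unit
HOURS = {"A": 2, "B": 1}         # machine hours per unit
MATERIAL = {"A": 1, "B": 2}      # material per unit
MAX_HOURS, MAX_MATERIAL = 100, 80

def solve():
    """Enumerate feasible integer plans and keep the most profitable one."""
    best = (0, (0, 0))  # (profit, (qty_a, qty_b))
    for qty_a, qty_b in product(range(51), range(81)):
        if HOURS["A"] * qty_a + HOURS["B"] * qty_b > MAX_HOURS:
            continue  # violates the machine-hours constraint
        if MATERIAL["A"] * qty_a + MATERIAL["B"] * qty_b > MAX_MATERIAL:
            continue  # violates the material constraint
        profit = PROFIT["A"] * qty_a + PROFIT["B"] * qty_b
        if profit > best[0]:
            best = (profit, (qty_a, qty_b))
    return best

print(solve())  # (2200, (40, 20))
```

The same three pieces — variables, objective, constraints — are what you declare in a Solver worksheet or an Optimizer action; only the machinery underneath changes.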
As also emphasized in Anaplan Optimizer best practices, success in optimization depends less on the platform and more on:
✅ Clean model design
✅ Clear separation of objectives, constraints, and decisions
✅ Explainable and repeatable optimization logic
📚 Learning Roadmap for Optimization:
1️⃣ Logistics Optimization using Excel Solver
This course builds a strong foundation in optimization by working hands‑on with decision variables, objective functions, and constraints, all crucial for understanding how any optimizer works.
🔗 https://www.udemy.com/course/logistics-optimization-using-excel-solver/
2️⃣ Anaplan Optimizer – Anaplan Academy
This helps translate Solver concepts into Anaplan’s enterprise planning environment, covering optimizer setup, model design, and integration within Anaplan.
🔗 https://academy.anaplan.com/learn/global-search/optimizer
3️⃣ Anaplan Community – Optimizer Best Practices & Use Cases
Deepens understanding through real-world use cases, performance considerations, and community-driven best practices for scalable optimization models.
🔗 https://community.anaplan.com/categories/best-practices?tagID=1149
Bonus:
https://community.anaplan.com/discussion/108592/start-here-anaplan-optimizer
~ Bhumit
-
Modulation in Anaplan
Author: Arun Thakar, Vice President in the banking industry.
In cases where you have a single DEV model and multiple TEST and PROD models, the situation may arise where one user group asks for a feature that is not required by other users in different models. All too often the answer is to build modules, lists, and logic to support the requesting group, which wastes memory in the models not using the feature. What if I told you there is a way to turn modules on or off, save space, and prevent your model from turning into a Frankenstein?
That method is called “Modulation”.
The premise of modulation is that cell count in modules can collapse to zero for features not in use, while features in use can be enabled to calculate in a model. This article depicts how to set up modulation in your Anaplan models.
How does Modulation work?
Modulation uses production lists to manage cell size in an Anaplan model. The groups of modules that make up a feature are all dimensioned by an additional production list, Modulator List A in the example below. If there is a second feature that the architect of a model would like turned off, the modules associated with Feature B would all be dimensioned by Modulator List B, and that list would have zero list items, which collapses the cell count of Feature B's modules to zero.
Because the modules in Feature A or B all carry an additional dimension, a simple data transformation using a LOOKUP formula can be leveraged to pull data out of the enabled feature and feed downstream modules.
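A quick way to see why this works is the arithmetic of cell counts: a module's size is the product of its dimension sizes, so an empty modulator list multiplies everything by zero. The dimension sizes below are purely illustrative.

```python
from math import prod

# Why modulation collapses a feature: a module's cell count is the product
# of its dimension sizes, so a "modulator" production list with zero items
# takes the whole module to zero cells. Sizes here are illustrative.

def cell_count(dimension_sizes):
    """Cell count of a module = product of its dimension sizes."""
    return prod(dimension_sizes)

base_dims = [500, 12, 3]                    # e.g. Cost Center x Month x Version
feature_on = cell_count(base_dims + [1])    # modulator list holds 1 item
feature_off = cell_count(base_dims + [0])   # modulator list is empty

print(feature_on, feature_off)  # 18000 0
```

This is the whole trick: the structure (modules, line items, formulas) stays synchronized through ALM, but the workspace cost in each PROD model depends only on whether its modulator list is populated.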
Using a UX wizard to enable or disable features
To set up this architecture it may make sense to build a quick UX where an administrator enables the feature for the first time. On a UX page an admin can select which features they wish to turn on or off and create a process which imports unique values into one or more production lists.
Now that you have an idea of how modulation works, feel free to give it a try in your model. The use case of one cluttered DEV model that serves multiple TEST and PROD models is a great place to start. Also, please remember that if you employ this in an established deployed model, there may be some data loss, because you are changing the dimensionality of modules.
Questions? Leave a comment!
-
How to build a Polaris reporting model in less than two weeks
Author: Hanwen Chen is a Certified Master Anaplanner and Professional Services Sr. Manager at Anaplan.
Over the past nine months, I have been involved in multiple Classic-to-Polaris conversion projects. One consistent requirement across these engagements is the need for scalable reporting solutions that support multiple natural dimensionalities. Customers are increasingly looking to Polaris to enable this type of reporting capability at scale.
This article demonstrates how you can quickly build a Polaris reporting model in less than two weeks by leveraging existing data from Data Hubs and Classic models. By reusing structured data and applying a streamlined setup approach, teams can rapidly enable scalable, multi-dimensional reporting in Polaris without rebuilding the entire model from scratch.
Common patterns in Classic models
From my experience, when reviewing existing Classic models that were not originally designed for reporting with multiple natural dimensions, two common patterns typically emerge:
* Flat data structures with additional attributes.
Data is often stored in a flat structure with additional attributes that describe the elements. It may also include dimensions such as Time. The flat structure typically serves as the data key and may be a concatenated list of multiple dimensions, such as project–department–account. Additional attributes describe other aspects of the dimension, for example, the region associated with a department or the category associated with a project.
* Incomplete or inconsistent dimension structures in Data Hub.
The Data Hub often lacks well-defined hierarchies or dimension structures that can support reporting directly. Without these, it becomes difficult to enable flexible multi-dimensional reporting.
If you observe these patterns in your Classic models, the following approach can help you implement a Polaris reporting model efficiently.
Solution configuration
* Report dimensions & data sources.
Start by identifying the dimensions required for reporting and the sources that provide the necessary data elements. For example, a report might include Time, Version, Cost Center, Product, and Region as key dimensions. These dimensions determine the structure of the reporting model.
Next, determine which systems, models, or module views will provide these dimension structures and data elements. Typically, this includes the Data Hub and existing Classic planning models. Clearly identifying dimensions and sources upfront ensures a smooth and streamlined setup process.
* Data Hub configuration.
The Data Hub serves as the central repository for master data and actuals. To prepare the Data Hub for Polaris reporting:
* Configure dimension structures: Ensure flat lists exist to support the required reporting dimensionalities.
* Create output views: Build export views that structure the data for loading into Polaris. Well-designed export views minimize transformation work, simplify integration, and improve data load performance.
The Data Hub is critical because it standardizes dimensional structures and reduces complexity in the Polaris reporting model.
* Classic model configuration.
Classic planning models provide plan and forecast version data. Before integrating with Polaris:
* Prepare plan/forecast data: Ensure version data is structured and ready for export.
* Validate data elements: Confirm that all dimensions required for reporting are included in the Classic model and align with the Data Hub structures.
Proper preparation ensures the Polaris reporting model can consume version data efficiently without extensive transformations.
* Polaris reporting model setup.
Once the Data Hub and Classic model are ready, configure the Polaris reporting model:
* Set up flat lists and hierarchical structures.
Create the reporting dimensions required in Polaris.
* Build modules to receive actual and version data.
Design modules to store imported data from the Data Hub (actuals) and Classic models (plan/forecast versions).
* Create processes to populate dimension data from the Data Hub.
Set up imports and processes to load dimension structures into Polaris.
* Create processes to load actual data from the Data Hub.
Import actuals prepared in the Data Hub export views.
* Create processes to load version data from the Classic models.
Import plan and forecast versions from Classic models.
* Set up bulk upload processes.
Enable bulk upload processes to load multiple versions of data as needed.
* Configure mapping and validation processes.
Set up mapping logic and validation modules and pages to ensure correct dimensional mapping and data integrity.
* Create reporting modules and report pages.
Include multi-dimensional reports, variance reporting (e.g., Current Forecast vs. Plan), and other analytical views to provide meaningful insights from the data.
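As a hedged sketch of what such a validation step checks, the snippet below verifies that every dimension member in the incoming rows exists in the Data Hub's dimension lists and reports any mismatches. The dimension names and data are invented for illustration; in Anaplan this would be a validation module and page rather than a script.

```python
# Sketch of the mapping/validation idea: before loading actuals into the
# Polaris reporting model, confirm each dimension member in the incoming
# rows exists in the Data Hub's dimension lists. All names/data are invented.

hub_dimensions = {
    "Cost Center": {"CC100", "CC200"},
    "Product": {"P1", "P2", "P3"},
}

incoming_rows = [
    {"Cost Center": "CC100", "Product": "P1", "Amount": 120.0},
    {"Cost Center": "CC999", "Product": "P2", "Amount": 55.0},  # bad member
]

def validate(rows, dimensions):
    """Return a list of (row_index, dimension, bad_member) errors."""
    errors = []
    for i, row in enumerate(rows):
        for dim, members in dimensions.items():
            if row[dim] not in members:
                errors.append((i, dim, row[dim]))
    return errors

print(validate(incoming_rows, hub_dimensions))  # [(1, 'Cost Center', 'CC999')]
```

Surfacing mismatches like these before the load is what keeps dimensional mapping and data integrity intact as versions and actuals flow in from multiple sources.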
Final thoughts
By leveraging existing Data Hubs and Classic models, teams can significantly accelerate the implementation of a Polaris reporting model. Instead of rebuilding data structures from scratch, this approach focuses on reusing structured data and aligning it with Polaris’ scalable dimensional architecture.
With the right setup, it is entirely feasible to stand up a functional and production-ready Polaris reporting model in less than two weeks.
Additional tips and tricks in each configuration can further streamline building your Polaris reporting model. In a follow-up article, I will share these tips and tricks to help teams implement more efficiently.
Questions? Leave a comment!
……………
Other articles by Hanwen:
* The beauty of simplicity: any level of selection in a hierarchy
* The power of the ‘No’ version approach in Anaplan
* Data distribution design from Data Hub to multiple spoke production models
-
How to maintain complex Anaplan environments — Part 1: Leadership, governance & the human foundation
Author: Piotr Weremczuk is a Certified Master Anaplanner and FinSys Application Specialist at EQT.
February 2026 marked the tenth anniversary of my first Anaplan login. A full decade.
When I realized this, I decided it was finally time to write down some of the things I’ve learned about setting up and maintaining complex Anaplan environments — not just the technical tricks, but also sometimes neglected parts: governance, processes, team structure, and mindset.
I very quickly understood that it wouldn’t fit into a single article. There is simply too much to share.
So this first part focuses on the non-technical foundations — the elements that, in my experience, matter far more than any formula or model structure. Part two dives deeper into the technical side.
Anaplan as a strategic decision, not a quick fix
Something I’ve seen repeatedly over the years is organizations purchasing Anaplan to solve one isolated problem. A forecasting pain point here, a budgeting bottleneck there. They look for a robust tool to “replace Excel” in that one area.
And technically, Anaplan can do that.
But what happens next is often the same: they end up with a powerful, premium platform being used like a very fancy spreadsheet. They never unlock connected planning. They don’t scale their use cases. They get the sports car… and use it for grocery shopping.
It’s absolutely fine to start small — a trial workspace, a limited number of users — to test a use case. But before committing long-term, organizations should ask themselves a much bigger question:
Are we willing to bet on the platform more broadly? Are we ready to transform the way we plan, operate, and collaborate?
The financial investment is one thing.
The organizational readiness is another.
And in many transformations I’ve witnessed, the second one is actually the harder hurdle.
But here’s the encouraging part:
when organisations do confront this question honestly — and decide to embrace Anaplan not as a tool for a single process but as their connected planning backbone — everything changes. Once they commit to rethinking how they plan, budget, and forecast, the platform’s value multiplies. They stop replacing Excel and start redesigning their planning landscape.
They gain alignment across functions, automation across processes, and insights that were never visible before. In other words, they get a return that goes far beyond solving one problem; they unlock the real promise of the platform.
That step — the decision to truly adopt connected planning — is what separates organizations that simply use Anaplan from those that benefit from it.
The human element: Leadership, vision, and mindset
No complex Anaplan environment can succeed without strong leadership and a team willing to approach planning differently. Governance frameworks, RACI matrices, delivery processes — all of that matters. But none of it works unless someone drives the change with conviction.
Across my projects, the presence (or absence) of a leader with a clear vision has consistently been the single biggest differentiator. When such a leader exists (someone who sees beyond the first model, who understands how connected planning can reshape the organisation), everything moves smoother. Teams are more open to exploring new solutions. Stakeholders align faster. People take ownership.
And the team’s mindset almost always starts with the leadership. When a leader embraces a forward-looking approach, pushes for transformation, and genuinely believes in the potential of connected planning, that mindset naturally radiates outwards. Their drive influences how the team thinks, how they solve problems, and how willing they are to take bold steps in unfamiliar territory. A strong leader doesn’t just set direction, they set tone, energy, and ambition.
Without that kind of leadership, I’ve seen environments drift into chaos: unclear responsibilities, duplicated work, unprioritized backlogs, and teams that don’t fully understand what they’re building or why. The absence of a capable leader — someone who is not only willing to take ownership of the Anaplan journey but also has the soft skills and authority to guide a team — introduces a significant risk right from the start. It becomes far harder to build momentum, align stakeholders, or maintain clarity.
Mindset matters.
But the mindset of the team is rarely independent; it is shaped, encouraged, and amplified by the person leading them. If the team is willing to jump into deep water, solve unfamiliar problems, and embrace new ways of working, the payoff is enormous, not only for them, but for the entire company.
Governance as the first layer of stability
Once leadership and mindset are in place, governance naturally becomes the next foundational block. You start with one strong leader and a motivated team… and very quickly realize that without structure, things get chaotic fast.
I’ve learned that defining roles early is essential. It doesn’t need to be heavy or bureaucratic — in fact, it absolutely shouldn’t be. But there should be clarity:
Who owns the model? Who gathers requirements? Who signs off? Who builds? Who tests? Who maintains?
Most teammates won’t instinctively know where their responsibilities begin and end unless you explicitly tell them.
This doesn’t mean wrapping the project in red tape; it means giving everyone a map so they can walk confidently in the same direction.
The same applies to your delivery flow.
Whether you choose agile, scrum, kanban, or structured releases, it matters far less which methodology you select than whether everyone understands how the process works. I’ve seen all frameworks succeed, and all of them fail.
The deciding factor?
Communication.
In teams where stakeholders and business owners were consistently kept in the loop (through stand-ups, status updates, or even simple email checkpoints), everything ran smoother. Issues were spotted early. Surprises were fewer. People trusted each other more.
Sharing knowledge across all relevant parties is hugely underrated. It accelerates almost every phase of the development cycle and reduces disappointment later on.
After the first models: How should you structure your team?
Initial development is only the beginning.
Once the first use cases go live, the real question emerges:
What happens next? How should the team evolve? Should you move toward a Center of Excellence (CoE) model?
Looking back at my ten years in the ecosystem, the answer I keep coming back to is simple:
centralization helps — a lot.
When Anaplan expertise sits within a unified team:
* You can allocate model builders and solution architects more flexibly across projects.
* Knowledge flows naturally. People help each other, share best practices, and reduce dependency risks.
* Platform-wide improvements (new features, optimisation techniques, naming standards) spread much faster.
* The entire environment becomes more coherent and maintainable.
The only real question is team size, and that’s something every organization has to determine for itself.
It depends on use case volume, complexity, company size, delivery expectations, employee seniority, and growth rate.
But here’s a surprising truth:
Most organisations need fewer Anaplanners than they initially think, especially once they embrace the technical accelerators and efficiency principles I’ll describe in Part 2.
The three foundations that matter most
There are countless details that influence whether Anaplan thrives inside a company: processes, integrations, data models, documentation practices, and more.
But after ten years in the ecosystem, three things consistently stand out above everything else:
* Leadership
* Clear accountability
* A centralized, well-structured CoE
If those three foundations are in place, the technical part becomes vastly easier. The platform stops being “just a tool” and becomes a strategic asset, one that evolves with your organization and enables scale rather than restricting it.
In Part 2, I dive into the technical mechanics that make this possible: automation, ALM, integration strategies, and future-proof design. But none of that matters unless the human and organizational foundations are solid.
Check it out here: How to maintain complex Anaplan environments — Part 2: Technical foundations for scale.
Questions? Leave a comment!
-
Using territory maps with Anaplan
Author: Paola Malafaia is a Certified Master Anaplanner and Associate Consultant at Cornerstone Performance Management.
After setting up territory maps, I came away with a few learnings, and I thought it would be helpful for other developers if I shared some notes on the basic settings of this mapping feature. The goal here is simple: turn complex geographical data into immediate, undeniable business insight.
The developer’s two map types
First of all, there are two types of map you can deploy within Anaplan to visualize data:
* Marker: This is the precise, point-based map. It relies entirely on latitudes and longitudes established for each element (e.g., placing a pin on every store). Great for density, but messy for large-area performance.
* Territory: This is the high-impact, strategic map. It uses the definition of an area (a state, a zip code, a region) to establish shades or colours over it. Its core purpose is to paint an entire region based on a single aggregated metric (e.g., painting a state green if sales are above $1M). The key is referencing the standardized geocode provided by Anaplan.
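To make the contrast concrete, here is a minimal Python sketch (not Anaplan code) of how the same store-level data feeds each map type: the marker map consumes one latitude/longitude pin per store, while the territory map first aggregates to one metric per geocoded region and only then picks a shade. The `BR-SP`/`BR-RJ` codes and the $1M threshold are illustrative assumptions.

```python
# Illustrative sketch (not Anaplan syntax): the same store-level data
# feeds a marker map and a territory map in two different shapes.

stores = [
    {"lat": -23.55, "lon": -46.63, "state": "BR-SP", "sales": 1_400_000},
    {"lat": -22.91, "lon": -43.17, "state": "BR-RJ", "sales": 800_000},
    {"lat": -23.21, "lon": -45.88, "state": "BR-SP", "sales": 300_000},
]

# Marker map input: one (lat, lon) pin per store.
markers = [(s["lat"], s["lon"]) for s in stores]

# Territory map input: one aggregated metric per geocoded region.
territory_sales: dict[str, int] = {}
for s in stores:
    territory_sales[s["state"]] = territory_sales.get(s["state"], 0) + s["sales"]

# Shade a state green only if its aggregated sales exceed $1M.
shades = {code: ("green" if total > 1_000_000 else "red")
          for code, total in territory_sales.items()}
print(shades)  # {'BR-SP': 'green', 'BR-RJ': 'red'}
```

In Anaplan itself the aggregation happens through the module's list hierarchy and the shading through conditional formatting; the sketch only shows why one geocode per region is the linchpin of the territory approach.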
Setting up a simple territory layer (the practical steps)
To get this powerful visual working, follow these three steps:
* Set up your geo list with the required geocode property. You can set this up as the list code or as another property, but the code used must follow the Anaplan geo-mapping standards listed at this link: Anapedia: Geo-mapping downloads.
* Example: below, we set up the Brazilian states with the geocode assigned directly to the code property of the list. This links your internal data to the specific, drawable area on the map, and the format must align with Anaplan's mapping engine standards.
* Create the map card and configure the territory layer. The key is to select a module view where the geographical list (the one containing your geocode property) is positioned on the far left. Note that in the backend, the view below was created with conditional formatting, which will be used later in the settings.
* In the map's configuration pane, explicitly point the map to the geocode property you created.
* Set up the conditional formatting. Configure the conditional formatting settings to establish the colour/territory relationship (note that the view above was created with the conditional formatting inside it, which is what is used here).
And below we have the outcome:
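In my experience, the geocode property in step one is where setups most often break, so it is worth sanity-checking the list codes before loading. The check below is a hypothetical sketch, assuming ISO 3166-2-style subdivision codes (e.g. `BR-SP`); always verify the exact format against the Anapedia geo-mapping downloads for your geography.

```python
import re

# Hypothetical pre-load check: flag list codes that will not match a
# drawable territory, assuming ISO 3166-2-style codes such as "BR-SP".
SUBDIVISION_CODE = re.compile(r"^[A-Z]{2}-[A-Z0-9]{1,3}$")

def invalid_geocodes(codes: list[str]) -> list[str]:
    """Return the codes that do not match the assumed geocode pattern."""
    return [c for c in codes if not SUBDIVISION_CODE.match(c)]

brazil_states = ["BR-SP", "BR-RJ", "BR-MG", "br-ba", "BRSP"]
print(invalid_geocodes(brazil_states))  # ['br-ba', 'BRSP']
```

A territory with a malformed code simply fails to render, with no error message, so catching lowercase or missing-hyphen codes up front saves a lot of head-scratching.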
Use cases: Where this functionality drives action
The strategic power of territory mapping shines brightest when performance is tied to geography.
* Sales and target management
* Scenario: A national sales team needs real-time quota visibility across regions.
* Map configuration: Geocodes are tied to sales territories. Conditional formatting is set to the Target Attainment % metric (e.g., red below 80%, green above 100%).
* Actionable insight: The VP instantly spots a cluster of red territories in the Southeast, triggering an immediate reallocation of coaching resources or marketing spend to those specific areas.
* Supply chain and logistics optimization
* Scenario: Tracking inventory risk and potential stock-outs across distribution centers (DCs).
* Map configuration: Geocodes are tied to DC service territories. Shading is based on the Days of Supply calculation (e.g., deep red for fewer than 15 days).
* Actionable insight: Logistics instantly sees that all DCs serving the West Coast are flashing red, indicating a regional systemic issue (such as a transit blockage) and allowing for immediate, targeted high-cost expediting to prevent stock-outs.
* Strategic resource allocation
* Scenario: Planning where to open new branches or hire field service technicians based on demand.
* Map configuration: Geocodes are tied to zip codes or counties. Shading is based on a calculated Demand vs. Capacity Gap (high demand plus low capacity = dark red).
* Actionable insight: The planning team receives undeniable, geographically precise justification for allocating capital expenditure (CapEx) or hiring budgets to the areas displaying the most severe gap.
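All three use cases share the same underlying pattern: one metric per geocoded territory, bucketed into colour bands that conditional formatting then paints onto the map. A minimal sketch of that bucketing, using the assumed thresholds from the sales example (red below 80% attainment, green above 100%):

```python
# Illustrative bucketing logic (not Anaplan syntax): each territory's
# Target Attainment % is mapped to the colour band that conditional
# formatting would shade onto the map.

def attainment_colour(attainment_pct: float) -> str:
    """Thresholds assumed from the sales use case above."""
    if attainment_pct < 80:
        return "red"    # under-performing: needs intervention
    if attainment_pct <= 100:
        return "amber"  # on watch
    return "green"      # above target

# Hypothetical territories keyed by subdivision geocode.
territories = {"US-TX": 72.0, "US-GA": 95.5, "US-FL": 110.2}
print({code: attainment_colour(pct) for code, pct in territories.items()})
# {'US-TX': 'red', 'US-GA': 'amber', 'US-FL': 'green'}
```

Swap the metric for Days of Supply or a Demand vs. Capacity Gap and the same three-band structure drives the logistics and resource-allocation maps.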
Questions? Leave a comment!