-
User Access Management in the UX
Author: Lis de Geus is a Certified Master Anaplanner and Customer Success Consultant at Bedford Consulting.
Setting the details of a user’s access is usually a job for the workspace admin: deciding their role and their selective access to list members. However, the Users tab is not always the friendliest, especially when thinking about selective access in detailed lists. It’s hard to tell what exactly someone has access to when they have access to multiple members of the same list, but not necessarily a top level. Also, in certain models, it can be quite handy to have the same security settings reflected in a module, which you can refer to when creating more custom DCA controls.
With that in mind, I like to create an admin UX page for that purpose. You see the list of users with access to the model, you can choose their roles, and you can maintain their selective access in a very visual way. The page then contains an action button which imports all that information into the Users list. While the person who runs the action still needs to be a workspace admin, setting up the details and maintaining them is a task that can be shared with other trusted members of the team. As a bonus, we can reference the same settings in DCA for certain functionalities.
In this article, I’ll give you a detailed step-by-step on how to set up this functionality. I’ll use an example that you can easily reapply to the realities of your model.
What the solution does
* Lists users with access to the model.
* Allows choosing a role assignment for each user.
* Allows granting/revoking selective access to one or multiple lists.
* Contains a process button so these settings are loaded into the back-end User details.
With this solution in place, we of course recommend that user settings no longer be changed directly in the back-end. We want any changes to go through the UX; that way we can make sure that our module information matches the actual selective access and avoid any conflicts. It’s the responsibility of the workspace admins to comply.
Pros and cons
Pros
* A centralized administration UX — user access can be viewed from an app without needing to toggle to back-end settings.
* Easy to audit user roles with flexible UX layout.
* Reduced risk of manual mistakes by using standardized saved views and processes.
* Responsibility of reviewing and maintaining detailed selective access can be shared with other key members of the team who may not have WSA rights.
* The information is kept in modules and therefore can be referenced for DCA controls.
* The process works well even if your selective access hierarchies are numbered lists, since we use the codes for importing.
Cons and trade-offs
* Development is required if selective access is needed in a new list that is not yet set up through this process.
* WSAs must commit to updating all access details via the UX; any changes made directly in the back-end will be overwritten the next time the process is run.
* Brand new users do not appear in the UX list until they have an initial role assigned in that model.
Proposed mitigation: Create a model role such as “Default” that doesn’t have access to any specific module, list or action. When a new user is added to the workspace, the Default role must be granted in User settings. That user will then be displayed in the User Access Management UX page without any role selected yet (in the UX module) and the detailed access profile (role and selective access) can be set up via the regular process.
Let’s do it!
Okay, this sounds like a nice idea, but how do we actually make this work?
I’ll share an example with detailed instructions so that you can easily reapply to the requirements of your model.
For this example, let’s assume the following:
* We have three roles available that have previously been set up in the User settings: Full Access, FP&A User, and Read-Only.
* The Full Access user should always have access to the full company (selective access) — we’ll add some extra conditions for this, but of course this is optional and you can leave those conditions out if preferred.
* The other two roles require selective access to the Department list. The list is a hierarchy of two levels: C1 Department and C2 Cost Center.
What we need to set up: a module for selecting a user’s role; a module for selecting a user’s selective access in C1 Department, then in C2 Cost Center; saved views and corresponding actions to import all three into the User settings page; and a UX page that ties it all together.
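Conceptually, the rows that the selective-access saved views feed into the Users list are flat records keyed by user email, with list-member codes collected into Read and Write columns. The sketch below (plain Python; the column names, codes, and helper are invented for illustration, not Anaplan's actual import format) shows the shape of that data and why codes matter for numbered lists:

```python
# Illustrative only: shaping per-user access selections into flat import
# rows, the way a "UAM Import Selective Access" saved view might. The
# column names ("C2 Read"/"C2 Write") and cost-center codes are invented.

def build_selective_access_rows(users):
    """users maps an email to {cost_center_code: "READ" | "WRITE"}.

    Codes (not display names) are used because they stay stable even for
    numbered lists, which is what makes the import reliable.
    """
    rows = []
    for email, access in sorted(users.items()):
        read = ",".join(c for c, lvl in sorted(access.items()) if lvl == "READ")
        write = ",".join(c for c, lvl in sorted(access.items()) if lvl == "WRITE")
        rows.append({"Email": email, "C2 Read": read, "C2 Write": write})
    return rows

rows = build_selective_access_rows({
    "ana@corp.com": {"CC100": "WRITE", "CC200": "READ"},
    "bob@corp.com": {"CC300": "READ"},
})
```

Each resulting row corresponds to one user line in the Users import, with comma-separated codes per access level.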
Object Blueprint: what we will need
Lists
* Users (standard list)
* Roles: a technical list of model roles, which should be a mirror of the actual roles existing in User settings – make sure the names match exactly
* C1 Department
* C2 Cost Center
Modules
* SYS01 Role Assignment
* SYS02 C1 Selective Access
* SYS03 C2 Selective Access
Actions
* 1.1 UAM Import User Roles
* 1.2 UAM Import Selective Access Total
* 1.3 UAM Import Selective Access C1
* 1.4 UAM Import Selective Access C2
Step-by-step
Starting point: you already have a working model with lists C1 and C2, and you have already reviewed role security with the three roles discussed.
Let’s start our build of the User Access Management functionality:
* Create the technical lists for roles, matching the roles you have in the model
* Create module SYS01 Role Assignment
* Create the saved view UAM_Role Assignment
* Create module SYS02 C1 Selective Access
Note: the third user has no option to select read or write, because they are set as Full Access — we’ll automatically grant them access to all departments on the next stage.
* Create module SYS03 C2 Selective Access
* Create three saved views from module SYS03, one for each level of the hierarchy of selective access (Total, C1 and C2)
* Create the four necessary import actions into the Users list
* Create a UX page so that this can be maintained through an Administrative app
There you go!
Questions? Leave a comment!
-
How I Built It: User filters with variable hierarchy properties
Author: Erik Svensson is a Certified Master Anaplanner and a Principal Solution Architect at Anaplan.
Hello Anaplan Community!
Thank you for checking out my ‘How I Built It’ tutorial. In this video, I demonstrate a powerful technique for creating dynamic user filters. This solution gives your end-users the flexibility to filter a dimension by different attributes on the fly.
A great example is fashion assortment planning. In the “Tops” category, a planner needs to filter by Neckline, while in “Footwear” a planner needs to filter by Upper Material. This model allows each user to select the specific attributes they want to filter by, providing a customized and highly flexible experience.
This is especially important in fast-moving industries where trends change quickly, and a one-size-fits-all filtering approach is too restrictive.
Key features:
* User-specific dynamic filtering
* Flexible attribute selection per user
Check it out and drop in a comment if you have any questions!
https://play.vidyard.com/ErQqG4YvqLwZTVhXm1uHYS
-
A quick reference guide to hierarchical structures in Anaplan
Author: Arun Thakar is a Certified Master Anaplanner and Vice President in the banking industry.
Lists and hierarchies are a key foundational aspect of model building. In Anaplan, lists are a critical building block of the model, and the structure you choose can greatly impact user experience, model performance, and the architecture of a solution. This article is intended to help you, as an architect, communicate the various types of lists, and when each should be used, to model builders and end users.
There are several configuration options for lists in Anaplan; essentially, you can create hierarchies in several different ways. Let’s cover the main ones below.
Flat hierarchy
A single Anaplan list, which can be named or numbered. All levels of the hierarchy are present in one list with no parents set in the list grid view menu; instead, parent mapping is stored in a system metadata module.
A flat hierarchy has many uses in Anaplan models; the general idea is that these lists store all levels of the hierarchy and can be used to create saved views which build other hierarchies in the model. Using recursive lookups, a flat list’s system module can tie together all other hierarchies, and modules dimensioned by flat lists are generally back-end only. Flat lists are also a good opportunity for creating list subsets for the major levels.
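To make the flat-list idea concrete, here is a minimal Python sketch of a single list holding every level, with the parent mapping kept in a separate structure standing in for the system module (the node names and levels are invented for the example):

```python
# Conceptual sketch of a flat hierarchy: every node of every level lives
# in one list, and the parent mapping sits outside the list itself, the
# way a flat list's system metadata module would store it.

PARENT = {  # child -> parent
    "Cost Center A": "Dept 1",
    "Cost Center B": "Dept 1",
    "Dept 1": "Region North",
    "Region North": "Total Company",
}

LEVEL = {"Cost Center A": 4, "Cost Center B": 4, "Dept 1": 3,
         "Region North": 2, "Total Company": 1}

def ancestor_at_level(node, level):
    """Walk the parent mapping recursively until the requested level,
    mirroring the recursive lookups a flat list's system module enables."""
    if LEVEL[node] == level:
        return node
    return ancestor_at_level(PARENT[node], level)
```

A saved view over such a structure can then emit any level-to-level mapping needed to build the other hierarchies in the model.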
Balanced, composite list, hierarchy
Multiple lists combined to form a hierarchy, with parent lists configured in the General Lists menu.
A balanced composite hierarchy has many uses in Anaplan models: the parent-child relationship allows inputs to cascade, enables pushing down allocations, and supports synchronizing cards across a UX page. Enabling users to enter data at higher levels of the hierarchy makes the planning process simpler for end users. These hierarchies need to have the same number of levels for all branches, which can sometimes cause issues for architects who need to create dummy levels to balance the branches.
Ragged, single list, hierarchy
Single list wherein branches may end on different levels. For example, one branch of the hierarchy may end at level 7, while an adjacent branch may end at level 6. Creating and maintaining these lists is a little tricky: parent-child relationships are managed within the list grid view menu, and the list must not be numbered.
This list type is powerful but carries trade-offs in terms of technical debt. For many end users, seeing a ragged hierarchy is more intuitive than a hierarchy with dummy levels. For an architect, there are multiple considerations associated with creating ragged single-list hierarchies. Notably, if the list changes steadily over time, it will need to be rebuilt one level at a time, which requires multiple saved views. Also, stale-node clearing is something an architect should invest in for all their list maintenance. Selective access can be applied at any level of the ragged structure, which makes this a powerful solution for creating better end-user experiences.
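The level-at-a-time constraint can be sketched in a few lines of Python: a parent must exist in the list before its child can be imported, so the source data is split into one batch per depth, mirroring one saved view / import action per level (the node names are invented):

```python
# Sketch of why a ragged single list is built one level at a time: each
# depth of the parent-child mapping becomes its own import batch, since
# parents must already exist when their children are loaded.

def batches_by_depth(parents):
    """Group nodes by depth; each returned batch maps to one import."""
    def depth(node):
        return 0 if parents[node] is None else 1 + depth(parents[node])
    batches = {}
    for node in parents:
        batches.setdefault(depth(node), []).append(node)
    return [sorted(batches[d]) for d in sorted(batches)]

# One branch ends at depth 2, the adjacent branch at depth 1 (ragged).
tree = {"Total": None, "Ops": "Total", "Sales": "Total", "Ops East": "Ops"}
```

Running the imports in batch order guarantees every node's parent is already present, however unevenly the branches end.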
Hybrid hierarchy
Broadly this can mean combining hierarchies together to enable planning, such as loading your workforce data as a child of the cost center hierarchy.
This concept can be helpful for applying selective access to data that is sensitive or needs to be entitled by parts of a hierarchy. The lowest level is its own list, with the parent hierarchy configured in General Lists. The bottom list only needs a mapping to the lowest level of the parent hierarchy. Workforce as a child of cost center, or concatenated summarized transaction lists, are a couple of examples of ways you can make hybrid lists work for your model.
Conclusion
There are dozens of combinations and permutations one can employ to create a hierarchy. When deciding on which archetype of hierarchy to employ, consider the purpose of what you are trying to accomplish. If a user is setting assumptions at a higher level, then perhaps a balanced composite hierarchy is the right solution. If your end users want to see intermediate aggregations and their hierarchy is ragged, then it may make sense to use a ragged, single list, structure.
-
Clay before carbon fibre, flat files before full-blown data orchestration
Author: Alejandro Gomez Sanchez, Certified Master Anaplanner, and Principal Consultant.
Data orchestration is a remarkable capability. When done well, it is genuinely transformative: comprehensive, accurate and dependable data for decision-making is available instantly.
Beyond the satisfaction of well-orchestrated data, my own rough calculations suggest that at least twelve working days per year [1] are saved as part of the resulting ROI. As we all know, manually loading data is tedious, cumbersome, time-consuming, insecure, and highly prone to error. For these reasons — and several others — it should, wherever possible, be replaced with an automated, direct source-to-target (i.e., orchestrated) data feed. This is particularly important in today’s data-hungry, connected, collaborative, and increasingly AI-driven platforms, where more data is required, therefore more efficiencies can be realised, and more errors avoided through orchestration.
However, I have also witnessed less successful attempts at data orchestration. In these cases, while teams were determined to eliminate the well-known issues associated with manual processes, they inadvertently introduced a new set of frustrations. These challenges were rarely caused by the technology itself. More often, they stemmed from poor timing and insufficient preparation within the project team.
Technically speaking, data orchestration has become far easier in recent years, thanks to the proliferation of pre-configured, direct connectors. In many scenarios, it has become nearly “plug and play”. Yet just because vendors have simplified the technical aspects for business users does not mean that preparation or thoughtful attention to the data being transferred is no longer required.
Let’s consider the following analogy:
In the past, orchestrating data demanded far more effort, systems, and specialised roles, much as our ancestors had to struggle to hunt and gather their food. Today, many tasks have become simple and immediate: access to food has been facilitated to the point that we have any food we want at our fingertips, and in the same way, orchestration tools with connectors and friendly visual interfaces have greatly simplified our planning tools’ access to data marts.
Just as the availability of abundant food does not mean we should eat indiscriminately, the simplification of orchestration does not remove the need to consider the quality, quantity, and timing of the data flows.
Ignoring the food being fed to our bodies and the data being fed to our models leads to negative outcomes in both cases.
So how should organizations approach the process to determine the right quantity, quality, and timing of data flows? There is no single answer. Different teams will take different paths depending on their technical skills, their technology stack, the maturity of their data, and the leadership team’s risk appetite. So, the following approach is not the only viable one — but it is a method shaped by experience across several recent orchestration projects.
Here is the “unpopular opinion”: start with traditional, flat-file-based manual data loads. Just as elite athletes first learned to crawl, and the world’s most beautiful cars begin life carved out from simple clay, starting with a flat file does not mean you are not on the path towards a high-performance, state-of-the-art data flow. It means that you are taking an appropriate (albeit admittedly unglamorous) first step in developing a data orchestration system in a controlled, transparent, efficient and collaborative manner.
There is no debate that crawling before running is an essential part of the learning curve rather than a waste of time, nor is it questioned why clay models remain invaluable for identifying design flaws early in automotive development. Yet, when manual and simplistic methods are proposed as the starting point for an orchestration journey, they are often poorly received by clients. Such approaches are frequently perceived either as a step in the wrong direction — manual ETLs appearing to contradict the very idea of automation, much as heavy clay seems the opposite of lightweight carbon fibre — or as an avoidable waste of time — manual data manipulation is viewed as an unnecessary interim step that will not form part of the final solution, much as one might incorrectly argue that crawling could be skipped simply because it is not used later in life as a means of getting around.
Four reasons to start your orchestration journey with manual, file-based data loads
* It is agile: most business systems provide straightforward file-based export and import capabilities. As a result, flat-file data is typically the easiest, cheapest, and quickest way to obtain, share, analyse, and manipulate the foundational datasets required for orchestration. This approach does not demand advanced technical skills, specialist software, or additional licences.
Because of this, file-based loading is an excellent way to surface data quality issues early in the project, well before they become embedded within orchestration logic or evolve into costly problems later on. Identifying issues early prevents wasted engineering effort and ensures that automation is built on stable, predictable inputs.
In short, it helps avoid the classic “garbage in, garbage out” pitfall. It allows teams to obtain data rapidly and to identify inconsistencies, such as structural mismatches, varying formats, or missing values, at a stage when such issues are far easier and less expensive to correct.
* It is democratic: anyone involved in the project has the tools and the skills to work on said data sets. This democratises access to, and understanding of, the data across all project stakeholders, fostering conversations that early automation often obscures or restricts to the most technical members of the team.
This democratisation ensures that every stakeholder can see and “touch” the same data for themselves. It promotes a shared understanding of data quality, challenges, timelines, and required effort. It helps build trust, facilitates communication, and brings diverse perspectives into the conversation — while avoiding the classic “black box” effect.
In other words, it prevents situations where only a handful of technical specialists have visibility of the data, while others must rely on unfamiliar tools, specialist licences, or second-hand interpretations. Everyone gains direct, transparent access, improving alignment and overall project effectiveness.
* It is controllable: to a large extent, orchestration simply accelerates the data flows that already exist.
If your underlying process is unclear or your data is corrupted, automation will simply accelerate the chaos and amplify data-related issues. Even when the data is broadly accurate, introducing new automated feeds into an existing system can create unexpected consequences, unwanted changes, and confusion among users — particularly in the early stages of implementation or during development and testing. At these points, it becomes difficult to determine whether unexpected behaviour is caused by new automated data streams or by incorrect business logic.
By contrast, loading data from files that have been manually exported, reviewed, and imported significantly reduces the risk of unforeseen side effects. It allows full control over when data inputs are triggered and ensures a deeper understanding of what the dataset contains and which downstream elements may be affected. This real-world clarity is essential before writing a single line of orchestration code.
Because data requirements are driven by the purpose and expected outcomes of the target system, it makes little sense to enable orchestration capabilities during the early phases of system development. Automation should not be introduced until the system has been built, tested, and signed off using controlled manual data loads.
Ultimately, there are only two things worse than poor-quality data. The first is poor-quality data being orchestrated and fed automatically into your systems, increasing both the pace and volume of problems. The second is the time wasted trying to determine whether unclear or unexpected data behaviour is the result of poor-quality inputs, errors in the orchestration process, unintended data flows being triggered, or flaws in the underlying business logic.
* It improves business ownership: Building the orchestration process from the ground up using simple, easily understood data files enables change-management teams to develop comprehensive operational and governance documentation. This documentation can be consolidated into a data requirements register, capturing key elements such as:
a. source system and source query or saved search
b. load frequency or trigger
c. links to sample files
d. associated orchestration jobs
e. target system
f. data ownership
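As a sketch, one entry of such a register can be captured as a simple structured record. The field names below mirror the list above (a–f); the systems, file names, and values are invented for the example:

```python
# Illustrative data-requirements-register entry. Every value here is a
# placeholder; only the field structure mirrors the list (a-f) above.

register_entry = {
    "source": {"system": "ERP", "query": "GL_EXTRACT_V1"},  # (a) source system and query
    "frequency": "monthly, on working day 3",               # (b) load frequency or trigger
    "sample_files": ["gl_actuals_sample.csv"],              # (c) links to sample files
    "orchestration_jobs": ["load_gl_actuals"],              # (d) associated jobs
    "target": "Anaplan FP&A model, GL Actuals module",      # (e) target system
    "owner": "finance.data@corp.com",                       # (f) data ownership
}
```

Keeping entries in a uniform shape like this makes the register easy to review, hand over to third parties, and eventually feed into the orchestration tooling itself.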
Remember that data orchestration is not solely a technical effort; it is also an organizational change. Business users must understand how it works and how to maintain it, particularly when third parties have been involved in implementing the orchestration.
If you have reached a clear understanding of the data requirements of your Anaplan model, the data sources are stable, the format of the data templates is not subject to change, and the impact of every new data load on your model is understood, it’s time to explore the many benefits of using Anaplan Data Orchestration.
Questions? Leave a comment!
……………
[1] On average, a planning solution relies on around eight separate data inputs. Assuming one ETL cycle per month to refresh forecasts or run new scenarios — and estimating one hour per input to extract, review, share, transform, and load the data — we arrive at:
8 inputs × 1 hour × 12 months = 96 hours per year, which equates to 12 working days.
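That back-of-the-envelope estimate is easy to adapt to your own number of feeds and refresh cadence:

```python
# The footnote's saving estimate as a tiny, adjustable calculation.

inputs = 8            # separate data feeds in a typical planning model
hours_per_input = 1   # extract, review, share, transform, load
cycles_per_year = 12  # one ETL cycle per month

hours_saved = inputs * hours_per_input * cycles_per_year
days_saved = hours_saved / 8  # assuming an 8-hour working day
```

Plug in your own feed count and cadence; even conservative figures tend to land in the multiple-working-weeks range.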
For FP&A use cases, these inputs typically include:
* Chart of Accounts
* Ledgers and Trial Balances
* Employee data
* Clients list
* Vendor lists
* Cost centre structures
* Currency tables
* Sales pipeline information
For S&OP scenarios, common inputs include:
* Product catalogues
* Customer and channel data
* Stock-holding locations
* Stock-on-hand levels
* Actual sales
* Open orders
* Suppliers
* Costs and Prices
This estimation also excludes the often-substantial costs and time lost to data errors, duplications, data leaks, or other issues that frequently arise from manual data handling.
-
Stop duplicating modules: Meet Combined Grids
Author: Vijay Pasumarthy is a Certified Master Anaplanner and Sr. Principal Consultant at Genpact.
If you had two modules with a common dimension and wanted to show information from both on the same page, your options were limited:
* Build a new “combo” module and recreate all the line items, or
* Drop two grid cards side by side and turn on synchronized scrolling on the common dimension.
Anaplan’s new Combined Grids functionality finally solves this. It’s one of those features you know you want the moment you hear about it. I already have clients going through their entire inventory of pages looking for places to use it.
In this post I’ll walk you through:
* How Combined Grids work with a concrete example
* How filters behave across sections
* Quirks, gotchas, and a few dos and don’ts from an architect’s lens
What Are Combined Grids?
Combined Grids (released in October 2025) let model builders merge multiple grids on an app page without merging the underlying modules in the model.
Key pre-condition:
All modules you want to ‘combine’ on the app page must share at least one common dimension. That shared dimension becomes the row axis of the combined grid.
To explore the feature further, I set up the following modules:
* Time Settings: System module with line items to identify forecast/actuals periods
* Product Attributes: Holds properties of the Product dimension, including a Boolean for high-volume products
* Product Forecast: Holds forecast in units by Product, by Month
* Annual Plan: Holds annual plan in units by Product, by Month
For years, if I wanted to show product attributes next to product forecast, my page would have Grid 1 for Product Attributes and Grid 2 for Product Forecast placed side by side, synced on Product.
With combined grids, I can turn this into a single, richer experience. Below are the steps I took towards that experience:
Step 1: Merge Product Attributes + Forecast
* Add a grid card to the page.
In the card configuration, select the Product Attributes module and pivot it to show Product on rows and the relevant attributes on columns.
* Add a second section.
Click “Add Grid Sections” at the top of the card configuration.
* Choose the Product Forecast module.
In the Grid Sections popup, select the Product Forecast module.
Use Product as the common row dimension. Add Time to columns.
* Apply! Once settings are applied, data from both modules is combined into a single grid, with Product in Rows and Attributes (from the first section) + Time-based forecast values (from the second section) in Columns.
Step 2: Filter to show only forecast months
In my next experiment, I only wanted to show future periods in the forecast portion of the combined grid. What differs in a combined grid vs. a regular grid is the set of filter options available. Here are the steps I took to apply filters as needed:
* Choose which section to filter.
First, specify whether the filter should apply to: Grid 1 (Product Attributes) or Grid 2 (Product Forecast).
* Apply the filter as usual.
After that, filter configuration behaves like on a regular grid: line items from the same module can be used to drive the filtering, or a line item from another module can be selected (in my case, the Time Settings module).
Result: One unified grid where product attributes remain visible for all products, and forecast values only show for future months.
Step 3: Take it further — Add AOP to the same grid
Next, I wanted to show Annual Operating Plan (AOP) units for the same products and months — again, within the same combined grid.
* Go back to the grid’s card configuration and click “Add Grid Sections”.
* Select the Annual Plan (AOP) module and align dimensions to ensure Product is on rows and Time is on columns.
* Apply the same filter (future months) as needed.
If desired, you can show both Time and line items in columns. The result is a single grid where, for each Product row, Product attributes, Forecast by month (filtered to future periods) and AOP values for the same months can be seen:
Architect’s notes: quirks, dos and don’ts
* Across all sections of settings, including Show/Hide, Context Settings, etc., the dimensions of the multiple grids appear in a repetitive fashion, without clear separation of which dimension comes from which module.
Pro Tip: Remember the order in which the grids were combined to understand the order of the dimensions. For example, in the Columns section shown above, the first ‘Line Items’ are from Product Attributes, the second from Product Forecast, and the third from the AOP module.
I am hoping this is an area Anaplan improves over time.
* How conflicting filters behave.
A natural question: “What happens if one section’s filter tries to remove a product that another section’s filter wants to keep?”
The good news is that Anaplan lets the user choose how the filters should be applied in combination with each other, so this won’t be an issue.
* The “common dimension” has strict rules:
* No common dimension = no combined grid.
Combined grids only work when the modules share an identical list on at least one axis.
* Main list vs subset does not count as common.
If one module uses the full Product list and the other uses a Product subset, Anaplan treats those as different dimensions. Those grids cannot be combined.
* Common dimension must be on rows in the first section.
When you start configuring the grid, put the shared dimension (e.g., Product) of the first module on Rows. If that dimension is only sitting in Columns or Context selectors, the “Combine Grids” option simply won’t be available.
What I didn’t cover in this post
To keep this post short(er), I haven’t gone deep into Sorting, Conditional Formatting, or Show/Hide. The good news is that these behave largely like they do on regular grids — with the added benefit that you can control these settings independently for each merged section.
Thoughts or questions? Leave a comment!
-
How I Built It: Rethinking scenario planning with Polaris
Author: Philipp Erkinger, Certified Master Anaplanner and Principal Solution Architect at Bedford Consulting.
Dear Anaplan Community,
The Anaplan Polaris Engine has brought a significant transformation in the way I construct and design Anaplan models. Over the past three years, I have had the privilege of working on numerous projects utilizing Polaris, which has provided a platform for experimentation and the exploration of innovative working methods. My objective has been to extend the boundaries of what is achievable, not only from a technical perspective but also in terms of business impact. Polaris empowers us, as Solution Architects, to completely reconsider our approach to model design and development.
Overview of the solution concept
This ‘How I Built It’ video presents a walkthrough of my approach to designing and implementing a concept for advanced scenario planning using the Polaris engine. The proposed solution enables planners to seamlessly switch between various scenarios, turn on and off model features (like long term planning), harness the natural dimensionality of Polaris at full scale, and manage model performance dynamically, all while ensuring the administrative process remains straightforward.
Let’s dive into how this solution was built and the benefits it delivers.
P.S.: For those who are new to Polaris, I recommend familiarizing yourself with the basics before viewing the video. Below are some helpful articles:
* Anaplan Polaris: A deeper dive into the Polaris Engine and model building techniques
* Unlocking the power of Polaris: A guide to efficient model building
'How I Built It' video
https://play.vidyard.com/TgiaWcPkiBJVH4WvTUSkkj
Questions? Leave a comment!
-
How I Built It: Center of Excellence App
Author: Marina Ketelslegers is a Certified Master Anaplanner and FP&A voice for Anaplan and AI passionate about CoE support, solution architecture, and training.
Hello Anaplan Community!
I’m sharing a short walkthrough of my Anaplan Center of Excellence (CoE) App designed to help new CoE leads set up their CoE with clarity and structure.
I show how you can use it to define your CoE charter, roles, and roadmap, and then manage success through a practical four-lens metrics system (Adoption, Platform Health, Business Value, CoE Health).
I also demo the recommendations page that turns metrics and a light self-assessment into a focused improvement plan.
If you’re starting a CoE (or refreshing one), I hope this gives you a strong, reusable blueprint.
Also, if you missed it, I published an article going into details on Center of Excellence Metrics here: A four-lens metrics system for Anaplan CoEs.
How I Built It: CoE App
https://play.vidyard.com/GdYJrc8VSSeMoU7NKqd7KG
Feedback from the Community is very welcome.
-
A four-lens metrics system for Anaplan CoEs
Author: Marina Ketelslegers is a Certified Master Anaplanner and FP&A voice for Anaplan and AI passionate about CoE support, solution architecture, and training.
I’ve seen how Anaplan Centers of Excellence (CoEs) start with strong momentum. There’s enthusiasm, executive support, and a clear ambition to make connected planning work across the organization.
And yet, after the first implementations, many CoEs slow down. They become busy, but not necessarily more effective.
In my experience, the issue is rarely the platform or the people. More often, it’s the way metrics are used. Adoption numbers, incident counts, and delivery statistics are tracked, but they exist in isolation. They describe what happened, without clearly showing what the CoE can influence next or how to improve.
What’s usually missing is a consistent metrics system, one that distinguishes leading metrics from outcomes, and that makes the connections between them explicit. Metrics the CoE can actively impact, and that clearly show how design choices, enablement efforts, and prioritization decisions translate into adoption, platform health, and business value.
Anaplan makes this especially interesting. It is a software platform, but it’s also deeply business-owned. Many of the decisions that drive value, how models are designed, how users are enabled, how scenarios are used, sit much closer to the business than in traditional IT systems. That means metrics are not just something to report on; they are a practical tool to guide decisions that directly affect value and ROI.
What I recommend to my clients is not more metrics, but better-connected ones. A combination of quantitative and qualitative indicators, used as a system, that evolves as the CoE matures and helps teams move out of the “stuck” phase into sustained impact.
To support CoEs on this journey, I personally think there is no better tool than Anaplan itself. The four-lens CoE metrics framework is built around this idea. It looks at a CoE from four perspectives that naturally belong together: adoption and user experience, platform health, business value, and the operating health of the CoE itself. Each lens captures a different dimension of success, and none of them tells the full story on its own.
I break it down below; you can also take a look at my video walk-through here — How I Built It: Center of Excellence App.
The four lenses of a CoE metrics system
* Adoption and user experience: Are people actually using Anaplan to run their planning process, not just logging in? This lens moves from surface metrics (“who logged in”) to signals that users can complete tasks, run scenarios, and work independently with confidence.
* Platform health: Can the platform reliably support that usage as it scales? Stability, performance, data quality, and integrations all shape user trust. When platform health erodes, adoption will follow, regardless of how good the original design or intent was.
* Business value: Is Anaplan changing how decisions are made? Shorter cycles, better forecast accuracy, faster scenarios, and reduced manual work confirm that the CoE is converting adoption into tangible impact, operational and financial.
* CoE operating health: Is the CoE itself set up to sustain and grow value? Capacity, prioritization, governance, enablement, and partner dependency determine whether the CoE can keep up with demand and invest proactively instead of firefighting.
No single lens tells the full story. Adoption, in my mind, is usually the primary driver of value, but it depends heavily on platform health and CoE operating health. In a good CoE, this forms a reinforcing loop: growing adoption → more pressure → robust platform and governance absorb that pressure → better experiences → deeper adoption → more value.
Maturity: How to read the same metrics over time
The same metric can mean very different things at different maturity stages, so it’s vital to pair the four lenses with a simple Foundational → Performance → Strategic maturity model:
* Foundational CoE: Focus on stability and clarity.
Metrics: “Are people using the platform at all?”, “Are high-severity incidents under control?”, “Is basic governance happening regularly?” Fluctuation is normal.
* Performance CoE: Focus on efficiency and scalability.
Metrics: “Where are bottlenecks in UX and delivery?”, “Is demand manageable?”, “Are we reducing dependency on the CoE through self-service and enablement?”
* Strategic CoE: Embedded in how the business plans and decides.
Broad adoption is assumed. The focus shifts to decision quality, scenario agility, realized value, and continuous evolution of planning capabilities.
Seen together, the lenses give you the structure, and maturity gives you the time dimension. This keeps metrics directional instead of becoming unrealistic targets that demotivate teams.
Adoption metrics: Beyond “who logged in”
First, get your technical foundation right: if you want to go beyond counts of logins, enable Anaplan Audit, assign Tenant Auditor access, and set up a way to store audit history (an external store or a reporting model), since audit retention is limited.
Read more here: Anapedia | Audit
* Active User Ratio (%) – Foundational
* Question: Is the intended user base actually using Anaplan?
* Formula:
Unique active users in period ÷ Licensed users in scope × 100
* Why it matters: If the right personas aren’t active, planning continues in spreadsheets and shadow processes. As Active User Ratio rises, you typically see more process completion in Anaplan, fewer offline reconciliations, and the first visible cycle time and effort reductions.
* Data: Audit logs + user/license data.
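As an illustration of the formula above, here is a minimal Python sketch that computes the ratio from an audit export. It assumes the export has already been reduced to (user, date) pairs — the data shapes and names are illustrative, not Anaplan’s actual Audit schema.

```python
from datetime import date

def active_user_ratio(audit_events, licensed_users, period_start, period_end):
    """Active User Ratio (%): unique active users in period / licensed users in scope.

    audit_events: iterable of (user_id, event_date) tuples, e.g. distilled
    from an Anaplan Audit export (field names here are assumptions).
    """
    active = {user for user, day in audit_events
              if period_start <= day <= period_end}
    # Only count activity from users who are actually in the licensed scope.
    active &= set(licensed_users)
    return round(100 * len(active) / len(licensed_users), 1)

# Illustrative data: "eve" is active but out of the licensed scope.
licensed = ["ana", "ben", "cho", "dev"]
events = [("ana", date(2026, 4, 2)), ("ben", date(2026, 4, 10)),
          ("ana", date(2026, 4, 20)), ("eve", date(2026, 4, 5))]
print(active_user_ratio(events, licensed, date(2026, 4, 1), date(2026, 4, 30)))  # → 50.0
```

The same shape works for UX Page Adoption Rate below: filter the events to a specific page and divide by active users instead of licensed users.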
* User Satisfaction Index (1–5) – Performance
* Question: Do users trust Anaplan and find it useful?
* Method: Structured survey with a stable 1–5 scale; track by process/region/persona.
* Why it matters: Satisfaction is a leading indicator. Higher scores usually correlate with fewer workarounds, more consistent execution, and fewer repetitive support queries. Over time, that stability supports better forecast accuracy and scenario agility.
* Data: Survey results stored/visualized in Anaplan.
* UX Page Adoption Rate (%) – Strategic
* Question: Are users following the UX journeys you designed?
* Formula:
Unique users opening a specific UX page ÷ Active users in scope × 100
* Why it matters: High page adoption means users follow standardized paths, which reduces variance, training effort, and friction. Low adoption explains why value stalls even when people “log in”: they avoid key pages and revert to offline steps. Improving page adoption is one of the most direct levers a mature CoE has to unlock more value.
* Data: Audit logs with UX page events.
Platform health metrics: Trust in the foundation
* Incident Volume (High Severity) – Foundational
* Question: Is the platform safe to depend on during critical cycles?
* Definition: Count of incidents in period with “high” severity (e.g., Sev 1–2), optionally split by cause (integration/model/access/performance).
* Why it matters: High-severity incidents quickly destroy trust and drive users to offline workarounds. They typically show up as drops in satisfaction and UX Page Adoption.
* Data: ITSM/ticketing (ServiceNow, Jira, etc.), stored/visualized in Anaplan.
* Integration Reliability Score (%) – Performance
* Question: Do data refreshes deliver on time and without errors?
* Formula:
Successful runs ÷ Total scheduled runs × 100
where “successful” = error-free and on-time.
* Why it matters: If data is late or wrong, the platform might be technically “up” but the process is broken. That erodes trust and adoption even if models are well designed.
* Data: Integration platform logs (Boomi, ADF, CloudWorks, etc.), stored/visualized in Anaplan.
* New Use Case Stabilization Time (days) – Strategic
* Question: How fast do new models move from “launch turbulence” to steady state?
* Method:
* Define a platform baseline ticket rate (average weekly tickets for mature models over last N weeks).
* For a new use case, track weekly ticket rate from go-live.
* Metric: number of days/weeks until the new use case’s ticket rate returns to at or below the baseline.
* Why it matters: A shorter time-to-baseline protects trust and frees CoE capacity instead of creating long-term support drag.
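The time-to-baseline method above can be sketched as a simple scan over weekly ticket counts. The counts and baseline below are made-up numbers, purely for illustration:

```python
def stabilization_weeks(weekly_tickets, baseline):
    """Weeks from go-live until the new use case's weekly ticket rate
    first drops to at or below the platform baseline.

    weekly_tickets: ticket counts per week, starting at go-live (week 1 first).
    Returns None if the rate never reaches baseline in the observed window.
    """
    for week, tickets in enumerate(weekly_tickets, start=1):
        if tickets <= baseline:
            return week
    return None

# Mature models average 3 tickets/week; a new model launches rough, then settles.
print(stabilization_weeks([12, 9, 7, 4, 3, 2], baseline=3))  # → 5
```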
CoE operating health metrics: Can the CoE keep up?
* Governance Compliance Score (%) – Foundational
* Question: Are key governance cadences happening as designed?
* Formula:
Governance activities completed ÷ Governance activities planned × 100
(e.g., CoE councils, steering, release notes, intake triage).
* Why it matters: Inconsistent governance leads to unpredictable releases, weak comms, and confused users. That erodes satisfaction and UX Page Adoption over time.
* Data: Calendars, minutes, intake/release records (or an Anaplan governance module).
* Backlog Size & Throughput (% closed) – Performance
* Question: Is work flowing, or just piling up?
* Formula:
% Closed = Tickets closed in period ÷ Tickets opened in period × 100
optionally paired with average cycle time to close.
* Why it matters: A stuck backlog means pain points stay unresolved, UX improvements are delayed, and users lose patience.
* Data: Jira / ServiceNow / Azure DevOps, etc.
* CoE Self-Sufficiency Index (%) – Strategic
* Question: How dependent is the CoE on partners to evolve the platform?
* Formula:
Work delivered by internal CoE ÷ Total work delivered × 100
where “work” = enhancements, releases, backlog items, etc.
* Why it matters: At strategic maturity, higher self-sufficiency usually means faster iteration, more responsive enablement, and a better fit with business needs. At foundational stage, a lower index is acceptable; you expect it to grow as the CoE matures.
* Data: Backlog/release logs with a “delivery owner” field.
Business value and ROI: Making the case with evidence
Value metrics often fail not because value is missing, but because the method is fuzzy or keeps changing. To build credibility, anchor value on stable, auditable definitions.
* Cycle Time Reduction (days) – Foundational
* Question: Are planning cycles actually faster than before?
* Approach:
* Define clear, fixed “cycle start” and “cycle end” events.
* Track timestamps for each cycle (e.g., Forecast Apr-2026).
* Compare pre-Anaplan vs post-Anaplan or year-on-year.
* Why it matters: This is a timestamp problem, not an estimation problem. Shorter cycles directly reduce coordination loops, rework, and effort per cycle.
* Implementation: Keep a simple “Cycle Log” in Anaplan as the system of record, with gating so cycles can’t close without log updates.
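Because this is a timestamp problem, the calculation itself is trivial once the Cycle Log exists. A minimal sketch, assuming the log has been exported as start/end dates per cycle (names and dates below are illustrative):

```python
from datetime import date

# Illustrative "Cycle Log" export: (cycle start, cycle end) per planning cycle.
cycle_log = {
    "Forecast Apr-2025": (date(2025, 4, 1), date(2025, 4, 18)),  # pre-Anaplan baseline
    "Forecast Apr-2026": (date(2026, 4, 1), date(2026, 4, 11)),
}

def cycle_days(name):
    """Elapsed days between the fixed cycle-start and cycle-end events."""
    start, end = cycle_log[name]
    return (end - start).days

# Year-on-year comparison for the same cycle.
reduction = cycle_days("Forecast Apr-2025") - cycle_days("Forecast Apr-2026")
print(reduction)  # → 7
```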
* Forecast Accuracy Improvement (p.p.) – Performance/Strategic
* Question: Are forecasts objectively more accurate over time?
* Requirements:
* A frozen forecast snapshot (e.g., via Anaplan snapshot/freeze).
* A stable horizon definition (M+1, quarter-end, etc.).
* A fixed accuracy formula and actuals source.
* Why it matters: Better accuracy reduces avoidable cost (expediting, stockouts, last-minute changes) and improves decision confidence. The main risk is comparing moving targets or changing the horizon silently; treat major method changes as a re-baseline, not “improvement.”
A Practical ROI Formula for Anaplan
Define Anaplan ROI as:
ROI = (Total Benefits − Total Costs) ÷ Total Costs
Costs (annualized and scoped to the planning footprint):
* Subscription / license and vendor fees
* Implementation / change / expansion costs
* Run / operate costs (CoE FTEs, support, partner retainers, integration ops, etc.)
For benefits, start with two streams that are relatively easy to measure and audit. You can extend later to revenue and risk benefits.
* € Benefit (Hours Saved)
Hours saved per cycle × #Cycles per Year × Fully Loaded Hourly Cost × Realization Factor
* Fully loaded hourly cost: standard rate from Finance/HR.
* Realization Factor: conservative % (e.g., 30–70%) to reflect that not all saved time immediately converts to cash.
* € Benefit (Forecast Accuracy)
Baseline Value-at-Risk × (Accuracy Improvement in p.p. ÷ 100)
* Value-at-Risk here is not total cost or total margin. It’s the portion of financial performance exposed to avoidable inefficiencies from forecast error and credibly influenced by better planning (e.g., inventory, variable costs linked to pricing, marketing; sometimes logistics/penalties).
Then:
Total Benefits (€) = Benefit (Hours Saved) + Benefit (Forecast Accuracy)
ROI = (Total Benefits − Total Costs) ÷ Total Costs
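Putting the pieces together, here is a worked example of the ROI formula in Python. Every input figure below is illustrative only — substitute your own audited hours, rates, value-at-risk, and cost numbers.

```python
def anaplan_roi(hours_saved_per_cycle, cycles_per_year, hourly_cost, realization,
                value_at_risk, accuracy_improvement_pp, total_costs):
    """ROI = (Total Benefits - Total Costs) / Total Costs, using the two
    benefit streams described above."""
    benefit_hours = hours_saved_per_cycle * cycles_per_year * hourly_cost * realization
    benefit_accuracy = value_at_risk * (accuracy_improvement_pp / 100)
    total_benefits = benefit_hours + benefit_accuracy
    return (total_benefits - total_costs) / total_costs

# Illustrative inputs: 400 hours saved per monthly cycle at a €65 fully loaded
# rate with a conservative 50% realization factor, 3 p.p. accuracy gain on a
# €10M value-at-risk base, against €300k annualized costs.
roi = anaplan_roi(hours_saved_per_cycle=400, cycles_per_year=12,
                  hourly_cost=65, realization=0.5,
                  value_at_risk=10_000_000, accuracy_improvement_pp=3,
                  total_costs=300_000)
print(f"{roi:.0%}")  # → 52%
```

Note how the Realization Factor keeps the hours-saved stream conservative; halving it is an easy sensitivity check when presenting the case to Finance.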
With this in place, you can show a clear line from CoE metrics → process improvements → financial outcomes.
In addition to the quantitative metrics, it makes sense to combine them with a qualitative CoE maturity self-assessment by collecting survey results from CoE members.
Here is an example of such a survey:
A holistic view of both types of metrics brings us to a CoE Maturity Score.
Final thoughts
With a structured metrics system, even a busy CoE can avoid getting “stuck” and keep moving toward strategic impact. The four lenses show what to measure; the maturity view shows how to read those signals over time; the ROI method connects it all back to value.
If you’re building or maturing your CoE, use Community resources, adapt them to your context, and keep your metrics practical and decision-oriented. Speaking as a Certified Master Anaplanner, that’s where I see CoEs sustain momentum and turn Anaplan into a core planning capability rather than just another tool.