Best Of
Recording available: Tackling frequently missed certification exam topics
Thank you to those who attended our recent event. If you missed it or would like to re-watch, here is the recording and a few resources to help with your recertifications! Don't wait to get the process started — you'll be glad you took the time to do it now before the holidays.
Recording
Chapters:
0:00 Opening
0:13 Questions and answers
0:45 Agenda
1:45 Pass rates
2:25 Exam topics by certification
3:24 Key recertification info
7:51 Understanding for exams
12:23 Anaplan Data Orchestrator
13:54 Questions and answers
25:54 Associate Certification exams
29:05 Exam topics by Certification 2
Why your certification matters
A question that came up on the call was: "Why do I need to get recertified if it's not currently required by the client I'm working with?"
Think of your Anaplan recertification not as a client requirement, but as a career investment. The Anaplan platform is constantly evolving, and recertification ensures your skills remain sharp, relevant, and aligned with the latest best practices. This proactive step keeps you ahead of the curve, making you more valuable to your current client and more marketable for your next opportunity. It’s about future-proofing your expertise.
The cost of letting your certification lapse
Letting your certification expire means starting the entire process over from the beginning—redoing hours of work and paying the full certification fees. By recertifying now, you maintain your hard-earned status for free and avoid a significant investment of time and money down the road.
Recertify now
Certification resources
Thank you!
BeckyO
How I Built It: User Access Management
Author: Kevin Dale Bandelaria, Certified Master Anaplanner and Solutions Delivery Head at OmniQuest, Inc.
Solution overview
The User Management process in Anaplan was developed to simplify how administrators set up and maintain model roles and selective access settings for users. Traditionally, configuring access in the backend can be tedious and error-prone, especially for large-scale implementations involving multiple regions or user groups. This solution brings that backend process into a structured, front-end experience, allowing administrators to manage user roles and data access through a single and intuitive interface within the app.
How it works
At the core of the setup are two modules: one for defining user-level configurations such as model access and hierarchy level, and another for managing the specific items to which users have selective access. Dynamic Cell Access (DCA) logic drives which parts of the input tables are editable based on user selections, ensuring consistency and control. A six-step process then streamlines the backend updates — resetting previous access, assigning new access, and syncing everything to the Anaplan Users tab with a single click.
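To make that gating concrete, here is a minimal Python sketch of the kind of rule the DCA drivers encode. It is illustrative only: in Anaplan these would be Boolean line items used as write-access drivers, and the field names below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class UserConfig:
    """One row of the hypothetical user-level configuration module."""
    model_role: str | None = None       # e.g. "Planner" (hypothetical role name)
    hierarchy_level: str | None = None  # e.g. "Region" (hypothetical level name)


def can_edit_selective_access(cfg: UserConfig) -> bool:
    """Mirror of a DCA write driver: the selective-access input cells
    only unlock once the upstream selections have been made."""
    return cfg.model_role is not None and cfg.hierarchy_level is not None


# The admin fills the row in left to right; access inputs stay locked until then.
cfg = UserConfig(model_role="Planner")
assert not can_edit_selective_access(cfg)  # hierarchy level still missing
cfg.hierarchy_level = "Region"
assert can_edit_selective_access(cfg)      # now the write driver would return TRUE
```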
Core benefits
This approach significantly reduces the time and effort needed for user onboarding and access maintenance. Instead of manually editing the Users tab, administrators can perform all actions from a guided interface, minimizing errors and removing the need for backend navigation. It also improves governance by enforcing structured inputs and ensuring that model roles and selective access levels follow the organization’s hierarchy and security design. Overall, it enhances scalability and provides a more user-friendly experience for workspace administrators.
Key system behaviors discovered
During development, several system behaviors were uncovered that are crucial for making this process work. For instance, when importing selective access data, Anaplan only accepts reference codes from numbered lists as text-formatted values — not display names or list codes. The process also relies on hidden “None” columns in the Users tab to properly reset user access. Another key finding was that saved views must be flat, with all dimensions in rows; otherwise, imports won’t process correctly. Lastly, although lists with selective access enabled display Write and Read columns, those columns cannot be imported into. These insights were instrumental in achieving a fully automated and reliable workflow.
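As a concrete illustration of the first finding, here is a hedged Python sketch that assembles a selective-access import file, writing numbered-list codes as text values and using a “None”-style column to reset access. Every file, column, and code name here is a hypothetical stand-in; check them against an export of your own Users tab.

```python
import csv

# Hypothetical numbered-list members: code -> display name.
# Per the article, the import must carry the *codes* as text values;
# display names would be rejected.
regions = {"R101": "EMEA", "R102": "APAC"}

rows = [
    # (email, write-access codes). An empty list means "reset to no access",
    # which the process achieves through the hidden 'None' column.
    ("alice@example.com", ["R101"]),
    ("bob@example.com", []),
]

with open("users_selective_access.csv", "w", newline="") as f:
    w = csv.writer(f)
    # Hypothetical header names -- match them to your Users tab layout.
    w.writerow(["Email", "Regions Write", "Regions None"])
    for email, codes in rows:
        # Codes are joined as plain text; the None flag is set when resetting.
        w.writerow([email, ",".join(codes), "" if codes else "TRUE"])
```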
The resulting framework provides a robust foundation for managing user access at scale, and it can easily be extended to handle additional logic such as read/write permissions or role derivations based on model selections. By moving complex backend processes into a guided front-end interface, this solution not only streamlines administration but also deepens understanding of how Anaplan handles user and access data under the hood. It’s a strong example of how automation and thoughtful model design can transform a common pain point into a seamless management experience.
Video
Questions? Leave a comment!
How to navigate a list index reset while preserving your planning data
Author: Evan Groetch, Certified Master Anaplanner and Business Intelligence Manager at Fresenius Medical Care.
Seeing that your list import has failed due to the index hitting its maximum value can be a frustrating experience. Cursory research on Anapedia will show you there is a “Reset” button at the bottom of the Configure tab of each list. “Great!” However, you are soon dismayed to see that the button is grayed out and won’t be clickable until all the list members have been deleted.
“How can I delete the members of this list? Doing so will wipe out all the associated planning data!”
This article will outline the process used by the Fresenius Medical Care Center of Excellence (CoE) team to navigate this problem without permanently impacting planning data.
Why does the list index need to be reset?
The maximum value of a list index is 999,999,999. In most cases, the number of list members is nowhere near this amount. The reason the index is so much higher than the number of list members is that the index only increases when list members are added. It is not reduced when list members are deleted. For this reason, lists that are routinely cleared and repopulated are liable to require an index reset at some point.
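The arithmetic is easy to underestimate. This small Python sketch (with hypothetical volumes) shows how a modest list that is cleared and reloaded nightly burns through index values even though its member count never grows:

```python
# Each add consumes a new index value; deletes never give values back.
MAX_INDEX = 999_999_999

members_per_load = 500_000   # hypothetical list size
loads_per_year = 365         # cleared and repopulated nightly

index_used_per_year = members_per_load * loads_per_year
years_until_reset = MAX_INDEX / index_used_per_year

print(f"Index consumed per year: {index_used_per_year:,}")
print(f"Years until the 999,999,999 ceiling: {years_until_reset:.1f}")
# ~182.5M index values per year -> the ceiling arrives in roughly 5.5 years,
# even though the list never holds more than 500,000 members at once.
```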
How do I perform the list index reset and preserve any associated planning data?
Note: the following instructions assume ALM is established (Development/Standard model with sync relationship to Production/Deployed model).
Before doing anything else, block out a timeframe during which you will perform the process. Send any necessary communications to end-users to notify them of some brief model downtime. This step is to avoid any conflicts with users planning in the model while completing this sensitive process.
In the Development model:
1. ‘Copy & Archive’ the model to create a backup before making any changes.
2. Create a dummy list and populate it with all the list members of the original list requiring the index reset.
    a. Create an import action to populate Name and Code (plus any list subsets and properties) in the dummy list with values from the original list. Rename this import action so you can recognize and use it later in this process.
    b. Ensure the dummy list is set as a Production list (this setting can be found on the ‘Configure’ tab).
3. Identify any module(s) that are both dimensioned by the original list and contain input line items. Any module containing only formula-based line items can be omitted from this step.
    a. Create the copy module(s) and modify the ‘Applies To’ so that all dimensions match the original modules, except for the original list, which should be replaced with the dummy list.
    b. Populate the copy module(s) via import action from the original module(s). The only difference between the source and target should be the dummy list vs. the original list, and this mapping can be done automatically based on code within the import mapping settings. Rename this import action so you can recognize and use it later in this process.
4. Create a new SYS module with the original list as a dimension.
    a. Include in this module a Boolean line item named “Delete?” and set the formula to TRUE. You will use this in the next step to create an action that deletes all the list members.
    b. Under Actions, click the ‘New Action’ dropdown and select ‘Delete from list using Selection’. Title this action appropriately and set it to delete the original list members using the Boolean line item you just created. Do not run this Delete action yet.
5. Create a new SYS module with the dummy list as a dimension.
    a. Include in this module a Boolean line item named “Delete?” and set the formula to TRUE.
    b. Under Actions, click the ‘New Action’ dropdown and select ‘Delete from list using Selection’. Title this action appropriately and set it to delete the dummy list members using the Boolean line item you just created. Do not run this Delete action yet.
6. Now that you have your data preserved in the copy module(s), perform data validation. Ensure your copy module(s) tie exactly to the original module(s).
7. Now we’re ready to reset the index. Note: if the list that requires the index reset is not set up as a ‘Production’ list, you need to sync the above changes to the Production model before completing the following steps in the Development model. Otherwise, you run the risk of deleting the data in the Production model without having the structures in place to restore it afterwards. If this is the case, skip to step 8.
    a. Run the Delete action you created in step 4b.
    b. Open the original list and go to the ‘Configure’ tab. Click the ‘Reset’ button (it should no longer be grayed out).
    c. Confirm the list index has been reset to 0.
8. Now we need to restore the original list and module(s).
    a. Create an import action from the dummy list to the original list. Be sure to include all list properties and subsets in the import, as these will have been cleared when the list members were deleted.
    b. Create import action(s) to load data from the copy module(s) created in step 3a back to the original modules.
    c. Perform data validation to ensure data matches exactly between the copy and original modules.
9. Create a revision tag.
In the Production model:
10. ‘Copy & Archive’ the model to create a backup before making any changes.
11. Compare & Sync the revision tag you created in step 9 from the Development model.
12. You now have everything you need to perform the list index reset and restore planning data:
    a. Run the import to populate the dummy list.
    b. Run the import(s) to populate the copy module(s).
    c. Validate data between the original and copy modules.
    d. Run the delete action to delete the original list members.
    e. Perform the list index reset in the original list.
    f. Run the import to re-populate the original list (using the dummy list as the source).
    g. Run the import to re-populate the original module(s) (using the copy module(s) as the source).
    h. Run the delete action to delete the dummy list members.
How to build index reset into routine model administration
The model now has all the required structures in place to handle index resets in the future. By running the delete action to clear the dummy list at the end of the process, we ensure no additional workspace is wasted on the copy module(s). The next time a reset is required, we can run through the actions outlined in step 12 above for a streamlined process. This process should always be bookended with a model ‘Copy & Archive’ as well as communications to model owners and relevant end users. It can be completed at regular intervals (e.g., an annual model administration cycle to update the model calendar) or on an as-needed basis.
Questions? Leave a comment!
egroetch
Less granular, more accurate: The "granularity = responsibility" principle in FP&A
Author: Taichi Amaya, Certified Master Anaplanner and Financial Planning and Analysis Specialist at Pasona Group, Inc.
Reading time: approximately 5-6 minutes.
"We need more detail in our forecasts. Let's have each sales rep submit their numbers individually — that way, we'll be more accurate."
Sounds reasonable, right?
But here's what actually happens: Fifty sales reps, each second-guessing their pipeline, each hedging slightly on the conservative side. By the time these forecasts roll up, that collective caution becomes a systematic bias — one that no amount of detailed analysis can fix.
This isn't a hypothetical scenario. It's a pattern I've seen repeatedly in FP&A practice.
In this post, I'll challenge the "more detail = more accuracy" assumption and share a principle I've developed through years of FP&A practice: granularity = responsibility. You'll learn:
- Why coarser planning often produces statistically better forecasts
- How granularity amplifies bias in predictable ways
- A practical framework for determining the right level of detail
Actuals vs. plans: different purposes, different granularity
Let me be clear: I'm not arguing against detailed data in general.
For actuals and historical analysis, more granularity is almost always better:
- Enables deeper drill-down analysis
- Helps identify trends and anomalies early
- Supports advanced analytics and machine learning
But planning is fundamentally different. Planning involves human judgment, organizational accountability, and inherent uncertainty. And in this context, I've learned that intentionally choosing a coarser granularity than your actuals often leads to better outcomes.
Why? Three interconnected reasons.
Why coarser plans are more accurate
1. The law of large numbers: statistical stability through aggregation
The more granular your planning units, the more statistical noise you're trying to predict.
[Figure 1: The Law of Large Numbers in Action]
Figure 1 demonstrates this pattern using real data from our organization: at the Total level, outcomes consistently align with statistical models (R²=0.945). At the Detail level, predictability varies widely — some units maintain reasonable correlation (R²~0.80), but many show weak or unreliable patterns (R²=0.21-0.45).
Individual variations cancel out at higher levels of aggregation. This is precisely why driver-based planning — using ratios, trends, and relationships — works more reliably at coarser levels. The drivers themselves become more stable and predictable when applied to larger populations.
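For readers who want to see the effect without real data, here is a small, self-contained Python simulation of the same pattern, using synthetic numbers rather than the data behind Figure 1: unit-level forecasts carry large relative errors, while the rolled-up total is far more stable.

```python
import random

random.seed(7)

UNITS = 50
TRIALS = 1_000

unit_errors, total_errors = [], []
for _ in range(TRIALS):
    # Each unit's actual outcome fluctuates ~20% around a plan of 100.
    actuals = [random.gauss(100, 20) for _ in range(UNITS)]
    unit_errors.append(abs(actuals[0] - 100) / 100)                 # one unit's relative miss
    total = sum(actuals)
    total_errors.append(abs(total - 100 * UNITS) / (100 * UNITS))   # aggregate relative miss

print(f"Avg relative error, single unit: {sum(unit_errors) / TRIALS:.1%}")
print(f"Avg relative error, total:       {sum(total_errors) / TRIALS:.1%}")
# Noise partially cancels in the sum: the total's relative error is roughly
# 1/sqrt(50) of a single unit's (about 16% vs. 2.3% with these settings).
```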
2. Bias accumulation: the "safety margin" effect
When you ask 50 people to forecast individually, each person makes a small, rational adjustment: "Better to be conservative — I don't want to miss my target."
Those individual safety margins don't stay individual. They compound.
When we consolidated input points — moving from individual contributors to team leads — the chronic conservative bias we'd been fighting largely disappeared. Not because team leads were better forecasters, but because there were fewer points where bias could accumulate.
While I don't have perfect before/after data to quantify this precisely, the pattern is consistent across organizations: more input points means more opportunities for systematic bias to creep in.
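A companion simulation makes the point directly: random noise cancels as forecasts roll up, but a shared conservative hedge does not. The 3% haircut and ±10% noise below are hypothetical illustration values, not measurements.

```python
import random

random.seed(7)

REPS = 50
TRIALS = 1_000
TRUE_VALUE = 100  # each rep's true expected outcome

totals = []
for _ in range(TRIALS):
    forecasts = [
        # Hypothetical behavior: ~3% conservative haircut plus ±10% noise.
        TRUE_VALUE * (1 - random.gauss(0.03, 0.01)) + random.gauss(0, 10)
        for _ in range(REPS)
    ]
    totals.append(sum(forecasts))

avg_total = sum(totals) / TRIALS
bias = (avg_total - TRUE_VALUE * REPS) / (TRUE_VALUE * REPS)
print(f"Average rolled-up bias: {bias:.1%}")
# The ±10% noise averages away across 50 reps; the shared 3% haircut
# survives aggregation intact and shows up as a systematic ~ -3% miss.
```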
3. Information freshness: faster cycles, more relevant data
Even with Anaplan's powerful capabilities, our initial planning cycles were taking over a month. The bottleneck wasn't the tool; it was the hundreds of granular input points we'd designed into the process.
When we optimized granularity and reduced input points, we dramatically reduced input time: 2 weeks for Budget, 1 week for Forecast.
This isn't just about efficiency — it's about accuracy through timeliness.
A forecast based on week-old information is inherently more accurate than one based on month-old information. Market conditions change. Customer signals evolve. Competitive dynamics shift.
Fewer input points meant faster cycles. Faster cycles meant our plans could reflect current reality, not last month's reality. When market conditions change rapidly, this agility becomes a significant competitive advantage.
The granularity = responsibility principle
So how do you determine the right level of granularity?
My guiding principle: Plan at the level where accountability naturally sits.
If a Business Unit leader is responsible for BU performance, plan at the BU level — not by product, not by customer segment. If a Regional VP owns a region, that region should be your planning unit.
This alignment serves two purposes:
- Statistical: Matches the level where you have meaningful sample sizes and where biases are minimized.
- Organizational: Ensures the person inputting the plan can actually explain and defend the numbers.
Three questions to find the right granularity:
1. Does the person have specific knowledge at this level?
If a sales manager is guessing at individual deal probability, the granularity is too fine. If they can speak credibly about team pipeline trends, that's the right level.
2. Can they explain the assumptions behind the number?
If the answer is "I just copied last year and adjusted by 5%," you're asking for too much detail. Good planning requires thoughtful assumptions — which requires appropriate scope.
3. Will decisions be made at this level?
If no one will ever look at "Product SKU 47829 in the Northeast," why are you planning it separately? Plan at the level where decisions actually happen.
When these three questions align with an organizational accountability level, you've found your optimal granularity.
In Anaplan terms: design your input modules at these accountability levels, not at the maximum granularity your data model can support. Let driver-based logic handle the detailed breakdowns for analysis — but keep human judgment at the level where it's most reliable.
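As a sketch of that division of labor, the snippet below takes a single BU-level plan input and lets a driver (here, a hypothetical historical product mix) produce the detailed breakdown for analysis. In Anaplan this breakdown would live in a formula-driven module; the numbers are invented for illustration.

```python
# Human judgment enters once, at the accountability level.
bu_plan = 1_200_000  # annual BU revenue plan, input by the BU leader

# Hypothetical driver: last year's revenue mix by product line.
historical_mix = {"Product A": 0.55, "Product B": 0.30, "Product C": 0.15}

# Formula-driven breakdown for analysis -- nobody "plans" these cells.
detail_plan = {p: bu_plan * share for p, share in historical_mix.items()}

for product, amount in detail_plan.items():
    print(f"{product}: {amount:,.0f}")
assert round(sum(detail_plan.values())) == bu_plan  # breakdown ties to the plan
```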
Conclusion
Anaplan's flexibility allows us to design at any level of granularity, which makes choosing the right level even more critical.
Three interconnected forces — statistical stability, bias mitigation, and information freshness — all favor coarser planning granularity aligned with organizational accountability. I've seen this pattern hold across multiple implementations.
Apply the granularity = responsibility principle to your planning process. The improvements in forecast accuracy and planning agility are real and measurable.
I'd love to hear your experiences. Let's discuss in the comments below.
……………
About the Author:
With 13 years in FP&A and 9 years of hands-on Anaplan experience, Taichi Amaya has been a Master Anaplanner since 2019. He works on the customer side, designing and building enterprise-wide FP&A models and learning daily from the intersection of planning theory and business reality.
Amaya
Re: Share your favorite 2025 Anaplan features — November Community Challenge
Anaplan's 2025 platform enhancements focus on making the connected planning experience faster, simpler, and more cohesive for both end-users and model builders.
For model optimization, the platform introduced Calculation Effort Visibility in March 2025, which is like a built-in "speedometer" for formulas in classic models, helping builders see exactly which calculations are using the most processing power so they can be optimized for faster performance.
To improve reporting, the Combined Grids (Multi-Module Reporting) feature, released in October 2025, significantly simplified dashboard views by allowing users to pull data from several different planning tables (modules) and display them together in a single, unified report with a shared row axis, making side-by-side comparisons much cleaner.
Finally, dramatically improving visualization and scenario planning, the Network Charts feature was released in January 2025, which allows users to visually map relationships between different parts of their business, like a supply chain. This feature helps planners immediately identify critical connections and bottlenecks, and see the real-time impact of changes right on the visual chart.
Re: Share your favorite 2025 Anaplan features — November Community Challenge
There have been some great enhancements this year, but the biggest game-changer for me was the visibility of calculation effort, which was added to the classic engine in March. While we always endeavour to use best practice in our builds, being able to view calculation effort gives you great visibility over the optimization of your model. By using the export line items functionality, you can export the calculation effort values to an external tool (like Excel) and sort by calculation effort to quickly identify the line items taking the greatest effort.
It's a great metric to revisit systematically to ensure changes haven't created a sneaky bottleneck and that your model calculations are running as efficiently as possible.
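If you prefer to script that sort rather than do it in Excel, a short pandas sketch like this works, assuming you have exported the line items to CSV and that the export carries a column named 'Calculation Effort' (check the actual headers in your file):

```python
import pandas as pd

# Load the line-item export (hypothetical filename and column names).
df = pd.read_csv("line_item_export.csv")

# Surface the heaviest calculations first.
top = df.sort_values("Calculation Effort", ascending=False)

print(top[["Module", "Line Item", "Calculation Effort"]].head(20))
```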
Re: Share your favorite 2025 Anaplan features — November Community Challenge
Great topic, and perfect month to share what we are thankful for!
There have been so many great features this year!! A recent one that really helps those of us in the model builder persona is "Used in pages" within the Modules Beta; release notes here: August 2025 releases | Anapedia.
I can't tell you how many times I've been working on spring cleaning or model enhancements where I want to change or remove a module but first need to understand the UX impacts. This could become a very time-consuming exercise in large models with lots of App pages. With this handy new feature, there is now enhanced visibility between the builder persona (modules) and the end-user persona (pages). I'm thankful for this enhancement bringing us all closer together!!
LastUpdate Function to retrieve timestamp of Data entry
We need a function that can give us the timestamp of the last data entry for a specific cell. This would allow us to compare that timestamp with our own custom date in some specific use cases.
Example:
- Today's Date: line item that captures today's date via a daily data refresh
- Order Date: user-entry line item, Time (Day) formatted
- Ordered Today?: Boolean line item with a possible formula:
LASTUPDATE('Order Date') = Today's Date
Re: Share your favorite 2025 Anaplan features — November Community Challenge
Anaplan introduced several exciting features this year. Here are some of the key features and their benefits:
1. Used in Pages
The Modules View Beta now enables model builders to identify which UX pages are linked to each module. This added visibility helps manage dependencies effectively and minimizes the risk of unintended changes.
2. Mobile - Grid conditional formatting
All types of grid conditional formatting are now supported on mobile — Background, Border, Font, and Morse. Page builders configure conditional formatting on the Anaplan web platform as usual, and the applied formatting shows up in the Mobile app.
3. UX - Column width override settings
Page builders now have the flexibility to customize column widths within grids on a UX page, ensuring data is presented more clearly, improving readability and usability for end users.
Enable Selective Access in Manual Filtering data
Selective Access for a list used as the format of a line item works properly when the line item is published on a dashboard or page.
Unfortunately, if a user wants to filter data manually, they are able to see all items from the list. This behaviour was observed both on dashboards and in the NUX.
This issue is most troublesome for large lists, where Selective Access should improve the ability to select the correct data.
T_Caban

















