This article provides the steps needed to create a basic time filter module. This module can be used as a point of reference for time filters across all modules and dashboards within a given model.
The benefits of a centralized Time Filter module include:
One centralized governance of time filters.
Optimization of workspace, since the filters do not need to be re-created for each view. Instead, use the Time Filter module.
Conforms with the D.I.S.C.O. methodology as a 'System' module. More on D.I.S.C.O. can be found here.
Step 1: Create a new module with two dimensions—time and line items. The example below includes simple filters for Weeks Only, Months Only, Quarters Only, and Years Only.
Step 2: Line items should be Boolean formatted, and the time scale should be set to match the scale identified in the line item name.
The example below also includes filters with and without summary methods, providing additional views depending on the level of aggregation desired.
Once your preliminary filters are set, your module will look something like the screenshot below.
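As an illustrative sketch (these settings are assumptions rather than the original screenshot), the blueprint for the preliminary filters might look like this:

```
Line Item       Format    Time Scale   Summary   Formula
Weeks Only      Boolean   Week         None      TRUE
Months Only     Boolean   Month        None      TRUE
Quarters Only   Boolean   Quarter      None      TRUE
Years Only      Boolean   Year         None      TRUE
```

To support the aggregated views mentioned above, a variant of a line item can have its Summary method set (for example, to Any) so the filter also returns TRUE at parent time periods.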
Step 3: Use the pre-set Time Filters across various modules and dashboards. Simply click on the filters icon in the toolbar, navigate to the time tab, select your Time Filter module from the module selection screen, and select the line item of your choosing. Use multiple line items at a time to filter your module or dashboard view.
If you have a multi-year model where the data range for different parts of the model varies (for example, history covering two years, current year forecast, and three planning years), then Time Ranges should deliver significant gains in terms of model size and performance.
But, before you rush headlong into implementing Time Ranges across all of your models, let me share a few considerations to ensure you maximize the value of the feature and avoid any unwanted pitfalls.
Naming Convention for Time Ranges
As with all Anaplan models, there is no set naming convention; however, we do advocate consistency and simplicity. As with lists and modules, short names are good. I like to describe the naming convention thus: as short as practical—the name needs to convey what it means, but don’t write an essay!
We recommend using the following convention:
FYyy-FYyy. For example, FY16-FY18, or FY18 for a single year
Time Ranges available are from 1981 to 2079, so the “19” or the “20” prefixes are not strictly necessary. Keeping the name as short as this has a couple of advantages:
It has a clear indication of the boundaries for the Time Range
It is short enough to see the name of the Time Range in the module and line items blueprint
The aggregations available for Time Ranges can differ for each Time Range and also differ from the main model calendar. If you take advantage of this and have aggregations that differ from the model calendar, you should add a suffix to the name. For example:
FY16-FY19 Q (to signify Quarter totals)
FY16-FY19 QHY (Quarter and Half Year totals)
FY16-FY19 HY (Half Year totals only)
Time Ranges are Static
Time Ranges can span from 1981 to 2079. As a result, they can exist entirely outside, within, or overlapping the model calendar. This means there will likely be some additional manual maintenance to perform when the year changes. Let’s review a simple example:
Assume the model calendar is FY18 with two previous years and two future years; the model calendar spans FY16-FY20
We have set up Time Ranges for historic data (FY16-FY17) and plan data (FY19-FY20)
We also have modules that use the model calendar to pull all of the history, forecast, and plan data together, as seen below:
At year end when we “roll over the model,” we amend the model calendar simply by amending the current year. What we have now is as follows:
You see that the history and plan Time Ranges are now out of sync with the model calendar.
How you change the history Time Range will depend on how much historical data you need or want to keep. Assuming you don’t need more than two years’ history, the Time Range should be renamed FY17-FY18 and the start period advanced to FY17 (from FY16).
Similarly, the plan Time Range should be renamed FY20-FY21 and advanced to FY20 (from FY19). FY18 is then available for the history to be populated and FY21 is available for plan data entry.
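The rollover in this example can be summarized as:

```
                      Before rollover   After rollover
Model calendar        FY16-FY20         FY17-FY21
History Time Range    FY16-FY17         FY17-FY18
Plan Time Range       FY19-FY20         FY20-FY21
```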
Time Ranges Pitfalls
Potential Data Loss
Time Ranges can bring massive space and calculation savings to your model(s), but be careful. In our example above, changing the Start Period of FY16-FY17 to FY17 would result in the data for FY16 being deleted for all line items using FY16-FY17 as a Time Range.
Before you implement a Time Range that is shorter than or lies outside the current model calendar, and especially when implementing Time Ranges for the first time, ensure that the data currently stored in the model is no longer needed. If in doubt, follow some or all of the suggestions below:
Export the data to a file
Copy the existing data on the line item(s) to other line items that are using the model calendar
Back up the entire model
The majority of formulas will update automatically when updating Time Ranges. However, if you have any hard-coded SELECT statements referencing years or months within the Time Range, you will have to amend or remove those formulas before amending the Time Range. Hard-coded SELECT statements go against best practice for exactly this reason: they cause additional maintenance. We recommend replacing the SELECT with a LOOKUP formula from a Time Settings module.
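As a sketch of this replacement (the module and line item names here are illustrative assumptions, following the Old/New convention used later in this article):

```
Old Formula = Revenue.Amount[SELECT: TIME.FY18]
New Formula = Revenue.Amount[LOOKUP: Time Settings.Current Fiscal Year]
```

Here, Current Fiscal Year would be a Year-formatted line item in a central Time Settings module; when the model rolls over, only that single selection needs updating rather than every formula.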
There are other examples where the formula may need to be removed/amended before the Time Range can be adjusted. See the Anapedia documentation for more details.
When to use the Model Calendar
This is a good question and one that we at Anaplan pondered during the development of the feature: do Time Ranges make the model calendar redundant? Well, I think the answer is “no,” but as with so many constructs in Anaplan, the answer probably is, “it depends!” For me, a big advantage of using the model calendar is that it is dynamic for the current year and the +/- years on either side. Change the current year and the model updates automatically, along with any filters and calculations you have set up to reference current year periods, historical periods, future periods, etc.
(You are using a central time settings module, aren’t you??)
Time Ranges don’t have that dynamism, so any changes to the year will need to be made for each Time Range. So, our advice before implementing Time Ranges for the first time is to review each module and:
Assess the scope of the calculations
Think about the reduction Time Ranges will give in terms of space and calculation savings, but compare that with annual maintenance. For example:
If you have a two-year model, with one history year (FY17) and the current year (FY18), you could set up a one-year Time Range for FY17 and another one-year Time Range for FY18 and use these for the respective data sets. However, this would mean both Time Ranges would need to be updated each year.
We advocate building models logically, so it is likely that you will have groups of modules where Time Ranges will fall naturally. The majority of the modules should reflect the model calendar. Once Time Ranges are implemented, it may be that you can reduce the scope of the model calendar. If you have a potential Time Range that reflects either the current or future model calendar, leave the timescale as the default for those modules and line items; why make extra work?
As outlined above, we don’t advocate hard-coded time selects for the majority of time items because of the negative impact on maintenance (the exceptions being All Periods, YTD, YTG, and CurrentPeriod). When implementing Time Ranges for the first time, take the opportunity to review line item formulas containing time selects. These formulas can be replaced with lookups using a Time Settings module.
Application Lifecycle Management (ALM) Considerations
As with the majority of the Time settings, Time Ranges are treated as structural data. If you are using ALM, all of the changes must be made in the Development model and synchronized to Production. This makes it especially important to heed the pitfalls noted above to ensure data is not inadvertently deleted.
Best of luck! Refer to the Anapedia documentation for more detail. Please ask if you have any further questions and let us and your fellow Anaplanners know of the impact Time Ranges have had on your model(s).
The process of designing a model will help you:
Understand the customer’s problem more completely.
Bring to light any incorrect assumptions you may have made, allowing for correction before building begins.
Provide the big-picture view for building. (If you were working on an assembly line building fenders, wouldn’t it be helpful to see what the entire car looked like?)
Understand the Requirements and the Customer’s Technical Ecosystem when Designing a Model
When you begin a project, gather information and requirements using a number of tools. These include:
Statement of Work (SOW): Definition of the project scope and project objectives/high-level requirements.
Project Manifesto: Goal of the project – big-picture view of what needs to be accomplished.
IT ecosystem: Which systems will provide data to the model and which systems will receive data from the model? What is the Anaplan piece of the ecosystem?
Current business process: If the current process isn’t working, it needs to be fixed before design can start.
Business logic: What key pieces of business logic will be included in the model?
Is a distributed model needed?
High user concurrency.
Security where the need is a separate model.
Regional differences that are better handled by a separate model.
Is the organization using ALM, requiring split or similar models to effectively manage development, testing, deployment, and maintenance of applications? (This functionality requires a premium subscription or above.)
User stories: These have been written by the client—more specifically, by the subject matter experts (SMEs) who will be using the model.
Why do this step?
To solve a problem, you must completely understand the current situation. Performing this step provides this information and the first steps toward the solution.
Results of this step:
Understand the goal of the project.
Know the organizational structure and reporting relationships (hierarchies).
Know where data is coming from and have an idea of how much data clean-up might be needed.
Know whether any of the data is organized into categories (for example, product families) and what data relationships need to be carried through to the model (for example, salespeople only sell certain products).
Know what lists currently exist and where they are housed.
Know which systems the model will either import from or export to.
Know what security measures are expected.
Know what time and version settings are needed.
Document the User Experience
Front-to-back design has been identified as the preferred method for model design. This approach puts the focus on the end-user experience. We want that experience to align with the process so users can easily adapt to the model. During this step focus on:
User roles. Who are the users?
Identifying the business process that will be done in Anaplan.
Reviewing and documenting the process for each role.
The main steps. If available, utilize user stories to map the process. You can document this in any way that works for you. Here is a step-by-step process you can try:
1. What are the start and end points of the process?
2. What is the result or output of the process? What does each role need to see/do in the process?
3. What are the process inputs and where do they come from?
4. What are the activities the user needs to engage in? Verb/object—approve request, enter sales amount, etc. Do not organize during this step. Use post-its to capture them.
5. Take the activities from step 4 and put them in the correct sequence.
6. Are there different roles for any of these activities? If no, continue with step 8. If yes, assign a role to each activity.
7. Transcribe the process using PowerPoint® or Lucidchart. If there are multiple roles, use swim lanes to identify the roles.
8. Check with SMEs to ensure accuracy.
Once the user process has been mapped out, do a high-level design of the dashboards.
Include the information needed:
What data does the user need to see?
What the user is expected to do or decisions that the user makes.
Share the dashboards with the SMEs. Does the process flow align?
Why do this step?
This is probably the most important step in the model-design process. It may seem as though it is too early to think about the user experience, but ultimately the information or data that the user needs to make a good business decision is what drives the entire structure of the model.
On some projects, you may be working with a project manager or a business consultant to flesh out the business process for the user. You may have user stories, or it may be that you are working on design earlier in the process and the user stories haven’t been written. In any case, identify the user roles, the business process that will be completed in Anaplan, and create a high-level design of the dashboards. Verify those dashboards with the users to ensure that you have the correct starting point for the next step.
Results of this step:
List of user roles.
Process steps for each user role.
High-level dashboard design for each user role.
Use the Designed Dashboards to Determine What Output Modules are Necessary
Here are some questions to help you think through the definition of your output modules:
What information (and in what format) does the user need to make a decision?
If the dashboard is for reporting purposes, what information is required?
If the module is to be used to add data, what data will be added and how will it be used?
Are there modules that will serve to move data to another system? What data and in what format is necessary?
Why do this step?
These modules are necessary for supporting the dashboards or exporting to another system. This is what should guide your design—all of the inputs and drivers added to the design are added with the purpose of providing these output modules with the information needed for the dashboards or export.
Results of this step:
List of outputs and desired format needed for each dashboard.
Determine What Modules are Needed to Transform Inputs to the Data Needed for Outputs
Typically, the data at the input stage requires some transformation. This is where business rules, logic, and/or formulas come into play:
Some modules will be used to translate data from the data hub. Data is imported into the data hub without properties, and modules are used to import the properties. Reconciliation of items takes place before importing the data into the spoke model.
Other modules are driver modules that include business logic and rules.
Why do this step?
Your model must translate data from the input to what is needed for the output.
Results of this step:
Business rules/calculations needed.
Create a Model Schema
You can whiteboard your schema, but at some point in your design process, your schema must be captured in an electronic format. It is one of the required pieces of documentation for the project and is also used during the Model Design Check-in, where a peer checks over your model and provides feedback.
Identify the inputs, outputs, and drivers for each functional area.
Identify the lists used in each functional area.
Show the data flow between the functional areas.
Identify time and versions where appropriate.
Why do this step?
It is required as part of The Anaplan Way process. You will build your model design skills by participating in a Model Design Check-in, which allows you to talk through the tougher parts of design with a peer.
More importantly, designing your model using a schema means that you must think through all of the information you have about the current situation, how it all ties together, and how you will get to that experience that meets the exact needs of the end-user without fuss or bother.
Result of this step:
A model schema that provides the big-picture view of the solution. It should include imports from other systems or flat files, the modules or functional areas that are needed to take the data from current state to what is needed to support the dashboards that were identified in Step 2. Time and versions should be noted where required. Include the lists that will be used in the functional areas/modules.
Your schema will be used to communicate your design to the customer, model builders, and others. While you do not need to include calculations and business logic in the schema, it is important that you understand the state of the data going into a module, the changes or calculations that are performed in the module and the state of the data leaving the module, so that you can effectively explain the schema to others.
For more information, check out 351 Schemas. This 10-to-15-minute course provides basic information about creating a model schema.
Verify That the Schema Aligns with Basic Design Principles
When your schema is complete, give it a final check to ensure:
It is simple.
“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.” ― Ernst F. Schumacher
“Design should be easy in the sense that every step should be obviously and clearly identifiable. Simplify elements to make change simple so you can manage the technical risk.” — Kent Beck
The model aligns with the manifesto.
The business process is defined and works well within the model.
The Anaplan Optimizer aids business planning and decision making by solving complex problems involving millions of combinations quickly to provide a feasible solution.
Optimization provides a solution for selected variables within your Anaplan model that matches your objective based on your defined constraints. The Anaplan model must be structured and formatted to enable Optimizer to produce the correct solution.
You are welcome to read through the materials and watch the videos on this page, but Optimizer is a premium service offered by Anaplan (Contact your Account Executive if you don't see Optimizer as an action on the settings tab). This means that you will not be able to actually do the training exercises until the feature is turned on in your system.
The training involves an exercise along with documentation and videos to help you complete it.
The goal of the exercise is to set up the optimization for two use cases: network optimization and production optimization. To assist you in this process, we have created an optimization exercise guide document that will walk you through each of the steps. To further help, we have created three videos you can reference:
An exercise walk-through
A demo of each use case
A demo of setting up dynamic time
Follow the order of the items listed below to assist with understanding how Anaplan's optimization process works:
Watch the use case video which demos the Optimizer functionality in Anaplan
Watch the exercise walkthrough video
Review documentation about how Optimizer works within Anaplan
Attempt the Optimizer exercise
Download the exercise walkthrough document
Download the Optimizer model into your workspace
How to configure Dynamic Time within Optimizer
Download the Dynamic Time document
Watch the Dynamic Time video
Attempt Network Optimization exercise
Attempt Production Optimization exercise
Making sure that production data lists are correctly marked within a model is a key step to setting up and using ALM. This guide provides a solution for revising your model to allow a list to be tagged as a production data list. Please note: this solution doesn’t work if there are hard-coded references on non-composite summary items. For more information on working with production lists and ragged hierarchies, please visit Production lists and ragged hierarchies logic.
The issue arises as a model administrator needs to tag a production data list, but there are hard-coded references in the model that won’t allow the person to do so. When this occurs and the model administrator tries to tag it as a production list, they will get a warning similar to this:
See Formula Protection for more details.
To fix this issue, all direct formula references to production data lists need to be changed to be indirect references to lists using either LOOKUPs or Boolean formatted conditional logic. Below, you will find a step-by-step guide to replacing these formulas.
Identify formulas with hard-coded references
There is now an easy way to identify all of the formulas which are hard-coded to production data lists.
Check the 'Referenced in Formula' column in the General Lists section.
This will show the line items where the list is used. Check the respective formula for hard-coded references. If there are no hard-coded references, then it is OK to check the list as a production data list. This is the recommended approach, as just setting the lists without prior checking may lead to a rollback error being generated, which could be time-consuming for large models (as well as frustrating).
It is possible to just export the General Lists grid to help where there are multiple references for the same list and then use formulas and filters to identify all offenders in the same effort. This option will save significant amounts of time if there are many line items that would need to be changed.
You are looking for direct references on the list members:
[SELECT: List Name.list member]
ITEM(List Name) = List Name.List member
The following constructs are valid, but not recommended, as any changes to the names or codes could change the result of calculations:
IF CODE(ITEM(List Name))=
IF NAME(ITEM(List Name))=
After following those steps, you should have a list of all of the line items that need to be changed in the model in order for the production data list to be checked. Please note: there may still be list properties that have hard-coded references to items. You will need to take note of these as well, but as per D.I.S.C.O. (Best practice for Module design), we recommend that list properties are replaced with line items in System modules.
Replacing model formulas:
The next step is to replace these formulas within the model. For this, there are two recommended options.
The first option (Option 1 below) is to replace your SELECT statements with a LOOKUP formula that is referencing a list drop-down. Use this option when there are 1:1 mappings between list items and your formula logic. For example, if you were building out a P&L variance report and needed to select from a specific revenue account, you might use this option.
The second option (Option 2 below) for replacing these formulas is to build a logic module that allows you to use Booleans to select list items and reference these Boolean fields in your formulas. Use this option when there is more complex modeling logic than a 1:1 mapping. For example, you might use this option if you are building a variance report by region and you have different logic for all items under Region 1 (ex: budget – actual) than the items under Region 2 (ex: budget – forecast).
(Option 1) Add List Selections module to be used in LOOKUPs for 1:1 mappings:
From here, you should make a module called List Selections, with no lists applied to it and a line item for each list item reference previously used in the formulas that will be changed. Each of these line items should be formatted as the list that you are selecting to be production data.
Afterward, you should have a module that looks similar to this. An easy and effective way to stay organized is to partition and group your line items of similar list formats into the same sections, with a section header line item formatted as No Data and a style of "Heading 1."
After the line items have been created, the model administrator should use the list drop-downs to select the appropriate items which are being referenced. As new line items are created in a standard mode model, the model administrator will need to open the deployed model downstream to reselect or copy and paste the list formatted values in this module since this is considered production data.
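As an illustrative sketch (the line items and the Accounts list follow the P&L example in this article, but are assumptions), the List Selections module might contain:

```
Line Item               Format           Selected Value
Select Product Sales    List: Accounts   Accounts.Product Sales
Select Services Sales   List: Accounts   Accounts.Services Sales
```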
Remove hard-coding and replace with LOOKUPs:
Once you have created the List Selections module with all of the correct line items, you will begin replacing old formulas, which you’ve identified in Excel, with new references.
For formulas where there is a SELECT statement, you will replace the entire SELECT section of the formula with a LOOKUP to the correct line item in the list selections.
Old Formula = Full PL.Amount[SELECT: Accounts.Product Sales]
New Formula = Full PL.Amount[LOOKUP: List Selections.Select Product Sales]
For formulas of the form IF ITEM(List Name) = List Name.List member, you will replace the section of the formula after the ‘=’ with a direct reference to the correct line item in List Selections.
Old Formula = IF ITEM(Accounts) = Accounts.Product Sales THEN Full PL.Amount ELSE 0
New Formula = IF ITEM(Accounts) = List Selections.Select Product Sales THEN Full PL.Amount ELSE 0
(Option 2) Modeling for complex logic and many-to-many relationships:
In the event that you are building more complex modeling logic, you should start by building Boolean references that you can use in your formulas. To accomplish this, create a new module with a Boolean line item for each logic type that you need. Sticking with the example above, if you need to build a variance report with different logic depending on the region, start by creating a module dimensioned by region, with a different line item for each piece of logic you need, similar to the view below:
Once you have the Boolean module set up, you can then change your hard-coded formulas to reference these Boolean formatted line items to write your logic. The formula may look similar to this:
IF Region Logic.Logic 1 THEN logic1 ELSE IF Region Logic.Logic 2 THEN logic2 ELSE IF Region Logic.Logic 3 THEN logic3 ELSE 0
Here is a screenshot of what the end result may look like:
This method can be used across many different use cases and will provide a more efficient way of writing complex formulas while avoiding hard-coding for production data lists.
Selecting production data list:
After all of the hard-coded formulas have been changed in the model, you can navigate back to the Settings tab, and open General Lists. In the Production Data column, check the box for the list that you want to set as a production data list.
Repeat for each list in the model that needs to be a production data list:
For each list in the model that you need to make a production data list, you can repeat the steps throughout this process to successfully remove all hard-coded list references.
PLANS is the new standard for Anaplan modeling—“the way we model.” It covers more than just formulas; it includes and evolves existing best practices around user experience and data hubs. It is a set of rules on the structure and detailed design of Anaplan models. This set of rules will provide both a clear route to good model design for the individual Anaplanner and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.
In defining the standard, everything we do will consider or be based around:
Performance – Use the correct structures and formula to optimize the Hyperblock
Logical – Build the models and formula more logically – See D.I.S.C.O. below
Auditable – Break up the formula for better understanding, performance, and maintainability
Necessary – Don’t duplicate expressions. Store and calculate data and attributes once and reference them many times. Don't have calculations on more dimensions than needed
Sustainable – Build with the future in mind, thinking about process cycles and updates
The standards will be based around three axes:
Performance - How do the structures and formula impact the performance of the system?
Usability/Auditability - Is the user able to understand how to interact with the functionality?
Sustainability - Can the solution be easily maintained by model builders and support?
We will define the techniques to use that balance on the three areas to ensure the optimal design of Anaplan models and architecture.
As part of model and module design, we recommend categorizing modules as follows:
Data – Data hubs, transactional modules, source data; reference everywhere
Inputs – Design for user entry, minimize the mix of calculations and outputs
System – Time management, filters, list attributes modules, mappings, etc.; reference everywhere
Calculations – Optimize for performance (turn summaries off, combine structures)
Outputs - Reporting modules, minimize data flow out
Why build this way?
Fewer repeated calculations
Optimized structures and formulas
Data and calculations reside in logical places
Model data flows can be easily understood
Model structure can be easily understood
Simplified formula (no need for complex expressions)
Formulas and structures are not repeated
Data is stored and calculated once, referenced many times, leading to efficient calculations
Models can be adapted and maintained more easily
Expansion and scaling simplified
Formula Optimization in Anaplan
Formula Structure for Performance
The Truth About Sparsity: Part 1
The Truth About Sparsity: Part 2
Data Hubs: Purpose and Peak Performance
To Version or Not to Version?
Best Practices for Module Design
Reduce Calculations for Better Performance
Dynamic Cell Access Tips and Tricks
Dynamic Cell Access - Learning App
Personal Dashboards Tips and Tricks
Time Range Application
Ask Me Anything (AMA) sessions
The Planual Rises
Thinking through the results of a modeling decision is a key part of ensuring good model performance—in other words, making sure the calculation engine isn’t overtaxed. This article highlights some ideas for how to lessen the load on the calculation engine.
Formulas should be simple; a formula that is nested or uses multiple combinations consumes valuable processing time. Writing a long, involved formula makes the engine work hard. Seconds count when the user is staring at the screen. Simple is better. Breaking up formulas and using other options helps keep processing speeds fast.
You must keep a balance when using these techniques in your models, so the guidance is as follows:
Break up the most commonly changed formulas
Break up the most complex formulas
Break up any formula whose purpose you can’t explain in one sentence
Formulas with many calculated components
The structure of a formula can have a significant bearing on the amount of calculation that happens when inputs in the model are changed. Consider the following example of a calculation for the Total Profit in an application. There are five elements that make up the calculation: Product Sales, Service Sales, Cost of Goods Sold (COGS), Operating Expenditure (Op EX), and Rent and Utilities. Each of the different elements is calculated in a separate module. A reporting module pulls the results together into the Total Profit line item, which is calculated using the formula shown below.

What happens when one of the components of COGS changes? Since all the source components are included in the formula, when anything within any of the components changes, this formula is recalculated. If there are a significant number of component expressions, this can put a larger overhead on the calculation engine than is necessary.
There is a simple way to structure the module to lessen the demand on the calculation engine. You can separate the input lines in the reporting module by creating a line item for each of the components and adding the Total Profit formula as a separate line item. This way, changes to the source data only cause the relevant line item to recalculate.
For example, a change in the Product Sales calculation only affects the Product Sales and the Total Profit line items in the Reporting module; Services Sales, Op EX, COGS and Rent & Utilities are unchanged. Similarly, a change in COGS only affects COGS and Total Profit in the Reporting module.
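As a sketch, the two constructions might look like this in the Reporting module (the source module names are assumptions, since the original formulas were shown as screenshots):

```
Single formula (any source change recalculates the whole line):
  Total Profit = Product Sales Calc.Value + Service Sales Calc.Value
      - COGS Calc.Value - Op EX Calc.Value - Rent & Utilities Calc.Value

Broken out (each input line recalculates independently):
  Product Sales    = Product Sales Calc.Value
  Service Sales    = Service Sales Calc.Value
  COGS             = COGS Calc.Value
  Op EX            = Op EX Calc.Value
  Rent & Utilities = Rent & Utilities Calc.Value
  Total Profit     = Product Sales + Service Sales - COGS - Op EX - Rent & Utilities
```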
Keep the general guidelines in mind. It is not practical to have every downstream formula broken out into individual line items.
Plan to provide early exits from formulas
Conditional formulas (IF/THEN) present a challenge for the model builder: what is the optimal construction of the formula that is not overly complicated or difficult to read and understand? The basic principle is to avoid making the calculation engine do more work than necessary. Try to set up the formula to finish its calculations as soon as possible.
Always put the condition that is most likely to occur first. That way the calculation engine can quit processing the expression at the earliest opportunity.
Here is an example that evaluates Seasonal Marketing Promotions:
The summer promotion runs for three months and the winter promotion for two months.
There are more months when there is no promotion, so this formula is not optimal and will take longer to calculate.
This is better, as the formula will exit after the first condition more frequently.
There is an even better way to do this. Following the principles from above, add another line item for no promotion.
And then the formula can become:
This is even better because the calculation for No Promo has already been calculated, and Summer Promo occurs more frequently than Winter Promo.
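The progression described above can be sketched as follows (the line item and condition names are illustrative, as the original formulas were shown as screenshots):

```
Not optimal (the most common case, no promotion, is evaluated last):
  IF Summer Promo? THEN Summer Promo ELSE IF Winter Promo? THEN Winter Promo ELSE 0

Better (the most frequent condition exits first):
  IF NOT Summer Promo? AND NOT Winter Promo? THEN 0
  ELSE IF Summer Promo? THEN Summer Promo
  ELSE Winter Promo

Best (No Promo is pre-calculated in its own line item, and Summer Promo
occurs more frequently than Winter Promo):
  IF No Promo? THEN 0
  ELSE IF Summer Promo? THEN Summer Promo
  ELSE Winter Promo
```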
It is not always clear which condition will occur more frequently than others, but here are a few more examples of how to optimize formulas:
FINDITEM formula The FINDITEM element of a formula works its way through the whole list looking for the text item, and if it does not find the referenced text, it returns blank. If the referenced text is blank, it also returns blank. Inserting a conditional expression at the beginning of the formula keeps the calculation engine from being overtaxed:
IF ISNOTBLANK(TEXT) THEN FINDITEM(LIST, TEXT) ELSE BLANK
IF ISBLANK(TEXT) THEN BLANK ELSE FINDITEM(LIST, TEXT)
Use the first expression if most of the referenced text contains data and the second expression if there are more blanks than data.
LAG, OFFSET, POST, etc. In some situations there is no need to lag or offset the data: for example, if the lag or offset parameter is 0, the value of the calculation is the same as that of the period in question. Adding a conditional at the beginning of the formula will help eliminate unnecessary calculations:
IF lag_parameter = 0 THEN Lineitem ELSE LAG(Lineitem, lag_parameter, 0)
IF lag_parameter <> 0 THEN LAG(Lineitem, lag_parameter, 0) ELSE Lineitem
Whether to use the first or the second formula depends on the most likely occurrence of 0s in the lag parameter.
Booleans Avoid adding unnecessary clutter for line items formatted as Booleans. There is no need to include the TRUE or FALSE expression, as the condition itself evaluates to TRUE or FALSE. Instead of:
IF Sales > 0 THEN TRUE ELSE FALSE
simply write:
Sales > 0
Dimension Order Affects Calculation Performance
Ensuring consistency in the order of dimensions will help improve the performance of your models. This consistency is relevant for modules and individual line items. Why does the order matter? Anaplan creates and uses indexes to perform calculations. Each cell in a module where dimensions intersect is given an index number.
Here are two simple modules dimensioned by Customer and Product. In the first module, Product comes first and Customer second, and in the second module, Customer is first and Product is second.
In this model, there is a third module that calculates revenue as Prices * Volumes.
Anaplan assigns indexes to the intersections in the module. Here are the index values for the two modules. Note that some of the intersections are indexed the same for both modules (Customer 1 and Product 1, Customer 2 and Product 2, and Customer 3 and Product 3), while the remainder of the cells have different index numbers. For example, Customer 1 and Product 2 is indexed with the value 4 in the top module and the value 2 in the bottom module.
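The index assignments described above can be sketched as follows for the 3 × 3 example (the index tables were originally shown as screenshots):

```
Module 1 (Product, Customer)       Module 2 (Customer, Product)
Intersection             Index     Intersection             Index
Product 1, Customer 1      1       Customer 1, Product 1      1
Product 1, Customer 2      2       Customer 1, Product 2      2
Product 1, Customer 3      3       Customer 1, Product 3      3
Product 2, Customer 1      4       Customer 2, Product 1      4
Product 2, Customer 2      5       Customer 2, Product 2      5
Product 2, Customer 3      6       Customer 2, Product 3      6
Product 3, Customer 1      7       Customer 3, Product 1      7
Product 3, Customer 2      8       Customer 3, Product 2      8
Product 3, Customer 3      9       Customer 3, Product 3      9
```

Here the diagonal intersections (Customer 1/Product 1, Customer 2/Product 2, Customer 3/Product 3) share the same index in both modules, while Customer 1/Product 2 is index 4 in Module 1 but index 2 in Module 2.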
The calculation is Revenue = Price * Volume.
To run the calculation, Anaplan performs the following operations by matching the index values from the two modules.
Since the index values are not aligned, the processor scans the index values to find a match before performing the calculation.
When the dimensions in the module are reordered, these are the index values:
The index values for each of the modules are now aligned. As line items of the same dimensional structure have an identical layout, the data is laid out linearly in memory, and the calculation process accesses memory in a completely linear and predictable way. The microprocessors and memory sub-systems Anaplan runs on are optimized to recognize this pattern of access and to pre-emptively fetch the required data.
How does the dimension order become different between modules? When you build a module, Anaplan uses the order that you drag the lists onto the Create Module dialog. The order is also dependent on where the lists are added. The lists that you add to the 'pages' area are first, then the lists that you add to the 'rows' area, and finally the lists added to the 'columns' area.
It is simple to re-order the lists and ensure consistency. Follow these steps:
On the Modules pane, (Model Settings>Modules) look for lists that are out of order in the Applies To column. Click the Applies To row that you want to re-order, then click the ellipsis.
In the Select Lists dialog, click OK.
In the Confirm dialog, click OK.
The lists will be in the order that they appear in General Lists.
When you have completed checking the list order in the modules, click the Line Items tab and check the line items. Follow steps 1 through 3 to re-order the lists.
Subsets and Line Item Subsets
One word of caution about Subsets and Line Item subsets. In the example below, we have added a subset and a Line Item Subset to the module:
The Applies To is as follows:
Clicking on the ellipsis, the dimensions are re-ordered to:
The general lists are listed in order first, followed by subsets and then line item subsets. You can still reorder the dimensions by double-clicking in the Applies To column and manually copying or typing the dimensions in the correct order.
Largest vs. Smallest?
This is the normal follow-up question and, unfortunately, the answer is "it depends." Through research, we have found that it depends on the data within the module. It can also get very confusing if subsets are used: the Customer list might be bigger than the Products list, but what if a subset of Customers is used that is smaller than Products?
Also, we don't advocate ordering the lists in the General Lists setting in size order; the lists should be ordered in hierarchical order top to bottom, so, by definition, that will be smallest to largest. So our advice is be consistent. Think about how you describe the problem. Does the business talk about Customer by Product, or Products for Customers? Agree to a convention, and stick to it.
The calculation performance only relates to the common lists between the source(s) and the target. The order of separate lists in one or other doesn’t have any bearing on the calculation speed.
Personal dashboards are a great new feature that enables end users to save a personalized view of a dashboard. To get the most out of this feature, here are a few tips and tricks.
Tidy Up Dashboards
Any change to a master dashboard (using the Dashboard Designer) will reset all personal views of a dashboard, so before enabling personal dashboards, take some time to ensure that the current dashboards are up to date:
Implement any pending development changes (including menu options).
Turn on the Dashboard Quick Access toolbar (if applicable).
Check and amend all text box headings and comments for size, alignment, spelling, and grammar.
Delete or disable any redundant dashboards to ensure end users don’t create personal views of obsolete dashboards.
Use Filters Rather Than Show/Hide
It’s best practice to use a filter rather than show and hide for the rows and/or columns on a grid.
This is now more beneficial because amending the items shown or hidden on a master dashboard will reset the personal views. For example, suppose you want to display just the current quarter of a timescale. You could manually show/hide the relevant periods, but, at quarter end when the Current Period is updated, the dashboard will need to be amended, and all those personal views will be reset. If you use a filter, referencing a time module, the filter criteria will update automatically, as will the dashboard. No changes are made to the master dashboard, and all the personal views are preserved.
Create a Communication and Migration Strategy
Inevitably there will be changes that must be made to master dashboards. To minimize the disruption for end users, create a communication plan and follow a structured development program. These can include the following:
Bundle up dashboard revisions into a logical set of changes.
Publish these changes at regular intervals (e.g., on a monthly cycle).
Create a regular communication channel to inform users of changes and the implications of those changes.
Create a new dashboard, and ask end users to migrate to the new dashboard over a period of time before switching off the old dashboard.
Application Lifecycle Management (ALM)
If you are using ALM, remember that any structural changes to master dashboards will reset all personal views of dashboards.
Reducing the number of calculations will lead to quicker calculations and improve performance. However, this doesn’t mean combining all your calculations into fewer line items, as breaking calculations into smaller parts has major benefits for performance.
Learn more about this in the Formula Structure article.
How is it possible to reduce the number of calculations? Here are three easy methods:
Turn off unnecessary Summary method calculations.
Avoid formula repetition by creating modules to hold formulas that are used multiple times.
Ensure that you are not including more dimensions than necessary in your calculations.
Turn off Summary method calculations
Model builders often include summaries in a model without fully thinking through whether they are necessary. In many cases, the summaries can be eliminated. Before we get to how to eliminate them, let's recap how the Anaplan engine calculates.
In the following example we have a Sales Volume line item that varies by the following hierarchies:
This means that from the detail values at SKU, City, and Channel level, Anaplan calculates and holds all 23 aggregate combinations in addition to the detail (24 blocks in total).
With the Summary option set to Sum, when a detailed item is amended, all the other aggregations in the hierarchies are also re-calculated. Selecting the None summary option means that no calculations happen when the detail item changes. The varying levels of hierarchies are quite often only there to ease navigation, and the roll-up calculations are not actually needed, so a number of redundant calculations may be being performed. The native summing of Anaplan is a faster option, but if all the levels are not needed it might be better to turn off the summary calculations and use a SUM formula instead.
For example, from the structure above, let’s assume that we have a detailed calculation for SKU, City, and Channel (SALES06.Final Volume). Let’s also assume we need a summary report by Region and Product, and we have a module (REP01) and a line item (Volume) dimensioned as such.
REP01.Volume = SALES06.Final Volume
is replaced with
REP01.Volume = SALES06.Final Volume[SUM: H01 SKU Details.Product, SUM: H02 City Details.Region]
The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy.
How do you know if you need the summary calculations? Look for the following:
Is the calculation or module user-facing?
If it is presented on a dashboard, then it is likely that the summaries will be needed. However, look at the dashboard views used. A summary module is often included on a dashboard with a detail module below; effectively, the hierarchy sub-totals are shown in the summary module, so the detail module doesn't need the summary calculations.
Detail to Detail
Is the line item referenced by another detailed calculation line item? This is very common, and if the line item is referenced by another detailed calculation the summary option is usually not required. Check the Referenced by column and see if there is anything referencing the line item.
Calculation and staging modules
If you have used the D.I.S.C.O. module design, you should have calculation/staging modules. These are often not user-facing and have many detailed calculations included in them. They also often contain large cell counts, which will be reduced if the summary options are turned off.
Can you have different summaries for time and lists?
The default option for Time Summaries is to be the same as that for the lists. You may only need the totals for the hierarchies, or just for the timescales. Again, look at the downstream formulas.
The best practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a Calculation module (from the D.I.S.C.O. design principles).
Avoid Formula Repetition
An optimal model will only perform a specific calculation once. Repeating the same formula expression multiple times will mean that the calculation is performed multiple times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (D.I.S.C.O.) and hold all the relevant calculations in a logical place. Then, if you need the calculation, you will know where to find it, rather than add another line item in several modules to perform the same calculation.
If a formula construct always starts with the same condition evaluation, evaluate it once and then refer to the result in the construct. This is especially true where the condition refers to a single dimension but is part of a line item that spans multiple dimension intersections, as in the example below:
START() <= CURRENTPERIODSTART() appears five times and similarly START() > CURRENTPERIODSTART() appears twice.
To correct this, include these time-related formulas in their own module and then refer to them as needed in your modules.
Remember, calculate once; reference many times!
Taking a closer look at our example, not only is the condition evaluation repeated, but the dimensionality of the line items is also more than required. The calculation only changes by the day, as per the diagram below:
But the Applies To here also contains Organization, Hour Scale, and Call Center Type.
Because the formula expression is contained within the line item formula, for each day the following calculations are also being performed:
And, as above, it is repeated in many other line items.
Sometimes model builders use the same expression multiple times within the same line item. To reduce this overcalculation, reference the expression from a more appropriate module; for example, Days of Week (dimensioned solely by day) which was shown above. The blueprint is shown below, and you can see that the two different formula expressions are now contained in two line items and will only be calculated by day; the other dimensions that are not relevant are not calculated.
Substitute the expression by referencing the line items shown above.
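As a sketch, the Days of Week module (dimensioned solely by day) holds the two conditions once, and the downstream line items reference the Boolean results instead of repeating the expressions (the line item names here are illustrative, since the original blueprint was shown as a screenshot):

```
Days of Week module (dimensioned by Day only):
  Historic Day? = START() <= CURRENTPERIODSTART()
  Future Day?   = START() > CURRENTPERIODSTART()

Downstream module (dimensioned by Day, Organization, Hour Scale, Call Center Type):
  before: IF START() <= CURRENTPERIODSTART() THEN Actual Calls ELSE Forecast Calls
  after:  IF Days of Week.Historic Day? THEN Actual Calls ELSE Forecast Calls
```

The condition is now evaluated once per day rather than once per intersection of every dimension.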
In this example, making these changes to the remaining lines in this module reduces the calculation cell count from 1.5 million to 1500.
Check the Applies To for your formulas, and if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality.
Details of known issues
Performance issues with long nested formulas
This issue arises when a long formula on time is needed as a result of nested intermediate calculations. If the model size does not prevent you from adding extra line items, it is better practice to create multiple intermediate line items and reduce the size of the formula, rather than nesting all intermediate calculations into one gigantic formula.
This applies to summary formulae (SUM, LOOKUP, SELECT).
Combining SUM and LOOKUP in the same line item formula can cause performance issues in some cases. If you have noticed a drop in performance after adding a combined SUM and LOOKUP to a single line item, then split it into two line items.
RANKCUMULATE causes slowness
A current issue with the RANKCUMULATE formula can mean that the time to open the model, including rollback, is up to five times longer than it should be. There is currently no suitable workaround. Our recommendation is to stay within the constraints defined in Anapedia.
SUM/LOOKUP with large cell count
A known issue with SUM/LOOKUP combinations within a formula can lead to slow model open and calculation times, particularly if the line item has a large cell count. Separating the formula into different line items reduces calculation time, as fewer cells need to recalculate the parts of a formula that only affect a subset of the data.
Example, where none of the line items applies to time or versions:
Y = X[SUM: R, LOOKUP: R]
Y Applies To: A, B
X Applies To: A, B
R Applies To: B (list-formatted: C)
The workaround is to add a new line item, intermediate, with its Applies To set to the format of R, and split the formula:
intermediate = X[SUM: R]
Y = intermediate[LOOKUP: R]
This issue is currently being worked on by Development, and a fix will be available in a future release.
Calculations are over non-common dimensions
Anaplan calculates more quickly if calculations are over common dimensions. Again, this is best seen in an example. If you have lists W and X, and:
Y = A + B
Y Applies To: W, X
A Applies To: W
B Applies To: W
this performs more slowly than:
Y = Intermediate
Intermediate = A + B
Intermediate Applies To: W
with all other dimensions the same as above. Similarly, you can substitute A and B above with a formula, e.g., SUM/LOOKUP calculations.
Cell history truncated
Currently, history generation has a time limit of 60 seconds set. The history generation is split into three stages with 1/3 of time allocated to each.
The first stage is to build a list of columns required for the grid. This involves reading all the history. If this takes more than 20 seconds, the user receives the message "history truncated after x seconds - please modify the date range," where x is how many seconds it took. No history is generated.
If the first stage completes within 20 seconds, it goes on to generate the full list of history.
In the grid only the first 1000 rows are displayed; the user must Export history to get a full history. This can take significant time depending on volume.
The same steps are taken for model and cell history. The cell history is generated from loading the entire model history and searching through the history for the relevant cell information. When the model history gets too large then it is currently truncated to prevent performance issues. Unfortunately, this can make it impossible to retrieve the cell history that is needed.
Make it real time when needed
Do not make it real time unless it needs to be.
By this we mean: do not have line items where users input data referenced by other line items unless they have to be. One way around this is to give users a data input section that is not referenced anywhere (or as little as possible), and then, say, at the end of the day when no users are in the model, run an import that updates the cells where the calculations are then done. This may not always be possible if the end user needs to see the calculations resulting from their inputs, but if you can limit these to just the calculations they need to see, and use imports during quiet times, this will still help.
We see this often when not all reporting modules need to be recalculated in real time. In many cases, these modules can happily be recalculated the day after.
Don't have line items that depend on other line items unnecessarily. This can prevent Anaplan from utilizing the maximum number of calculations it can perform at once, which happens when a line item's formula cannot be calculated because it is waiting on the results of other line items. A basic example can be seen with line items A, B, and C having the formulas:
A - no formula
B = A
C = B
Here, B is calculated first, and C is calculated after it. Whereas if the setup is:
A - no formula
B = A
C = A
B and C can be calculated at the same time. This also helps because, if line item B is not needed, it can be removed, further reducing the number of calculations and the size of the model. This needs to be considered on a case-by-case basis and is a trade-off between duplicating calculations and utilizing as many threads as possible. If line item B were referenced by a few other line items, it may indeed be quicker to keep it.
Summary cells often take processing time even if they are not actually recalculated because they must check all the lower level cells.
Reduce summaries to ‘None’ wherever possible. This not only reduces aggregations, but also the size of the model.
Dynamic Cell Access (DCA) controls the access levels for line items within modules. It is simple to implement and provides modelers with a flexible way of controlling user inputs. Here are a few tips and tricks to help you implement DCA effectively.
Access control modules
Any line item can be controlled by any other applicable Boolean line item. To avoid confusion over which line item(s) to use, it is recommended that you add a separate functional area and create specific modules to hold the driver line items. These modules should be named appropriately (e.g., Access – Customers > Products, Access – Time, etc.). The advantage of this approach is that the access driver can be used for multiple line items or modules, and the calculation logic is in one place. In most cases, you will probably want read and write access, so within each module it is recommended that you add two line items (Write? and Read?). If the logic is being set for Write?, then set the formula for the Read? line item to NOT Write? (or vice-versa). It may be necessary to add multiple line items to use for different target line items, but start with this as a default.
You may not need to create a module that mirrors the dimensionality of the line item you wish to control. For example, if you have a line item dimensioned by customer, product, and time, and you wish to make actual months read-only, you can use an access module just dimensioned by time. Think about what dimension the control needs to apply to and create an access module accordingly.
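A minimal sketch of such an access module, assuming we want past and current periods to be read-only (the module name and logic are illustrative):

```
Access – Time module (dimensioned by Time only):
  Write? = START() > CURRENTPERIODSTART()
  Read?  = NOT Write?
```

The Write? and Read? line items are then selected as the write and read access drivers in the blueprint of the target line item.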
What settings do I need?
There are three different states of access that can be applied: READ, WRITE, and INVISIBLE or hidden. There are two blueprint controls (read control and write control) and there are two states for a driver (TRUE or FALSE). The combination of these determines which state is applied to the line item. The following table illustrates the options:
Only the read access driver is set:

Read Access Driver    Target Line Item
TRUE                  Read
FALSE                 Invisible

Only the write access driver is set:

Write Access Driver   Target Line Item
TRUE                  Write
FALSE                 Read

Both read access and write access drivers are set:

Read Access Driver    Write Access Driver    Target Line Item
TRUE or FALSE         TRUE                   Write
TRUE                  FALSE                  Revert to Read*
FALSE                 FALSE                  Invisible
*When both access drivers are set, the write access driver takes precedence with write access granted if the status of the write access driver is true. If the status of the write access driver is false, the cell access is then taken from the read access driver status.
The settings can also be expressed in the following table:

                             WRITE ACCESS DRIVER
                             TRUE        FALSE
READ ACCESS DRIVER   TRUE    Write       Read
                     FALSE   Write       Invisible
Note: If you want to have read and write access, it is necessary to set both access drivers within the module blueprint.
Think about how you want the totals to appear. When you create a Boolean line item, the default summary option is None. This means that if you used this access driver line item, any totals within the target would be invisible. In most cases, you will probably want the totals to be read-only, so setting the access driver line item summary to Any will provide this setting. If you are using the Invisible setting to "hide" certain items and you do not want the end user to be able to deduce the hidden values, then it is best to use the All setting for the access driver line item. This means that the totals only show if all values in the list are visible; otherwise, the totals are hidden from view.
Note that this article uses a planning dashboard as an example, but many of these principles apply to other types of dashboards as well.
Building a useful planning dashboard always starts with getting a set of very clear user stories, which describe how a user should interact with the system.
The user stories need to identify the following:
What the user wants to do.
What data the user needs to see to perform this action.
What data the user wants to change.
How the user will check that changes made have taken effect.
If one or more of the above is missing in a user story, ask the product owner to complete the description. You can start the dashboard design and use it to obtain the answers, but it will likely change as more details arrive.
Product Owners Versus Designers
Modelers should align with product owners by defining concrete roles and responsibilities for each team member.
Product owners should provide what data users are expecting to see and how they wish to interact with the data, not ask for specific designs (this is the role of the modelers/designers).
Product owners are responsible for change management and should be extra careful when the dashboard/navigation is significantly different from what is currently being used (i.e., Excel®).
Pre-Demo Peer Review
Have a usability committee that:
Is made up of modeling peers outside the project and/or project team members outside of the modeling team.
Will host a mandatory gate-check meeting to review models before demos to product owners or users.
The committee is designed to ensure:
Best design by challenging modelers.
Consistency between models.
The function is clear.
Exceptions/calls to action are called out.
The best first impression.
Exception, Call to Action, Measure Impact
Building a useful planning dashboard will be successful if the dashboard allows users to highlight and analyze exceptions (issues, alerts, warning), take action and plan to solve these, and always visually check the impact of the change against a target.
Example: A dashboard is built for these two user stories that complement each other.
Story 1: Review all of my accounts for a specific region, manually adjust the goals and enter comments.
Story 2: Edit my account by assigning direct and overlay reps.
The dashboard structure should be made of:
Dashboard header: Short name describing the purpose of the dashboard at the top of the page in "Heading 1."
Groupings: A collection of dashboard widgets.
Call to action.
Main grid: The central component of the dashboard or grouping.
Info grid(s): Specific to one item of the main grid.
Info charts: Specific to one item of the main grid.
Specific action buttons: Specific to one item of the main grid.
Main charts: Covers more than one item of the main grid.
Individual line items: Specific to one item of the main grid, usually used for commentaries.
A dashboard can have more than one of these groupings, but all elements within a grouping need to answer the needs of the user story.
Use best judgment to determine the number of groupings added to one dashboard. A maximum of two to three groupings is reasonable. Past this range, consider building a new dashboard. Avoid having a "does it all" dashboard, where users keep scrolling up and down to find each section.
If users ask for a table of contents at the top of a dashboard, it's a sign that the dashboard has too much functionality and should be divided into multiple dashboards.
Call to Action
Write a short sentence describing the task to be completed within this grouping. Use the Heading 2 format.
The main grid is the central component of the dashboard, or of the grouping. It's where the user will spend most of their time.
This main grid will display the KPIs needed for the task (usually in the columns) and will display one or more other dimension in the rows.
Warning: Users may ask for 20+ KPIs and need these KPIs to be broken down by many dimensions, such as by product, actual/plan/variance, or by time. It is critical to keep the main grid as simple and decluttered as possible. Avoid the "data jungle" syndrome; users ask for data jungles simply because that is what they are used to in Excel.
Tips to avoid data jungle syndrome:
Make a careful KPI selection (KPIs are usually the line items of a module).
Display the most important KPIs ONLY, which are those needed for decision making. Hide the others for now.
A few criteria for selecting a KPI for the main grid are:
The KPI is meant to be compared across the dimension items in the rows, or across other KPIs.
Viewing the KPI values for all of the rows is required to make the decision.
The KPI is needed for sorting the rows (except on row name).
Besides not matching the above criteria, a KPI should not be selected for the main grid when it is needed in more of a drill-down mode: the KPI provides valid extra information, but only for the selected row of the dashboard, and does not need to be displayed for all rows.
These "extra info" KPIs should be displayed in a different grid, which will be referred to as "info grid" in this document. Take advantage of the row/column sync functionality to provide a ton of data in your dashboard but only display data when requested or required.
Design your main grid in such a way that it does not require the user to scroll left and right to view the KPIs:
Efficiently select KPIs.
Use the column header wrap.
Set the column size accordingly.
It is OK to have users scroll vertically on the main grid. Only display 15 to 20 rows at a time when there are numerous rows, as well as other groupings and action buttons, to display on the same dashboard.
Use sorts and a filter to display relevant data.
Sort Your Grid
Always sort your rows. Obtain the default sort criteria via user stories. If no special sort criteria are called out, use an alphanumeric sort on the row name. This will require a specific line item.
Train end users to use the sort functionality.
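Where no such line item exists, a simple sort line item can be added to a system module for the list (the Accounts list name here is illustrative):

```
Sort Name = NAME(ITEM(Accounts))
```

Sorting the view on this text line item gives the alphanumeric order described above.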
Filter Your Grid
Ask end users or product owners what criteria to use to display the most relevant rows. It could be:
Those that make 80 percent of a total. Use the RankCumulate function.
Those that have been modified lately. This requires a process to attach a last modified date to a list item, updated daily via a batch mode.
When the main grid allows item creation, always display the newly created first.
If end users need to apply their own filter values on some attributes of the list items, such as filter to show only those who belong to EMEA or those whose status is "in progress," build pre-set user-based filters.
Use the new Users list.
Create modules dimensioned by user with line items (formatted as lists) to hold the different criteria to be used.
Create a module dimensioned by Users and the list to be filtered. In this module resolve the filter criteria from above against the list attributes to a single Boolean.
Apply this filter in the target module.
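The four steps above can be sketched as follows (all list, module, and line item names are illustrative):

```
Filter Criteria module (dimensioned by Users), list-formatted line items:
  Region Choice   (formatted on the Region list)
  Status Choice   (formatted on the Status list)

User Filter module (dimensioned by Users and Accounts), Boolean line item:
  Show? = (ISBLANK(Filter Criteria.Region Choice)
           OR Accounts Details.Region = Filter Criteria.Region Choice)
      AND (ISBLANK(Filter Criteria.Status Choice)
           OR Accounts Details.Status = Filter Criteria.Status Choice)
```

Show? is then applied as the filter on the target module view; a blank choice is treated as "no filter" for that criterion.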
Educate the users to use the refresh button, rather than create an "Open Dashboard" button.
Color Code Your Grid
Use colored cells to call attention to areas of a grid, such as green for positive and red for negative.
Color code cells that specifically require data entry.
Display the Full Details
If a large grid is required, something like 5k lines and 100 columns, then:
Make it available in a dedicated full-screen dashboard via a button available from the summary dashboard, such as an action button.
Do not add such a grid to a dashboard where KPIs, charts, or multiple grids are used for planning.
These dashboards are usually needed for ad-hoc analysis and data discovery, or random verification of changes, and can create a highly cluttered dashboard.
The main chart goes hand-in-hand with the main grid. Use it to compare one or more of the KPIs of the main grid across the different rows.
If the main grid contains hundreds or thousands of items, do not attempt to compare this in the main chart. Instead, identify the top 20 rows that really matter or that make most of the KPI value and compare these 20 rows for the selected KPI.
Location: Directly below or to the right of main display grid; should be at least partially visible with no scrolling.
Synchronization with a selection of KPI or row of the main display grid.
Should be used for:
Comparison between row values of the main display grid.
Displaying differences when the user makes a change or restatement, or inputs data.
In cases where a chart requires 2–3 additional modules to be created: Implement and test performance.
If no performance issues are identified, keep the chart.
If performance issues are identified, work with product owners to compromise.
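The "top 20 rows" selection described above is a simple rank-and-truncate step. A Python sketch, with hypothetical territory names and KPI values:

```python
def top_n_for_chart(kpi_values, n=20):
    """Rank rows by KPI value and keep only the top n for the main chart."""
    ranked = sorted(kpi_values.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:n])

# Hypothetical KPI values for 100 territories.
revenue = {f"Territory {i}": i * 10 for i in range(1, 101)}
top = top_n_for_chart(revenue, n=3)
print(top)
```

In Anaplan, the same result is typically achieved with a RANK-based Boolean line item used to filter the chart's source view.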
These are the grids that will provide more details for an item selected on the main grid. If territories are displayed as rows, use an info grid to display as many line items as necessary for this territory. Avoid cluttering your main grid by displaying all of these line items for all territories at once. This is not necessary and will create extra clutter and scrolling issues for end users.
Location: Below or to the right of the main display grid.
Synced to selection of list item in the main display grid.
Should read vertically to display many metrics pertaining to list item selected.
Similar to info grids, an info chart is meant to compare one or more KPIs for a selected item in the rows of the main grid.
These should be used for:
Comparison of multiple KPIs for a single row.
Comparison or display of KPIs that are not present on the main grid, but are on info grid(s).
Comparing a single row's KPI(s) across time.
Place it on the right of the main grid, above or below an info grid.
Specific Action Buttons
Location: Below main grid; Below the KPI that the action is related to, OR to the far left/right - similar to "checkout."
Should be an action that is to be performed on the selected row of the main grid.
Can be used for navigation as a drill down to a detailed view of a selected row/list item.
Should NOT be used as lateral navigation between dashboards; Users should be trained to use the left panel for lateral navigation.
Individual Line Items
Serve as a call out of important KPIs or action opportunities (e.g., user setting container for explosion, Container Explosion status).
If actions taken by users require additional collaboration with other users, publish the relevant line item(s) outside the main grid to give them particular emphasis.
Call to action.
Serves as a header for a grouping.
Short sentence describing what the user should be performing within the grouping.
Formatted in "Heading 2."
Directly located next to a drop-down, input field, or button where the function is not quite clear.
No more than 5–6 words.
Formatted in "instructions."
Use Tooltips on modules and line items for more detailed instructions to avoid cluttering the dashboard.
A dashboard with grids that includes large lists that have been filtered and/or sorted can take time to open. The opening action can also become a blocking operation; when this happens, you'll see the blue toaster box showing "Processing....." when the dashboard is opening. This article includes some guidelines to help you avoid this situation.
Rule 1: Filter large lists by creating a Boolean line item.
Avoid the use of filters on text or non-Boolean formatted items for large lists on the dashboard. Instead, create a line item with the format type Boolean and add calculations to the line item so that the results return the same data set as the filter would. Combine multiple conditions into a single Boolean value for each axis.
This is especially helpful if you implement user-based filters, where the Boolean will be by the user and by the list to be filtered.
The memory footprint of a Boolean line item is 8x smaller than other types of line items.
Known issue: On an existing dashboard where a saved view is being modified by replacing the filters with a Boolean line item for filtering, you must republish it to the dashboard. Simply removing the filters from the published dashboard will not improve performance.
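Conceptually, Rule 1 replaces filter evaluation at display time with one precomputed Boolean per row. A Python sketch of combining two conditions into a single flag (the row data and conditions are hypothetical):

```python
# Hypothetical rows with two non-Boolean attributes that would otherwise
# be evaluated by the filter every time the dashboard opens.
rows = [
    {"name": "SKU-1", "status": "Active", "revenue": 1200},
    {"name": "SKU-2", "status": "Retired", "revenue": 900},
    {"name": "SKU-3", "status": "Active", "revenue": 40},
]

# Combine all conditions into one precomputed Boolean per row, the
# equivalent of a Boolean-formatted line item used as the only filter.
for row in rows:
    row["show"] = row["status"] == "Active" and row["revenue"] >= 100

visible = [row["name"] for row in rows if row["show"]]
print(visible)
```

The dashboard filter then tests a single Boolean per cell rather than re-evaluating text or numeric conditions.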
Rule 2: Use the default Sort.
Use sort carefully, especially on large lists. Opening a dashboard that has a grid where a large list is sorted on a text formatted line item will likely take 10 seconds or more and may be a blocking operation.
To avoid using the sort, make sure your list is sorted by default according to the criteria you need. If it is not sorted, you can still make the grid usable by reducing the number of items shown using a user-based filter.
Rule 3: Reduce the number of dashboard components.
There are times when the dashboard includes too many components, which slows performance. Avoid horizontal scrolling and try to keep vertical scrolling to no more than three pages deep. Once you exceed these limits, consider splitting the components across multiple dashboards. Doing so will help both performance and usability.
Rule 4: Avoid using large lists as page selectors.
If you have a large list and use it as a page selector on a dashboard, that dashboard will open slowly. It may take 10 seconds or more. The loading of the page selector takes more than 90% of the total time.
Known issue: If a dashboard grid contains list-formatted line items, the contents of page selector drop-downs are automatically downloaded until the size of the list meets a certain threshold; once this size is exceeded, the download happens on demand, in other words, when a user clicks the drop-down. The issue is that when Anaplan requests the contents of list-formatted cell drop-downs, it also requests the contents of ALL other drop-downs, INCLUDING page selectors.
Recommendation: Limit the page selectors on medium to large lists using the following tips:
a) Make the page selector available in one grid and use the synchronized paging option for all other grids and charts. No need to allow users to edit the page in every dashboard grid or chart.
b) For multi-level hierarchies, consider creating a separate dashboard with multiple modules (just with the list entries) to enable the users to navigate to the desired level. They can then navigate back to the main planning dashboard. This approach also de-clutters the dashboards.
c) If the dashboard elements don't require the use of the list, you should publish them from a module that doesn't contain this list. For example, floating page selectors for time or versions, or grids that are displayed as rows/columns-only should be published from modules that do not include the list.
Why? The view definitions for these elements will contain all the source module's dimensions, even if they are not shown, and so will carry the overhead of populating the large page selector if it was present in the source.
Rule 5: Split formatted line items into separate modules.
Having many line items (that are formatted as lists) in a single module displayed on a dashboard can reduce performance as all of the lists are stored in memory (similar to Rule 4). It is better, if possible, to split the line items into separate modules. Remember from above, try not to have too many components on a dashboard; only include what the users really need and create more dashboards as needed.
Assume the following Non-Composite list, ragged hierarchy, needs to be set to Production Data.
We need to refer to the ultimate parent to define the logic calculation. In the example, we have assumed that children of Parent 1 and Parent 3 need to return the 'logic 1' value from the constants module below, and those under Parent 2 return 'logic 2,' and we apportion the results based on the initial data of the children.
Data /
IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 1'
THEN Data[SELECT: 'Non-Composite List'.'Parent 1']
ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 2'
THEN Data[SELECT: 'Non-Composite List'.'Parent 2']
ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 3' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Child 3.1'
THEN Data[SELECT: 'Non-Composite List'.'Parent 3']
ELSE 0
Select Proportion *
IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 1' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 3' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Child 3.1'
THEN Parent Logic Constants.'Logic 1'
ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 2'
THEN Parent Logic Constants.'Logic 2'
ELSE 0
These “hard references” will prevent the list from being set as a production list.
Create a Parents Only list (this could be imported from the Non-Composite list). As we don't need the sub-level parents, we do not need to include 'Child 3.1,' even though it is technically a parent.
To perform the proportion calculation without SELECT, a couple of intermediate modules are needed:
Parent Mapping module
This module maps the Non-Composite parents to the Parents Only list. Due to the different levels in the hierarchy, we need to check for sub levels and use the parent of Child 3.1. In this example, the mapping is automatic because the items in the Parents Only list have the same name as those in the Non-Composite list. The mapping could be a manual entry if needed.
The formula and “applies to” are:
Non Composite Parent: PARENT(ITEM('Non-Composite List')) Applies to: Non-Composite List
Parent of Non Composite Parent: PARENT(Non Composite Parent) Applies to: Non-Composite List
Parent to Map: IF ISNOTBLANK(Parent of Non Composite Parent) THEN Parent of Non Composite Parent ELSE Non Composite Parent Applies to: Non-Composite List
Parents Only List: FINDITEM(Parents Only List, NAME(Parent to Map)) Applies to: Non-Composite List
Parents Only subtotals
An intermediary module is needed to hold the subtotals.
Calculation: Parent Logic Calc.Data[SUM: Parent Mapping.Parents Only List]
Parent Logic? Module
We now define the logic for the parents in a separate module.
Add Boolean line items for each of the “logic” types.
Then you can refer to the logic above in the calculations.
Data / Parents Only Subtotals.Calculation[LOOKUP: Parent Mapping.Parents Only List]
Lookup Proportion *
IF Parent Logic?.'Logic 1?'[LOOKUP: Parent Mapping.Parents Only List]
THEN Parent Logic Constants.'Logic 1'
ELSE IF Parent Logic?.'Logic 2?'[LOOKUP: Parent Mapping.Parents Only List]
THEN Parent Logic Constants.'Logic 2'
ELSE 0
The list can now be set as a production list as there are no "hard references". The formulas are also smaller, simpler, and more flexible should the logic need to change. If Parent 3 needs to use Logic 2, it is a simple change to a checkbox.
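The whole technique (map each item to its ultimate parent, subtotal the data by that parent, then apportion using the parent's logic constant) can be sketched in Python. The hierarchy, data values, and constants below are hypothetical stand-ins for the article's example:

```python
# Hypothetical hierarchy mirroring the article's example: child -> parent.
# Child 3.1 is a sub-level parent, making the hierarchy ragged.
parent_of = {
    "Child 1.1": "Parent 1", "Child 1.2": "Parent 1",
    "Child 2.1": "Parent 2",
    "Child 3.1": "Parent 3", "Child 3.1.1": "Child 3.1",
}
data = {"Child 1.1": 10, "Child 1.2": 30, "Child 2.1": 20,
        "Child 3.1": 0, "Child 3.1.1": 40}
# "Parent Logic?" flags: which logic applies to each top-level parent.
logic_flags = {"Parent 1": "Logic 1", "Parent 2": "Logic 2", "Parent 3": "Logic 1"}
constants = {"Logic 1": 1.5, "Logic 2": 2.0}  # Parent Logic Constants

def top_parent(item):
    """Parent Mapping: walk up until the next hop would leave the
    hierarchy, returning the top-level (Parents Only) ancestor."""
    while parent_of.get(parent_of.get(item)) is not None:
        item = parent_of[item]
    return parent_of.get(item, item)

# Parents Only Subtotals: SUM of data by mapped top-level parent.
subtotals = {}
for item, value in data.items():
    parent = top_parent(item)
    subtotals[parent] = subtotals.get(parent, 0) + value

# Final step: proportion of the parent subtotal times the logic constant.
result = {
    item: value / subtotals[top_parent(item)] * constants[logic_flags[top_parent(item)]]
    for item, value in data.items()
}
print(result)
```

Changing a parent's logic is then a one-entry edit to `logic_flags`, just as flipping the Boolean checkbox is in the Anaplan module.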
Little and Often
Would you spend weeks on your budget submission spreadsheet or your college thesis without once saving it?
The same should apply to making developments and setting revision tags. Anaplan recommends that during the development cycle, you set revision tags at least once per day. We also advise testing the revision tags against a dummy model if possible.
The recommended procedure is as follows:
After a successful sync to your production model, create a dummy model using the ‘Create from Revision’ feature. This will create a small test model with no production list items.
At the end of each day (as a minimum), set a revision tag and attempt to synchronize the test model to this revision tag. The whole process should only take a couple of minutes.
Repeat step 2 until you are ready to promote the changes to your production model.
Why Do We Recommend This?
There are a very small number of cases where combinations of structural changes cause a synchronization error (99 percent of synchronizations are successful). The Anaplan team is actively working to provide a resolution within the product, but in most cases, splitting changes between revision tags allows the synchronization to complete. In order to understand the issue when a synchronization fails, our support team needs to analyze the structural changes between the revisions.
Setting revision tags frequently provides the following benefits:
The number of changes between revisions is reduced, resulting in easier and faster issue diagnosis.
It provides an early warning of any problems so that someone can investigate them before they become critical.
The last successful revision tag allows you to promote some, if not most, of the changes if appropriate.
In some cases, a synchronization may fail initially, but when applying the changes in sequence the synchronization completes. Using the example from above:
Synchronizations to the test model for R1, R2, and R3 were all successful, but R3 fails when trying to synchronize to production.
Since the test model successfully synchronized from R2 and then R3, you can repeat this process for the production model.
The new comparison report provides clear visibility of the changes between revision tags.
A large, complex model (for example, 10 billion cells) can take up to 10 minutes to load the first time it is used after 60 minutes of inactivity.
The only way to reduce the load time is to identify which formulas take the most time. This requires Anaplan L3 support (ask for a Model Opening Analysis), but you can also reduce the time yourself by applying the formula best practices listed above.
Another lever is list setup: text properties on a list can increase load times, and subsets on lists can disproportionately increase load times. It is best practice not to use list properties, but instead to house the attributes in a System module dimensioned by the list. See Best practice for Module design for more details.
A model will save when the number of changes made by end users exceeds a certain threshold. This action can take several minutes and is a blocking operation. Administrators have no control over model save beyond formula optimization and reducing model complexity. Using ALM and Deployed mode increases this threshold, so use Deployed mode whenever possible.
A model will roll back in some cases of an invalid formula, or when a model builder attempts to adjust a setting that would result in an invalid state. In some large models, the rollback takes approximately the time needed to open the model plus the time to reapply up to 10 minutes' worth of accumulated changes, followed by a model save.
The recommendation is to use ALM with a DEV model whose size does not exceed 500 million cells and whose production lists are limited to a few dozen items, and TEST and PROD models at full size with the large lists. Since no formula editing happens in TEST or PROD, those models will never roll back after a user action. The DEV model can still roll back, but this takes only a few seconds if the model is small.
This article describes the technique to dynamically filter specific levels of a hierarchy on a dashboard and provides a method to select and visualize hierarchies on a dashboard.
This article explains how to configure the calculation of the level of a list in a hierarchy in order to apply specific calculations (custom summary) or filters by level on a dashboard.
In this example, we have an organization hierarchy of four levels (Org L1 to Org L4). For each item in the hierarchy, we want to calculate a module value that returns the associated level, to be displayed on a dashboard.
Notes and Platform Context
The technique addresses a specific limitation within dashboards where a composite hierarchy's list level cannot be selected if the list is synchronized to module objects on the dashboard.
The technique uses a static module based on the levels of the composite structure used for filtering of the object on a dashboard.
The technique is based on utilizing the Summary Method "Ratio" on line items corresponding to the list levels of the composite hierarchy to define the values of the filtering line items. Note that this method is not a formula calculation, but rather a use of the Summary Method Ratio on each line item applied to the composite hierarchy.
In this example, a four-level composite hierarchy list is used. The hierarchy in this example has an asymmetrical number of leaf items per parent:
Defining the Level of Each List
In order to calculate the level of each item in each of the lists L1 - L4, we need to create a module that calculates the associated level of each member by this technique:
1) Create as many line items as levels of hierarchy, plus one technical line item.
2) Configure the settings in the blueprint of the line items of this filtering module, per this example and table:
Line item: Summary method setting
Technical line item*: Formula
Level or L4 (lowest level): Ratio (L4 / Technical)
L3: Ratio (L3 / Technical)
L2: Ratio (L2 / Technical)
L1: Ratio (L1 / Technical)
When applying these settings, the filtering module looks like this:
*Note that the Technical line item's Summary method uses Formula. Alternatively, the Minimum Summary method can be used, but it will return an error when a level of the hierarchy has no children and the calculated level is blank.
The filtering module with Summary method applied results:
Use the line item at the lowest level—Level (or L4) (LOWEST)—as the basis of filters or calculations.
Applying a Filter on Specific Levels in Case of Synchronization
When synchronization is enabled, the option “Select levels to show” is not available. Instead, a filter based on the level calculated can be used to show only specific levels.
In the example, we apply a filter that matches levels 4 and 1:
The following filtered dashboard result is achieved by using the composite hierarchy as a page selector:
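The level calculation and the level-based filter can be sketched outside Anaplan as follows; the hierarchy items below are hypothetical, and the Ratio-summary trick is replaced by a simple walk up the parent chain:

```python
# Hypothetical four-level composite hierarchy (child -> parent), with an
# asymmetrical number of leaf items per parent, like the article's example.
parent_of = {
    "L2 a": "L1 Total", "L2 b": "L1 Total",
    "L3 a1": "L2 a", "L3 b1": "L2 b",
    "L4 a1x": "L3 a1", "L4 a1y": "L3 a1", "L4 b1x": "L3 b1",
}

def level(item):
    """Count hops to the top of the hierarchy; a top-level item is level 1."""
    depth = 1
    while item in parent_of:
        item = parent_of[item]
        depth += 1
    return depth

all_items = set(parent_of) | set(parent_of.values())
levels = {item: level(item) for item in all_items}

# Filter equivalent to the dashboard example: show only levels 4 and 1.
visible = sorted(item for item, lv in levels.items() if lv in (4, 1))
print(visible)
```

In Anaplan, `levels` corresponds to the filtering module's lowest-level line item, and `visible` to the Boolean filter applied on the dashboard grid.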
Imagine This Scenario:
You are in the middle of making changes in your development model and have been doing so for the last few weeks. The changes are not complete and are not ready to synchronize. However, you just received a request for an urgent fix from the user community that is critical for the forthcoming monthly submission. What do you do?
What you don’t want to do is take the model out of deployed mode! You also don’t want to lose all the development work you have been doing.
Don’t worry! Following the procedure below will ensure you can apply the hotfix quickly and keep your development work.
The following diagram illustrates the procedure:
It’s a Two-Stage Process:
Roll the development model back to a version that doesn’t contain any changes (is the same as production), and apply the hotfix to that version.
Add a new revision tag to the development model as a temporary placeholder. (Note the History ID of the last structural change as you'll need it later.)
On the development model, use History to restore to a point where development and production were identical (before any changes were made in development).
Apply the hotfix.
Save a new revision of the development model.
Sync the development model with the production model.
Production now has its hotfix.
Restore the changes to development and apply the hotfix.
On the development model, use the History ID from Stage 1 – Step 1 to restore to the version containing all of the development work (minus the hotfix).
Reapply the hotfix to this version of development.
Create a new revision of the development model.
Development is now back to where it was, with the hotfix now applied.
When your development work is complete, you can promote the new version to production using ALM best practice.
The procedure is documented in the Fixing Production Issues Anapedia article.