There are several business use cases that require the ability to compute distances between pairs of locations:

Optimizing sales territory realignment
Logistics cost optimization
Transportation industry passenger revenue or cost per mile
Franchise territory design
Brick-and-mortar market area analysis (stores, hotels, bank branches, …)
Optimizing inventory among geographic Distribution Centers

At their core, each of these requires knowing how far apart a pair of sites is. This article provides step-by-step instructions for creating a dashboard where users select a location and set a market area radius; the dashboard then shows all population centers in that vicinity along with some demographic information.

Doing the Math: Trig functions in Anaplan

The distance between two latitude-longitude points (lat1, lon1) and (lat2, lon2) requires solving this equation:

Radius of Earth * ACOS( COS(90 - lat1) * COS(90 - lat2) + SIN(90 - lat1) * SIN(90 - lat2) * COS(lon1 - lon2) )

This formula works quite well. We know the Earth isn't flat, but it isn't a perfect sphere either: our home world bulges a bit at the equator and is flattened a bit at the poles. But for most purposes other than true rocket science, this equation gives sufficiently accurate results.

Unfortunately, Anaplan doesn't have the functions SIN, COS, or ACOS built in, and the usual workaround, lookup modules, simply won't do in this situation because we need much higher precision than lookups can practically handle. But don't despair: it is possible to calculate trig functions to eight decimal places of precision using nothing more sophisticated than Anaplan's POWER() function and some ingenuity. In the following demonstration model, the trig functions needed for the distance calculation have been built for you using equations called Taylor Series expansions.
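For readers who want to sanity-check the formula outside Anaplan, here is the same equation in Python using the built-in trig functions. The coordinates in the usage line are approximate and purely illustrative:

```python
import math

EARTH_RADIUS_KM = 6371  # mean Earth radius, the same constant the model stores

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the spherical law of cosines (the article's formula).

    The formula is written with colatitudes (90 - lat) in degrees; Python's math
    functions work in radians, so convert first.
    """
    a = math.radians(90 - lat1)        # colatitude of point 1
    b = math.radians(90 - lat2)        # colatitude of point 2
    dlon = math.radians(lon1 - lon2)   # longitude difference
    x = math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b) * math.cos(dlon)
    x = max(-1.0, min(1.0, x))         # clamp floating-point noise outside [-1, 1]
    return EARTH_RADIUS_KM * math.acos(x)

# Approximate coordinates for Boston (02134) and New York City, illustrative only
print(round(distance_km(42.355, -71.121, 40.713, -74.006)))
```

This is exactly the calculation the model reproduces with POWER()-based Taylor series, since Anaplan has no native COS, SIN, or ACOS.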
Step-by-Step Construction

Here's a small educational project. In our example model, the user will select one post code, enter a market area radius value, and click a button. Changing the selected post code updates rows in a filtered module, so we need to refresh the dashboard to see the result. The dashboard will identify all post codes within that radius and display their population, growth rate, median age, and distance.

Step 1

Get U.S. postal code demographic and geolocation data. Our model will use Census Zip Code Tabulation Areas (ZCTAs). ZCTAs are essentially postal Zip Codes adjusted to remove unpopulated Zip Codes that exist only for PO Boxes, and to combine some codes where that solves practical census tallying problems. There are about 32,000 ZCTAs and 43,000 Zip Codes in the U.S.

Download the US.zip file from http://download.geonames.org/export/zip/ That file provides a full list of US Zip Codes and their county, state, latitude, and longitude. Other countries' post codes are also listed in that folder.

Download demographic data by post code from the US Census Bureau report DP05, choosing the 5-digit ZCTA geographic option for the entire US. To calculate growth rate, you will need datasets for both the most recent year available and for the fifth year prior to that (2017 and 2012 at the time this was written).

Notes: Import maps in the next two steps will need some manipulation by concatenating fields to get nice-looking names (such as "Boston, MA 02134") and to get codes to match up among the lists. You'll need to either import to transaction modules or do this manipulation in Excel.

Step 2

Create a list named "Loc 3 - Post Codes". Set a top level member with a name like "Total Population Centers". It is generally a best practice to create a Clear action for any list, to be run before future list reloads.

Notes: For the purposes of this demonstration, a flat list of 5-digit codes is sufficient.
I found it helpful to roll up ZCTAs by state (Loc 1) and county (Loc 2). This is optional. I will leave "give friendly names to your list members and assign them to parents" as an exercise for the advanced reader.

Step 3

Create a module named "DATA: Loc 3 - Post Codes" dimensionalized by the list "Loc 3 - Post Codes" (no time, no versions).

Notes: There are a LOT of data fields in the tables you downloaded, and much more data is available in other Census Bureau products (gender, households, age details, income, …). Feel free to add line items for any census fields that you find useful. I found it helpful to pull the data into Excel and keep only the fields of interest to streamline the mapping process in Anaplan. Expect a few rejects due to mismatches between the Zip Code and ZCTA files: the geonames.org zip code list US.zip doesn't include Puerto Rico and the other island territories, while the Census data does. As a result, Census ZCTAs that begin with 006## and 009## will report that there is no matching list member. In a "real world" application, a significant effort goes into ensuring that data "ties out" by addressing issues like this. You may either ignore the small percentage of rejects (my sincere apologies to the people of Puerto Rico) or you may find and add those missing zip codes to your list. Your choice.

For this exercise, the module must contain, at minimum, these line items, all applying to 'Loc 3 - Post Codes':

Line item | Formula | Format
Latitude | (loaded via import) | Number
Longitude | (loaded via import) | Number
Total Population | (loaded via import) | Number
Total Population 5 yr prior | (loaded via import) | Number
Growth Rate | POWER(Total Population / 'Total Population 5 yr prior', 0.2) - 1 | Number
Median Age | (loaded via import) | Number
Median Age * Tot Pop | Median Age * Total Population | Number

Set the Summary properties as follows: 'Total Population', 'Total Population 5 yr prior', and 'Median Age * Tot Pop' aggregate by Sum. 'Growth Rate' aggregates by Formula.
'Median Age' aggregates by Ratio: 'Median Age * Tot Pop' / 'Total Population'.

Create import actions to load your downloaded data into "DATA: Loc 3 - Post Codes".

Step 4

Create a module named "INPUT: Globals". It holds four constants and two inputs as line items. There is no List, Time, or Version dimension. I put those line items' values into the Formula so users cannot change them. Line items are:

Line item | Formula | Format
UI | (section heading) | No Data
Select a Location | (user input) | List: Loc 3 - Post Codes
Market Area Radius (miles) | (user input) | Number
Constants | (section heading) | No Data
Earth Radius (km) | 6371 | Number
Pi | 3.141592654 | Number
km / mi | 1.609344 | Number
ACOS(2/3) | 0.588002604 | Number

Publish the "Select a Location" and "Market Area Radius (miles)" line items to a new dashboard with the name "Distance Demo".

Note: Distance calculations in kilometers are provided below. Feel free to adjust your model's inputs, outputs, and filters to the needs of your locale.

Step 5

Create a module named "CALC: Post Code - Nearby Population Centers" dimensionalized by only the list "Loc 3 - Post Codes". There are no Time or Version dimensions. Unless noted otherwise, line items apply to 'Loc 3 - Post Codes':

Line item | Formula | Format
Origination Location: | (section heading) | No Data
Selected Post Code | 'INPUT: Globals'.'Select a Location' | List: Loc 3 - Post Codes (Applies To: none)
Selected Post Code Latitude | 'DATA: Loc 3 - Post Codes'.Latitude[LOOKUP: Selected Post Code] | Number (Applies To: none)
Selected Post Code Longitude | 'DATA: Loc 3 - Post Codes'.Longitude[LOOKUP: Selected Post Code] | Number (Applies To: none)
Destination Location: | (section heading) | No Data
Population Center | ITEM('Loc 3 - Post Codes') | List: Loc 3 - Post Codes
Population | IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.Total Population ELSE 0 | Number
Population 5 yr prior | IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.'Total Population 5 yr prior' ELSE 0 | Number
Growth Rate | IF 'In Market Area?'
THEN POWER(Population / 'Population 5 yr prior', 0.2) - 1 ELSE 0 | Number (Percent)
Median Age | IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.Median Age ELSE 0 | Number
Median Age * Pop | IF 'In Market Area?' THEN Median Age * Population ELSE 0 | Number
Pop Center Latitude | 'DATA: Loc 3 - Post Codes'.Latitude | Number
Pop Center Longitude | 'DATA: Loc 3 - Post Codes'.Longitude | Number
Calculated Distance: | (section heading) | No Data
Distance (miles) | 'EarthRadius (miles)' * 'ACOS(x)' | Number
Distance (km) | 'EarthRadius (km)' * 'ACOS(x)' | Number
Staging | (section heading) | No Data
EarthRadius (km) | 'INPUT: Globals'.'Earth Radius (km)' | Number
EarthRadius (miles) | 'EarthRadius (km)' / 'INPUT: Globals'.'km / mi' | Number
Pi | 'INPUT: Globals'.Pi | Number
Radians(90 - Lat1) | 2 * Pi * (90 - Selected Post Code Latitude) / 360 | Number
COS(Radians(90 -  Lat1)) | 1 - POWER('Radians(90 - Lat1)', 2) / 2 + POWER('Radians(90 - Lat1)', 4) / 24 - POWER('Radians(90 - Lat1)', 6) / 720 + POWER('Radians(90 - Lat1)', 8) / 40320 - POWER('Radians(90 - Lat1)', 10) / 3628800 + POWER('Radians(90 - Lat1)', 12) / 479001600 - POWER('Radians(90 - Lat1)', 14) / 87178291200 + POWER('Radians(90 - Lat1)', 16) / 20922789888000 - POWER('Radians(90 - Lat1)', 18) / 6402373705728000 + POWER('Radians(90 - Lat1)', 20) / 2432902008176640000 | Number
SIN(Radians(90 - Lat1)) | 'Radians(90 - Lat1)' - POWER('Radians(90 - Lat1)', 3) / 6 + POWER('Radians(90 - Lat1)', 5) / 120 - POWER('Radians(90 - Lat1)', 7) / 5040 + POWER('Radians(90 - Lat1)', 9) / 362880 - POWER('Radians(90 - Lat1)', 11) / 39916800 + POWER('Radians(90 - Lat1)', 13) / 6227020800 - POWER('Radians(90 - Lat1)', 15) / 1307674368000 + POWER('Radians(90 - Lat1)', 17) / 355687428096000 - POWER('Radians(90 - Lat1)', 19) / 121645100408832000 + POWER('Radians(90 - Lat1)', 21) / 51090942171709440000 | Number
Radians(90 - Lat2) | 2 * Pi * (90 - Pop Center Latitude) / 360 | Number
COS(Radians(90 -  Lat2)) | 1 - POWER('Radians(90 - Lat2)', 2) / 2 + POWER('Radians(90 - Lat2)', 4) / 24 -
POWER('Radians(90 - Lat2)', 6) / 720 + POWER('Radians(90 - Lat2)', 8) / 40320 - POWER('Radians(90 - Lat2)', 10) / 3628800 + POWER('Radians(90 - Lat2)', 12) / 479001600 - POWER('Radians(90 - Lat2)', 14) / 87178291200 + POWER('Radians(90 - Lat2)', 16) / 20922789888000 - POWER('Radians(90 - Lat2)', 18) / 6402373705728000 + POWER('Radians(90 - Lat2)', 20) / 2432902008176640000 | Number
SIN(Radians(90 - Lat2)) | 'Radians(90 - Lat2)' - POWER('Radians(90 - Lat2)', 3) / 6 + POWER('Radians(90 - Lat2)', 5) / 120 - POWER('Radians(90 - Lat2)', 7) / 5040 + POWER('Radians(90 - Lat2)', 9) / 362880 - POWER('Radians(90 - Lat2)', 11) / 39916800 + POWER('Radians(90 - Lat2)', 13) / 6227020800 - POWER('Radians(90 - Lat2)', 15) / 1307674368000 + POWER('Radians(90 - Lat2)', 17) / 355687428096000 - POWER('Radians(90 - Lat2)', 19) / 121645100408832000 + POWER('Radians(90 - Lat2)', 21) / 51090942171709440000 | Number
Radians(Long1-Long2) | 2 * Pi * (Selected Post Code Longitude - Pop Center Longitude) / 360 | Number
COS(RADIANS(Long1-Long2)) | 1 - POWER('Radians(Long1-Long2)', 2) / 2 + POWER('Radians(Long1-Long2)', 4) / 24 - POWER('Radians(Long1-Long2)', 6) / 720 + POWER('Radians(Long1-Long2)', 8) / 40320 - POWER('Radians(Long1-Long2)', 10) / 3628800 + POWER('Radians(Long1-Long2)', 12) / 479001600 - POWER('Radians(Long1-Long2)', 14) / 87178291200 + POWER('Radians(Long1-Long2)', 16) / 20922789888000 - POWER('Radians(Long1-Long2)', 18) / 6402373705728000 + POWER('Radians(Long1-Long2)', 20) / 2432902008176640000 | Number
X - pre adj | 'COS(Radians(90 -  Lat1))' * 'COS(Radians(90 -  Lat2))' + 'SIN(Radians(90 - Lat1))' * 'SIN(Radians(90 - Lat2))' * 'COS(RADIANS(Long1-Long2))' | Number
X | IF ABS('X - pre adj') <= 1 / POWER(2, 0.5) THEN 'X - pre adj' ELSE IF ABS('X - pre adj') > 1 THEN SQRT(-1) ELSE POWER(1 - POWER('X - pre adj', 2), 0.5) | Number
ASIN (Taylor Series) | X + 1 / 6 * POWER(X, 3) + 3 / 40 * POWER(X, 5) + 5 / 112 * POWER(X, 7) + 35 / 1152 * POWER(X, 9) + 63 / 2816 * POWER(X, 11) + 231 / 13312 *
POWER(X, 13) + 143 / 10240 * POWER(X, 15) + 6435 / 557056 * POWER(X, 17) + 12155 / 1245184 * POWER(X, 19) + 46189 / 5505024 * POWER(X, 21) + 88179 / 12058624 * POWER(X, 23) | Number
ASIN(x) | IF ABS('X - pre adj') <= 1 / SQRT(2) THEN 'ASIN (Taylor Series)' ELSE IF 'X - pre adj' > 1 / SQRT(2) AND 'X - pre adj' <= 1 THEN Pi / 2 - 'ASIN (Taylor Series)' ELSE IF 'X - pre adj' < -1 / SQRT(2) AND 'X - pre adj' >= -1 THEN -Pi / 2 + 'ASIN (Taylor Series)' ELSE SQRT(-1) | Number
ACOS(x) | Pi / 2 - 'ASIN(x)' | Number
Filters | (section heading) | No Data
In Market Area? | 'Distance (miles)' > 0 AND 'Distance (miles)' <= 'INPUT: Globals'.'Market Area Radius (miles)' | Boolean

Set summary settings for the user-facing population and age line items just as you did in Step 3. The line items under Calculated Distance and Staging should not roll up, so use summary: None. (This is a best practice for conserving model size.) The 'In Market Area?' Boolean should roll up using summary: Any.

Filter the list with 'In Market Area?' = TRUE and publish the "CALC: Post Code - Nearby Population Centers" module to your dashboard. In grid view, use pivot / filter / hide in the module: 'Loc 3 - Post Codes' is the row dimension; filter on 'In Market Area?' = TRUE; line items are in the columns, with only the desired line items showing; adjust column settings for heading wrap and column widths. Save the view and publish it to your dashboard.

Step 6

Create a new Action that opens the dashboard and name it "Refresh Surrounding Locations". Publish it to your dashboard and position it between the two inputs and the output module. This action button is needed because the output module is filtered for 'In Market Area?' = TRUE, but that filtering is only updated when the dashboard is refreshed.

This completes the build instructions; what follows are more insights into the calculations.

The calculation logic

Take a look at the line item formulas under Staging. In those, we build the distance equation from its component parts.
You might find it helpful to know that each trig operation, such as COS(90 - lat1), is a line item.

Radius of Earth * ACOS( COS(90 - lat1) * COS(90 - lat2) + SIN(90 - lat1) * SIN(90 - lat2) * COS(lon1 - lon2) )

In overview, the line items represent these steps:

Get the constants: Pi, Earth's radius, etc.
Convert latitude and longitude from degrees to radians.
Use Taylor Series formulas to calculate the various SIN and COS components.
Use another Taylor Series formula and a trig identity to calculate ASIN, then convert ASIN to ACOS using another trig identity.
Multiply the finished ACOS by Earth's radius.

Going Multidimensional

This example model is intentionally small; it uses a single list of locations and computes their distances from a selected location. In most "real world" applications, you need to know the distance between every pairing of two lists of locations, for example Stores and Towns, or DCs and Stores. Let's call them origin and destination locations. To compute the distance between every possible pairing, you would dimensionalize the CALC module above by those two lists and replace the user selection with ITEM(<origin location list>). Good luck!
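The steps above can be mirrored outside Anaplan to see how the pieces fit together. The sketch below, in Python, follows the same staging logic: the same truncated series (COS up to x^20, SIN up to x^21, ASIN up to x^23), the same stored Pi constant, and the same range-reduction identity used in the 'X' and 'ASIN(x)' line items. Python's math library appears only to verify the approximations; it is not part of the technique:

```python
import math

PI = 3.141592654  # the model's stored constant

def taylor_cos(x, terms=11):
    # 1 - x^2/2! + x^4/4! - ... ; eleven terms, matching the COS(...) line items (up to x^20)
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

def taylor_sin(x, terms=11):
    # x - x^3/3! + x^5/5! - ... ; matches the SIN(...) line items (up to x^21)
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def taylor_asin(x, terms=12):
    # asin x = sum of (2n)! / (4^n * (n!)^2 * (2n+1)) * x^(2n+1);
    # these are the 1/6, 3/40, 5/112, ... coefficients from the 'ASIN (Taylor Series)' line item.
    total = 0.0
    for n in range(terms):
        coeff = math.factorial(2 * n) / (4 ** n * math.factorial(n) ** 2 * (2 * n + 1))
        total += coeff * x ** (2 * n + 1)
    return total

def taylor_acos(x):
    # Range reduction, as in the 'X' and 'ASIN(x)' line items: the series converges
    # quickly only for |x| <= 1/sqrt(2), so for larger |x| use the identity
    # asin(x) = sign(x) * (pi/2 - asin(sqrt(1 - x^2))), then acos(x) = pi/2 - asin(x).
    if abs(x) <= 1 / math.sqrt(2):
        asin_x = taylor_asin(x)
    else:
        asin_x = math.copysign(PI / 2 - taylor_asin(math.sqrt(1 - x * x)), x)
    return PI / 2 - asin_x
```

Comparing taylor_acos against math.acos across the valid range shows agreement to roughly eight decimal places, which is the precision claim made at the top of the article.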
NOTE: The following information is also attached as a PDF for downloading and using off-line.

Overview

The process of designing a model will help you:

Understand the customer's problem more completely
Bring to light any incorrect assumptions you may have made, allowing for correction before building begins
Provide the big-picture view for building. (If you were working on an assembly line building fenders, wouldn't it be helpful to see what the entire car looked like?)

Steps:

Understand the requirements and the customer's technical ecosystem when designing a model

When you begin a project, gather information and requirements using a number of tools. These include:

Statement of Work (SOW): Definition of the project scope and project objectives/high-level requirements
Project Manifesto: Goal of the project; the big-picture view of what needs to be accomplished
IT ecosystem: Which systems will provide data to the model and which systems will receive data from the model? What is the Anaplan piece of the ecosystem?
Current business process: If the current process isn't working, it needs to be fixed before design can start.
Business logic: What key pieces of business logic will be included in the model?
Is a distributed model needed? Consider: high user concurrency; security needs that call for a separate model; regional differences that are better handled by a separate model; and whether the organization is using ALM, requiring split or similar models to effectively manage development, testing, deployment, and maintenance of applications. (This functionality requires a premium subscription or above.)
User stories: These have been written by the client, more specifically by the subject matter experts (SMEs) who will be using the model.

Why do this step? To solve a problem, you must completely understand the current situation. Performing this step provides this information and the first steps toward the solution.
Results of this step:

Understand the goal of the project
Know the organizational structure and reporting relationships (hierarchies)
Know where data is coming from and have an idea of how much data clean-up might be needed
Know whether any of the data is organized into categories (for example, product families) and what data relationships exist that need to be carried through to the model (for example, salespeople only sell certain products)
Know what lists currently exist and where they are housed
Know which systems the model will either import from or export to
Know what security measures are expected
Know what time and version settings are needed

Document the user experience

Front-to-back design has been identified as the preferred method for model design. This approach puts the focus on the end-user experience. We want that experience to align with the process so users can easily adapt to the model. During this step, focus on:

User roles. Who are the users?
Identifying the business process that will be done in Anaplan.
Reviewing and documenting the process for each role: the main steps. If available, utilize user stories to map the process.

You can document this in any way that works for you. Here is a step-by-step process you can try:

1. What are the start and end points of the process? What is the result or output of the process?
2. What does each role need to see/do in the process?
3. What are the process inputs and where do they come from?
4. What are the activities the user needs to engage in? Verb/object: approve request, enter sales amount, etc. Do not organize during this step. Use post-its to capture them.
5. Take the activities from step 4 and put them in the correct sequence.
6. Are there different roles for any of these activities? If no, continue with step 8. If yes, assign a role to each activity.
7. Assign a role to each activity.
8. Transcribe the process using PowerPoint® or Lucidchart. If there are multiple roles, use swim lanes to identify the roles.
9. Check with SMEs to ensure accuracy.
Once the user process has been mapped out, do a high-level design of the dashboards. Include:

Information needed: What data does the user need to see?
What the user is expected to do, or the decisions that the user makes

Share the dashboards with the SMEs. Does the process flow align?

Why do this step? This is probably the most important step in the model design process. It may seem as though it is too early to think about the user experience, but ultimately the information or data that the user needs to make a good business decision is what drives the entire structure of the model. On some projects, you may be working with a project manager or a business consultant to flesh out the business process for the user. You may have user stories, or it may be that you are working on design earlier in the process and the user stories haven't been written. In any case, identify the user roles and the business process that will be completed in Anaplan, and create a high-level design of the dashboards. Verify those dashboards with the users to ensure that you have the correct starting point for the next step.

Results of this step:

List of user roles
Process steps for each user role
High-level dashboard design for each user role

Use the designed dashboards to determine what output modules are necessary

Here are some questions to help you think through the definition of your output modules:

What information (and in what format) does the user need to make a decision?
If the dashboard is for reporting purposes, what information is required?
If the module is to be used to add data, what data will be added and how will it be used?
Are there modules that will serve to move data to another system? What data, and in what format, is necessary?

Why do this step? These modules are necessary for supporting the dashboards or exporting to another system.
This is what should guide your design: all of the inputs and drivers added to the design are added with the purpose of providing these output modules with the information needed for the dashboards or export.

Results of this step:

List of outputs and desired format needed for each dashboard

Determine what modules are needed to transform inputs to the data needed for outputs

Typically, the data at the input stage requires some transformation. This is where business rules, logic, and/or formulas come into play:

Some modules will be used to translate data from the data hub. Data is imported into the data hub without properties, and modules are used to import the properties.
Reconciliation of items takes place before importing the data into the spoke model.
These are driver modules that include business logic and rules.

Why do this step? Your model must translate data from the input to what is needed for the output.

Results of this step:

Business rules/calculations needed

Create a model schema

You can whiteboard your schema, but at some point in your design process your schema must be captured in an electronic format. It is one of the required pieces of documentation for the project and is also used during the Model Design Check-in, where a peer checks over your model and provides feedback.

Identify the inputs, outputs, and drivers for each functional area
Identify the lists used in each functional area
Show the data flow between the functional areas
Identify time and versions where appropriate

Why do this step? It is required as part of The Anaplan Way process. You will build your model design skills by participating in a Model Design Check-in, which allows you to talk through the tougher parts of design with a peer.
More importantly, designing your model using a schema means that you must think through all of the information you have about the current situation, how it all ties together, and how you will get to an experience that meets the exact needs of the end user without fuss or bother.

Result of this step: a model schema that provides the big-picture view of the solution. It should include imports from other systems or flat files, and the modules or functional areas that are needed to take the data from its current state to what is needed to support the dashboards identified in Step 2. Time and versions should be noted where required. Include the lists that will be used in the functional areas/modules.

Your schema will be used to communicate your design to the customer, model builders, and others. While you do not need to include calculations and business logic in the schema, it is important that you understand the state of the data going into a module, the changes or calculations that are performed in the module, and the state of the data leaving the module, so that you can effectively explain the schema to others.

For more information, check out 351 Schemas. This 10-to-15-minute course provides basic information about creating a model schema.

Verify that the schema aligns with basic design principles

When your schema is complete, give it a final check to ensure:

It is simple. "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction." ― Ernst F. Schumacher. "Design should be easy in the sense that every step should be obviously and clearly identifiable. Simplify elements to make change simple so you can manage the technical risk." — Kent Beck
The model aligns with the manifesto.
The business process is defined and works well within the model.
In most use cases, a single model provides the solution you are seeking, but there are times when it makes sense to separate, or distribute, models rather than have them in a single instance. The following articles provide insight that can help you during the design process to determine if a distributed model is needed:

What is Application Lifecycle Management (ALM)?
What types of distributed models are there?
When should I consider a distributed model?
How do changes to the primary model impact distributed models?
What should I do after building a distributed model?
PLANS is the new standard for Anaplan modelling; "the way we model". This will cover more than just the formulas and will include and evolve existing best practices around user experience and data hubs. The initial focus is to develop a set of rules on the structure and detailed design of Anaplan models. This set of rules will provide both a clear route to good model design for the individual Anaplanner, and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.

In defining the standard, everything we do will consider or be based around:

Performance – Use the correct structures and formulae to optimize the Hyperblock
Logical – Build the models and formulae more logically – see D.I.S.C.O. below
Auditable – Break up formulae for better understanding, performance, and maintainability
Necessary – Don't duplicate expressions, store reference data and attributes once, no unnecessary calculations
Sustainable – Build with the future in mind; think about process cycles and updates

The standards will be based around three axes:

Performance – How do the structures and formulae impact the performance of the system?
Usability/Auditability – Is the user able to understand how to interact with the functionality?
Sustainability – Can the solution be easily maintained by model builders and support?

We will define the techniques to use that balance the three areas to ensure the optimal design of Anaplan models and architecture.

D.I.S.C.O.

As part of model and module design, we recommend categorizing modules as follows:

Data – Data hubs, transactional modules, source data; reference everywhere
Inputs – Design for user entry; minimize the mix of calculations and outputs
System – Time management, filters, mappings, etc.; reference everywhere
Calculations – Optimize for performance (turn summaries off, combine structures)
Outputs – Reporting modules; minimize data flows out
Overview

The Anaplan Optimizer aids business planning and decision making by quickly solving complex problems involving millions of combinations to provide a feasible solution. Optimization provides a solution for selected variables within your Anaplan model that matches your objective based on your defined constraints. The Anaplan model must be structured and formatted to enable Optimizer to produce the correct solution.

You are welcome to read through the materials and watch the videos on this page, but Optimizer is a premium service offered by Anaplan (contact your Account Executive if you don't see Optimizer as an action on the Settings tab). This means that you will not be able to actually do the training exercises until the feature is turned on in your system.

Training

The training involves an exercise along with documentation and videos to help you complete it. The goal of the exercise is to set up the optimization for two use cases: network optimization and production optimization. To assist you in this process, we have created an optimization exercise guide document which will walk you through each of the steps. To further help, we have created three videos you can reference: an exercise walk-through, a demo of each use case, and a demo of setting up dynamic time.

Follow the order of the items listed below to assist with understanding how Anaplan's optimization process works:

Watch the use case video, which demos the Optimizer functionality in Anaplan
Watch the exercise walk-through video
Review documentation about how Optimizer works within Anaplan
Attempt the Optimizer exercise
Download the exercise walk-through document
Download the Optimizer model into your workspace
How to configure Dynamic Time within Optimizer:
Download the Dynamic Time document
Watch the Dynamic Time video
Attempt the Network Optimization exercise
Attempt the Production Optimization exercise
As a model builder, you have to define line item formats over and over. Using a text expander/snippet tool, you can speed up the configuration of modules. When you add a new line item, Anaplan sets it by default as a Number (Min Significant Digits: 4, Thousands Separator: Comma, Zero Format: Zero, etc.). You usually change it once and copy it over to other line items in the module. Snippet tools can store the format definitions of generic formats (number, text, Boolean, or no data) and, with a simple shortcut, paste them into the format of the desired line items.

Below is an example of a number-formatted line item with no decimals and hyphens instead of zeros. On my Mac, I press Option + X, type "Num...", and get a list of all the Number formats I pre-defined. I press Enter to paste it. It also works if several line items are selected.

The value stored for this Number format is:

{"minimumSignificantDigits":-1,"decimalPlaces":0,"decimalSeparator":"FULL_STOP","groupingSeparator":"COMMA","negativeNumberNotation":"MINUS_SIGN","unitsType":"NONE","unitsDisplayType":"NONE","currencyCode":null,"customUnits":null,"zeroFormat":"HYPHEN","comparisonIncrease":"GOOD","dataType":"NUMBER"}

Here is the result of a text format snippet:

{"textType":"GENERAL","dataType":"TEXT"}

Or a Heading line item (No Data, Style: Heading 1):

---- false {"dataType":"NONE"} - Year Model Calendar All false false {"summaryMethod":"NONE","timeSummaryMethod":"NONE","timeSummarySameAsMainSummary":true,"ratioNumeratorIdentifier":"","ratioDenominatorIdentifier":""} All Versions true false Heading1 - - - 0

This simple trick can save you a lot of clicks. While we are unable to recommend specific snippet tools, your PC or Mac may include one by default, while others are easy to locate for free or low-cost online.
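Because these snippets are plain JSON, you can validate one before storing it in your snippet tool. A quick check in, for example, Python (the keys come straight from the number-format example above; the script itself is just an illustration):

```python
import json

# The number-format snippet from the article, stored verbatim in a snippet tool.
number_format = (
    '{"minimumSignificantDigits":-1,"decimalPlaces":0,'
    '"decimalSeparator":"FULL_STOP","groupingSeparator":"COMMA",'
    '"negativeNumberNotation":"MINUS_SIGN","unitsType":"NONE",'
    '"unitsDisplayType":"NONE","currencyCode":null,"customUnits":null,'
    '"zeroFormat":"HYPHEN","comparisonIncrease":"GOOD","dataType":"NUMBER"}'
)

# json.loads raises ValueError if the snippet is malformed, so a successful
# parse confirms the stored text is safe to paste into Anaplan's format field.
fmt = json.loads(number_format)
print(fmt["dataType"], fmt["decimalPlaces"], fmt["zeroFormat"])  # prints: NUMBER 0 HYPHEN
```

A malformed snippet (a missing quote or brace picked up while copying) is easier to catch this way than after a failed paste.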
Model Load: A large and complex model, such as one with 10B cells, can take 10 minutes to load the first time it's used after a period of inactivity of 60 minutes. The only way to reduce the load time, besides reducing the model size, is by identifying which formulas take the most time. This requires Anaplan L3 support, but you can reduce the time yourself by applying the formula best practices listed above. One other possible lever is list setup: text properties on a list can increase load times, and subsets on lists can disproportionately increase load times, by up to 10 times. See if you can improve model load by reviewing these two, and use module line items instead.

Model Save: A model will save when the amount of changes made by end users exceeds a certain threshold. This action can take several minutes and is a blocking operation. Administrators have no leverage over model save besides formula optimization and reducing model size.

Model Rollback: A model will roll back in some cases of invalid formulas, or when a model builder attempts to create a process, an import, or a view whose name already exists. In some large implementations, on a complex model of 8B+ cells, the rollback takes approximately the time needed to open the model, plus up to 10 minutes' worth of accumulated changes, followed by a model save. The recommendation is to use ALM and have a DEV model whose size does not exceed 500M cells, with production lists limited to a few dozen items, and have TEST and PROD models with the full size and large lists. Since no formula editing will happen in TEST or PROD, the model will never roll back after a user action. It can roll back on the DEV model, but that will take only a few seconds if the model is small.
This article provides the steps needed to create a basic time filter module. This module can serve as a single point of reference for time filters across all modules and dashboards within a given model. The benefits of a centralized Time Filter module include: one centralized point of governance for time filters, and optimized workspace, since the filters do not need to be re-created for each view; instead, use the Time Filter module. Step 1: Create a new module with two dimensions: time and line items. The example below has simple examples for Weeks Only, Months Only, Quarters Only, and Years Only. Step 2: Line items should be Boolean formatted, and each line item's time scale should be set in accordance with the scale identified in its name. The example below also includes filters with and without summary methods, providing additional views depending on the level of aggregation desired. Once your preliminary filters are set, your module will look something like the screenshot below. Step 3: Use the pre-set time filters across various modules and dashboards. Simply click the filter icon in the toolbar, navigate to the Time tab, select your Time Filter module from the module selection screen, and select the line item of your choosing. Use multiple line items at a time to filter your module or dashboard view.
Dimension Order affects Calculation Performance Ensuring consistency in the order of dimensions will help improve the performance of your models. This consistency is relevant for modules and for individual line items. Why does the order matter? Anaplan creates and uses indexes to perform calculations. Each cell in a module where dimensions intersect is given an index number. Here are two simple modules dimensioned by Customer and Product. In the first module, Product comes first and Customer second; in the second module, Customer is first and Product second. In this model, a third module calculates revenue as Price * Volume. Anaplan assigns indexes to the intersections in each module. Here are the index values for the two modules. Note that some of the intersections are indexed the same in both modules (Customer 1 and Product 1, Customer 2 and Product 2, and Customer 3 and Product 3), while the remaining cells have different index numbers; Customer 1 and Product 2, for example, is indexed with the value 4 in the top module and the value 2 in the bottom module. The calculation is Revenue = Price * Volume. To run the calculation, Anaplan matches the index values from the two modules. Since the index values are not aligned, the processor must scan the index values to find a match before performing each calculation. When the dimensions in the module are reordered, the index values for the two modules are aligned. Because line items with the same dimensional structure then have an identical layout, the data is laid out linearly in memory, and the calculation process accesses memory in a completely linear and predictable way. The microprocessors and memory sub-systems Anaplan runs on are optimized to recognise this pattern of access and to pre-emptively fetch the required data. How does the dimension order become different between modules?
When you build a module, Anaplan uses the order in which you drag the lists onto the Create Module dialog. The order also depends on where the lists are added: the lists you add to the Pages area come first, then the lists added to Rows, and finally the lists added to Columns. It is simple to re-order the lists and ensure consistency. Follow these steps: 1. On the Modules pane (Model Settings > Modules), look for lists that are out of order in the Applies To column. 2. Click the Applies To row that you want to re-order, then click the ellipsis. 3. In the Select Lists dialog, click OK. 4. In the Confirm dialog, click OK. The lists will now be in the order in which they appear in General Lists. When you have finished checking the list order in the modules, click the Line Items tab and check the line items, following steps 1 through 3 to re-order their lists. Subsets and Line Item Subsets One word of caution about subsets and line item subsets. In the example below, we have added a subset and a line item subset to the module. Clicking the ellipsis re-orders the dimensions so that the general lists are listed first, followed by subsets and then line item subsets. You can still re-order the dimensions manually by double-clicking in the Applies To column and copying or typing the dimensions in the correct order. Other Dimensions The calculation performance relates to the lists the source(s) and the target have in common. The order of lists that appear in only one or the other has no bearing on the calculation speed.
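The effect of aligned versus misaligned dimension orders can be illustrated outside Anaplan. The sketch below (plain Python, a simplification of the engine's indexing, with zero-based indexes) builds the same Customer × Product grid in two dimension orders and shows that only the "diagonal" intersections share a flat index position:

```python
from itertools import product

customers = ["C1", "C2", "C3"]
products = ["P1", "P2", "P3"]

# Same grid, two dimension orders; each intersection gets a flat index.
by_product_first = {(c, p): i for i, (p, c) in enumerate(product(products, customers))}
by_customer_first = {(c, p): i for i, (c, p) in enumerate(product(customers, products))}

# Only the "diagonal" intersections share an index between the two layouts;
# every other cell would force the engine to scan for a matching index.
aligned = [k for k in by_customer_first if by_customer_first[k] == by_product_first[k]]
print(aligned)  # [('C1', 'P1'), ('C2', 'P2'), ('C3', 'P3')]
```

With 1-based numbering this matches the article's example: Customer 1 / Product 2 sits at index 4 in one layout and index 2 in the other.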
Thinking through the results of a modeling decision is a key part of ensuring good model performance—in other words, making sure the calculation engine isn’t overtaxed. This article highlights some ideas for how to lessen the load on the calculation engine. Formulas should be simple; a formula that is nested or uses multiple combinations uses valuable processing time. Writing a long, involved formula makes the engine work hard. Seconds count when the user is staring at the screen. Simple is better. Breaking up formulas and using other options helps keep processing speeds fast. You must keep a balance when using these techniques in your models, so the guidance is as follows: Break up the most commonly changed formula Break up the most complex formula Break up any formula you can’t explain the purpose of in one sentence Formulas with many calculated components The structure of a formula can have a significant bearing on the amount of calculation that happens when inputs in the model are changed. Consider the following example of a calculation for the Total Profit in an application. There are five elements that make up the calculation: Product Sales, Service Sales, Cost of Goods Sold (COGS), Operating Expenditure (Op EX), and Rent and Utilities. Each of the different elements are calculated in a separate module. A reporting module pulls the results together into the Total Profit line item, which is calculated using the formula shown below. What happens when one of the components of COGS changes? Since all the source components are included in the formula, when anything within any of the components changes, this formula is recalculated. If there are a significant number of component expressions, this can put a larger overhead on the calculation engine than is necessary. There is a simple way to structure the module to lessen the demand on the calculation engine. 
You can separate the input lines in the reporting module by creating a line item for each of the components and adding the Total Profit formula as a separate line item. This way, changes to the source data only cause the relevant line item to recalculate. For example, a change in the Product Sales calculation only affects the Product Sales and Total Profit line items in the Reporting module; Service Sales, Op EX, COGS, and Rent & Utilities are unchanged. Similarly, a change in COGS only affects COGS and Total Profit in the Reporting module. Keep the general guidelines in mind: it is not practical to break every downstream formula out into individual line items. Plan to provide early exits from formulas Conditional formulas (IF/THEN) present a challenge for the model builder: what is the optimal construction that does not make the formula overly complicated and difficult to read or understand? The basic principle is to avoid making the calculation engine do more work than necessary. Try to set up the formula so that it finishes calculating as soon as possible, by putting the condition that is most likely to occur first. That way the calculation engine can quit processing the expression at the earliest opportunity. Here is an example that evaluates Seasonal Marketing Promotions: the summer promotion runs for three months and the winter promotion for two months. There are more months with no promotion than with one, so a formula that tests the promotions first is not optimal and will take longer to calculate. Testing for no promotion first is better, as the formula will then exit after the first condition more frequently. There is an even better way: following the principles above, add another line item for No Promotion and reference it in the formula. This is better still because the No Promo condition has already been calculated, and Summer Promo occurs more frequently than Winter Promo.
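The early-exit principle is the same short-circuit idea found in most programming languages. A toy Python analogue of the promotion example, with conditions ordered most-likely-first (the month sets and rates are illustrative, not from the model):

```python
def promo_rate(month):
    """Conditions ordered most-likely-first so most months exit at the first test."""
    if month not in {"Jun", "Jul", "Aug", "Nov", "Dec"}:  # most months: no promotion
        return 0.0
    if month in {"Jun", "Jul", "Aug"}:                    # summer promotion, three months
        return 0.10
    return 0.05                                           # winter promotion, two months

print([promo_rate(m) for m in ["Jan", "Jul", "Dec"]])  # [0.0, 0.1, 0.05]
```

Seven months out of twelve return at the first test; reversing the order would force two comparisons for the majority of cells.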
It is not always clear which condition will occur most frequently, but here are a few more examples of how to optimize formulas: FINDITEM formula The FINDITEM part of a formula works its way through the whole list looking for the text item and returns blank if it does not find the referenced text. If the referenced text is blank, it also returns blank, so inserting a conditional expression at the beginning of the formula keeps the calculation engine from scanning the list unnecessarily: IF ISNOTBLANK(TEXT) THEN FINDITEM(LIST, TEXT) ELSE BLANK or IF ISBLANK(TEXT) THEN BLANK ELSE FINDITEM(LIST, TEXT) Use the first expression if most of the referenced text contains data, and the second if there are more blanks than data. LAG, OFFSET, POST, etc. In some situations there is no need to lag or offset the data, for example when the lag or offset parameter is 0; the value is then simply the same as in the period in question. Adding a conditional at the beginning of the formula eliminates these unnecessary calculations: IF lag_parameter = 0 THEN Lineitem ELSE LAG(Lineitem, lag_parameter, 0) or IF lag_parameter <> 0 THEN LAG(Lineitem, lag_parameter, 0) ELSE Lineitem Which form to use depends on how frequently 0s occur in the lag parameter. Booleans Avoid adding unnecessary clutter to line items formatted as Booleans. There is no need to include the TRUE or FALSE expression, as the condition itself evaluates to TRUE or FALSE: write Sales > 0 instead of IF Sales > 0 THEN TRUE ELSE FALSE
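The FINDITEM guard above is ordinary short-circuit evaluation. A rough Python analogue, with a dict lookup standing in for the list scan (all names illustrative):

```python
def find_item(items, text):
    """Return the matching item, guarding the scan with a blank check first."""
    if not text:            # IF ISBLANK(TEXT) THEN BLANK ...
        return None
    return items.get(text)  # ... ELSE FINDITEM(LIST, TEXT)

products = {"P100": "Widget", "P200": "Gadget"}
print(find_item(products, ""), find_item(products, "P200"))  # None Gadget
```

The blank check is cheap; the scan is not, so skipping the scan for blank inputs is where the saving comes from.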
General recommendations First, the bigger your model is, the more performance issues you are likely to experience. A best practice, therefore, is to use every available tool and feature to make the model as small and dense as possible. This includes:
Line item checks: summary calculations and the dimensionality used
Line item duplication
Granularity of hierarchies
Use of subsets and line item subsets
Numbered lists
More information on eliminating sparsity can be found in Learning Center courses 309 and 310. Customer requirements General recommendations also include, whenever possible, challenging your customer's business requirements when they call for large lists (more than 1M items), long data history, or a high number of dimensions used at the same time on a line item (more than 5). Other practices Once these general sparsity recommendations have been applied, you can further improve performance in several areas. The articles below expand on each subject:
Imports and exports and their effects on model performance
Rule 1: Carefully decide whether to let end users import (and export) during business hours
Rule 2: Mapping objective: zero errors or warnings
Rule 3: Watch the formulas recalculated during the import
Rule 4: Import list properties
Rule 5: Get your Data HUB
Rule 6: Incremental import/export
Dashboard settings that can help improve model performance
Rule 1: For large lists, filter on a Boolean, not on text
Rule 2: Use the default sort
Rule 3: Reduce the number of dashboard components
Rule 4: Watch large page drop-downs
Formulas and their effect on model performance
Model load, model save, and model rollback and their effects on model performance
User roles and their effect on model performance
Overview: Imports are blocking operations. The model is locked for the duration of the import; concurrent imports run by end users queue one after the other and block the model for everyone else. Rule 1: Carefully decide whether to let end users import (and export) during business hours Imports executed by end users should be carefully considered and, if possible, executed only once or twice a day. Customers readily accept a model freeze at scheduled hours for a predefined time, even if it takes 10+ minutes, but are frustrated when imports run randomly during business hours. Your first optimization is to adjust the process so that these imports are run by an admin at scheduled times, and to let the user base know the schedule. Rule 2: Mapping objective: zero errors or warnings Make sure your import completes with no errors or warnings; every error takes processing time. The time to import into a medium-to-large list (more than 50k items) is significantly reduced if no errors have to be processed. Tips for reducing errors: Always import from a saved view, never from the default view, and use a naming convention for easy maintenance. Hide the line items that are not needed for the import; do not bring extra columns that are not needed. In the import definition, always map all displayed line items (source → target) or use the "ignore" setting; do not leave any line item unmapped. Rule 3: Watch the formulas recalculated during the import If your end users encounter poor performance when clicking a button that triggers an import or a process, it is likely due to the recalculation triggered by the import, especially if the action creates or moves items within a hierarchy. You will likely need the help of Anaplan support (L3) to identify which formulas are triggered after the import completes and to run a performance check on them to find which one takes the most time.
Usually, formulas that fetch many cells, such as SUM, ANY, or FINDITEM, are responsible for the performance impact. To solve such situations, challenge whether the identified formula really needs to recalculate every time a user runs the action. Often, for actions such as creations, moves, or assignments done in workforce planning or territory planning, many calculations used for reporting are triggered in real time after the hierarchy is modified by the import, yet are not immediately needed by users. The recommendation is to challenge your customer and see whether these formulas could be calculated only once a day instead of each time a user runs the action. If so, you will need to re-architect your modules so that these heavy formulas run through a separate process executed daily by an admin, not by each end user. Rule 4: Import list properties Importing list properties takes more time than importing the same data as module line items. Review the lists in your model that are affected by imports and consider replacing list properties with module line items where possible. Also refer to the Data Hub best practices, where we recommend uploading all list properties into a Data HUB module rather than into the list properties themselves. Rule 5: Get your Data HUB Hub and spoke: set up a HUB data model that feeds the other production models used by stakeholders. See the white paper on how to build a Data HUB. Performance benefits: it prevents production models from being blocked by a large import from an external data source. But since Data HUB-to-production-model imports are still blocking operations, carefully filter what you import, and apply the best-practice rules listed above. All import and mapping/transformation modules required to prepare the data for loading into planning modules can now live in a dedicated Data HUB model rather than in the planning model.
The planning model will then be smaller and will work more efficiently. A reminder of the other benefits, not linked to performance: Better structure, easier maintenance: a Data HUB helps keep all the data organized in a central location. Better governance: whenever possible, put the Data HUB in a different workspace. That eases the separation of duties between production models and metadata management, at least for actuals data and production lists. The IT department will love the idea of owning the Data HUB, with no one else as an admin in that workspace. Lower implementation costs: a Data HUB reduces the implementation time of new projects. Assuming IT can load the data needed by a new project into the Data HUB, business users no longer have to integrate with complex source systems; they integrate with the Anaplan Data HUB instead. Rule 6: Incremental import/export This can be the magic bullet in some cases. If you export from an Anaplan model on a frequent basis (daily or more) into a reporting system, write back to the source system, or simply transfer data from one Anaplan model to another, there are ways to import/export only the data that has changed since the last export. Use the concatenation + change Boolean technique explained in the Data HUB white paper.
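The concatenation-plus-change-Boolean technique can be sketched outside Anaplan. The idea, assuming a simple key-to-values table (all names and data hypothetical), is to concatenate each row's values, compare against the previously exported concatenation, and flag only the rows that differ:

```python
def changed_rows(current, last_exported):
    """Flag rows whose concatenated values differ from the last export."""
    flags = {}
    for key, values in current.items():
        concat = "|".join(str(v) for v in values)      # the concatenation line item
        flags[key] = last_exported.get(key) != concat  # the change? Boolean
    return flags

# Hypothetical snapshot of the last export vs. the current data:
last = {"C1": "100|USD", "C2": "250|EUR"}
now = {"C1": [100, "USD"], "C2": [300, "EUR"], "C3": [50, "GBP"]}
print(sorted(k for k, changed in changed_rows(now, last).items() if changed))  # ['C2', 'C3']
```

Only the flagged rows need to travel in the next export, which is where the time saving comes from on frequent transfers.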
Overview When changes occur to the primary model that need to be copied to the other models, careful coordination is necessary. There are several time-saving techniques that can make model changes across distributed models simple and quick. Which to use depends on the complexity of the change, but changes are generally small: fixing an issue or adding minor things such as views or reports. Some of the model change techniques are:
Module update via export/import: the primary module is updated, the module blueprint is exported to CSV format, the new line items are imported into the receiving module's blueprint, and the new formulas and dimensionality are imported into the receiving module.
Model blueprint update: model blueprints can also be updated in batch where required.
Simple copy and paste: Anaplan supports full copy and paste from other applications where minor changes to model structure are needed.
List/dimension additions: you can export new lists or dimensions to a CSV file and load them into another model, or carry out a direct API model-to-model import to add new lists to multiple models.
Changes to data or metadata happen in a different way. Item changes within existing lists or hierarchies occur via an import, which may take place in a specific model or models, or ideally within a master data hub. It is a best practice to use an Anaplan model as a master data hub, which stores the common lists and hierarchies and is the unique point of maintenance. Model builders then implement automated data imports from the master data hub to every model, including primary models and satellite models. It is important to carefully consider the business processes and rules that surround changes to the primary model, the coordination of the satellite models, and clear governance.
ALM application: When changes occur We highly recommend that clients utilize ALM if metadata changes, such as changes to any dimension, may be required at any time during implementation, or even after the deployment phase of Anaplan. ALM allows clients to add or remove metadata from models, and to test the effects, in a safe environment without running the risk of losing data or altering functionality in a live production model.
This is step four of the model design process. Next, your focus shifts to the inputs available. Remember that sometimes a dashboard is used to add information. Using the information gathered in steps 1 through 3:
Identify the systems that will supply the data.
Identify the lists and hierarchies, especially the hierarchies needed to parse out information for the needed dashboards/exports.
Determine which data hub types are needed: master data, transactional, or both.
Why do this step? During this step, you should be thinking about the data needed to arrive at your defined output modules. Not all of the data in the source systems or lists may be needed. In addition, some hierarchies needed for the output modules may not exist yet and may need to be created. Results of this step:
Lists needed in the model
Hierarchies needed in the model
Data, and where it is coming from
Dynamic Cell Access (DCA) controls the access levels of line items within modules. It is simple to implement and provides modelers with a flexible way of controlling user inputs. Here are a few tips and tricks to help you implement DCA effectively. Access control modules Any line item can be controlled by any other applicable Boolean line item. To avoid confusion over which line item(s) to use, it is recommended that you add a separate functional area and create specific modules to hold the driver line items. These modules should be named appropriately (e.g., Access – Customers > Products, or Access – Time, etc.). The advantage of this approach is that an access driver can be used for multiple line items or modules, and the calculation logic lives in one place. In most cases you will want both read and write access, so within each module it is recommended that you add two line items (Write? and Read?). If the logic is set for Write?, then set the formula for Read? to NOT Write? (or vice versa). It may be necessary to add further line items for different target line items, but start with these two as a default. Start simple You may not need to create a module that mirrors the dimensionality of the line item you wish to control. For example, if you have a line item dimensioned by customer, product, and time, and you wish to make actual months read-only, you can use an access module dimensioned by time alone. Think about which dimensions the control needs to apply to and create an access module accordingly. What settings do I need? There are three states of access that can be applied: READ, WRITE, and INVISIBLE (hidden). There are two blueprint controls (read control and write control), and a driver has two states (TRUE or FALSE). The combination of these determines which state is applied to the line item.
The following tables illustrate the options.
Only the read access driver is set:
  Driver TRUE: target line item is READ. Driver FALSE: target line item is INVISIBLE.
Only the write access driver is set:
  Driver TRUE: target line item is WRITE. Driver FALSE: target line item is INVISIBLE.
Both read and write access drivers are set:
  Write driver TRUE: target line item is WRITE. Write driver FALSE: revert to the read driver (TRUE: READ; FALSE: INVISIBLE).
When both access drivers are set, the write access driver takes precedence: write access is granted if the write driver is true; if the write driver is false, the cell access is taken from the read access driver. The settings can also be expressed as a matrix:

                  Write TRUE    Write FALSE    Write NOT SET
  Read TRUE       Write         Read           Read
  Read FALSE      Write         Invisible      Invisible
  Read NOT SET    Write         Invisible      Write

Note: If you want both read and write access, you must set both access drivers within the module blueprint. Totals Think about how you want the totals to appear. When you create a Boolean line item, the default summary option is None. This means that if you use that line item as an access driver, any totals within the target will be invisible. In most cases you will probably want the totals to be read-only, so setting the access driver line item's summary method to Any provides this. If you are using the Invisible setting to hide certain items and you do not want the end user to be able to infer the hidden values from the totals, use the All summary method for the access driver line item instead: the totals then show only when all values in the list are visible; otherwise the totals are hidden from view.
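The precedence rules in the tables above can be condensed into a small function. A sketch in Python, with True, False, and None standing for TRUE, FALSE, and NOT SET (this mirrors the table, not Anaplan internals):

```python
def cell_access(read_driver, write_driver):
    """Resolve the DCA state for a cell; drivers are True, False, or None (not set)."""
    if write_driver is True:
        return "WRITE"                  # write driver takes precedence when true
    if write_driver is False:
        # revert to the read driver; invisible when read is false or not set
        return "READ" if read_driver is True else "INVISIBLE"
    # write driver not set: the read driver alone decides; neither set means writable
    if read_driver is None:
        return "WRITE"
    return "READ" if read_driver else "INVISIBLE"

print(cell_access(True, False), cell_access(None, None))  # READ WRITE
```

Walking all nine combinations through this function reproduces the matrix above, which is a handy way to reason about a DCA setup before building it.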
If you have a multi-year model where the data ranges for different parts of the model vary (for example, history covering two years, a current-year forecast, and three planning years), then Time Ranges should deliver significant gains in model size and performance. But before you rush headlong into implementing Time Ranges across all of your models, let me share a few considerations to ensure you maximise the value of the feature and avoid any unwanted pitfalls. Naming Convention As with all Anaplan models, there is no set naming convention for Time Ranges; however, we do advocate consistency and simplicity. As with lists and modules, short names are good. I like to describe the naming convention thus: “as short as practical,” meaning you need to understand what it means, but don’t write an essay! We recommend the following convention: FYyy-FYyy, for example FY16-FY18, or FY18 for a single year. Time Ranges can span 1981 to 2079, so the “19” or “20” prefixes are not strictly necessary. Keeping the name this short has a couple of advantages: it gives a clear indication of the boundaries of the Time Range, and it is short enough to read in the module and line item blueprints. The aggregations available can differ for each Time Range and can also differ from the main model calendar. If you take advantage of this and have aggregations that differ from the model calendar, add a suffix to the name. For example: FY16-FY19 Q (to signify Quarter totals), FY16-FY19 QHY (Quarter and Half Year totals), FY16-FY19 HY (Half Year totals only), etc. Time Ranges are Static Time Ranges can span from 1981 to 2079 and can therefore exist entirely outside, within, or overlapping the model calendar. This means there will likely be some additional manual maintenance to perform when the year changes.
Let’s review a simple example: Assume the model calendar is FY18 with 2 previous years and 2 future years; the model calendar spans FY16-FY20. We have set up Time Ranges for historic data (FY16-FY17) and plan data (FY19-FY20) We also have modules that use the model calendar to pull all of the history, forecast, and plan data together, as seen below: At year end when we “roll over the model,” we amend the model calendar simply by amending the current year. What we have now is as follows: You see that the history and plan Time Ranges are now out of sync with the model calendar. How you change the history Time Range will depend on how much historic data you need or want to keep, but assuming you don’t need more than two year’s history, the Time Range should be re-named FY17-FY18 and the start period advanced to FY17 (from FY16). Similarly, the plan Time Range should be renamed FY20-FY21 and advanced to FY20 (from FY19). FY18 is then available for the history to be populated and FY21 is available for plan data entry. Time Ranges Pitfalls Potential Data Loss Time Ranges can bring massive space and calculation savings to your model(s), but be careful. In our example above, changing the Start Period of FY16-FY17 to FY17 would result in the data for FY16 being deleted for all line items using FY16-FY17 as a Time Range. Before you implement a Time Range that is shorter or lies outside the current model calendar, and especially when implementing Time Ranges for the first time, ensure that the current data stored in the model is not needed. If in doubt, do some or all of the suggestions below: Export out the data to a file Copy the existing data on the line item(s) to other line items that are using the model calendar Back up the whole model Formula References The majority of the formulae will update automatically when updating Time Ranges. 
However, if you have any hard-coded SELECT statements referencing years or months within the Time Range, you will have to amend or remove those formulas before amending the Time Range. Hard-coded SELECT statements go against best practice for exactly this reason: they cause additional maintenance. We recommend replacing the SELECT with a LOOKUP formula driven from a Time Settings module. There are other cases where formulas may need to be removed or amended before a Time Range can be adjusted; see the Anapedia documentation for more details. When to use the Model Calendar This is a good question, and one that we at Anaplan pondered during the development of the feature: do Time Ranges make the model calendar redundant? Well, I think the answer is “no,” but as with so many constructs in Anaplan, the answer probably is “it depends!” For me, a big advantage of the model calendar is that it is dynamic for the current year and the +/- years on either side. Change the current year and the model updates automatically, along with any filters and calculations you have set up to reference current-year periods, historic periods, future periods, etc. (You are using a central Time Settings module, aren’t you?) Time Ranges don’t have that dynamism, so any changes to the year need to be made for each Time Range. So, our advice before implementing Time Ranges for the first time is to review each module and: assess the scope of the calculations; weigh the reduction Time Ranges will give in terms of space and calculation savings against the annual maintenance. For example: if you have a two-year model with one history year (FY17) and the current year (FY18), you could set up one single-year Time Range for FY17 and another for FY18 and use these for the respective data sets. However, this would mean updating both Time Ranges every year.
We advocate building models logically, so it is likely that you will have groups of modules where Time Ranges fall naturally. The majority of the modules should reflect the model calendar; once Time Ranges are implemented, you may even be able to reduce the scope of the model calendar. If a potential Time Range matches either the current or future model calendar, leave the timescale as the default for those modules and line items; why make extra work? SELECT Statements As outlined above, we don’t advocate hard-coded time selects for the majority of time items because of the negative impact on maintenance (the exceptions being All Periods, YTD, YTG, and CurrentPeriod). When implementing Time Ranges for the first time, take the opportunity to review line item formulas containing time selects; these can be replaced with lookups driven from a Time Settings module. Application Lifecycle Management (ALM) Considerations As with the majority of the Time settings, Time Ranges are treated as structural data. If you are using ALM, all changes must be made in the Development model and synchronised to Production. This makes it all the more important to heed the pitfalls noted above and ensure data is not inadvertently deleted. Best of luck! Refer to the Anapedia documentation for more detail, and please ask if you have any further questions. Let us and your fellow Anaplanners know the impact Time Ranges have had on your model(s).
Reducing the number of calculations leads to quicker calculations and better performance. This doesn't mean combining all your calculations into fewer line items; breaking calculations into smaller parts has major benefits for performance. Learn more about this in the Formula Structure article. How is it possible to reduce the number of calculations? Here are three easy methods: Turn off unnecessary summary method calculations. Avoid formula repetition by creating modules to hold formulas that are used multiple times. Ensure that you are not including more dimensions than necessary in your calculations. Turn off summary method calculations Model builders often include summaries in a model without fully thinking through whether they are necessary. In many cases the summaries can be eliminated. Before we get to how to eliminate them, let's recap how the Anaplan engine calculates. In the following example we have a Sales Volume line item that varies by the following hierarchies:
Region Hierarchy: City, Country, Region, All Regions
Product Hierarchy: SKU, Product, All Products
Channel Hierarchy: Channel, All Channels
This means that, starting from the detail values at the SKU, City, and Channel level, Anaplan calculates and holds all 23 of the aggregate combinations shown below (24 blocks in total). With the summary option set to Sum, when a detail item is amended (represented in the grey block), all the other aggregations in the hierarchies are also recalculated. Selecting the None summary option means that no calculations happen when the detail item changes. The varying levels of hierarchies are quite often there only to ease navigation, and the roll-up calculations are not actually needed, so a number of redundant calculations may be being performed. The native summing in Anaplan is the faster option, but if not all the levels are needed, it can be better to turn off the summary calculations and use a SUM formula instead.
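Incidentally, the 24 blocks mentioned above come straight from multiplying the number of levels in each hierarchy. A quick check in plain Python:

```python
from math import prod

levels = {"Region": 4, "Product": 3, "Channel": 2}  # levels in each hierarchy
blocks = prod(levels.values())   # every combination of levels, detail included
print(blocks, blocks - 1)        # 24 blocks in total, 23 of them aggregates
```

Every extra hierarchy level multiplies this count, which is why unneeded summary levels are so expensive.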
For example, from the structure above, let's assume that we have a detailed calculation for SKU, City, and Channel (SALES06 Volume Calculation.Final Volume). Let's also assume we need a summary report by Region and Product, and we have a module (REP01) with a line item (Volume) dimensioned as such.

  REP01.Volume = SALES06 Volume Calculation.Final Volume

is replaced with

  REP01.Volume = SALES06 Volume Calculation.Final Volume[SUM: H01 SKU Details.Product, SUM: H02 City Details.Region]

The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy.

How do you know if you need the summary calculations? Look for the following:

Is the calculation or module user-facing? If it is presented on a dashboard, then it is likely that the summaries will be needed. However, look at the dashboard views used. A summary module is often included on a dashboard with a detail module below; effectively the hierarchy sub-totals are shown in the summary module, so the detail module doesn't need the sums or all the summary calculations.

Detail to Detail
Is the line item referenced by another detailed calculation line item? This is very common, and if the line item is referenced only by another detailed calculation, the summary option is usually not required. Check the Referenced by column to see if anything references the line item.

Calculation and staging modules
If you have used the DISCO module design, you should have calculation/staging modules. These are often not user-facing and have many detailed calculations included in them. They also often contain large cell counts, which will be reduced if the summary options are turned off.

Can you have different summaries for time and lists? The default option for Time Summaries is to be the same as the lists. You may only need the totals for hierarchies, or just for the timescales. Again, look at the downstream formulas.
The best practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a Calculation module (from the DISCO design principles).

Avoid Formula Repetition
An optimal model will only perform a specific calculation once. Repeating the same formula expression multiple times means that the calculation is performed multiple times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (DISCO) and hold all the relevant calculations in a logical place. Then, if you need the calculation, you will know where to find it, rather than adding another line item in several modules to perform the same calculation.

If a formula construct always starts with the same condition evaluation, evaluate it once and then refer to the result in the construct. This is especially true where the condition refers to a single dimension but is part of a line item that spans multiple dimension intersections. A good example can be seen below: START() <= CURRENTPERIODSTART() appears five times, and similarly START() > CURRENTPERIODSTART() appears twice. To correct this, include these time-related formulas in their own module and then refer to them as needed in your other modules. Remember: calculate once; reference many times!

Taking a closer look at our example, not only is the condition evaluation repeated, but the dimensionality of the line items is also greater than required. The condition only changes by day, as per the diagram below, but the Applies To here also contains Organization, Hour Scale, and Call Center Type. Because the formula expression is contained within the line item formula, for each day the following calculations are also being performed. And, as above, it is repeated in many other line items. Sometimes model builders even use the same expression multiple times within the same line item.
To reduce this overcalculation, reference the expression from a more appropriate module; for example, Days of Week (dimensioned solely by day), which was shown above. The blueprint is shown below, and you can see that the two different formula expressions are now contained in two line items and will only be calculated by day; the other, irrelevant dimensions are not calculated. Substitute the expression by referencing the line items shown above. In this example, making these changes to the remaining lines in this module reduces the calculation cell count from 1.5 million to 1,500. Check the Applies To for your formulas, and if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality.
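To make the "calculate once, reference many times" pattern concrete, here is a hedged sketch of what such a day-dimensioned module could look like. The module and line item names (Days of Week, Past or Current?, Future?, Volume Actual) are illustrative assumptions:

```
Days of Week module (Applies To: Day only):
  Past or Current? = START() <= CURRENTPERIODSTART()
  Future?          = START() > CURRENTPERIODSTART()

Downstream line item (Applies To: Organization, Hour Scale,
Call Center Type, Day) references the Boolean instead of
re-evaluating the time condition at every intersection:
  Volume Actual = IF Days of Week.Past or Current? THEN Volume ELSE 0
```

The two conditions are now evaluated once per day rather than once per day per organization, hour, and call center type, which is where the cell-count reduction described above comes from.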
Details of known issues

PERFORMANCE ISSUES WITH LONG NESTED FORMULA
Challenge: You need a long formula on time as a result of nested intermediate calculations.
Recommendation: If model size does not prevent you from adding extra line items, it is better practice to create multiple intermediate line items and reduce the size of the formula, as opposed to nesting all intermediate calculations into one gigantic formula. This applies to summary formulae (SUM, LOOKUP, SELECT). Combining SUM and LOOKUP in the same line item formula can cause performance issues in some cases. If you have noticed a drop in performance after adding a combined SUM and LOOKUP to a single line item, split it into two line items.

RANKCUMULATE CAUSES SLOWNESS
Challenge: A current issue with the RANKCUMULATE formula can mean that model open times, including rollback, can be up to five times slower than they should be.
Recommendation: There is currently no suitable workaround; our recommendation is to stay within the constraints defined in Anapedia.

SUM/LOOKUP WITH LARGE CELL COUNT
Challenge: A known issue with SUM/LOOKUP combinations within a formula can lead to slow model open and calculation times, particularly if the line item has a large cell count.
Recommendation: Separate the formula into different line items to reduce calculation time (fewer cells need to recalculate parts of a formula that would only affect a subset of the data). Example, where no line items apply to time or versions:

  Y = X[SUM: R, LOOKUP: R]
  Y applies to [A, B]; X applies to [A, B]; R applies to [B], list formatted [C]

Add a new line item, Intermediate, whose Applies To is set to the format of R:

  Intermediate = X[SUM: R]
  Y = Intermediate[LOOKUP: R]

This issue is currently being worked on by Development and a fix will be available in a future release.

CALCULATIONS OVER NON-COMMON DIMENSIONS
Challenge: Anaplan calculates more quickly when calculations are over common dimensions. Again, this is best seen in an example.
If you have lists W and X:

  Y = A + B
  Y applies to [W, X]; A applies to [W]; B applies to [W]

this performs more slowly than:

  Y = Intermediate
  Intermediate = A + B
  Intermediate applies to [W]; all other dimensions as above

Similarly, you can substitute a formula for A and B above, e.g. SUM/LOOKUP calculations.

CELL HISTORY TRUNCATED
History generation currently has a time limit of 60 seconds, split into three stages with a third of the time allocated to each. The first stage builds a list of the columns required for the grid, which involves reading all the history. If this takes more than 20 seconds, the user receives the message "history truncated after x seconds - please modify the date range", where x is how many seconds it took, and no history is generated. If the first stage completes within 20 seconds, it goes on to generate the full list of history. In the grid, only the first 1,000 rows are displayed; the user must export the history to get the full history, which can take significant time depending on volume. The same steps are taken for model and cell history. Cell history is generated by loading the entire model history and searching through it for the relevant cell information. When the model history gets too large, it is truncated to prevent performance issues; unfortunately, this can make it impossible to retrieve the cell history that is needed.

Make it Real time when needed
Do not make a calculation real time unless it needs to be. By this, we mean do not have line items where users input data referenced by other line items, unless they have to be. One way around this is to give users data input sections that are not referenced anywhere (or as little as possible) and then, say, at the end of the day when no users are in the model, run an import that updates the cells where the calculations are then done.
This may not always be possible if the end user needs to see the calculations resulting from their inputs, but if you can limit real-time calculation to just what the user needs to see, and use imports during quiet times for the rest, this will still help. We see this often when not all reporting modules need to be recalculated in real time; in many cases, these modules can be calculated the day after.

Reduce dependencies
Don't have line items that depend on other line items unnecessarily; this can prevent Anaplan from utilizing the maximum number of calculations it can perform at once. This happens when a line item's formula cannot be calculated because it is waiting on the results of other line items. A basic example can be seen with line items A, B, and C having the formulas:

  A - no formula
  B = A
  C = B

Here B is calculated first, and then C is calculated after it. Whereas if the setup were:

  A - no formula
  B = A
  C = A

B and C can be calculated at the same time. This also helps because if line item B is not needed, it can then be removed, further reducing the number of calculations and the size of the model. This needs to be considered on a case-by-case basis; it is a trade-off between duplicating calculations and utilizing as many threads as possible. If line item B were referenced by several other line items, it may indeed be quicker to keep it.

Summary calculation
Summary cells often take processing time even if they are not actually recalculated, because they must check all the lower-level cells. Reduce summaries to None wherever possible. This not only reduces aggregations but also the size of the model.
Overview: A dashboard with grids that include large lists that have been filtered and/or sorted can take time to open. The opening action can also become a blocking operation; when this happens, you'll see the blue toaster box showing "Processing....." while the dashboard is opening. This article includes some guidelines to help you avoid this situation.

Rule 1: Filter large lists by creating a Boolean line item
Avoid the use of filters on text or non-Boolean formatted items for large lists on a dashboard. Instead, create a line item with the format type Boolean and add calculations to the line item so that the results return the same data set as the filter would. This is especially helpful if you implement user-based filters, where the Boolean is dimensioned by user and by the list to be filtered. The memory footprint of a Boolean line item is 8x smaller than that of other line item types.

Warning on a known issue: On an existing dashboard where a saved view is modified by replacing the filters with a Boolean filtering line item, you must republish the view to the dashboard. Simply removing the filters from the published dashboard will not improve performance.

Rule 2: Use the default sort
Use sort carefully, especially on large lists. Opening a dashboard with a grid where a large list is sorted on a text-formatted line item will likely take 10 seconds or more and may be a blocking operation. To avoid using the sort, ensure your list is sorted by default by the criteria you need. If it is not sorted, you can still make the grid usable by reducing the items with a user-based filter.

Rule 3: Reduce the number of dashboard components
There are times when a dashboard includes too many components, which slows performance. A reasonably large dashboard is no wider than 1.5 pages (avoiding too much horizontal scrolling) and no more than 3 pages deep. Once you exceed these limits, consider splitting the components across multiple dashboards. Doing so will help both performance and usability.
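As an illustration of Rule 1, here is a hedged sketch of a user-based Boolean filter line item. All module and line item names (FILTER module, Show?, DATA, User Settings, Selected Region) are hypothetical:

```
FILTER module (Applies To: Users and the large list; format: Boolean):
  Show? = DATA.Region = User Settings.Selected Region

Saved view: apply "Show? is true" as the filter criterion,
then republish the view to the dashboard (see the warning above).
```

Because the Boolean is dimensioned by Users, each user sees only their own slice of the large list, and the dashboard avoids evaluating a text-based filter at open time.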
Rule 4: Avoid using large lists as page selectors
If you have a large list and use it as a page selector on a dashboard, that dashboard will open slowly; it may take 10 seconds or more, and loading the page selector takes more than 90% of the total time.

Known issue / this is how Anaplan works: If a dashboard grid contains list-formatted line items, the contents of page selector drop-downs are automatically downloaded until the size of the list meets a certain threshold; once this size is exceeded, the download happens on demand, in other words, when a user clicks the drop-down. The issue is that when Anaplan requests the contents of list-formatted cell drop-downs, it also requests the contents of ALL other drop-downs, INCLUDING page selectors.

Recommendation: Limit the page selectors on medium to large lists using the following tips:

a) Make the page selector available in one grid and use the synchronized paging option for all other grids and charts. There is no need to allow users to edit the page in every dashboard grid or chart.

b) A large list makes for a poor user experience as a page selector, since no search is available. Using a large list as a page selector creates both a performance and a usability issue.

Solution 1: Design a dashboard dedicated to searching a line item. From the original dashboard (where you wanted to include the large list page selector), the user clicks a custom search button that opens a dashboard where the large list is displayed as the rows of a grid. The user can then use a search to find the item needed. If possible, implement user-based filters to help the user further reduce the list and quickly find the item. The user highlights the item found, closes the tab, and returns to the original dashboard, where all grids are set on the highlighted item.

Alternate solution: If the dashboard elements don't require the use of the list, publish them from a module that doesn't contain this list.
For example, floating page selectors for time or versions, or grids that are displayed as rows/columns only, should be published from modules that do not include the list. Why? The view definitions for these elements contain all of the source module's dimensions, even those not shown, and so will carry the overhead of populating the large page selector if it is present in the source.