The Planual

The definitive set of standards for Anaplan model building.

The Planual provides a systematic set of standards for model building on the Anaplan platform. The rules in it are designed to produce the most efficient, usable, and scalable Anaplan models, while dramatically increasing performance for models in all contexts. We highly recommend that all model builders familiarize themselves with these standards and start incorporating them into their model-building practices. (The results will be significant!)
View full article
PLANS is the new standard for Anaplan modeling—“the way we model.” It covers more than just formulas; it includes and evolves existing best practices around user experience and data hubs. It is a set of rules on the structure and detailed design of Anaplan models. This set of rules provides both a clear route to good model design for the individual Anaplanner and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.

In defining the standard, everything we do will consider or be based around:

Performance – Use the correct structures and formulas to optimize the Hyperblock.
Logical – Build the models and formulas more logically – see D.I.S.C.O. below.
Auditable – Break up formulas for better understanding, performance, and maintainability.
Necessary – Don’t duplicate expressions. Store and calculate data and attributes once and reference them many times. Don’t have calculations on more dimensions than needed.
Sustainable – Build with the future in mind, thinking about process cycles and updates.

The standards are based around three axes:

Performance – How do the structures and formulas impact the performance of the system?
Usability/Auditability – Is the user able to understand how to interact with the functionality?
Sustainability – Can the solution be easily maintained by model builders and support?

We will define techniques that balance these three areas to ensure the optimal design of Anaplan models and architecture.

D.I.S.C.O.

As part of model and module design, we recommend categorizing modules as follows:

Data – Data hubs, transactional modules, source data; referenced everywhere.
Inputs – Designed for user entry; minimize the mix of calculations and outputs.
System – Time management, filters, list attribute modules, mappings, etc.; referenced everywhere.
Calculations – Optimized for performance (turn summaries off, combine structures).
Outputs – Reporting modules; minimize data flow out.

Why build this way?

Performance: Fewer repeated calculations; optimized structures and formulas.
Logical: Data and calculations reside in logical places; model data flows can be easily understood.
Auditable: Model structure can be easily understood; simplified formulas (no need for complex expressions).
Necessary: Formulas and structures are not repeated; data is stored and calculated once and referenced many times, leading to efficient calculations.
Sustainable: Models can be adapted and maintained more easily; expansion and scaling are simplified.

Recommended content:

Performance: Dimension Order; Formula Optimization in Anaplan; Formula Structure for Performance; The Truth About Sparsity: Part 1; The Truth About Sparsity: Part 2; Data Hubs: Purpose and Peak Performance; To Version or Not to Version?; Line Item Subsets Demystified
Logical: Best Practices for Module Design; Data Hubs: Purpose and Peak Performance
Auditable: Formula Structure for Performance
Necessary: Reduce Calculations for Better Performance; Formula Optimization in Anaplan
Sustainable: Dynamic Cell Access Tips and Tricks; Dynamic Cell Access - Learning App; Personal Dashboards Tips and Tricks; Time Range Application
Ask Me Anything (AMA) sessions: The Planual; The Planual Rises
View full article
Learn how to organize your model into logical parts to give you a well-designed model that is easy to follow, understand, and amend at a later date.
View full article
You may have heard about a model called a data hub, but perhaps you aren’t confident that you understand the fundamentals, primary functions, or considerations when architecting one. There are three main advantages to incorporating a data hub:

Single source of truth: Stores all transactional data from the source system.
Data validations: Ensures all data is correct and valid before the data gets to the spoke model(s).
Performance: It is always faster to load data from a model rather than a file. Additionally, the administrator can ensure the correct granularity of data in the spoke model(s) when using a data hub. For example, the source system may only contain transactional data at the daily level, but the planners may need the data aggregated to the month. The data hub can summarize the data and export only the data needed.

The following information is designed to further define a data hub and support you in your journey of building your own.

Definition of the Data Hub

First, we need to define what a data hub is. This can be split into four sections:

Use cases: The data hub should be the first model built, whether you have a single use case or multiple use cases. The data should be automatically refreshed on a schedule (nightly, weekly, monthly, etc.) from the source system, often an Enterprise Data Warehouse (EDW). All modules and views that create hierarchies or lists should be stored in the data hub, which gives your models one version of the truth and reduces the duplication of data.
Model connectivity: Anaplan Connect, one of our 3rd-party vendors (Informatica Cloud, Dell Boomi, Mulesoft, or SnapLogic), or our REST API can be used to automate the loading of data into the data hub from the source system, as well as the transfer of data from the data hub to the spoke model(s). Additionally, transactional data should not be loaded directly into the spoke model, especially if there is a large volume of data.
Functions: Often, simple ETL (Extract, Transform, and Load) functions can be used within your data hub to transform the data for the spoke model(s). This is helpful when consolidating data from multiple sources where you have different “codes” and need a mapping module to ensure the data gets mapped correctly.
Team: The data hub should be managed by a designated team of experts who understand what data is stored in it (to ensure duplication doesn’t happen), as well as how and when the data gets loaded.

Anaplan Architecture with a Data Hub

There are several ways your Anaplan architecture could look, depending on the number of workspaces you currently have and the type of security your company requires. The following are illustrations of common architectures.

Master Hub Model: Across Workspaces
The most common, and recommended, architecture is when the data hub is in its own workspace. Not only does this have the advantage of not interfering with the other models, but it also adds an additional security layer with a segregation of duties. In this view, the Anaplan Workspace Admin(s) can limit access to the data hub workspace to only the people who require it.

Master Hub Model: Within a Workspace
The simplest depiction is where your data hub is within the same workspace as your spoke models. While this can be done, it is not best practice: there is no segregation of duties, and heavy loads from the source system may cause performance issues.
Additionally, when adding users, the Anaplan Workspace Administrator (Admin) would need to ensure users of the spoke models don’t have access to the data hub, and that users of the data hub don’t have access to the spoke models.

Multiple Data Hubs
Finally, the data hub doesn’t necessarily have to be the only model in the workspace. You can have additional data hubs, if needed.

Factors to Consider When Implementing a Data Hub

There are six main elements to think about when architecting a data hub:

User stories.
Source systems.
Lists.
Modules.
Data validation.
Exporting data to spoke model(s).

User Stories
One of the cornerstones of The Anaplan Way is data (process, model, and deployment being the others), which is critical to all implementations. You will need to know what data is needed for a certain use case. Consider the following common data questions that need to be answered in order to be successful:

What granularity of data is needed?
How much history is needed? How much history do you have?
Does the source system only have transactional data, but the use case needs the data at the month level? Can the source system do the aggregation for you?

After all data questions have been answered, shift your focus to the source system and consider the following:

Consider the source system. Where is the data coming from? What is the source system, and is it a trusted environment? Is it Excel? Typically, you should stay away from Excel as the “source” because Excel cannot be audited.
Define the data source owners. Who has access to this data? Who is preparing it? Are they part of the project? These are often-overlooked questions that are critical to success. Ideally, the data source owners need to be part of the project from the start to understand the file specifications and prepare the initial load of the data, as well as towards the end of the project to do a final load of the data.
Define file specifications. How many files will be needed? Typically, you will need master data as well as transactional data. Instead of having one file with all of this data, determine if the data can be split between different files (one for transactional data, one for the unique members of the master data). It is better for Anaplan (for performance reasons) to split these to reduce warnings during the data load process.
Analyze the data. Understand what makes each record unique (date/period and transactional amounts should not be part of this), and make sure the data owners don’t give you everything (Select * From Employee) when you only need five columns. Remember, it is better to ask for additional columns midway through the project than to get all columns at the beginning and only use a select few.
Consider custom codes in the source system. Find more on this in the transactional lists section. This is a great trick for transactional data. After you have analyzed the data to understand what makes each record/row unique, concatenate the “codes” of the metadata into one transactional code, but remember, you will need to stay under the 60-character threshold.
Define the schedule. When is the data available? Is the data on a certain schedule? What schedule does this use case require?
Determine the ETL medium to be used. Will Anaplan Connect be sufficient, will one of our 3rd parties be used, or will a more custom application be needed, such as the REST API? Does your company already have this experience in-house, or will training be required? These will need to be factored into all data stories.
Transactional Lists
Usually, the largest lists are those containing transactional data. There can be millions of transactional IDs with several list properties defined. First, properties should not be defined on a transactional list (or any list, except for Display Name), as they do get accounted for in the workspace memory. Second, instead of loading metadata to list properties (Cost Center and Account as properties), try to figure out a way to incorporate them into the code. If the transactional data defines a transactional amount at the intersection of Cost Center and Account for a particular month, attempt to use the code of the Cost Center and the code of the Account concatenated together (0100_57000). Not only will this decrease your list size, but it will also create a healthier model.

In the example below, the model builder did not create a custom code, but rather used a combination of properties to make the record unique, which included the date/period as well as the transactional amount. Notice the original number of records vs. the number of records after a custom code was created. Incorporating the date/time period, as well as the transactional amount, inflated the list size exponentially based on the number of years that were loaded. Doing this not only made the model bigger, but also caused poor model-opening performance. See the Appendix for a simple worked example to explain further. Learn more about sparsity in the two-part series The Truth About Sparsity: Part 1 and The Truth About Sparsity: Part 2.

Flat Lists
Similar to transactional lists, flat lists are not part of a hierarchy; they are a series of records grouped in a list, like Products, Companies, Cost Centers, or Employees. These are your “legend” or “anchor” for all metadata about each unique record. Again, the only property that should be defined is a Display Name, if needed. It is best practice, from a model builder’s perspective, to suffix the name with “Flat” or “- Flat”. This helps identify whether the list is part of a hierarchy or a flat list (Employee – Flat, Cost Center – Flat, Product – Flat). These lists can be used for data validation, which is described later in this article.

Modules
Ideally, you should have three types of modules in the data hub:

Transactional: A transactional module stores the transactional data by the time series, whether that is by day, week, month, quarter, or year. The only line items should be transactional data; no other line items should be defined. Additionally, to keep the size down, make sure the summaries on the line items are turned off (None), as there is no reason to sum the data within the module.
System: System (SYS) modules, the “S” in DISCO, do not have time associated with them and should only be dimensionalized by a single flat list (Employee Flat, Cost Center Flat, Product Flat). These modules store the metadata or attributes about a list item that don’t change over time, for example an employee’s start date. Another example of a SYS module is any kind of mapping that is required, whether it is a SYS Time Filter module or a mapping from one source system to another.
Export modules: If the data from the source system is being loaded at a lower granularity than needed in the spoke model(s), export modules can aggregate the data to the specified level (month, quarter, or year), which leads to more efficient data load performance to the spoke model(s).
Additionally, it is better to load only the granularity of data needed instead of loading all data to the spoke model but only using a portion of it.

Loading Data vs. Using Formulas in SYS Modules
If you can devise a custom code where all of the attributes of the data are accounted for, you can greatly increase the performance of your data load, especially on very large data volumes. It is actually faster to use formulas to derive the data from the custom code than it is to load the data. Why? A couple of reasons. First, when data is loaded, the load triggers the change log, and every change is recorded in the model history. Second, loading data to another module is an additional action; if you don’t need that action, you save processing time.

In the example below, the exact same data was loaded four different ways:

Import Properties to a List: A list was created with all attributes, including the transactional data, and was loaded to list properties (not best practice and against DISCO).
Import to List, Attribute, and Trans: A list was created, the transactional data was loaded to a transactional module, and all of the attributes were loaded to a SYS Attribute module.
Import to List, Trans, Calculate Attribute: A list was created, the transactional data was loaded to a transactional module, but the SYS Attribute module was calculated using two different methods:
One Line Item: Using FINDITEM() with several functions parsing out the information from within the FINDITEM(). For example, FINDITEM(Cost Center, RIGHT(LEFT(Trans Details.Code, '2nd Group'), 3)).
Multiple Line Items: Parsing of the member spread across multiple line items and using FINDITEM() with only the list and code as the parameters. First, you do the parsing to get the correct piece of the code (one line item), and then the FINDITEM() of that code (a second line item).

Load Performance
Notice, the best-performing data load was the last one, Import to List, Trans, Calculate Attribute (multiple line items), where the parsing out of the data was spread over multiple line items. This is because the data load was able to take advantage of Anaplan’s multithreading capabilities. The worst-performing data load occurred when data was loaded to the Attribute module because, due to the sheer size of the data, a save had to be performed.

Exporting to Spoke Models
One of the most important concepts to remember when exporting data is to use a view from a module. Lists should not be exported because you lose control over what you export; it is all or nothing. By using views, you can employ a filter (which should always be a Boolean) to render exactly which data needs to be exported. If you need more than one filter, combine them into one line item and use that line item as the filter. You will have much better performance using a single Boolean line item as a filter vs. having multiple filters defined. Another important concept to remember is to only export detailed information, as there is no reason to export parent information (quarter, year, etc.). Not only will you get warnings when exporting parent information, but the performance of the export will decrease because the system will have to create a debug log. The goal is to make sure a debug log is not created (all green checks), so if there ever is an issue, you will know it truly is an issue that needs attention.
Line items in the data hub formatted as text should not be exported as text, but rather as list-formatted line items in the spoke model (text -> list-formatted line item). The goal is to reduce the number of text-formatted line items in the spoke model. Some say they need to do validation in the spoke model, therefore they need to import the data as text. Actually, this is false: the validation should have already been done in the data hub, so there should be no need to do the validation again.

Lastly, you should think about what really needs to be exported. Do you really need to export historical data that hasn’t changed? Instead, just export the newly loaded data, or delta data. This can be accomplished by using one of two methods:

From the source system, request IT to only send the updated information, not the full load every time. Additionally, request IT to create a column in the source file with a hardcoded value of “TRUE.” This tells Anaplan which rows are new or have been updated and can be used as a filter for an export. Just know, before the import of the source data gets loaded, make sure the first action within the process clears out the previous TRUE records (set this up via a view using a filter where the view only shows members with a value of TRUE).
Utilize the current period function to only export the current period data. In the SYS Time Filter module, create a line item named Current Period with the formula CURRENTPERIODSTART(). In the export views, filter the data on this line item.

Tips and Tricks
A few tips and tricks to be aware of include the following:

Hierarchies should not be in the data hub.
Analytical modules should not be in the data hub.
Do not delete and reload lists.
Consider a Data Validations model.

Why should hierarchies not be in the data hub? To answer that question, you need to understand why hierarchies are used in the first place. Essentially, hierarchies are only needed to aggregate data for analytical purposes, and since users will not normally log in to the data hub, the lists essentially take up space. With that said, it is perfectly okay to create the hierarchies for testing purposes to ensure your actions from the meta modules are building the hierarchies correctly, but as soon as the actions are working correctly and have been verified, you can remove the list structures from the data hub. A case can be made that certain implementations may need the hierarchies created in the data hub for validation purposes across several sources. If this is the case in your implementation, just be sure to only use the hierarchies for validation purposes.

In addition to the above, there are two more reasons not to have hierarchies built in the data hub: cluttered data, and spoke models that pull data from the lists. Data hubs need to be clean and clutter-free to ensure optimal performance, which also makes it easier for the administrators to understand exactly what data is stored in the data hub. Additionally, when you have lists—especially hierarchical lists—spoke model builders will sometimes build their lists from the lists within the data hub instead of from a view. It is best practice to always build lists from views within a module so the action can benefit from filters (there are no filters when importing from lists).

Analytical modules should not be in the data hub since end users don’t normally access the data hub. There really isn’t a reason to have products by versions by time in the data hub; that belongs in the spoke model.
Remember, the data hub should only be used to store data from the source system(s). Within your nightly data load process, do not delete and reload data, including the list structures. If you have a proper code, you shouldn’t need to do this. Additionally, not only does this impact the overall performance of the process (adding an additional action to delete the list, which then deletes all data associated with that list), but the process essentially fills up the change log with the exact same data that it had before the delete. When a certain threshold is surpassed, the model will require a save, thus taking up even more time. Ultimately, you are forcing the model to re-aggregate all of the data, instead of just the new data.

Lastly, if you know you will have to do a lot of transformations on your data (consolidating multiple source systems, or your data is not clean), think about creating a Data Validations model. This model’s sole purpose would be to clean the data and then feed it to the data hub, thus keeping the transformations to a minimum in the data hub, as well as keeping the data hub clean.

Worked Example
Use case: Transaction data is by Store, SKU, and Month.

Bad way:
The code for the Transaction list is a three-part code: Store_SKU_Month.
Attributes for Store, SKU, and Month are imported as text and matched against the Store list, SKU list, and Time period respectively.
An additional line item is needed for the Store and SKU code (for export).
This is the screenshot of the bad way: notice the repetition of the attributes. STR07 and SKU031 are repeated each month.

Good way:
Two data files: unique combinations of Store and SKU (two-part code), and Store SKU code by month for the quantity.
The transaction details are stored in a module dimensioned by Transactions.
The Store and SKU attributes are calculated using the “_” delimiter.
The quantity is stored in a module dimensioned by Transactions and by month.
The additional line item needed for the Store and SKU code (for export) is a subsidiary view in the module, as it is not dimensionalized by Time.
These are the screenshots of the good way:

Below is the breakdown of the model in terms of list size, line items, and the associated member usage of the various structures. The main reasons for the improvement are that lists themselves account for approximately 500 bytes per member, and that the attributes are repeated per “month” in the transaction data (as mentioned above).

Hopefully, this article has shed some light on data hubs, how they should be used, and what you can do to ensure they perform at their peak level. Remember, analyze the data to understand what makes the row unique and use that as the code. Every list should have a code—every list!
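To make the size arithmetic behind the worked example concrete, here is a minimal Python sketch. The store, SKU, and month counts are made up for illustration, and the 500-bytes-per-member figure is the approximation mentioned above; it simply compares how many list members a three-part Store_SKU_Month code creates versus a two-part Store_SKU code.

# Illustrative sizing comparison for the worked example above.
# Assumed (hypothetical) volumes: 100 stores x 500 SKUs, 24 months of history.
stores, skus, months = 100, 500, 24
bytes_per_member = 500  # approximate workspace cost of a list member

bad_members = stores * skus * months   # Store_SKU_Month: one member per month
good_members = stores * skus           # Store_SKU: months held as a time dimension

print(f"Bad way : {bad_members:,} members (~{bad_members * bytes_per_member / 1e6:.0f} MB of list space)")
print(f"Good way: {good_members:,} members (~{good_members * bytes_per_member / 1e6:.0f} MB of list space)")

# Building the two-part code and checking Anaplan's 60-character code limit
code = "STR07_SKU031"
assert len(code) <= 60, "Codes must stay under the 60-character limit"

With these assumed volumes, the three-part code creates 24 times as many list members as the two-part code, which is exactly the repetition of STR07 and SKU031 visible in the screenshots above.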
View full article
The process of designing a model will help you:

Understand the customer’s problem more completely.
Bring to light any incorrect assumptions you may have made, allowing for correction before building begins.
Provide the big-picture view for building. (If you were working on an assembly line building fenders, wouldn’t it be helpful to see what the entire car looked like?)

Understand the Requirements and the Customer’s Technical Ecosystem when Designing a Model

When you begin a project, gather information and requirements using a number of tools. These include:

Statement of Work (SOW): Definition of the project scope and project objectives/high-level requirements.
Project Manifesto: Goal of the project – the big-picture view of what needs to be accomplished.
IT ecosystem: Which systems will provide data to the model and which systems will receive data from the model? What is the Anaplan piece of the ecosystem?
Current business process: If the current process isn’t working, it needs to be fixed before design can start.
Business logic: What key pieces of business logic will be included in the model?
Is a distributed model needed? Consider high user concurrency, security needs that require a separate model, or regional differences that are better handled by a separate model.
Is the organization using ALM, requiring split or similar models to effectively manage development, testing, deployment, and maintenance of applications? (This functionality requires a premium subscription or above.)
User stories: These have been written by the client—more specifically, by the subject matter experts (SMEs) who will be using the model.

Why do this step? To solve a problem, you must completely understand the current situation. Performing this step provides this information and the first steps toward the solution.

Results of this step:

Understand the goal of the project.
Know the organizational structure and reporting relationships (hierarchies).
Know where data is coming from and have an idea of how much data clean-up might be needed.
Know if any of the data is organized into categories (for example, product families) or what data relationships exist that need to be carried through to the model (for example, salespeople only sell certain products).
Know what lists currently exist and where they are housed.
Know which systems the model will either import from or export to.
Know what security measures are expected.
Know what time and version settings are needed.

Document the User Experience

Front-to-back design has been identified as the preferred method for model design. This approach puts the focus on the end-user experience. We want that experience to align with the process so users can easily adapt to the model. During this step, focus on:

User roles. Who are the users?
Identifying the business process that will be done in Anaplan.
Reviewing and documenting the main steps of the process for each role. If available, utilize user stories to map the process.

You can document this in any way that works for you. Here is a step-by-step process you can try:

1. What are the start and end-points of the process? What is the result or output of the process?
2. What does each role need to see/do in the process?
3. What are the process inputs and where do they come from?
4. What are the activities the user needs to engage in? Verb/object—approve request, enter sales amount, etc. Do not organize during this step. Use post-its to capture them.
5. Take the activities from step 4 and put them in the correct sequence.
6. Are there different roles for any of these activities? If no, continue with step 8.
7. If yes, assign a role to each activity.
8. Transcribe the process using PowerPoint® or Lucid charts. If there are multiple roles, use swim lanes to identify the roles.
9. Check with SMEs to ensure accuracy.

Once the user process has been mapped out, do a high-level design of the dashboards. Include:

Information needed. What data does the user need to see?
What the user is expected to do, or decisions that the user makes.

Share the dashboards with the SMEs. Does the process flow align?

Why do this step? This is probably the most important step in the model-design process. It may seem as though it is too early to think about the user experience, but ultimately the information or data that the user needs to make a good business decision is what drives the entire structure of the model. On some projects, you may be working with a project manager or a business consultant to flesh out the business process for the user. You may have user stories, or it may be that you are working on design earlier in the process and the user stories haven’t been written. In any case, identify the user roles and the business process that will be completed in Anaplan, and create a high-level design of the dashboards. Verify those dashboards with the users to ensure that you have the correct starting point for the next step.

Results of this step:

List of user roles.
Process steps for each user role.
High-level dashboard design for each user role.

Use the Designed Dashboards to Determine What Output Modules are Necessary

Here are some questions to help you think through the definition of your output modules:

What information (and in what format) does the user need to make a decision?
If the dashboard is for reporting purposes, what information is required?
If the module is to be used to add data, what data will be added and how will it be used?
Are there modules that will serve to move data to another system? What data, and in what format, is necessary?

Why do this step? These modules are necessary for supporting the dashboards or exporting to another system. This is what should guide your design—all of the inputs and drivers added to the design are added with the purpose of providing these output modules with the information needed for the dashboards or export.

Results of this step:

List of outputs and the desired format needed for each dashboard.

Determine What Modules are Needed to Transform Inputs to the Data Needed for Outputs

Typically, the data at the input stage requires some transformation. This is where business rules, logic, and/or formulas come into play:

Some modules will be used to translate data from the data hub. Data is imported into the data hub without properties, and modules are used to import the properties.
Reconciliation of items takes place before importing the data into the spoke model.
These are driver modules that include business logic and rules.

Why do this step? Your model must translate data from the input to what is needed for the output.

Results of this step:

Business rules/calculations needed.

Create a Model Schema

You can whiteboard your schema, but at some point in your design process, your schema must be captured in an electronic format. It is one of the required pieces of documentation for the project and is also used during the Model Design Check-in, where a peer checks over your model and provides feedback.

Identify the inputs, outputs, and drivers for each functional area.
Identify the lists used in each functional area.
Show the data flow between the functional areas.
Identify time and versions where appropriate.

Why do this step? It is required as part of The Anaplan Way process. You will build your model design skills by participating in a Model Design Check-in, which allows you to talk through the tougher parts of design with a peer. More importantly, designing your model using a schema means that you must think through all of the information you have about the current situation, how it all ties together, and how you will get to that experience that meets the exact needs of the end user without fuss or bother.

Result of this step: A model schema that provides the big-picture view of the solution. It should include imports from other systems or flat files, and the modules or functional areas that are needed to take the data from its current state to what is needed to support the dashboards that were identified in Step 2. Time and versions should be noted where required. Include the lists that will be used in the functional areas/modules.

Your schema will be used to communicate your design to the customer, model builders, and others. While you do not need to include calculations and business logic in the schema, it is important that you understand the state of the data going into a module, the changes or calculations that are performed in the module, and the state of the data leaving the module, so that you can effectively explain the schema to others. For more information, check out 351 Schemas. This 10-to-15-minute course provides basic information about creating a model schema.

Verify That the Schema Aligns with Basic Design Principles

When your schema is complete, give it a final check to ensure:

It is simple. “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.” ― Ernst F. Schumacher. “Design should be easy in the sense that every step should be obviously and clearly identifiable. Simplify elements to make change simple so you can manage the technical risk.” — Kent Beck
The model aligns with the manifesto.
The business process is defined and works well within the model.
View full article
Filters can be very useful in model building and are widely used, but they can come at the expense of performance—often very visibly to users through their use on dashboards. Poorly performing filters can also slow imports and exports, which in turn may lead to the blocking of other activity, causing a poor perception of the model. There are some very simple guidelines for designing well-performing filters.

Using a Single Boolean Filter Is Fastest
A single Boolean filter on a line item that does not have time or versions applied, and does not have a summary, is fastest. Try to create a Boolean line item that incorporates all the filter criteria you want to apply. This allows you to re-use the line item and combine a series of Boolean line items into a single Boolean for use in the filter.

For example, you may want to filter on three data points: Volume, Product Category, and Active Status. Volume is numeric, Product Category is a list-formatted line item matching a user selection, and Active Status is a Boolean. Create a line item called Filter with the following formula:

Volume > Min Vol AND Product Cat = User Selection.Category AND Active Status

Here’s a very simple example module to demonstrate this: a Filter line item is added to represent all the filters we need on the view. Only the Filter line item needs to be dimensioned by Users. A User Selection module, dimensioned only by Users, is created to capture user-specific filter choices.
Here’s the data before we apply the filter:
Here’s the data with the filter applied:

A best practice suggestion is to create a filter module with a line item for each filter part. You may want other filters, and you can then combine each filter as needed from this system module. This reduces repetition and gives you control over the filters to ensure they can all be Boolean.

What Can Make a Filter Slow?

The biggest performance hit for filters is nesting dimensions on rows. The performance loss increases significantly with the number of nested dimensions and the number of levels they contain. With a flat list versus nested dimensions (filtering on the same number of items), the nested filter will be slower. This was tested with a 10,000,000-item list versus two lists of 10 and 1,000,000 items as nested rows; the nested dimension filter was 40% slower.

Filtering on line items with a line item summary will be slow. A numeric filter on 10,000,000 items can take less than a second, but with a summary it will take at least five seconds.

Multiple filters will increase time. This is especially significant if any of the preceding filters do not lower the load, because they will take additional time to evaluate. If you do use multiple filter conditions, try to order them so the most effective filters come first. If a filter doesn’t often match on anything, evaluate whether it’s even needed.

Hidden levels act as a filter. If you hide levels on a composite list, this acts like a filter before any other filter is applied. The hiding takes time to process, and the impact grows with the number of levels and the size of the list.

Avoid Nested Rows for Export Views
Using nested rows can be a useful way to filter a complex set of data for export but, as discussed above, the filter performance here can be poor. The best way around this is to pivot the dimensions so there is only one dimension on rows and use the Tabular Multi Column export option with a Filter Row based on Boolean option.
Some extra filter tips include the following:

Filter duration will affect saved views used in imports, so check the saved view open time to see the impact. This view open time will apply to every use of the view, including imports or exports.
If you need to filter on a specific list, create a subset of those items and create a new module dimensioned by the subset to view that data.
View full article
Learn how small changes can lead to dramatic improvements in model calculations.
View full article
Overview
The following is a guide for the new Statistical Forecasting Calculation Engine models (monthly and weekly). It includes enablement videos, a practice data import exercise, model documentation, and specific steps to follow when using the model for implementations.

1. Enablement Videos & Practice Exercise

1a. Intro and Overview Video: Model overview and review of new key features. (Video below.)
1b. Initial Model & Data Import Steps: Steps on how to set up the model, product hierarchy, customer list, and multi-level forecast analysis. (Video below.)
1c. Practice Exercise (import data to set up the stat forecast): Two sets of load files are included to practice setup for a single-level product set or a multi-level product set with customers, product, and brand level. Start on the "Initial App Setup" dashboard and load either the Single OR Multi Level files into the model; use the Import video as a guide if needed. (.zip file attached.)

2. Documentation

2a. Lucidchart Process Maps: The Lucidchart Process Map document includes a high-level process flow for end-user navigation and detailed tabs for each section. Details and links are also on the "Training & Enablement" dashboard. (Process Maps.)
2b. High-Level Process Map PDF: High-level process map in PDF format. (Attached.)
2c. Forecast Methods PDFs: A high-level version with the forecast algorithms list and overview, and a detailed version that includes a slide for each forecast method with a method overview, advantages/disadvantages, equation, and example graph output. These slides are also included on the "Forecast Methods Overview & Formulas" dashboard. (Attached.)

3. Implementation Specifics

3a. Training & Enablement Dashboard: The Training & Enablement dashboard contains details on process map navigation.
3b. Initial Model Setup: The current model is staged with chocolate data from the data hub; execute the CLEAR MODEL action prior to loading customer-specific data.
3c. Changing Model Time Scale (align Native & Dynamic Time settings): If a Time Settings change is required, review the Initial App Setup dashboard to align Native Time with the Dynamic Time setup in the model.
3d. Monthly Update Process: After initial setup, use the Monthly Data History Upload dashboard to update prior period actuals and settings.
3e. Single-Level vs. Multi-Level Forecast Setup: There are two implementation options. Single-Level Forecast: forecast at one level of the product hierarchy (i.e., all stat forecasts calculated at the Item level); most use cases will leverage the single-level forecast setup. Multi-Level Forecast: the ability to forecast at different levels of the product hierarchy (i.e., Top Item | Customers, Item, and Brand level can all have a stat forecast generated); this requires a complex forecast reconciliation process, so review the "Multi-Level Forecast Overview" dashboard if it is needed.
3f. Troubleshooting Tips: Follow the troubleshooting tips on the Training & Enablement dashboard if you have issues with the stat forecast generating, before reaching out for support.
3g. Model Notes & Documentation: Module notes include the DISCO classification and module purpose.
3h. "Do Not Modify" Items: Module notes contain DO NOT MODIFY for items that should not be changed during the implementation process.
3i. User Roles & Selective Access: Demo, Demand Planner, and Demand Planning Manager roles can be adjusted after the Selective Access process is run on the Flat List Management dashboard; users can then be given access to certain product groups/brands, etc.
3j. Batch Processing: Details on daily batch processing and how to prepare a roadmap of your batch processes (files, queries, import actions/processes in Anaplan); see the attachment.

4. Videos
Intro and Overview Video.
Data Import and Setup Steps.

5. Model Download Links
Monthly Statistical Forecasting Calculation Engine
Weekly Statistical Forecasting Calculation Engine
View full article
How do we keep our users in the Anaplan platform to do work that requires a high level of advanced customization, faster and more easily than in their previous Excel environment? The solution is called "Smart Filters". Check it out!
View full article
Imagine This Scenario
You are in the middle of making changes in your development model and have been doing so for the last few weeks. The changes are not complete and are not ready to synchronize. However, you just received a request for an urgent fix from the user community that is critical for the forthcoming monthly submission. What do you do?

What you don’t want to do is take the model out of deployed mode! You also don’t want to lose all the development work you have been doing. Don’t worry! Following the procedure below will ensure you can apply the hotfix quickly and keep your development work. The following diagram illustrates the procedure:

It’s a Two-Stage Process

Stage 1: Roll the development model back to a version that doesn’t contain any changes (i.e., is the same as production), and apply the hotfix to that version.

1. Add a new revision tag to the development model as a temporary placeholder. (Note the History ID of the last structural change, as you'll need it later.)
2. On the development model, use History to restore to a point where development and production were identical (before any changes were made in development).
3. Apply the hotfix.
4. Save a new revision of the development model.
5. Sync the development model with the production model. Production now has its hotfix.

Stage 2: Restore the changes to development and reapply the hotfix.

1. On the development model, use the History ID from Stage 1, Step 1 to restore to the version containing all of the development work (minus the hotfix).
2. Reapply the hotfix to this version of development.
3. Create a new revision of the development model. Development is now back to where it was, with the hotfix applied.

When your development work is complete, you can promote the new version to production using ALM best practice.

Additional Resources: The procedure is documented in the Fixing Production Issues Anapedia article.
View full article
Overview
Imports are blocking operations: to maintain a consistent view of the data, the model is locked during the import, and concurrent imports run by end users will need to run one after the other, blocking the model for everyone else. Exports block data entry while the export data is retrieved; the model is then released. During the blocking phase, users can still navigate within the model.

Rule #1: Carefully Decide If You Let End Users Import (and Export) During Business Hours
Imports executed by end users should be carefully considered and, if possible, executed once or twice a day. Customers more easily accept model updates at scheduled hours for a predefined time—even if it takes 10+ minutes—and are frustrated when these imports run randomly during business hours. Your first optimization is to adjust the process so these imports are run by an administrator at a scheduled time, and then let the user base know about the schedule.

Rule #2: Use a Saved View
The first part of any import (or export) is retrieving the data. The time it takes to open the view directly affects the time of the import or export. Always import from a saved view—NEVER from the default view. Use the naming convention for easy maintenance. Ensure the view uses optimized filters with a single Boolean value per axis. Hide the line items that are not needed for the import; do not bring extra columns that are not needed. If you have done all of the above and the view is still taking time to open, consider using the Tabular Multi Column export and filtering on the way out. This has been proven to improve some sluggish exports.

Rule #3: Mapping Objective = Zero Errors or Warnings
Make sure your import executes with no errors or warnings, as every error takes processing time. The time to import into a medium-to-large list (>50k) is significantly reduced if no errors have to be processed. In the import definition, always map all displayed line items (source→target) or use the "ignore" setting. Don't leave any line item unmapped.

Rule #4: Watch the Formulas Recalculated During the Import
If your end users encounter poor performance when clicking a button that triggers an import or a process, it is likely due to the recalculations triggered by the import, especially if the action creates or moves items within a hierarchy. You will likely need the help of the Anaplan Model Optimization team to identify which formulas are triggered after the import is done and to get a performance check on these formulas to identify which ones take most of the time. Usually, formulas fetching many cells, such as SUM, LOOKUP, ANY, or FINDITEM, are likely to be responsible for the performance impact. Speak to your Business Partner for more details on the Model Optimization services available to you.

To solve such situations, you will need to challenge the need to recalculate the identified formulas each time a user calls the action. Often, for actions such as creations, moves, and assignments done in WFP or Territory Planning, many calculations used for reporting are triggered in real time after the hierarchy is modified by the import, and are not necessarily needed by users until later in the process. The recommendation is to challenge your customer and see if these formulas can be calculated only once a day, instead of each time a user runs the action.
If this is acceptable, you'll need to rearchitect your modules and/or formulas so that these heavy formulas run through a different process, run daily by an administrator and not by each end user. If not, you will need to look at the formulas more closely to see what improvements can be made. Remember, breaking formulas up often helps performance.

Rule #5: Don't Import List Properties
Importing list properties takes more time than importing them as module line items. Review the lists in your model that are impacted by imports, and look to replace list properties with module line items when possible. Use a system module to hold these for the key hierarchies, as per D.I.S.C.O.

Rule #6: Get Your Data Hub
Hub and spoke: set up a data hub model, which will feed the other production models used by stakeholders.

Performance benefits: a data hub prevents production models from being blocked by a large import from an external data source. But since data hub to production model imports are still blocking operations, carefully filter what you import, and use the best practice rules listed above. All import and mapping/transformation modules required to prepare the data to be loaded into planning modules can now be located in a dedicated data hub model and not in the planning model. The planning model will then be smaller and will work more efficiently. Try to keep the transaction data history in the data hub, with a specific analysis dashboard made available for end users; often, the detail is not needed for planning purposes, and holding this data in the planning model has a negative impact on size, model opening times, and performance.

As a reminder, the other benefits of a data hub not linked to performance are:

Better structure, easier maintenance: a data hub helps keep all the data organized in a central location.
Better governance: whenever possible, put the data hub in a different workspace. That eases the separation of duties between production models and metadata management, at least for actual data and production lists. IT departments will love the idea of owning the data hub and having no one else be an administrator in the workspace.
Lower implementation costs: a data hub is a way to reduce the implementation time of new projects. Assuming IT can load the data needed by the new project into the data hub, business users do not have to integrate with complex source systems, but with the Anaplan data hub instead.

Rule #7: Incremental Import/Export
This can be the magic bullet in some cases. If you export on a frequent basis (daily or more) from an Anaplan model into a reporting system, write back to the source system, or simply transfer data from one Anaplan model to another, you have ways to only import/export the data that has changed since the last export. Use a Boolean line item to identify records that have changed and only import those.
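As one possible way to produce such a delta feed outside Anaplan, here is a minimal Python sketch. The file names, column layout, and the assumption that the first column is the unique record code are all hypothetical; it compares today's extract with the previous one and writes a Changed column of TRUE/FALSE that can serve as the Boolean filter described in Rule #7.

import csv

# Hypothetical file names and layout: the first column is the unique record code,
# the remaining columns are the values to compare.
def load_rows(path):
    with open(path, newline='') as f:
        reader = csv.reader(f)
        header = next(reader)
        return header, {row[0]: row[1:] for row in reader}

header, previous = load_rows('extract_previous.csv')
_, current = load_rows('extract_current.csv')

with open('extract_delta.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(header + ['Changed'])
    for code, values in current.items():
        changed = previous.get(code) != values   # new or modified record
        writer.writerow([code] + values + ['TRUE' if changed else 'FALSE'])

Loading only the rows flagged TRUE (or loading the whole file and filtering on the Changed column in a saved view) keeps the import small and avoids re-triggering calculations on unchanged history.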
View full article
You can interact with the data in your models using Anaplan's RESTful API. This enables you to securely import and export data, as well as run actions, through any programmatic means you desire. The API can be leveraged in any custom integration, allowing for a wide range of integration solutions to be implemented. Completing an integration using the Anaplan API is a technical process that will require significant work by an individual with programming experience. Visit the links below to learn more:

API Documentation
Anaplan API Guide

You can also view demonstration videos to understand how to implement the API in your custom integration client. The videos below show step-by-step guides for sequencing API calls to export data from Anaplan, import data into Anaplan, and run delete actions and Anaplan processes.

The API sequence for uploading a file to Anaplan and running an import action is as follows:
The API sequence for running an export action and downloading a file from Anaplan is as follows:
The API sequence for running an Anaplan process and a delete action is as follows:
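As a concrete illustration of the first of these sequences (upload a file, then run an import), here is a minimal Python sketch using the requests library. The workspace, model, file, and import IDs, the authorization string, and the data file name are placeholders; the endpoint paths and response field names follow the pattern shown in the API documentation linked above, so verify the version prefix and payloads against the apiary before relying on them.

import requests

# Placeholder IDs; look these up for your own workspace and model.
BASE = 'https://api.anaplan.com/2/0'   # verify the version prefix against the docs
WS, MODEL = 'your_workspace_id', 'your_model_id'
FILE_ID, IMPORT_ID = 'your_file_id', 'your_import_id'
AUTH = 'Basic <base64 of username:password>'   # or certificate/token auth per the docs
HEADERS = {'Authorization': AUTH}

# Step 1: upload the source file as a single chunk.
with open('data.csv', 'rb') as f:
    upload = requests.put(
        f'{BASE}/workspaces/{WS}/models/{MODEL}/files/{FILE_ID}',
        headers={**HEADERS, 'Content-Type': 'application/octet-stream'},
        data=f.read())
upload.raise_for_status()

# Step 2: run the import action that reads from that file.
task = requests.post(
    f'{BASE}/workspaces/{WS}/models/{MODEL}/imports/{IMPORT_ID}/tasks',
    headers={**HEADERS, 'Content-Type': 'application/json'},
    json={'localeName': 'en_US'})
task.raise_for_status()

# Step 3: check the task until it reports a final state.
task_id = task.json()['task']['taskId']
status = requests.get(
    f'{BASE}/workspaces/{WS}/models/{MODEL}/imports/{IMPORT_ID}/tasks/{task_id}',
    headers=HEADERS)
print(status.json()['task']['taskState'])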
View full article
Summary
This article describes a technique to dynamically filter specific levels of a hierarchy on a dashboard and provides a method to select and visualize hierarchy levels on a dashboard.

Details
This article explains how to calculate the level of each item of a list in a hierarchy in order to apply specific calculations (custom summaries) or filters by level on a dashboard. In this example, we have an organization hierarchy of four levels (Org L1 to Org L4). For each item in the hierarchy, we want to calculate a module value that returns the associated level, which is to be displayed on a dashboard.

Notes and Platform Context
The technique addresses a specific limitation within dashboards where a composite hierarchy's list level cannot be selected if the list is synchronized to module objects on the dashboard.
The technique uses a static module based on the levels of the composite structure used for filtering of the object on a dashboard.
The technique is based on utilizing the Summary Method "Ratio" on line items corresponding to the list levels of the composite hierarchy to define the values of the filtering line items. Note that this method is not a formula calculation, but rather a use of the Summary Method Ratio on each line item applied to the composite hierarchy.

Example List
In this example, a four-level composite hierarchy list is used. The hierarchy in this example has asymmetrical leaf items per parent:

Defining the Level of Each List Item
In order to calculate the level of each item in each of the lists L1 to L4, we need to create a module that calculates the associated level of each member using this technique:

1) Create as many line items as there are levels of hierarchy, plus one technical line item.
2) Configure the settings in the blueprint of the line items of this filtering module, per this example and table:

Technical line item*: Formula = 1; Applies to = (empty); Summary method = Formula
Level or L4 (lowest level): Formula = 4; Applies to = Org L4; Summary method = Ratio; Ratio setting = L3 / Technical
L3: Formula = 3; Applies to = Org L3; Summary method = Ratio; Ratio setting = L2 / Technical
L2: Formula = 2; Applies to = Org L2; Summary method = Ratio; Ratio setting = L1 / Technical
L1: Formula = 1; Applies to = Org L1; Summary method = Ratio; Ratio setting = L1 / Technical

When applying these settings, the filtering module looks like this:

*Note the Technical line item Summary method uses Formula. Alternatively, the Minimum Summary method can be used, but it will return an error when a level of the hierarchy does not have any children and the calculated level is blank.

The filtering module with the Summary method applied gives the following results:

Use the line item at the lowest level—Level (or L4) (LOWEST)—as the basis of filters or calculations.

Applying a Filter on Specific Levels in Case of Synchronization
When synchronization is enabled, the option “Select levels to show” is not available. Instead, a filter based on the calculated level can be used to show only specific levels. In the example, we apply a filter which matches any of levels 4 and 1. The following filtered dashboard result is achieved by using the composite hierarchy as a page selector:
View full article
Little and Often
Would you spend weeks on your budget submission spreadsheet or your college thesis without once saving it? Probably not. The same should apply to making developments and setting revision tags. Anaplan recommends that during the development cycle, you set revision tags at least once per day. We also advise testing the revision tags against a dummy model if possible. The recommended procedure is as follows:

1. After a successful sync to your production model, create a dummy model using the ‘Create from Revision’ feature. This will create a small test model with no production list items.
2. At the end of each day (as a minimum), set a revision tag and attempt to synchronize the test model to this revision tag. The whole process should only take a couple of minutes.
3. Repeat step 2 until you are ready to promote the changes to your production model.

Why Do We Recommend This?
There are a very small number of cases where combinations of structural changes cause a synchronization error (99 percent of synchronizations are successful). The Anaplan team is actively working to provide a resolution within the product, but in most cases, splitting changes between revision tags allows the synchronization to complete. In order to understand the issue when a synchronization fails, our support team needs to analyze the structural changes between the revisions. Setting revision tags frequently provides the following benefits:

The number of changes between revisions is reduced, resulting in easier and faster issue diagnosis.
It provides an early warning of any problems so that someone can investigate them before they become critical.
The last successful revision tag allows you to promote some, if not most, of the changes if appropriate.
In some cases, a synchronization may fail initially, but when applying the changes in sequence the synchronization completes.

Using the example from above: synchronizations to the test model for R1, R2, and R3 were all successful, but R3 fails when trying to synchronize to production. Since the test model successfully synchronized from R2 and then R3, you can repeat this process for the production model. The new comparison report provides clear visibility of the changes between revision tags.
View full article
It is important to understand what Application Lifecycle Management (ALM) enables clients to do within Anaplan. In short, ALM enables clients to effectively manage the development, testing, deployment, and ongoing maintenance of applications in Anaplan. With ALM, you can introduce changes without disrupting business operations: you can securely and efficiently manage and update your applications with governance across different environments, and quickly deploy changes as you test and release development work into production, letting you run more “what-if” scenarios in your planning cycles.

Learn more here: Understanding model synchronization in Anaplan ALM. Training on ALM is also available in the Education section: 313 Application Lifecycle Management (ALM).
View full article
Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.

Getting Started
Python 3 offers many options for interacting with an API. This article explains how you can use Python 3 to automate many of the requests that are available in our apiary, which can be found at https://anaplan.docs.apiary.io/#. This article assumes you have the requests (version 2.18.4), base64, and json modules installed, as well as Python version 3.6.4. Please make sure you are installing these modules for Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python, Requests, Base Converter, and JSON. (Note: Install instructions are not at the JSON site, but they will be the same as for any other Python module.) If you are using a Python version older or newer than 3.6.4, or a requests version older or newer than 2.18.4, we cannot guarantee the validity of the article.
Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Authentication
To start, let's talk about authentication. Every script that connects to our API is required to supply valid authentication. There are two ways to authenticate a Python script that I will be covering:
1. Certificate authentication
2. Basic encoded authentication
Certificate authentication requires a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, you will need OpenSSL to convert it into a format usable with the API. With OpenSSL installed, convert the certificate to PEM format by running the following in your terminal:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

If you are using certificate authorization, the scripts in this article assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following in your terminal:

openssl x509 -text -in certtest.pem

To be used with the API, the PEM certificate string needs to be converted to base64, but the scripts we cover take care of that for you, so it is not covered in this section. To use basic authentication, you will need to know the Anaplan account email that is being used, as well as the password. All scripts in this article have the following code near the top:

# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your PEM-converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
    f'{username}:{cert}').encode('utf-8')).decode('utf-8'))

# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
#     ).encode('utf-8')).decode('utf-8'))

Regardless of the authentication method, you will need to set the username variable to the Anaplan account email being used.
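To make the pattern above concrete, here is a minimal, self-contained sketch (not one of the downloadable scripts) that builds a basic-authentication header and uses it for a simple GET of the workspaces list, writing the response to workspaces.json. The email and password are placeholders, and the error handling that the full scripts perform is omitted.

import base64
import json
import requests

# Placeholders: replace with the Anaplan account email and password being used.
username = 'user@example.com'
password = 'your_password'

# Build the basic Authorization header value, as described above.
user = 'Basic ' + base64.b64encode(
    f'{username}:{password}'.encode('utf-8')).decode('utf-8')

# Simple GET of the workspaces the user has access to.
response = requests.get('https://api.anaplan.com/1/3/workspaces/',
                        headers={'Authorization': user})

# Write the response body to a local JSON file for reference.
with open('workspaces.json', 'w') as f:
    json.dump(response.json(), f, indent=4)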
If you are using a certificate to authenticate, you will need to have your PEM-converted certificate in the same folder or a child folder of the one you are running the scripts from. If your certificate is in a child folder, please remember to include the file path when replacing cert.pem (e.g. cert/cert.pem). You can then remove the password line, its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable.

Getting the Information Needed for Each Script
Most of the scripts covered in this article require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for its respective objects is titled get_____.py. For example, if you want to get your file's metadata, run getFiles.py, which writes the file metadata for each file in the selected model and workspace, in an array, to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts. TIP: If you open the raw data tab of the JSON file, it is much easier to copy the whole set of metadata. The following are the links to download each get____.py script. Each get script uses the requests.get method to send a GET request to the proper API endpoint.
- getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to.
- getModels.py: Writes an array to models.json of all the models the user has access to if wGuid is left blank, or all of the models the user has access to in a selected workspace if a workspace ID was inserted.
- getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model.
- getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to the Apiary for more information on private vs. default files. Generally, it is recommended that all scripts be run via the same user account.)
- getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace.
- getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace.
- getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace.
- getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace.
- getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace.

Uploads
A file can be uploaded to the Anaplan API endpoint either in several chunks or as a single chunk. Per our apiary: We recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50MB. This creates a private file. Note: To upload a file using the API, that file must already exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API.
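Before looking at the full upload scripts in the next sections, here is a minimal sketch of the simplest case, uploading a file as a single chunk with one PUT request, using the endpoint listed later in this article. The workspace, model, and file IDs are placeholders taken from the metadata written by the get____.py scripts, 'data.csv' is a placeholder local file name, and 'user' is the Authorization value built in the Authentication section.

import requests

user = 'Basic <encoded credentials>'          # see the Authentication section
wGuid = '<workspaceID>'                       # placeholder from workspaces.json
mGuid = '<modelID>'                           # placeholder from models.json
fileID = '<fileID>'                           # placeholder from files.json

# PUT the raw contents of the local file to the file endpoint in one chunk.
with open('data.csv', 'rb') as f:
    response = requests.put(
        f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}',
        headers={'Authorization': user,
                 'Content-Type': 'application/octet-stream'},
        data=f.read())

print(response.status_code)                   # a 2xx code indicates the upload was accepted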
Multiple Chunk Uploads
The reference script is built so that if it is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will resume the upload, starting at the last successful chunk. For this to work, the file must first be split using a standard naming convention, using the terminal command below.

split -b [numberofBytes] [path and filename] [prefix for output files]

You can store the file in any location as long as you use the proper file path when setting the chunkFilePrefix (e.g. chunkFilePrefix = 'upload_chunks/chunk-' will look for file chunks named chunk-aa, chunk-ab, chunk-ac, etc., up to chunk-zz in the folder script_origin/upload_chunks/; it is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py.
Note: The assumed naming convention is only standard when using Terminal, and it will not necessarily apply if the file was split using another method in Windows. If you are using Windows, you will need to either create a way to standardize the naming of the chunks alphabetically, {chunkFilePrefix}(aa - zz), or run the script as detailed in the Apiary.
Note: The chunkUpload.py script keeps track of the last successful chunk by writing its name to a .txt file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it only if you want to start the upload again from the first chunk.

Single Chunk Upload
The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame. If the upload fails, it will have to start again from the beginning. If your file has a different name than the version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single PUT request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py.

Imports
The import.py script sends a POST request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information. You can download the script for running an import from this link: Import.py. Once the import is finished, the script writes the metadata for the import task, in an array, to postImport.json, which you can use to verify which task you want to view the status of when running the importStatus.py script.
The importStatus.py script returns a list of all tasks associated with the selected importID and their respective list index. If you want to check the status of the last run import, check postImport.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in the file importStatus.json. If the task is still in progress, it prints the task status and progress. If the task finished and a failure dump is available, it writes the failure dump in comma-delimited format to importDump.csv, which can be used to review the cause of the failure.
If the task finished with no failures, you will get a message telling you the import has completed with no failures. You can download the script for importStatus.py from this link: importStatus.py.
Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist, importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone.

Exports
The export.py script sends a POST request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information. You can download the script for running an export from this link: Export.py. Once the export is finished, the script writes the metadata for the export task, in an array, to postExport.json, which you can use to verify which task you want to view the status of when running the exportStatus.py script.
The exportStatus.py script returns a list of all tasks associated with the selected exportID and their respective list index. If you want to check the status of the last run export, check postExport.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in the file exportStatus.json. If the task is still in progress, it prints the task status and progress. It is important to note that no failure dump is generated if an export fails. You can download the script for exportStatus.py from this link: exportStatus.py.

Actions
The action.py script sends a POST request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information. You can download the script for running an action from this link: actionStatus.py.

Processes
The process.py script sends a POST request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. See Getting the Information Needed for Each Script for more information. You can download the script for running a process from this link: Process.py. Once the process is finished, the script writes the metadata for the process task, in an array, to postProcess.json, which you can use to verify which task you want to view the status of when running the processStatus.py script.
The processStatus.py script returns a list of all tasks associated with the selected processID and their respective list index. If you want to check the status of the last run process, check postProcess.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in the file processStatus.json. If the task is still in progress, it prints the task status and progress. If the task finished and a failure dump is available, it writes the failure dump in comma-delimited format to processDump.csv, which can be used to review the cause of the failure. Note that no failure dump is generated for the process itself, only for any imports within the process that failed. If the task finished with no failures, you will get a message telling you the process has completed with no failures. You can download the script for processStatus.py from this link: processStatus.py.
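As a rough illustration of the import/export/process pattern described above, the sketch below starts an import and then requests the status of the resulting task. The IDs are placeholders, the exact shape of the task response may vary, and the downloadable scripts above remain the reference for handling task lists and failure dumps.

import json
import requests

user = 'Basic <encoded credentials>'     # see the Authentication section
wGuid = '<workspaceID>'                  # placeholder
mGuid = '<modelID>'                      # placeholder
importID = '<importID>'                  # placeholder from imports.json
base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'

# Start the import by posting a new task to its tasks endpoint.
start = requests.post(f'{base}/imports/{importID}/tasks',
                      headers={'Authorization': user,
                               'Content-Type': 'application/json'},
                      data=json.dumps({'localeName': 'en_US'}))
print(start.json())                      # the response includes the ID of the new task

# Take the task ID from the response above, then request its status.
taskID = '<taskID from the response above>'
status = requests.get(f'{base}/imports/{importID}/tasks/{taskID}',
                      headers={'Authorization': user})
print(status.json())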
Downloading a File
Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important to note that you should set the variable fileName to the name the file has in its metadata. First, the download's individual chunk metadata is written, in an array, to downloadChunkData.json for reference. The script then downloads the file chunk by chunk and writes each chunk to a new local file with the same name as the 'name' listed in the file's metadata. You can download this script from this link: downloadFile.py.
Note: If a file already exists in the same folder as your script with the same name as the name value in the file's metadata, the local file will be overwritten by the file being downloaded from the server.

Deleting a File
You can delete the file contents of any file that the user has access to that exists on the Anaplan server.
Note: This only removes private content. Default content and the import data source model object will remain. You can download this script from this link: deleteFile.py.

Standalone Requests Code and Their Required Headers
In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will leave the content to the right of the Authorization: header blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates).
Note: requests.get will only generate a response body from the server, and no data will be saved locally unless written to a local file.

Get Workspaces List
requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization':})

Get Models List
requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization':})
or
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization':})

Get Model Info
requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization':})

Get Files/Imports/Exports/Actions/Processes List
The GET request for files, imports, exports, actions, or processes is largely the same. Change files to imports, exports, actions, or processes to get each list.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization':})

Get Chunk Data
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization':})

Post Chunk Count
requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a Chunk of a File
requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file})

Mark an Upload Complete
requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a File in a Single Chunk
requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file})

Run an Import/Export/Process
The POST request for imports, exports, and processes is largely the same. Change imports to exports or processes to run each.
requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Run an Action
requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Get Task List for an Import/Export/Action/Process
The GET request for import, export, action, and process task lists is largely the same. Change imports to exports, actions, or processes to get each task list.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization':})

Get Status for an Import/Export/Action/Process Task
The GET request for import, export, action, and process task statuses is largely the same. Change imports to exports, actions, or processes to get each task status. Note: Only imports and processes will ever generate a failure dump.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization':})

Download a File
Note: You will need to get the chunk metadata for each chunk of a file you want to download.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'})

Delete a File
Note: This only removes private content. Default content and the import data source model object will remain.
requests.delete('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/json'})

Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information, see the apiary entry for SFDC user administration.
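To tie the standalone requests together, here is a hedged sketch of downloading a file chunk by chunk using the endpoints above. The IDs and chunk IDs are placeholders; in practice the chunk IDs come from the chunk metadata request (getChunkData.py), and downloadFile.py remains the reference implementation.

import requests

user = 'Basic <encoded credentials>'     # see the Authentication section
wGuid = '<workspaceID>'                  # placeholder
mGuid = '<modelID>'                      # placeholder
fileID = '<fileID>'                      # placeholder from files.json
base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}'

# Placeholder chunk IDs; take the real values from the chunk metadata
# (see Get Chunk Data above or the getChunkData.py script).
chunk_ids = ['0', '1', '2']

# Request each chunk as an octet stream and append it to a local file.
with open('downloaded_file.csv', 'wb') as out:
    for chunk_id in chunk_ids:
        chunk = requests.get(f'{base}/chunks/{chunk_id}',
                             headers={'Authorization': user,
                                      'Accept': 'application/octet-stream'})
        out.write(chunk.content)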
View full article
What happens to History when I delete a user from a workspace?
View full article
Note that this article uses a planning dashboard as an example, but many of these principles apply to other types of dashboards as well.

Methodology

User Stories
Building a useful planning dashboard always starts with getting a set of very clear user stories, which describe how a user should interact with the system. The user stories need to identify the following:
- What the user wants to do.
- What data the user needs to see to perform this action.
- What data the user wants to change.
- How the user will check that the changes made have taken effect.
If one or more of the above is missing from a user story, ask the product owner to complete the description. Start the dashboard design, but use it to obtain the answers; it will likely change as more details arrive.

Product Owners Versus Designers
Modelers should align with product owners by defining concrete roles and responsibilities for each team member. Product owners should describe what data users expect to see and how they wish to interact with the data, not ask for specific designs (that is the role of the modelers/designers). Product owners are responsible for change management and should be extra careful when the dashboard/navigation is significantly different from what is currently being used (e.g., Excel®).

Pre-Demo Peer Review
Have a usability committee that:
- Is made up of modeling peers outside the project and/or project team members outside of the modeling team.
- Hosts a mandatory gate-check meeting to review models before demos to product owners or users.
The committee is designed to ensure:
- The best design, by challenging modelers.
- Consistency between models.
- That the function is clear.
- That exceptions/calls to action are called out.
- The best first impression.

Exception, Call to Action, Measure Impact
A planning dashboard is useful when it allows users to highlight and analyze exceptions (issues, alerts, warnings), take action and plan to solve them, and always visually check the impact of the change against a target.

Dashboard Structure
Example: a dashboard is built for these two user stories, which complement each other.
- Story 1: Review all of my accounts for a specific region, manually adjust the goals, and enter comments.
- Story 2: Edit my account by assigning direct and overlay reps.
The dashboard structure should be made of:
- Dashboard header: A short name describing the purpose of the dashboard, at the top of the page in "Heading 1."
- Groupings: A collection of dashboard widgets:
  - Call to action.
  - Main grid(s).
  - Info grid(s): Specific to one item of the main grid.
  - Info charts: Specific to one item of the main grid.
  - Specific action buttons: Specific to one item of the main grid.
  - Main charts: Cover more than one item of the main grid.
  - Individual line items: Specific to one item of the main grid, usually used for commentary.
  - Light instructions.
A dashboard can have more than one of these groupings, but all elements within a grouping need to answer the needs of the user story. Use your best judgment to determine the number of groupings added to one dashboard; a maximum of two to three groupings is reasonable. Past this range, consider building a new dashboard. Avoid having a "does it all" dashboard where users keep scrolling up and down to find each section. If users ask for a table of contents at the top of a dashboard, it's a sign that the dashboard has too much functionality and should be divided into multiple dashboards.
General Guidelines

Call to Action
Write a short sentence describing the task to be completed within this grouping. Use the Heading 2 format.

Main Grid(s)
The main grid is the central component of the dashboard, or of the grouping. It's where the user will spend most of their time. This main grid displays the KPIs needed for the task (usually in the columns) and one or more other dimensions in the rows.
Warning: Users may ask for 20+ KPIs and need these KPIs broken down by many dimensions, such as by product, actual/plan/variance, or by time. It's critical to keep the main grid as simple and as decluttered as possible. Avoid the "data jungle" syndrome. Users are used to data jungles simply because that's what they are used to in Excel.
Tips to avoid data jungle syndrome:
- Make a careful KPI selection (KPIs are usually the line items of a module).
- Display the most important KPIs ONLY, which are those needed for decision making. Hide the others for now.
A few criteria for selecting a KPI for the main grid:
- The KPI is meant to be compared across the dimension items in the rows, or across other KPIs.
- Viewing the KPI values for all of the rows is required to make the decision.
- The KPI is needed for sorting the rows (other than sorting by row name).
A few criteria for not selecting a KPI for the main grid (besides not matching the above criteria): the KPI is needed more in a drill-down mode; it provides valid extra information, but only for the selected row of the dashboard, and does not need to be displayed for all rows. These "extra info" KPIs should be displayed in a different grid, referred to as an "info grid" in this document.
Take advantage of the row/column sync functionality to provide a large amount of data in your dashboard while only displaying data when requested or required.
Design your main grid so that it does not require the user to scroll left and right to view the KPIs:
- Efficiently select KPIs.
- Use the column header wrap.
- Set the column size accordingly.

Vertical Scroll
It is OK to have users scroll vertically on the main grid. Only display 15 to 20 rows at a time when there are numerous rows, as well as other groupings and action buttons, to display on the same dashboard. Use sorts and a filter to display relevant data.

Sort Your Grid
Always sort your rows. Obtain the default sort criteria via user stories. If no special sort criteria are called out, use the alphanumeric sort on the row name. This will require a specific line item. Train end users to use the sort functionality.

Filter Your Grid
Ask end users or product owners what criteria to use to display the most relevant rows. It could be:
- Those that make up 80 percent of a total. Use the RankCumulate function.
- Those that have been modified lately. This requires a process to attach a last-modified date to a list item, updated daily via a batch mode. When the main grid allows item creation, always display the newly created items first.
- Status flag.
If end users need to apply their own filter values on some attributes of the list items, such as filtering to show only those that belong to EMEA or those whose status is "in progress," build pre-set user-based filters (see the steps below):
- Use the new Users list.
- Create modules dimensioned by user, with line items (formatted as lists) to hold the different criteria to be used.
- Create a module dimensioned by Users and the list to be filtered. In this module, resolve the filter criteria from above against the list attributes to a single Boolean.
- Apply this filter in the target module.
Educate the users to use the refresh button, rather than creating an "Open Dashboard" button.

Color Code Your Grid
Use colored cells to call attention to areas of a grid, such as green for positive and red for negative. Color code the cells that specifically require data entry.

Display the Full Details
If a large grid is required (something like 5,000 lines and 100 columns), then:
- Make it available in a dedicated full-screen dashboard, reached via a button from the summary dashboard (such as an action button).
- Do not add such a grid to a dashboard where KPIs, charts, or multiple grids are used for planning. These grids are usually needed for ad hoc analysis, data discovery, or random verification of changes, and can create a highly cluttered dashboard.

Main Charts
The main chart goes hand in hand with the main grid. Use it to compare one or more of the KPIs of the main grid across the different rows. If the main grid contains hundreds or thousands of items, do not attempt to compare all of them in the main chart. Instead, identify the top 20 rows that really matter or that make up most of the KPI value, and compare these 20 rows for the selected KPI.
- Location: Directly below or to the right of the main display grid; should be at least partially visible with no scrolling.
- Synchronized with the selection of a KPI or row of the main display grid.
- Should be used for: comparison between row values of the main display grid; displaying the difference when the user makes a change/restatement or inputs data.
In cases where a chart requires two to three additional modules to be created: implement and test performance. If no performance issues are identified, keep the chart. If performance issues are identified, work with product owners to compromise.

Info Grid(s)
These are the grids that provide more details for an item selected in the main grid. If territories are displayed as rows, use an info grid to display as many line items as necessary for the selected territory. Avoid cluttering your main grid by displaying all of these line items for all territories at once. This is not necessary and will create extra clutter and scrolling issues for end users.
- Location: Below or to the right of the main display grid.
- Synced to the selection of a list item in the main display grid.
- Should read vertically to display many metrics pertaining to the selected list item.

Info Charts
Similar to info grids, an info chart is meant to compare one or more KPIs for a selected item in the rows of the main grid. These should be used for:
- Comparison of multiple KPIs for a single row.
- Comparison or display of KPIs that are not present on the main grid, but are on info grid(s).
- Comparing a single row's KPI(s) across time.
Place it to the right of the main grid, above or below an info grid.

Specific Action Buttons
- Location: Below the main grid; below the KPI that the action is related to, or to the far left/right, similar to "checkout."
- Should be an action that is to be performed on the selected row of the main grid.
- Can be used for navigation as a drill down to a detailed view of a selected row/list item.
- Should NOT be used as lateral navigation between dashboards; users should be trained to use the left panel for lateral navigation.

Individual Line Items
These serve as a call-out of important KPIs or action opportunities (i.e., user setting container for explosion, Container Explosion status).
If actions taken by users require additional collaboration with other users, they should be published outside the main grid, giving particular emphasis by publishing the individual line item(s).

Light Instructions
- Call to action: Serves as a header for a grouping. A short sentence describing what the user should perform within the grouping. Formatted in "Heading 2."
- Action instructions: Located directly next to a drop-down, input field, or button where the function is not quite clear. No more than 5–6 words. Formatted in "Instructions."
- Tooltips: Use tooltips on modules and line items for more detailed instructions, to avoid cluttering the dashboard.
View full article
Dimension Order Affects Calculation Performance
Ensuring consistency in the order of dimensions will help improve the performance of your models. This consistency is relevant for modules and individual line items.

Why does the order matter?
Anaplan creates and uses indexes to perform calculations. Each cell in a module where dimensions intersect is given an index number. Here are two simple modules dimensioned by Customer and Product. In the first module, Product comes first and Customer second; in the second module, Customer is first and Product is second. In this model, there is a third module that calculates revenue as Prices * Volumes.
Anaplan assigns indexes to the intersections in the module. Here are the index values for the two modules. Note that some of the intersections are indexed the same for both modules (Customer 1 and Product 1, Customer 2 and Product 2, and Customer 3 and Product 3), and that the remainder of the cells have different index numbers. Customer 1 and Product 2, for example, is indexed with the value 4 in the top module and the value 2 in the bottom module.
The calculation is Revenue = Price * Volume. To run the calculation, Anaplan matches the index values from the two modules. Since the index values are not aligned, the processor has to scan the index values to find a match before performing the calculation.
When the dimensions in the module are reordered, the index values for the two modules are aligned. As line items of the same dimensional structure have an identical layout, the data is laid out linearly in memory, and the calculation process accesses memory in a completely linear and predictable way. Anaplan's microprocessors and memory sub-systems are optimized to recognize this pattern of access and to pre-emptively fetch the required data.

How does the dimension order become different between modules?
When you build a module, Anaplan uses the order in which you drag the lists onto the Create Module dialog. The order also depends on where the lists are added: the lists that you add to the 'pages' area come first, then the lists that you add to the 'rows' area, and finally the lists added to the 'columns' area. It is simple to re-order the lists and ensure consistency. Follow these steps:
1. On the Modules pane (Model Settings > Modules), look for lists that are out of order in the Applies To column.
2. Click the Applies To row that you want to re-order, then click the ellipsis.
3. In the Select Lists dialog, click OK.
4. In the Confirm dialog, click OK. The lists will now be in the order that they appear in General Lists.
When you have finished checking the list order in the modules, click the Line Items tab and check the line items, following steps 1 through 3 to re-order the lists.

Subsets and Line Item Subsets
One word of caution about subsets and line item subsets. In the example below, we have added a subset and a line item subset to the module. Clicking on the ellipsis re-orders the dimensions so that the general lists are listed in order first, followed by subsets and then line item subsets. You can still reorder the dimensions by double-clicking in the Applies To column and manually copying or typing the dimensions in the correct order.

Largest vs. Smallest?
This is the normal follow-up question, and unfortunately the answer is "it depends." Through research we have found that it all depends on the data within the module.
It can also get confusing if subsets are used: the Customer list might be bigger than the Products list, but what if a subset of Customers is used that is smaller than Products? We also don't advocate ordering the lists in the General Lists settings by size; the lists should be ordered hierarchically, top to bottom, so by definition that will be smallest to largest. Our advice is to be consistent. Think about how you describe the problem: does the business talk about Customer by Product, or Products for Customers? Agree on a convention, and stick to it.

Other Dimensions
The calculation performance only relates to the lists common to the source(s) and the target. The order of the lists that are not shared between them has no bearing on the calculation speed.
View full article
Personal dashboards are a great new feature that enables end users to save a personalized view of a dashboard. To get the most out of this feature, here are a few tips and tricks.

Tidy Up Dashboards
Any change to a master dashboard (using the Dashboard Designer) will reset all personal views of that dashboard, so before enabling personal dashboards, take some time to ensure that the current dashboards are up to date:
- Implement any pending development changes (including menu options).
- Turn on the Dashboard Quick Access toolbar (if applicable).
- Check and amend all text box headings and comments for size, alignment, spelling, and grammar.
- Delete or disable any redundant dashboards to ensure end users don't create personal views of obsolete dashboards.

Use Filters Rather Than Show/Hide
It's best practice to use a filter rather than show and hide for the rows and/or columns on a grid. This is now even more beneficial, because amending the items shown or hidden on a master dashboard will reset the personal views. For example, suppose you want to display just the current quarter of a timescale. You could manually show/hide the relevant periods, but at quarter end, when the Current Period is updated, the dashboard will need to be amended, and all those personal views will be reset. If you use a filter referencing a time module, the filter criteria will update automatically, as will the dashboard. No changes are made to the master dashboard, and all the personal views are preserved.

Create a Communication and Migration Strategy
Inevitably there will be changes that must be made to master dashboards. To minimize the disruption for end users, create a communication plan and follow a structured development program. These can include the following:
- Bundle up dashboard revisions into a logical set of changes.
- Publish these changes at regular intervals (e.g., on a monthly cycle).
- Create a regular communication channel to inform users of changes and the implications of those changes.
- Create a new dashboard and ask end users to migrate to it over a period of time before switching off the old dashboard.

Application Lifecycle Management (ALM)
If you are using ALM, any structural changes to master dashboards will reset all personal views of those dashboards.
View full article