Overview
Guide for the new Statistical Forecasting Calculation Engine models (monthly and weekly). Includes enablement videos, a practice data-import exercise, model documentation, and specific steps for using the model in implementations.

1. Enablement Videos & Practice Exercise
1a. Intro and Overview Video: model overview and review of new key features. (Video below)
1b. Initial Model & Data Import Steps: how to set up the model, product hierarchy, customer list, and multi-level forecast analysis. (Video below)
1c. Practice Exercise, Import data to set up the stat forecast: two sets of load files are included to practice setup, for a single-level product set or for a multi-level product set with customer, product, and brand levels. Start on the "Initial App Setup" dashboard and load either the Single or the Multi Level files into the model; use the import video as a guide if needed. (.zip file attached)

2. Documentation
2a. Lucidchart Process Maps: the Lucidchart process map document includes the high-level process flow for end-user navigation, with detailed tabs for each section. Details and links are also on the "Training & Enablement" dashboard. (Process Maps)
2b. High Level Process Map PDF: the high-level process map in PDF format. (Attached)
2c. Forecast Methods PDFs: a high-level version lists and summarizes the forecast algorithms; a detailed version includes a slide for each forecast method with a method overview, advantages/disadvantages, the equation, and an example output graph. These slides are also included on the "Forecast Methods Overview & Formulas" dashboard. (Attached)

3. Implementation Specifics
3a. Training & Enablement Dashboard: contains details on process map navigation.
3b. Initial Model Setup: the model ships staged with chocolate data from the data hub; execute the CLEAR MODEL action before loading customer-specific data.
3c. Changing Model Time Scale (align Native & Dynamic Time settings): if a Time Settings change is required, review the Initial App Setup dashboard to align Native Time with the Dynamic Time setup in the model.
3d. Monthly Update Process: after initial setup, use the Monthly Data History Upload dashboard to update prior-period actuals and settings.
3e. Single Level vs. Multi-Level Forecast Setup: two implementation options and when to use them. Single Level Forecast: forecast at one level of the product hierarchy (e.g., all stat forecasts calculated at the Item level); most use cases will use this setup. Multi-Level Forecast: forecast at different levels of the product hierarchy (e.g., Top Item | Customers, Item, and Brand can each have a stat forecast generated); this requires a complex forecast reconciliation process, so review the "Multi-Level Forecast Overview" dashboard if it is needed.
3f. Troubleshooting Tips: follow the troubleshooting tips on the Training & Enablement dashboard if stat forecasts are not generating, before reaching out for support.
3g. Model Notes & Documentation: module notes include the DISCO classification and each module's purpose.
3h. "Do Not Modify" Items: module notes flag items as DO NOT MODIFY when they must not be changed during implementation.
3i. User Roles & Selective Access: the Demo, Demand Planner, and Demand Planning Manager roles can be adjusted. After the Selective Access process has run on the Flat List Management dashboard, users can be given access to specific product groups, brands, etc.
3j. Batch Processing: details on daily batch processing and how to prepare a roadmap of your batch processes (files, queries, and import actions/processes in Anaplan); see the attachment.

4. Videos
Intro and Overview Video
Data Import and Setup Steps

5. Model Download Links
Monthly Statistical Forecasting Calculation Engine
Weekly Statistical Forecasting Calculation Engine
Filters can be very useful in model building and are widely used, but they can come at the expense of performance, which is often most visible to users on dashboards. Slow filters also affect imports and exports, which in turn may block other activity and create a poor perception of the model. There are some very simple guidelines for designing well-performing filters:

Using a single Boolean filter, on a line item that has neither time nor versions applied and has no summary, is fastest.

Try to create a Boolean line item that incorporates all the filter criteria you want to apply. This allows you to re-use the line item and combine a series of Boolean line items into a single Boolean for use in the filter. For example, you may want to filter on three data points: Volume, Product Category, and Active Status. Volume is numeric, Product Category is a list-formatted line item matching a user selection, and Active Status is a Boolean. Create a line item called Filter with the following formula:

Volume > Min Vol AND Product Cat = User Selection.Category AND Active Status

Here is a very simple example module to demonstrate this. A Filter line item is added to represent all the filters we need on the view; only the Filter line item needs to be dimensioned by Users. A User Selection module, dimensioned only by Users, is created to capture user-specific filter choices. Comparing the data before and after the filter is applied shows that the single Boolean returns the same result set as the three separate filters.

A best practice suggestion is to create a filter module with a line item for each filter component. You may want other filters later, and you can then combine the parts as needed from this system module. This reduces repetition and gives you control over the filters, ensuring they can all be Boolean; a sketch follows at the end of this article.

What can make a filter slow?

The biggest performance hit for filters comes from nesting dimensions on rows. The performance loss increases significantly with the number of nested dimensions and the number of levels they contain. Filtering on the same number of items, a nested filter is slower than a filter on a flat list: tested with a 10,000,000-item flat list versus nested rows built from two lists of 10 and 1,000,000 items, the nested filter was 40% slower.

Filtering on line items that have a line item summary is slow. A numeric filter on 10,000,000 items can take less than a second, but with a summary it will take at least 5 seconds.

Multiple filters increase the time. This is especially significant if any of the earlier filters do not reduce the load, because they still take additional time to evaluate. If you do use multiple filter conditions, order them so that the most selective filters run first. If a filter rarely matches anything, evaluate whether it is needed at all.

Hidden levels act as a filter. If you hide levels on a composite list, this acts like a filter applied before any other filter. The hiding takes time to process, and the impact grows with the number of levels and the size of the list.

How to avoid nested rows for export views

Using nested rows can be a useful way to filter a complex set of data for export but, as discussed above, filter performance here can be poor. The best way around this is to pivot the dimensions so there is only one dimension on rows, and use the Tabular Multi Column export option with the Filter Row based on Boolean option.

Some extra filter tips

Filter duration affects saved views used in imports, so check the saved view's open time to see the impact. This open time is incurred on every use of the view, including imports and exports. If you need to filter on a specific set of list items, create a subset of those items and create a new module dimensioned by the subset to view that data.
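To make the system-module suggestion concrete, here is a hedged sketch; the module and line item names are hypothetical. In a Product Filters module dimensioned by Product and Users, with no time or versions and Summary set to None:

Volume OK? (Boolean) = Volume > User Selection.Min Vol
Category OK? (Boolean) = Product Cat = User Selection.Category
Filter (Boolean) = Volume OK? AND Category OK? AND Active Status

Each saved view then applies Filter (or any combination of the component Booleans) as its single Boolean filter, keeping every filter fast and reusable.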
Model Load: A large, complex model (for example, 10 billion cells) can take up to 10 minutes to load when first used after 60 minutes of inactivity. Reducing the load time means identifying which formulas take the most time; a full analysis requires Anaplan L3 support (ask for a Model Opening Analysis), but you can also reduce the time yourself by applying the formula best practices listed above. List setup is another lever: text properties on a list can increase load times, and subsets on lists can disproportionately increase them. It is best practice not to use list properties but to house the attributes in a System module dimensioned by the list (see the sketch at the end of this article). See Best practice for Module design for more details.

Model Save: A model saves when the amount of changes made by end users exceeds a certain threshold. This action can take several minutes and is a blocking operation. Administrators have no leverage over model save beyond formula optimization and reducing model complexity. Using ALM and Deployed mode increases this threshold, so use Deployed mode whenever possible.

Model Rollback: A model rolls back in some cases of an invalid formula, or when a model builder attempts a setting change that would result in an invalid state. In some large models, the rollback takes approximately the model-open time plus up to 10 minutes' worth of accumulated changes, followed by a model save. The recommendation is to use ALM: keep a DEV model whose size does not exceed 500M cells, with production lists limited to a few dozen items, and keep the TEST and PROD models at full size with the large lists. Since no formula editing happens in TEST or PROD, those models never roll back after a user action. The DEV model can still roll back, but that takes only a few seconds if the model is small.
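As a hedged illustration of the list-setup point under Model Load (all names are hypothetical): rather than holding attributes as Text properties on an Employees list, create an Employees Properties system module dimensioned only by Employees, with no time or versions and Summary set to None:

Employee Code (Text)
Region Text (Text)
Region (List: Region) = FINDITEM(Region, Region Text)

The attributes stay out of the list structure, which avoids the load-time penalty described above and follows the module design best practice referenced.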
Details of known issues

Challenge: Performance issues with long nested formulas.
Recommendation: When you need a long formula as the result of nested intermediate calculations, and model size does not prevent you from adding extra line items, it is better practice to create multiple intermediate line items and reduce the size of the formula, rather than nesting all intermediate calculations into one gigantic formula. This applies to summary formulas (SUM, LOOKUP, SELECT). Combining SUM and LOOKUP in the same line item formula can cause performance issues in some cases. If you notice a drop in performance after adding a combined SUM and LOOKUP to a single line item, split it into two line items.

Challenge: RANKCUMULATE causes slowness.
Recommendation: A current issue with the RANKCUMULATE formula can make model open times, including rollback, up to five times slower than they should be. There is currently no suitable workaround; our recommendation is to stay within the constraints defined in Anapedia.

Challenge: SUM/LOOKUP with a large cell count.
Recommendation: Separate the formulas into different line items to reduce calculation time (fewer cells then need to recalculate parts of a formula that affect only a subset of the data). A known issue with SUM/LOOKUP combinations within a formula can lead to slow model open and calculation times, particularly if the line item has a large cell count.
Example (no line items apply to time or versions):
Y = X[SUM: R, LOOKUP: R]
where Y applies to [A, B]; X applies to [A, B]; R applies to [B] and is list-formatted on [C].
Recommendation: add a new line item, Intermediate, whose Applies To is set to the format of R (the list C):
Intermediate = X[SUM: R]
Y = Intermediate[LOOKUP: R]
This issue is currently being worked on by Development, and a fix will be available in a future release.

Challenge: Calculations are over non-common dimensions.
Recommendation: Anaplan calculates faster when calculations are over common dimensions. Again, this is best seen in an example. With lists W and X:
Y = A + B
where Y applies to W, X; A applies to W; B applies to W.
This performs slower than:
Y = Intermediate
Intermediate = A + B
where Intermediate applies to W and all other dimensions are as above. Similarly, A and B can be replaced by formulas, e.g., SUM/LOOKUP calculations.

Challenge: Cell history truncated.
Recommendation: Currently, history generation has a 60-second time limit, split into three stages with one third of the time allocated to each. The first stage builds the list of columns required for the grid, which involves reading all the history. If this takes more than 20 seconds, the user receives the message "history truncated after X seconds - please modify the date range," where X is how many seconds it took, and no history is generated. If the first stage completes within 20 seconds, the full history list is generated. The grid displays only the first 1,000 rows; the user must export the history to get the full set, which can take significant time depending on volume. The same steps are taken for model and cell history. Cell history is generated by loading the entire model history and searching it for the relevant cell information. When the model history gets too large, it is truncated to prevent performance issues; unfortunately, this can make it impossible to retrieve the cell history that is needed.

Make it real time only when needed
Do not make calculations real time unless they need to be. That is, do not have line items where users input data referenced by other line items unless they must be. One way around this is to give users a data-input section that is referenced nowhere (or as little as possible), and then, say at the end of the day when no users are in the model, run an import that updates the cells where the calculations are done (see the sketch at the end of this article). This may not always be possible if end users need to see the calculations resulting from their inputs, but if you can limit real-time calculation to just what the user needs to see, and use imports during quiet times for the rest, it will still help. We often see that not all reporting modules need to recalculate in real time; in many cases they can be calculated the day after.

Reduce dependencies
Do not make line items depend on other line items unnecessarily. This can prevent Anaplan from using the maximum number of calculations it can perform at once, because a line item's formula cannot be calculated while it is waiting on the results of other line items. A basic example, with line items A, B, and C:
A: no formula
B = A
C = B
Here B is calculated first, and C only after it. Whereas with:
A: no formula
B = A
C = A
B and C can be calculated at the same time. This also means that if line item B is not needed, it can be removed, further reducing the number of calculations and the size of the model. Consider this case by case; it is a trade-off between duplicating calculations and using as many threads as possible. If line item B is referenced by several other line items, it may indeed be quicker to keep it.

Summary calculation
Summary cells often take processing time even when they are not actually recalculated, because they must check all the lower-level cells. Reduce summaries to 'None' wherever possible. This not only reduces aggregations, but also the size of the model.
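Here is the sketch promised under "Make it real time only when needed" (module, line item, and action names are hypothetical):

Inputs module: Adjustment Input (Number), entered by users and referenced by no other line item.
Reporting module: Adjustment (Number), no formula; populated overnight by an import action such as "im Adjustments from Inputs".

During the day, edits to Adjustment Input trigger no downstream recalculation; the heavy reporting chain recalculates once, when the scheduled import runs.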
Overview: A dashboard with grids that include large lists that have been filtered and/or sorted can take time to open. The opening action can also become a blocking operation; when this happens, you'll see the blue toaster box showing "Processing..." while the dashboard opens. This article includes some guidelines to help you avoid this situation.

Rule 1: Filter large lists by creating a Boolean line item.
Avoid filters on text or other non-Boolean formatted items for large lists on the dashboard. Instead, create a line item with the Boolean format type and add calculations to it so that the results return the same data set the filter would. Combine multiple conditions into a single Boolean value for each axis (see the sketch at the end of this article). This is especially helpful if you implement user-based filters, where the Boolean is dimensioned by the user and by the list being filtered. The memory footprint of a Boolean line item is 8x smaller than that of other line item types.
Known issue: On an existing dashboard where a saved view is modified to replace the filters with a Boolean filter line item, you must republish the view to the dashboard. Simply removing the filters from the published dashboard will not improve performance.

Rule 2: Use the default sort.
Use sort carefully, especially on large lists. Opening a dashboard with a grid where a large list is sorted on a text-formatted line item will likely take 10 seconds or more and may be a blocking operation. To avoid sorting on the dashboard, build the list so that its default order matches the criteria you need. If it does not, you can still make the grid usable by reducing the items shown with a user-based filter.

Rule 3: Reduce the number of dashboard components.
Sometimes a dashboard includes too many components, which slows performance. Avoid horizontal scrolling, and try to keep vertical scrolling to no more than three pages deep. Once you exceed these limits, consider splitting the components across multiple dashboards. Doing so helps both performance and usability.

Rule 4: Avoid using large lists as page selectors.
If you use a large list as a page selector on a dashboard, that dashboard will open slowly, possibly taking 10 seconds or more; loading the page selector accounts for more than 90% of the total time.
Known issue: If a dashboard grid contains list-formatted line items, the contents of page-selector drop-downs are automatically downloaded until the size of the list exceeds a certain threshold; beyond that, the download happens on demand, in other words when a user clicks the drop-down. The issue is that when Anaplan requests the contents of list-formatted cell drop-downs, it also requests the contents of ALL other drop-downs, INCLUDING page selectors.
Recommendation: Limit page selectors on medium-to-large lists using the following tips:
a) Make the page selector available in one grid and use the synchronized paging option for all other grids and charts. There is no need to let users edit the page in every dashboard grid or chart.
b) For multi-level hierarchies, consider creating a separate dashboard with multiple modules (containing just the list entries) to let users navigate to the desired level, then navigate back to the main planning dashboard. This approach also de-clutters the dashboards.
c) If dashboard elements don't require the list, publish them from a module that doesn't contain it. For example, floating page selectors for time or versions, or grids displayed as rows/columns only, should be published from modules that do not include the large list. Why? The view definitions for these elements contain all the source module's dimensions, even those not shown, and so carry the overhead of populating the large page selector if it is present in the source.

Rule 5: Split list-formatted line items into separate modules.
Having many list-formatted line items in a single module displayed on a dashboard can reduce performance, because all of the lists are stored in memory (similar to Rule 4). It is better, if possible, to split the line items into separate modules. And remember from above: do not put too many components on one dashboard; include only what users really need, and create more dashboards as needed.
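As a sketch of Rule 1 (all names are hypothetical): in a module dimensioned by the large list and by Users, with Summary set to None, combine the conditions into one Boolean per axis:

Show? (Boolean) = Active Status AND Volume > User Selections.Min Volume AND Category = User Selections.Category

Publish the view filtered on Show?, a single Boolean filter, rather than stacking separate text, numeric, and list filters on the dashboard grid.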
Overview
Landing dashboards are absolutely critical to the usability of a model: they are the first contact between the end users and the model.

What SHOULD NOT be done in a landing dashboard:
Display detailed instructions on how to use the model; see "Instruction Dashboard" instead.
Use it for global navigation built from text boxes and navigation buttons. This creates maintenance challenges if different roles have different navigation paths, and it stops being helpful once users know where to go.

What SHOULD be done in a landing dashboard:
Display KPIs with a chart that highlights where users stand on those KPIs, and highlight gaps, errors, exceptions, and warnings.
Provide a summary/aggregated view of the data in a grid to support the chart; the chart should be the primary element.
Provide short instructions on the KPIs.
Link to an instruction-based dashboard that includes guidance and video links.
Include a generic instruction telling users to open the left-side sliding panel to discover the different navigation paths.
Give users who perform data entry access to the same KPIs that executives see.

Landing dashboard example 1: displays the main KPI that the planning model allows the organization to plan.
Landing dashboard example 2: provides a view of how the process is progressing against the calendar.
Landing dashboard example 3: created for executives who need to focus on escalation; provides context and a call to action (this could be a planning dashboard, too).
Overview
Imports are blocking operations: to maintain a consistent view of the data, the model is locked during an import, so concurrent imports run by end users execute one after another and block the model for everyone else. Exports block data entry while the export data is retrieved, after which the model is released; during the blocking phase, users can still navigate within the model.

Rule #1: Carefully decide whether to let end users import (and export) during business hours.
Imports executed by end users should be carefully considered and, if possible, executed once or twice a day. Customers accept model updates at scheduled hours for a predefined time far more easily, even if they take 10+ minutes, than imports run randomly during business hours. Your first optimization is to adjust the process so that an administrator runs these imports at a scheduled time, and then let the user base know the schedule.

Rule #2: Use a saved view.
The first part of any import (or export) is retrieving the data, so the time it takes to open the view directly affects the time of the import or export. Always import from a saved view, NEVER from the default view, and use the naming convention for easy maintenance. Ensure the view uses optimized filters with a single Boolean value per axis. Hide the line items that are not needed for the import; do not bring extra columns that are not needed. If you have done all of the above and the view still takes time to open, consider using the Tabular Multi Column export and filtering on the way out. This has been proven to improve some sluggish exports.

Rule #3: Mapping objective = zero errors or warnings.
Make sure your import executes with no errors or warnings, as every error takes processing time. The time to import into a medium-to-large list (>50k items) is significantly reduced if no errors need to be processed. In the import definition, always map all displayed line items (source → target) or use the "ignore" setting; don't leave any line item unmapped.

Rule #4: Watch the formulas recalculated during the import.
If your end users encounter poor performance when clicking a button that triggers an import or a process, it is likely due to the recalculations the import triggers, especially if the action creates or moves items within a hierarchy. You will likely need the help of the Anaplan Model Optimization team to identify which formulas are triggered after the import completes and to check which of them take most of the time. Usually, formulas fetching many cells, such as SUM, LOOKUP, ANY, or FINDITEM, are responsible for the performance impact. Speak to your Business Partner for more details on the Model Optimization services available to you. To solve such situations, challenge the need to recalculate the identified formulas each time a user runs the action. Often, for actions such as creations, moves, and assignments done in workforce planning or territory planning, many calculations used for reporting are triggered in real time after the hierarchy is modified by the import, yet are not needed by users until later in the process. The recommendation is to challenge your customer on whether these formulas can be calculated only once a day, instead of each time a user runs the action. If this is acceptable, rearchitect your modules and/or formulas so that these heavy formulas run through a different process, executed daily by an administrator rather than by each end user. If not, look at the formulas more closely to see what improvements can be made. Remember, breaking formulas up often helps performance.

Rule #5: Don't import list properties.
Importing list properties takes more time than importing them as module line items. Review the model lists affected by imports, and replace list properties with module line items where possible. Use a System module to hold these for the key hierarchies, as per D.I.S.C.O. in Best practice for Module design.

Rule #6: Get your data hub.
Hub and spoke: set up a data hub model to feed the other production models used by stakeholders. (See the white paper on how to build a data hub.) Performance benefits: it prevents production models from being blocked by a large import from an external data source. But since data hub-to-production-model imports are still blocking operations, carefully filter what you import and apply the best practice rules above. All import and mapping/transformation modules required to prepare data for loading into planning modules can then live in a dedicated data hub model rather than in the planning model, which becomes smaller and works more efficiently. Try to keep transaction data history in the data hub, with a specific analysis dashboard made available for end users; often the detail is not needed for planning purposes, and holding it in the planning model has a negative impact on size, model opening times, and performance. As a reminder, a data hub has other benefits not linked to performance. Better structure, easier maintenance: a data hub keeps all the data organized in a central location. Better governance: whenever possible, put the data hub in a different workspace; that eases the separation of duties between production models and metadata management, at least for actual data and production lists, and IT departments like owning the data hub workspace with no other administrators in it. Lower implementation costs: a data hub reduces the implementation time of new projects; if IT can load the data needed by a new project into the data hub, business users integrate with the Anaplan data hub instead of with complex source systems.

Rule #7: Incremental import/export.
This can be the magic bullet in some cases. If you export frequently (daily or more) from an Anaplan model into a reporting system, write back to the source system, or simply transfer data from one Anaplan model to another, you have ways to move only the data that has changed since the last transfer. Use a Boolean line item to identify the records that have changed, and import or export only those, as sketched below.
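A hedged sketch of the Rule #7 Boolean (line item names are hypothetical; the data hub article later in this collection walks through the same technique):

CHECKSUM (Text) = Customer Code & "|" & Product Code & "|" & TEXT(Units)
CHECKSUM OLD (Text): refreshed by an import from CHECKSUM after each successful transfer
DELTA (Boolean) = CHECKSUM <> CHECKSUM OLD

Filter the export view on DELTA so that only records changed since the last transfer leave the model.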
If you're familiar with Anaplan, you've probably heard the buzz about having a data hub and wondered why it's considered a "best practice" within the Anaplan community. Wonder no more. Below, I share four reasons why you should spend the time to build a data hub before Anaplan takes your company by storm.

1. Maintain consistent hierarchies
Hierarchies are a common list structure built in Anaplan and come in a variety of forms depending on the use case: product hierarchy, cost center hierarchy, and management hierarchy, just to name a few. These hierarchies should be consistent across the business, whether you're doing demand planning or financial planning. With a data hub, your organization has a far better chance of keeping hierarchies consistent over time, since everyone pulls the same structure from one source of truth: the data hub.

2. Optimize your data
As you expand the use of Anaplan across multiple departments, you may find that you only need a portion of a list rather than the entire list. For instance, you may want the full list of employees for workforce planning purposes, but only a portion of the employees for incentive compensation calculations. With a data hub, you can distribute only the pertinent information: filter the list of employees to build the employee hierarchy in the incentive compensation model, while keeping the full list of employees in the workforce planning model, and keep both in sync using the data hub as your source of truth.

3. Separate duties by roles and responsibilities
An increasing number of customers ask about roles and responsibilities with Anaplan as they expand internally. In Anaplan, we recommend that each model have a separate owner: for example, an IT owner for the data hub, an operations owner for the demand planning model, and a finance owner for the financial planning model. The three owners combined form your Center of Excellence, with each having separate roles and responsibilities for development and maintenance of the individual models.

4. Accelerate future builds
One of the main reasons many companies choose Anaplan is the platform's flexibility. Its use can easily and quickly expand across an entire organization, and development rarely stops after the first implementation; model builders are enabled and excited to continue to bring Anaplan into other areas of the business. If you start by building the data hub as your source of truth for data and metadata, you can accelerate the development of future models, since the foundation of each model, the lists and dimensions, is already defined. As you begin to implement, build, and roll out Anaplan, starting with a data hub is a key consideration, alongside the many other fundamental Anaplan best practices for rolling out a new technology and driving internal adoption.
Overview
A data hub is a separate model that holds an organization's data. Data can be shared with all your models, making expansion easier to implement and ensuring data integrity across models. The data hub model can be placed in a different workspace, allowing for role segregation: you can assign administrator rights to users to manage the data hub without allowing those users access to the production models. The method for importing into the data hub (into modules, rather than lists) also lets you reconcile properties using formulas. One type of data hub integrates with an organization's data warehouse and holds ERP, CRM, HR, and other data, as in the standard Anaplan data architecture. But this isn't the only type: some organizations may require a data hub for transactional data, such as bookings, pipeline, or revenue. Whether you use a single data hub or multiple hubs, it is a good idea to plan your approach for importing from the organization's systems into the data hub(s), as well as how you will synchronize the imports from the data hub to the appropriate models.

High-level best practices
When building a data hub, the best practice is to import a list with properties into a module rather than directly into a list. Using this method, you set up line items to correspond with the properties and import them using the Text data type. This imports all the data without errors or warnings. The data in the data hub module can then be imported into a list in the required model. The exception is if you are using a numbered list without a unique code (in other words, a combination of properties); in that case, you need to import the properties into the list.

Implementation steps
Here are the steps to create the basics of a hub-and-spoke architecture.

1) Create a model and name it master data hub
You can create the data hub in the same workspace where all the other models are, but a better option is to put the data hub in a different workspace. The advantage is role segregation: you can assign administrator rights to users to manage the hub without providing them with access to the actual production models, which are in a different workspace. Large customers may require this segregation of duties. Note: this functionality became available in release 2016.2.

2) Import your data files into the data hub
Set up your lists:
Identify the lists that are required in the data hub and create them using good naming conventions.
Set up any needed hierarchies, working from the top level down.
Import data into each list from the source files, mapping only the unique name, the parent (if the name rolls up into a hierarchy), and the code, if available. Do not import any list properties; these will be imported into a module.
Create corresponding modules for those lists that include properties:
For each such list, create a module named [List Name] Properties.
In the module, create a line item for each property and use the data type Text.
Import the source file into the corresponding module. There should be no errors or warnings.
Automate the process with actions:
Each time you imported, an action was created. Name your actions using the appropriate naming conventions, indicating the name of the source in the import action's name. To automate the process, create one process that includes all your imports.
For hierarchies, it is important to get the actions in the correct order: start with the import into the highest level of the hierarchy, then the next level, and so on down the hierarchy. Then add the module imports (their order is not critical).

Now, let's look at an example. You have a four-level hierarchy to load: Employee rolls up into State, which rolls up into Region, which rolls up into Country.

Lists
Create lists with the right naming conventions. For this example, create these lists:
G1 Country
G2 Region
G3 State
G4 Employee
Set the parent hierarchy to create the composite hierarchy. Import into each list from the source file(s), mapping only name and parent. The exception is the employee list, which includes a code (employee ID) that should also be mapped. Properties will be added to the data hub later.

Properties → Modules
Create one module for each list that includes properties, named [List Name] Properties. In this example, only the Employee list includes properties, so create one module named Employee Properties. In each module, create as many line items as you have properties; here, the line items are Salary and Bonus. Open the Blueprint view of the module and, in the Format column, select Text. Pivot the module so that the line items are columns. Then import the properties: in the grid view of the module, click the property you are importing into, set up the source as a fixed line item, select the appropriate line item on the Line Item tab, and on the Mapping tab select the correct column for the data values. You'll need to import each property (line item) separately. There should be no errors or warnings.

Actions
Each time you run an import, an action is created. You can view these actions by selecting Actions from the Model Settings tab. The previous imports into lists and modules created one import action each. You can combine these actions into a process that runs each action in the correct order. Name your actions following the naming conventions, with the source included in the action name. Create one process that includes the imports, named Load [List Name]. Make sure the order is correct: put the list imports first, starting with the top hierarchy level (numbered 1) and working down, then the module imports in any order.

3) Reconcile
These imports should run with zero errors, because the data lands in text-formatted line items. Where properties should match items in lists, it's recommended to use FINDITEM formulas to match text to list items: FINDITEM simply looks at the text-formatted line item and finds the match in the list that you specify. Each time data is uploaded into Anaplan, you just need to make sure all items from the text-formatted line item are loaded into the list. This is useful because you can always compare the "raw data" to the "Anaplan data," without having to load the data more than once when there are concerns about data quality in Anaplan. If your data hub model does not yet include a list for a property, first create that list. Let's use the example of Territory: add a line item to the module, select List as the format type, and choose Territory from the drop-down. Add the formula FINDITEM(x, y), where x is the name of your list (Territory in this example) and y is the text line item. You can then filter this line item so that it shows all of the blank items.
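Continuing the Territory example, the reconcile line items might look like this (line item names are hypothetical):

Territory Text (Text): loaded from the source file
Territory Item (List: Territory) = FINDITEM(Territory, Territory Text)
Missing? (Boolean) = ISNOTBLANK(Territory Text) AND ISBLANK(Territory Item)

Filtering the view on Missing? surfaces exactly the rows that failed to match, ready for the reconciliation dashboard described next.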
Correct the data in the source system. If you will be importing frequently, you may want to set up a dashboard that lets users view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter showing the number of errors and add that information to the dashboard.

4) Split models: filter and set up saved views
If the architecture of your model includes spoke models by region, you need one master hierarchy that covers all regions and a corresponding module that stores the properties. Use that module and create as many saved views as you have regional spoke models. For example, filter on Country G1 = Canada if you want to import only Canadian accounts into that spoke model. You will need to create a saved view for each hierarchy and spoke model.

5) Import into the spoke models
Use cross-workspace imports if you have decided to put your master data hub in a separate workspace. Create the lists that correspond to the hierarchy levels in each spoke model (there is currently no way to create a list via import). Create the properties in the lists where needed. Keep in mind that importing properties into the data hub as line items is an exception: list properties generally do not vary, unlike module line items, which are often measured over time. Note: properties can also be housed in modules, and there are some benefits to this; see Anapedia - Model Building (specifically, the "List Attributes" and "List attributes in a module" topics). If you decide to use a module to hold the properties, create a line item for each property type and then import the properties into the module. To simplify the mapping, make sure the property names in each spoke model match the line item names of the data hub model. In each spoke model, create an import from the filtered module view of the data hub model into the lists you created in step 1. In the Actions window, name your imports using the naming conventions, and create a process that includes these actions, beginning with the highest level in the hierarchy and working down to the lowest. Well done! You have imported your hierarchy from a data hub model.

6) Incremental list imports
When you are in the midst of your peak planning cycle and your large lists are changing frequently, you'll want to update the data hub and push the changes to the spoke models. Importing several thousand list members at a time may cause performance issues and block users during the import. In the best case, your data warehouse provides a date field showing when each item was added or modified, and can deliver a flat file or table that includes only the changes; the import into the hub model then takes just a few seconds, and you can filter on this date field to export only the changes to the spoke models. In most cases, though, all you have is the full list from the data warehouse, regardless of what has changed. To mitigate this, use the following technique to export only the list items that have changed (edited, deleted, updated) since the last export, using logic in Anaplan.

Setting up the incremental loads, in the data hub model:
Create a text-formatted line item in your module. Name it CHECKSUM, set the format to Text, and enter a formula that concatenates all the properties you want to track changes for. These properties form the base of the incremental import. Example: CHECKSUM = State & Segment & Industry & Parent & Zip Code.
Create a second line item named CHECKSUM OLD, also formatted as Text, and create an import that imports CHECKSUM into CHECKSUM OLD, ignoring any other mappings. Name this import 1/2 im DELTA and put it in a process called RESET DELTA.
Create a Boolean-formatted line item named DELTA with the formula: IF CHECKSUM <> CHECKSUM OLD THEN TRUE ELSE FALSE.
Update the filtered view you created to export the hierarchy for a specific region or geography, adding the filter criterion DELTA = TRUE. You will now see only the list items that differ from the last time you imported into the data hub; for example, only the list items that are in US East and have changed since the last import.
Execute the import from the source into the data hub and then into the spoke models: in the data hub model, upload the new files and run the import process; in the spoke models, run the import process that reads the data hub's filtered view. Check the import logs and verify that only the items that have changed were actually imported.
Back in the data hub model, run the RESET DELTA process (the 1/2 im DELTA import). This resets the changes, so you are ready for the next set. Your source, data hub model, and spoke models are all in sync.

7) Incremental data load
The actuals transaction file might need to be imported several times into the data hub model, and from there into the spoke models, during the planning peak cycle. If the file is large, it can create performance issues for end users. Since not all transactions change as the data is imported several times a day, there is a strong opportunity to optimize this process.
In the data hub model's transaction module, create the same CHECKSUM, CHECKSUM OLD, and DELTA line items. CHECKSUM should concatenate all the fields you want to track the delta on, including the values; the DELTA line item will then catch new transactions as well as modified ones. See 6) Incremental list imports above for more information.
Filter the view on DELTA to import only changed transaction list items into the list, and the changed actuals transactions into the module.
Create an import from CHECKSUM into CHECKSUM OLD to reset the delta after the imports have run. Name this import 2/2 im DELTA and add it to the DELTA process created for the list.
In the spoke model, import into the transaction list and into the transaction module from the transaction filtered view, then run the DELTA import or process.

8) Automation
You can semi-automate this process and have it run on a frequent basis once incremental loads have been implemented. That provides immediacy of master data and actuals across all models during a planning cycle. It is semi-automatic because it requires a review of the reconciliation dashboards before pushing the data to the spoke models. There are a few ways to automate, all requiring an external tool: Anaplan Connect or the customer's ETL (a hedged script sketch appears at the end of this article). The automation script needs to execute in this order:
1. Connect to the master data hub model.
2. Load the external files into the master data hub model.
3. Execute the process that imports the lists into the data hub.
4. Execute the process that imports actuals (transactions) into the data hub.
5. Manual step: open your reconciliation dashboards and check that the data and the lists are clean. Again, these imports should run with zero errors or warnings.
6. Connect to the spoke model.
7. Execute the list import process.
8. Execute the transaction import process.
9. Repeat steps 6, 7, and 8 for all spoke models.
10. Connect to the master data hub model.
11. Run the RESET DELTA process to reset the incremental checks.

Other best practices
Create deletes for all your lists. Create a module called Clear Lists. In the module (applied to the list that holds the items and properties), create a Boolean line item named CLEAR ALL and set its formula to TRUE. In Actions, create a "delete from list using selection" action based on CLEAR ALL. Repeat this for all lists, and create one process that executes all these delete actions.

Example of a maintenance/reconcile dashboard
Use a maintenance/reconcile dashboard when manual operations are required to update applications from the hub. One method that works well is to create a module that highlights whether there are errors in each data source. In that module, create a line item message that displays on the dashboard when errors exist, for example: "There are errors that need correcting." A link on this dashboard to the error status page makes it easy for users to check on errors. A best practice is to automate the list refresh, combined with a modeling solution that exports only what has changed.

Dev-test-prod considerations
There should be two saved views: one for development and one for production. That way, the hub can feed the development models with shortened versions of the lists while the production models get the full lists. ALM considerations: if the different-saved-views option is taken, the development (DEV) model needs the imports set up for both DEV and production (PROD). The additional ALM consideration is that the lists imported into the spoke models from the hub need to be marked as production data.

Development
Data hub: houses all the global data needed to execute the Anaplan use case; it often houses complex calculations and readies data for downstream models.
Development model: built to the 80/20 rule upon a global process; region-specific functionality is added in the deployment phase. The model is built to receive data from the data hub.
Data integration: Anaplan Connect or a third-party tool is used to automate data integration. Data feeds are built from the source system into the data hub, and from the data hub to downstream models.
Performance testing: the application is put through rigorous performance testing, including automated and end-user testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.

Deployment
Data hub: refreshed with the latest information from the source systems; readies data for downstream models.
Deployment model: the development model is copied and the appropriate data is loaded from the data hub. Region-specific functionality is added during this phase.
Data integration: additional data feeds from the data hub to downstream models are finalized. The integrations are tested and timed to establish a baseline SLA. Automatic feeds are placed on timed schedules to keep the data up to date.
Performance testing: the application is again put through rigorous performance testing.

Expansion
Data hub: the need for additional data for new use cases is often handled by splitting the data hub into regional data hubs, which helps the system perform more efficiently.
Model development: the models built for new use cases are developed and thoroughly tested, and additional functionality can be added to the models already deployed.
Data integration: data integration is updated to reflect the new system architecture. Automatic feeds are tested and scheduled according to business needs.
Performance testing: at each stage, the application is put through rigorous performance testing, mimicking real-world usage and exceptionally heavy traffic.
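Returning to 8) Automation above, here is the promised script sketch. It uses the Python API 2.0 wrapper library covered later in this collection rather than Anaplan Connect; all IDs, credentials, and process names are placeholders, and the manual reconciliation step is only marked as a comment:

# Hedged sketch: assumes the Python API 2.0 wrapper library described later in this collection.
conn_hub = AnaplanConnection(anaplan.generate_authorization("Basic", "<username>", "<password>"),
                             "<hub workspace ID>", "<hub model ID>")

# Steps 1-4: load the external files into the hub, then run the hub import processes
anaplan.file_upload(conn_hub, "<file ID>", 10, "/data/master_lists.csv")
print(anaplan.execute_action(conn_hub, "<hub list import process ID>", 1))
print(anaplan.execute_action(conn_hub, "<hub transaction import process ID>", 1))

# Step 5 is manual: review the reconciliation dashboards before continuing

# Steps 6-9: push lists and transactions to each spoke model
for spoke_ws, spoke_model in [("<spoke workspace ID>", "<spoke model ID>")]:
    conn_spoke = AnaplanConnection(anaplan.generate_authorization("Basic", "<username>", "<password>"),
                                   spoke_ws, spoke_model)
    print(anaplan.execute_action(conn_spoke, "<spoke list import process ID>", 1))
    print(anaplan.execute_action(conn_spoke, "<spoke transaction import process ID>", 1))

# Steps 10-11: reset the delta flags in the hub
print(anaplan.execute_action(conn_hub, "<RESET DELTA process ID>", 1))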
Problem to solve: As an HR manager, I need to enter salary raise numbers for the multiple regions I'm responsible for. As a domain best practice, my driver-based model helps me enter raise guidelines, which then cascade down to the employee level.

Usability issues addressed: I have ten regions with eight departments in each, and 10,000+ employees in total. I need to align my bottom-up plan to the top-down target I received earlier, quickly identify which regions are above or behind target, and address the variance. My driver-based raise modeling is fairly advanced, and I need to see what the business rules are and quickly see how they impact the employee level.

Call to action:
Step 1: Spot which region I need to address.
Step 2: Drill into the variances by department.
Steps 1 and 2 are analytics steps: "As an end user, I focus first on where the biggest issues are." This is a good usability practice that helps users.
Step 3: Adjust the guidelines (drivers). There are no excessive instructions on how to build and use guidelines, which would have cluttered the dashboard. Instead, a "view guideline instruction" button was added; this button should open a dashboard dedicated to detailed instructions, or link to a video that explains how the guidelines work.
Impact analysis: the chart above the grid adjusts as guidelines are edited. That is good practice for impact analysis: no scrolling or clicking is needed to see how the changes will impact the plan.
Step 4: Review a summary of the variance after changes are made. Putting steps 1 through 4 close to each other signals to users that they should iterate through these four steps until every region and every department is within the top-down target.
Step 5: A detailed impact analysis, placed directly below steps 3 and 4. This allows end users to drill into the employee-level details and view the granular impact of the raise guidelines.

Notice the best practices in step 5: the customer will likely ask to see 20 to 25 employee KPIs across all employees and will be tempted to display these as one large grid. That can quickly lead to an unusable grid of thousands of rows (employees) across 25 columns. Instead, the KPI list is narrowed to the ten that display without left-right scrolling; the criterion for selecting these ten is that each can be charted to compare employees. The remaining KPIs are displayed in an info grid that only shows values for the selected employee. Attributes like region, zip code, and dates are removed from the main grid, as they do not need to be compared side by side with other KPIs or between employees.
I recently posted a Python library for version 1.3 of our API. With the GA announcement of API 2.0, I'm sharing a new library that works with these endpoints. Like the previous library, it supports certificate authentication; however, it requires the private key in a particular format (documented in the code, and below). I'm also pleased to announce that Java keystores are now supported.

Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction. This library is a work in progress and will be updated with new features once they have been tested.

Getting started
The attached Python library serves as a wrapper for interacting with the Anaplan API. This article explains how you can use the library to automate many of the requests available in our Apiary, which can be found at https://anaplanbulkapi20.docs.apiary.io/#. It assumes you have Python 3.7 and the requests and M2Crypto modules installed; make sure you install these modules for Python 3, not an older version. For more information on these modules, see the respective websites for Python, Requests, and M2Crypto. (If you are using a Python version older or newer than 3.7, we cannot guarantee the validity of this article.)

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions each script makes.

Gathering the necessary information
To use this library, the following information is required:
Anaplan model ID
Anaplan workspace ID
Anaplan action ID
CA certificate key pair (private key and public certificate), or username and password
There are two ways to obtain the model and workspace IDs: while the model is open, go to Help > About, or select the workspace and model IDs from the URL.

Authentication
Every API request must supply valid authentication. There are two ways to authenticate: certificate authentication and basic authentication. For full details about CA certificates, please refer to our Anapedia article. Basic authentication uses your Anaplan username and password. To create a connection with this library, define the authentication type and details, and the Anaplan workspace and model IDs:

Certificate files:
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", "<path to private key>", "<path to public certificate>"), "<workspace ID>", "<model ID>")

Basic:
conn = AnaplanConnection(anaplan.generate_authorization("Basic", "<Anaplan username>", "<Anaplan password>"), "<workspace ID>", "<model ID>")

Java keystore:
from anaplan_auth import get_keystore_pair

key_pair = get_keystore_pair('/Users/jessewilson/Documents/Certificates/my_keystore.jks', '<passphrase>', '<key alias>', '<key passphrase>')
privKey = key_pair[0]
pubCert = key_pair[1]

# Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

Note: In the code above, you must import the get_keystore_pair method from the anaplan_auth module in order to pull the private key and public certificate details from the keystore.

Getting Anaplan resource information
You can use this library to get the necessary file or action IDs.
The library builds a Python key-value dictionary, which you can search to obtain the desired information.

Example:
list_of_files = anaplan.get_list(conn, "files")
files_dict = anaplan_resource_dictionary.build_id_dict(list_of_files)

This code builds a dictionary with the file name as the key. The following code then returns the ID of a file:

users_file_id = anaplan_resource_dictionary.get_id(files_dict, "file name")
print(users_file_id)

To build a dictionary of other resources, replace "files" with the desired resource: actions, exports, imports, or processes. You can use this functionality to refer to objects (workspace, model, action, file) by name rather than by ID.

Example:
# Fetch the name of the process to run
process = input("Enter name of process to run: ")

start = datetime.utcnow()
with open('/Users/jessewilson/Desktop/Test results.txt', 'w+') as file:
    file.write(anaplan.execute_action(conn, str(ard.get_id(ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes"), process)), 1))
end = datetime.utcnow()

The code above prompts for a process name, queries the Anaplan model for its list of processes, builds a key-value dictionary keyed on resource name, searches that dictionary for the user-provided name, executes the action, and writes the results to a local file.

Uploads
You can upload a file of any size and define a chunk size up to 50 MB. The library loops through the file or memory buffer, reading chunks of the specified size, and uploads them to the Anaplan model.

Flat file:
upload = anaplan.file_upload(conn, "<file ID>", <chunkSize (1-50)>, "<path to file>")

"Streamed" file:
with open('/Users/jessewilson/Documents/countries.csv', "rt") as f:
    buf = f.read()

print(anaplan.stream_upload(conn, "113000000000", buf))
print(anaplan.stream_upload(conn, "113000000000", "", complete=True))

The code above reads a flat file and saves the data to a buffer (this can be replaced with any data source; it does not necessarily need to read from a file). The data is then passed to the "streaming" upload method. This method does not accept a chunk-size input; instead, it simply ensures that the data in the buffer is less than 50 MB before uploading. You are responsible for ensuring that the data you've extracted is appropriately split. Once you've finished uploading the data, you must make one final call to mark the file as complete and ready for use by Anaplan actions.

Executing actions
You can run any Anaplan action with this script and define the number of times to retry the request if there is a problem. To execute an Anaplan action, its ID is required. All that is needed is the following:

run_job = execute_action(conn, "<action ID>", "<retryCount>")
print(run_job)

This runs the desired action, loops until it completes, and then prints the results to the screen. If failure dumps exist, they are also returned.

Example output:
Process action 112000000082 completed. Failure: True
Process action 112000000079 completed.
Failure: True
Details:
hierarchyName Worker Report
successRowCount 0
successCreateCount 0
successUpdateCount 0
warningsRowCount 435
warningsCreateCount 0
warningsUpdateCount 435
failedCount 4
ignoredCount 0
totalRowCount 439
totalCreateCount 0
totalUpdateCount 435
invalidCount 4
updatedCount 435
renamedCount 435
createdCount 0
lineItemName Code
rowCount 0
ignoredCount 435

Failure dump(s):

Error dump for 112000000082
"_Status_","Employees","Parent","Code","Prop1","Prop2","_Line_","_Error_1_"
"E","Test User 2","All employees","","101.1a","1.0","2","Error parsing key for this row; no values"
"W","Jesse Wilson","All employees","a004100000HnINpAAN","","0.0","3","Invalid parent"
"W","Alec","All employees","a004100000HnINzAAN","","0.0","4","Invalid parent"
"E","Alec 2","All employees","","","0.0","5","Error parsing key for this row; no values"
"W","Test 2","All employees","a004100000HnIO9AAN","","0.0","6","Invalid parent"
"E","Jesse Wilson - To Delete","All employees","","","0.0","7","Error parsing key for this row; no values"
"W","#1725","All employees","69001","","0.0","8","Invalid parent"
[...]
"W","#2156","All employees","21001","","0.0","439","Invalid parent"
"E","All employees","","","","","440","Error parsing key for this row; no values"

Error dump for 112000000079
"Worker Report","Code","Value 1","_Line_","_Error_1_"
"Jesse Wilson","a004100000HnINpAAN","0","434","Item not located in Worker Report list: Jesse Wilson"
"Alec","a004100000HnINzAAN","0","435","Item not located in Worker Report list: Alec"
"Test 2","a004100000HnIO9AAN","0","436","Item not located in Worker Report list: Test 2"

Downloading a File

If the above code is used to execute an export action, the file will not be downloaded automatically. To get this file, use the following:

download = get_file(conn, "<file ID>", "<path to local file>")
print(download)

This will save the file to the desired location on the local machine (or a mounted network share) and alert you once the download is complete, or warn you if there is an error.

Get Available Workspaces and Models

API 2.0 introduced a new means of fetching the workspaces and models available to a given user. You can use this library to build a key-value dictionary (as above) for these resources.

#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

#Set session variables
uid = anaplan.get_user_id(conn)

#Fetch the models and workspaces the account may access
workspaces = ard.build_id_dict(anaplan.get_workspaces(conn, uid), "workspaces")
models = ard.build_id_dict(anaplan.get_models(conn, uid), "models")

#Select the workspace and model to use
while True:
    workspace_name = input("Enter workspace name to use (Enter ? to list available workspaces): ")
    if workspace_name == '?':
        for key in workspaces:
            print(key)
    else:
        break
while True:
    model_name = input("Enter model name to use (Enter ? to list available models): ")
    if model_name == '?':
        for key in models:
            print(key)
    else:
        break

#Extract the workspace and model IDs from the dictionaries
workspace_id = ard.get_id(workspaces, workspace_name)
model_id = ard.get_id(models, model_name)

#Update the AnaplanConnection object
conn.modelGuid = model_id
conn.workspaceGuid = workspace_id

The above code creates an AnaplanConnection instance with only the user authentication defined. It queries the API to return the ID of the user in question, then queries for the available workspaces and models and builds dictionaries from these results.
You can then enter the name of the workspace and model you wish to use (or print all of the available ones to the screen), and finally update the AnaplanConnection instance to be used in all future requests.
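To tie the pieces together, here is a minimal end-to-end sketch of the call sequence described in this article. It is an illustration only, not a supported implementation: the import locations are assumptions (adjust them to match the attached library files), and the paths, IDs, and names are placeholders to replace with your own.

from anaplan_auth import get_keystore_pair
#Assumed import locations; adjust to match the attached library files
import anaplan
import anaplan_resource_dictionary as ard
from anaplan import AnaplanConnection

#Authenticate with a Java keystore (certificate files or basic auth work as shown above)
key_pair = get_keystore_pair('/path/to/my_keystore.jks', '<passphrase>', '<key alias>', '<key passphrase>')
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", key_pair[0], key_pair[1]), "<workspace ID>", "<model ID>")

#Find a process by name and execute it, retrying once if there is a problem
processes = ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes")
results = anaplan.execute_action(conn, str(ard.get_id(processes, "<process name>")), 1)
print(results)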
Learn how to organize your model into logical parts to give you a well-designed model that is easy to follow, understand, and amend at a later date.
As a model builder, you have to define line item formats over and over. Using a text expander/snippet tool, you can speed up the configuration of modules. When you add a new line item, Anaplan sets it by default as a Number (Min Significant Digits: 4, Thousands Separator: Comma, Zero Format: Zero, etc.). You usually change this once and copy it over to the other line items in the module. A snippet tool can store the format definitions of generic formats (number, text, Boolean, or no data) and, with a simple shortcut, paste them into the format of the desired line items.

Below is an example of a number-formatted line item with no decimals and hyphens instead of zeros. On my Mac, I press Option + X, type "Num...", and get a list of all the Number formats I have pre-defined. I press Enter to paste one. It also works if several line items are selected.

The value stored for this Number format is:

{"minimumSignificantDigits":-1,"decimalPlaces":0,"decimalSeparator":"FULL_STOP","groupingSeparator":"COMMA","negativeNumberNotation":"MINUS_SIGN","unitsType":"NONE","unitsDisplayType":"NONE","currencyCode":null,"customUnits":null,"zeroFormat":"HYPHEN","comparisonIncrease":"GOOD","dataType":"NUMBER"}

Here is the result of a text format snippet:

{"textType":"GENERAL","dataType":"TEXT"}

Or a heading line item (No Data, Style: Heading 1):

---- false {"dataType":"NONE"} - Year Model Calendar All false false {"summaryMethod":"NONE","timeSummaryMethod":"NONE","timeSummarySameAsMainSummary":true,"ratioNumeratorIdentifier":"","ratioDenominatorIdentifier":""} All Versions true false Heading1 - - - 0

This simple trick can save you a lot of clicks. While we are unable to recommend specific snippet tools, your PC or Mac may include one by default, and others are easy to find online for free or at low cost.
This article provides the steps needed to create a basic time filter module. This module can be used as a point of reference for time filters across all modules and dashboards within a given model. The benefits of a centralized time filter module include:

- Centralized governance of time filters.
- Optimization of workspace, since the filters do not need to be re-created for each view; instead, use the Time Filter module.
- Conformance with the D.I.S.C.O. methodology as a 'System' module. More on D.I.S.C.O. can be found here.

Step 1: Create a new module with two dimensions: time and line items. The example below has simple examples for Weeks Only, Months Only, Quarters Only, and Years Only.

Step 2: Line items should be Boolean formatted, and the time scale should be set in accordance with the scale identified in the line item name. The example below also includes filters with and without summary methods, providing additional views depending on the level of aggregation desired. Once your preliminary filters are set, your module will look something like the screenshot below.

Step 3: Use the pre-set time filters across various modules and dashboards. Simply click on the filter icon in the toolbar, navigate to the Time tab, select your Time Filter module from the module selection screen, and select the line item of your choosing. Use multiple line items at a time to filter your module or dashboard view.
Assume the following Non-Composite list, a ragged hierarchy, needs to be set to production data. We need to refer to the ultimate parent to define the logic calculation. In the example, we have assumed that children of Parent 1 and Parent 3 need to return the 'Logic 1' value from the constants module below, those under Parent 2 return 'Logic 2', and we apportion the results based on the initial data of the children.

Select Proportion:

Data / IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 1' THEN Data[SELECT: 'Non-Composite List'.'Parent 1'] ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 2' THEN Data[SELECT: 'Non-Composite List'.'Parent 2'] ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 3' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Child 3.1' THEN Data[SELECT: 'Non-Composite List'.'Parent 3'] ELSE 0

Select Calculation:

Select Proportion * IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 1' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 3' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Child 3.1' THEN Parent Logic Constants.'Logic 1' ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 2' THEN Parent Logic Constants.'Logic 2' ELSE 0

These "hard references" will prevent the list from being set as a production list.

SOLUTION: Create a Parents Only list (this could be imported from the Non-Composite list). As we don't need the sub-level parents, we do not need to include 'Child 3.1', even though it is technically a parent. To calculate the proportion without the SELECT statements, a couple of intermediate modules are needed.

Parent Mapping module

This module maps the Non-Composite parents to the Parents Only list. Due to the different levels in the hierarchy, we need to check for sub-levels and use the parent of Child 3.1. In this example, the mapping is automatic because the items in the Parents Only list have the same names as those in the Non-Composite list. The mapping could be a manual entry if needed. The formulas and "applies to" are:

Non Composite Parent: PARENT(ITEM('Non-Composite List'))
Applies to: Non-Composite List

Parent of Non Composite Parent: PARENT(Non Composite Parent)
Applies to: Non-Composite List

Parent to Map: IF ISNOTBLANK(PARENT(Parent of Non Composite Parent)) THEN Parent of Non Composite Parent ELSE Non Composite Parent
Applies to: Non-Composite List

Parents Only List: FINDITEM(Parents Only List, NAME(Parent to Map))
Applies to: Parents Only List

Parents Only Subtotals

An intermediary module is needed to hold the subtotals.

Calculation: Parent Logic Calc.Data[SUM: Parent Mapping.Parents Only List]

Parent Logic? module

We now define the logic for the parents in a separate module. Add Boolean line items for each of the "logic" types. Then you can refer to the logic above in the calculations:

Lookup Proportion: Data / Parents Only Subtotals.Calculation[LOOKUP: Parent Mapping.Parents Only List]

Lookup Calculation: Lookup Proportion * IF Parent Logic?.'Logic 1?'[LOOKUP: Parent Mapping.Parents Only List] THEN Parent Logic Constants.'Logic 1' ELSE IF Parent Logic?.'Logic 2?'[LOOKUP: Parent Mapping.Parents Only List] THEN Parent Logic Constants.'Logic 2' ELSE 0

The list can now be set as a production list, as there are no "hard references". The formulas are also smaller, simpler, and more flexible should the logic need to change.
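To make the apportionment pattern concrete outside of Anaplan syntax, here is a small, purely illustrative Python sketch of the same idea: map each child to its ultimate parent, then apportion the parent's logic constant by the child's share of the parent subtotal. The list items and values are invented for illustration; in the model this work is done by the module formulas above.

# Which logic applies to each parent (the Parent Logic? module)
parent_logic = {"Parent 1": "Logic 1", "Parent 2": "Logic 2", "Parent 3": "Logic 1"}
# The Parent Logic Constants module
logic_value = {"Logic 1": 100.0, "Logic 2": 200.0}
# Child -> ultimate parent (the Parent Mapping module)
parent_of = {"Child 1.1": "Parent 1", "Child 1.2": "Parent 1",
             "Child 2.1": "Parent 2", "Child 3.1.1": "Parent 3"}
# Initial data for each child
data = {"Child 1.1": 40.0, "Child 1.2": 60.0, "Child 2.1": 25.0, "Child 3.1.1": 10.0}

# Parents Only subtotals: total child data per ultimate parent
subtotal = {}
for child, parent in parent_of.items():
    subtotal[parent] = subtotal.get(parent, 0.0) + data[child]

# Lookup Proportion * looked-up logic constant, per child
for child, parent in parent_of.items():
    print(child, data[child] / subtotal[parent] * logic_value[parent_logic[parent]])

Note how switching a parent from one logic to the other is a one-line change to the mapping, which mirrors the checkbox change described next.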
If Parent 3 needs to use Logic 2, it is a simple change to the checkbox.

Appendix: Blueprints
Anaplan API: Communication failure <SSL peer unverified: peer not authenticated>

This is a common error when a customer server is behind a proxy or firewall. The solution is to have the customer whitelist '*.anaplan.com' for firewall blocks. If behind a proxy, use the '-via' or '-viauser' commands in Anaplan Connect.

The other very common cause of this error is that the security certificate isn't synced up with Java. If the whitelist or via-command solutions don't apply or don't resolve the error, uninstalling and reinstalling Java usually does the trick.

Thanks to Jesse Wilson for the technical details. jesse.wilson@anaplan.com
Making sure that production data lists are correctly marked within a model is a key step in setting up and using ALM. This guide provides a solution for revising a model so that a list can be tagged as a production data list. Please note: this solution doesn't work if there are hard-coded references on non-composite summary items. For more information on working with production lists and ragged hierarchies, please visit Production lists and ragged hierarchies logic.

The issue arises when a model administrator needs to tag a production data list, but there are hard-coded references in the model that won't allow them to do so. When this occurs and the model administrator tries to tag the list as a production list, they will get a warning similar to this: See Formula Protection for more details.

To fix this issue, all direct formula references to production data lists need to be changed to indirect references, using either LOOKUPs or Boolean-formatted conditional logic. Below, you will find a step-by-step guide to replacing these formulas.

Identify formulas with hard-coded references

There is now an easy way to identify all of the formulas that are hard-coded to production data lists: check the 'Referenced in Formula' column in the General Lists section. This shows the line items where each list is used. Check the respective formulas for hard-coded references. If there are no hard-coded references, then it is OK to check the list as a production data list. This is the recommended approach, as setting the lists without prior checking may lead to a rollback error being generated, which can be time-consuming for large models (as well as frustrating).

It is possible to export the General Lists grid to help where there are multiple references to the same list, and then use formulas and filters to identify all offenders in one effort. This option can save significant amounts of time if there are many line items that would need to be changed. You are looking for direct references to list members:

[SELECT: List Name.list member]
ITEM(List Name) = List Name.List member

The following constructs are valid, but not recommended, as any changes to the names or codes could change the result of calculations:

IF CODE(ITEM(List Name)) =
IF NAME(ITEM(List Name)) =

After following these steps, you should have a list of all of the line items that need to be changed in the model so that the production data list can be checked. Please note: there may still be list properties that have hard-coded references to items. You will need to take note of these as well, but as per D.I.S.C.O. (see Best practice for Module design), we recommend that list properties are replaced with line items in System modules.

Replacing model formulas

The next step is to replace these formulas within the model. For this, there are two recommended options. The first option (Option 1 below) is to replace your SELECT statements with a LOOKUP formula that references a list-formatted drop-down. Use this option when there are 1:1 mappings between list items and your formula logic. For example, if you were building a P&L variance report and needed to select a specific revenue account, you might use this option. The second option (Option 2 below) is to build a logic module that allows you to use Booleans to select list items, and to reference these Boolean fields in your formulas.
Use this option when there is more complex modeling logic than a 1:1 mapping. For example, you might use this option if you are building a variance report by region and you have different logic for all items under Region 1 (e.g. budget minus actual) than for the items under Region 2 (e.g. budget minus forecast).

(Option 1) Add a List Selections module to be used in LOOKUPs for 1:1 mappings

From here, you should make a module called List Selections, with no lists applied to it and a line item for each list item reference that you previously used in the formulas to be changed. Each of these line items should be formatted as the list that you are setting to production data. Afterward, you should have a module that looks similar to this:

An easy and effective way to stay organized is to partition and group your line items of similar list formats into the same sections, with a section-header line item formatted as No Data and a style of Heading 1. After the line items have been created, the model administrator should use the list drop-downs to select the appropriate items being referenced. As new line items are created in a standard-mode model, the model administrator will need to open the deployed model downstream to reselect, or copy and paste, the list-formatted values in this module, since this is considered production data.

Remove hard-coding and replace with LOOKUPs

Once you have created the List Selections module with all of the correct line items, you can begin replacing the old formulas you identified earlier with new references. For formulas containing a SELECT statement, replace the entire SELECT section of the formula with a LOOKUP to the correct line item in List Selections.

Example:

Old formula = Full PL.Amount[SELECT: Accounts.Product Sales]
New formula = Full PL.Amount[LOOKUP: List Selections.Select Product Sales]

For formulas of the form IF ITEM(List Name) = List Name.Item, replace the section of the formula after the '=' with a direct reference to the correct line item in List Selections.

Example:

Old formula = IF ITEM(Accounts) = Accounts.Product Sales THEN Full PL.Amount ELSE 0
New formula = IF ITEM(Accounts) = List Selections.Select Product Sales THEN Full PL.Amount ELSE 0

(Option 2) Modeling for complex logic and many-to-many relationships

In the event that you are building more complex logic into your model, you should start by building Boolean references that you can use in your formulas. To accomplish this, create a new module with Boolean line items for each logic type that you need. Sticking with the same example as above, if you need to build a variance report where the logic differs by region, start by creating a module dimensioned by region that has a different line item for each piece of logic that you need, similar to the view below:

Once you have the Boolean module set up, you can then change your hard-coded formulas to reference these Boolean-formatted line items. The formula may look similar to this:

IF Region Logic.Logic 1 THEN logic1 ELSE IF Region Logic.Logic 2 THEN logic2 ELSE IF Region Logic.Logic 3 THEN logic3 ELSE 0

Here is a screenshot of what the end result may look like:

This method can be used across many different use cases and provides a more efficient way of writing complex formulas while avoiding hard-coding for production data lists.
Selecting the production data list

After all of the hard-coded formulas have been changed in the model, navigate back to the Settings tab and open General Lists. In the Production Data column, check the box for the list that you want to set as a production data list. Then, for each remaining list in the model that needs to be a production data list, repeat the steps throughout this process to remove all of its hard-coded list references.
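If you have exported the General Lists grid (or module blueprints) as suggested above, a short script can help flag hard-coded references in bulk. The sketch below is an illustration only: the file name and column names are assumptions you would adapt to your own export layout, and matches still need manual review (the ITEM(...) = pattern also matches the corrected List Selections form).

import csv
import re

# Patterns for direct references to list members, per the examples above:
#   [SELECT: List Name.list member]   and   ITEM(List Name) = ...
patterns = [
    re.compile(r"\[\s*SELECT\s*:", re.IGNORECASE),
    re.compile(r"ITEM\s*\([^)]*\)\s*=", re.IGNORECASE),
]

# "formulas.csv", "Formula", and "Line Item" are hypothetical names; match your export
with open("formulas.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        formula = row.get("Formula", "")
        if any(p.search(formula) for p in patterns):
            # Print candidates for manual review
            print(row.get("Line Item", "<unknown>"), "->", formula)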
Reducing the number of calculations will lead to quicker calculations and improved performance. However, this doesn't mean combining all your calculations into fewer line items; breaking calculations into smaller parts has major benefits for performance. Learn more about this in the Formula Structure article.

How is it possible to reduce the number of calculations? Here are three easy methods:

- Turn off unnecessary Summary method calculations.
- Avoid formula repetition by creating modules to hold formulas that are used multiple times.
- Ensure that you are not including more dimensions than necessary in your calculations.

Turn off Summary method calculations

Model builders often include summaries in a model without fully thinking through whether they are necessary. In many cases, the summaries can be eliminated. Before we get to how to eliminate them, let's recap how the Anaplan engine calculates. In the following example, we have a Sales Volume line item that varies by the following hierarchies:

Region Hierarchy: City > Country > Region > All Regions
Product Hierarchy: SKU > Product > All Products
Channel Hierarchy: Channel > All Channels

This means that from the detail values at SKU, City, and Channel level, Anaplan calculates and holds all 23 of the aggregate combinations shown below (24 blocks in total). With the Summary option set to Sum, when a detail item is amended (represented by the grey block), all the other aggregations in the hierarchies are also re-calculated. Selecting the None summary option means that no calculations happen when the detail item changes.

The varying levels of hierarchies are quite often only there to ease navigation, and the roll-up calculations are not actually needed, so a number of redundant calculations may be being performed. The native summing in Anaplan is the faster option, but if all the levels are not needed, it may be better to turn off the summary calculations and use a SUM formula instead. For example, from the structure above, let's assume that we have a detailed calculation for SKU, City, and Channel (SALES06.Final Volume). Let's also assume we need a summary report by Region and Product, and we have a module (REP01) and a line item (Volume) dimensioned as such.

REP01.Volume = SALES06 Volume Calculation.Final Volume

is replaced with

REP01.Volume = SALES06.Final Volume[SUM: H01 SKU Details.Product, SUM: H02 City Details.Region]

The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy.

How do you know if you need the summary calculations? Look for the following:

User-facing: Is the calculation or module user-facing? If it is presented on a dashboard, then it is likely that the summaries will be needed. However, look at the dashboard views used. A summary module is often included on a dashboard with a detail module below; effectively, the hierarchy sub-totals are shown in the summary module, so the detail module doesn't need the sum or all of the summary calculations.

Detail to detail: Is the line item referenced by another detailed calculation line item? This is very common, and if the line item is referenced by another detailed calculation, the summary option is usually not required. Check the 'Referenced by' column and see if anything is referencing the line item.

Calculation and staging modules: If you have used the D.I.S.C.O. module design, you should have calculation/staging modules. These are often not user-facing and have many detailed calculations included in them.
They also often contain large cell counts, which will be reduced if the summary options are turned off.

Can you have different summaries for time and lists? The default option for Time Summaries is to be the same as for the lists. You may only need the totals for hierarchies, or just for the timescales. Again, look at the downstream formulas. The best-practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a Calculation module (from the D.I.S.C.O. design principles).

Avoid formula repetition

An optimal model performs a specific calculation only once. Repeating the same formula expression multiple times means the calculation is performed multiple times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (D.I.S.C.O.) and hold all the relevant calculations in a logical place. Then, if you need the calculation, you will know where to find it, rather than adding another line item in several modules to perform the same calculation.

If a formula construct always starts with the same condition evaluation, evaluate it once and then refer to the result in the construct. This is especially true where the condition refers to a single dimension but is part of a line item that spans multiple dimension intersections. A good example can be seen below: START() <= CURRENTPERIODSTART() appears five times, and similarly START() > CURRENTPERIODSTART() appears twice. To correct this, include these time-related formulas in their own module and then refer to them as needed in your other modules. Remember: calculate once; reference many times!

Taking a closer look at our example, not only is the condition evaluation repeated, but the dimensionality of the line items is also greater than required. The calculation only changes by day, as per the diagram below, but the Applies To here also contains Organization, Hour Scale, and Call Center Type. Because the formula expression is contained within the line item formula, for each day the calculation is also performed for every combination of those other dimensions. And, as above, it is repeated in many other line items. Sometimes model builders even use the same expression multiple times within the same line item.

To reduce this over-calculation, reference the expression from a more appropriate module; for example, Days of Week (dimensioned solely by day), which was shown above. The blueprint is shown below, and you can see that the two different formula expressions are now contained in two line items and will only be calculated by day; the other, irrelevant dimensions are not calculated. Substitute the expression by referencing the line items shown above. In this example, making these changes to the remaining lines in this module reduces the calculation cell count from 1.5 million to 1,500.

Check the Applies To for your formulas, and if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality.
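To see why moving a day-level expression into its own module matters, remember that a line item's cell count is roughly the product of its dimension sizes. The quick sketch below uses hypothetical dimension sizes, chosen only to mirror the 1.5-million-to-1,500 reduction described above; your own numbers will differ.

# Hypothetical dimension sizes for a module like the one in the example
organizations = 50
hour_scale = 10
call_center_types = 2
days = 1500

# Day-level expression evaluated inside the big line item: once per cell
cells_before = organizations * hour_scale * call_center_types * days

# Same expression held in a module dimensioned by day only
cells_after = days

print(cells_before, "->", cells_after)  # 1500000 -> 1500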
Dynamic Cell Access (DCA) controls the access levels for line items within modules. It is simple to implement and provides modelers with a flexible way of controlling user inputs. Here are a few tips and tricks to help you implement DCA effectively.

Access control modules

Any line item can be controlled by any other applicable Boolean line item. To avoid confusion over which line item(s) to use, it is recommended that you add a separate functional area and create specific modules to hold the driver line items. These modules should be named appropriately (e.g. Access – Customers > Products, or Access – Time, etc.). The advantage of this approach is that an access driver can be used for multiple line items or modules, and the calculation logic is in one place.

In most cases, you will probably want both read and write access. Therefore, within each module it is recommended that you add two line items (Write? and Read?). If the logic is being set for Write?, then set the formula for the Read? line item to NOT Write? (or vice versa). It may be necessary to add multiple line items to use for different target line items, but start with this as a default.

Start simple

You may not need to create a module that mirrors the dimensionality of the line item you wish to control. For example, if you have a line item dimensioned by customer, product, and time, and you wish to make actual months read-only, you can use an access module dimensioned just by time. Think about which dimension the control needs to apply to and create an access module accordingly.

What settings do I need?

There are three different states of access that can be applied: READ, WRITE, and INVISIBLE (hidden). There are two blueprint controls (read access driver and write access driver), and a driver has two states (TRUE or FALSE). The combination of these determines which state is applied to the line item. The following tables illustrate the options.

Only the read access driver is set:

Driver status TRUE: target line item is READ
Driver status FALSE: target line item is INVISIBLE

Only the write access driver is set:

Driver status TRUE: target line item is WRITE
Driver status FALSE: target line item is INVISIBLE

Both read and write access drivers are set:

Write driver TRUE: target line item is WRITE
Write driver FALSE, read driver TRUE: target line item is READ
Write driver FALSE, read driver FALSE: target line item is INVISIBLE

When both access drivers are set, the write access driver takes precedence, with write access granted if the status of the write access driver is true. If the status of the write access driver is false, the cell access is then taken from the read access driver status.

The settings can also be expressed in the following table (read driver down the side, write driver across the top):

                 WRITE TRUE    WRITE FALSE    WRITE NOT SET
READ TRUE        Write         Read           Read
READ FALSE       Write         Invisible      Invisible
READ NOT SET     Write         Invisible      Write

Note: If you want to have both read and write access, it is necessary to set both access drivers within the module blueprint.

Totals

Think about how you want the totals to appear. When you create a Boolean line item, the default summary option is NONE. This means that if you use such a line item as an access driver, any totals within the target will be invisible. In most cases, you will probably want the totals to be read-only, so setting the access driver line item's summary method to ANY will provide this. If you are using the INVISIBLE setting to "hide" certain items and you do not want the end user to be able to compute the hidden values, then it is best to use the ALL setting for the access driver line item.
This means that the totals only show if all of the values in the list are visible; otherwise, the totals are hidden from view.
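The precedence rules above can be condensed into a few lines of logic. The sketch below is an illustration of the decision table only (not Anaplan code), using None to represent a driver that is not set.

def cell_access(read_driver=None, write_driver=None):
    """Return the effective access state per the DCA decision table above.

    Each driver is True, False, or None (not set)."""
    if write_driver is True:
        return "WRITE"  # the write driver takes precedence when true
    if write_driver is False:
        # Fall back to the read driver; invisible if it is false or not set
        return "READ" if read_driver is True else "INVISIBLE"
    # Write driver not set: the read driver alone decides
    if read_driver is True:
        return "READ"
    if read_driver is False:
        return "INVISIBLE"
    return "WRITE"  # neither driver set: normal write access

# Quick checks against the table
print(cell_access(read_driver=True, write_driver=False))   # READ
print(cell_access(read_driver=False, write_driver=None))   # INVISIBLE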
The Connect Manager is a tool that allows non-technical users to create Anaplan Connect scripts from scratch, simply by walking through a step-by-step wizard.

Features include:
- Create scripts for the import/export of flat files
- Create scripts for importing from JDBC/ODBC sources
- Ability to choose between commonly used JDBC connections – New in v4
- Run scripts from the new Connection Manager interface – New in v4
- Ability to use certificate authentication

Please note that this program is currently only supported on Windows systems and requires .NET 4.5 or newer to run (.NET has been included in the download package). The Connect Manager is approved by Anaplan for general release; however, it is not supported by Anaplan. If there are any specific enhancements you want to see in the next version, please leave a comment or send me an email at graham.gronhoff@anaplan.com.

Download the Anaplan Connect Wizard here. If you are migrating to the new Anaplan Connect 1.4 release, please check back soon, as a new version will be published that includes updated features and functionality. Connect Manager for Anaplan Connect versions 1.4.x will be published in July 2019.

Keystore creation can be tricky if you are not familiar with the command line. To that end, I have created an additional program that will perform all the required steps for its creation. Use this link to go to the application: KeyStore Wizard.