Question: I am working on a dashboard and I do not want to show every line item or list item included on a module. What is the difference between using the Hide functionality and the Show functionality?

Answer: There are two main differences. The first is what happens when you add a new line item or list item. If you've used Hide, new items will appear in the module; if you've used Show, they will not. Let's say I have a module with five line items and I only want to see line items 1, 2, and 5. If I hide line items 3 and 4 and then add a new line item to the module, the new line item appears. In the same scenario, if I had instead selected Show on line items 1, 2, and 5, the new line item would not appear. The other difference is that Show lets you change the order of the items based on the order in which you select them. Try this: select multiple line items by holding down Command (Mac) or Control (Windows), then right-click (or use the drop-down option) and click Show Selection. The line items will appear in the order you selected them.
Summary
We explain here a dynamic way to filter specific levels of a hierarchy. This provides a better way to filter and visualize hierarchies.

Overview
This tutorial explains how to calculate the level of each item in a hierarchy in order to apply level-specific calculations (custom summaries) or filters. In this example we have an organization hierarchy of four levels (Org L1 to Org L4). For each item in the hierarchy we want to calculate a filtering module value that returns the associated level.

Context and notes
This technique addresses a specific limitation within dashboards: a composite hierarchy's level cannot be selected if the list is synchronized to multiple module objects on the dashboard. We show the technique of creating a static filtering module based on the levels of the composite structure. The technique uses the Ratio summary method on line items corresponding to the list levels to define the value of the filtering line items. Note that this is not a formula calculation but a use of the Ratio summary method applied to the composite hierarchy.

Example list
In this example we defined a four-level list.

Defining the level of each item
To calculate the level of each item in the list, create a module with:
- As many line items as there are hierarchy levels, plus one technical line item.
- Blueprint settings for those line items as in the following table:

Line Item | Formula | Applies To | Summary Method | Ratio Setting
Technical | 1 | (empty) | Formula* |
Level (or L4, lowest level) | 4 | Org L4 | Ratio | L3 / Technical
L3 | 3 | Org L3 | Ratio | L2 / Technical
L2 | 2 | Org L2 | Ratio | L1 / Technical
L1 | 1 | Org L1 | Ratio | L1 / Technical

*Note that the Technical line item uses the Formula summary method. The Minimum summary method could be used instead, but it returns an error when a level of the hierarchy has no children and the calculated level is blank.

We can now use the line item at the lowest level—"Level (or L4)" in the example—as the basis of filters or calculations.

Applying a filter on specific levels when synchronization is enabled
When synchronization is enabled, the option "Select levels to show" is not available. Instead, a filter based on the calculated level can be used to show only specific levels. In the example, we apply a filter that shows only levels 4 and 1.
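Why the Ratio settings above return the level (a brief worked illustration, reasoning from the table): the Ratio summary method computes a parent's value as the summary of one line item divided by the summary of another, rather than by aggregating the parent's own children. For the "Level (or L4)" line item, every leaf holds the constant 4, but each Org L3 parent is computed as L3 / Technical. L3 holds 3 at that parent and Technical summarizes to 1 (its formula is the constant 1 with a Formula summary), so the parent shows 3. The same logic cascades upward (L3's own parents are computed as L2 / Technical, and so on), so every item in the hierarchy displays its own level number in a single line item.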
Personal dashboards is a great new feature that enables end users to save a personalized view of a dashboard. To get the most out of this feature, here are a few tips and tricks.

Tidy up dashboards
Any change to a master dashboard (using the Dashboard Designer) will reset all personal views of that dashboard, so before enabling personal dashboards, take some time to ensure that the current dashboards are up to date:
- Implement any pending development changes (including menu options)
- Turn on the Dashboard Quick Access toolbar (if applicable)
- Check and amend all text box headings and comments for size, alignment, spelling, and grammar
- Delete or disable any redundant dashboards to ensure end users don't create personal views of obsolete dashboards

Use filters rather than show/hide
It's best practice to use a filter rather than show and hide for the rows and/or columns on a grid. This is now more beneficial because amending the items shown or hidden on a master dashboard will reset the personal views. For example, suppose you want to display just the current quarter of a timescale. You could manually show/hide the relevant periods but, at quarter end, when the current period is updated, the dashboard will need to be amended and all those personal views will be reset. If you use a filter referencing a time module, the filter criteria will update automatically, as will the dashboard. No changes are made to the master dashboard and all the personal views are preserved.

Create a communication and migration strategy
Inevitably there are going to be changes that must be made to master dashboards. To minimize the disruption for end users, create a communication plan and follow a structured development program. These can include the following:
- Bundle up dashboard revisions into logical sets of changes
- Publish these changes at regular intervals (e.g., on a monthly cycle)
- Create a regular communication channel to inform users of changes and the implications of those changes
- Create a new dashboard and ask end users to migrate to it over a period of time before switching off the old dashboard

Application Lifecycle Management (ALM)
If you are using ALM, be aware that any structural changes to master dashboards will reset all personal views of those dashboards.
This article provides the steps needed to create a basic time filter module. This module can be used as a point of reference for time filters across all modules and dashboards within a given model. The benefits of a centralized time filter module include:
- Centralized governance of time filters.
- Optimization of workspace, since the filters do not need to be re-created for each view; instead, use the Time Filter module.

Step 1: Create a new module with two dimensions—time and line items. The example below has simple examples for Weeks Only, Months Only, Quarters Only, and Years Only.

Step 2: Line items should be Boolean formatted, and the time scale should be set in accordance with the scale identified in the line item name. The example below also includes filters with and without summary methods, providing additional views depending on the level of aggregation desired. Once your preliminary filters are set, your module will look something like the screenshot below.

Step 3: Use the pre-set time filters across various modules and dashboards. Simply click the filter icon in the toolbar, navigate to the Time tab, select your Time Filter module from the module selection screen, and select the line item of your choosing. Use multiple line items at a time to filter your module or dashboard view.
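If you prefer the filter line items to update automatically rather than being ticked by hand, they can be formula-driven. A minimal sketch, assuming a monthly model calendar; the line item names and logic below are illustrative, not from the original article:

Current Month? = START() = CURRENTPERIODSTART()
Current Year? = YEAR(START()) = YEAR(CURRENTPERIODSTART())
Past Periods? = START() < CURRENTPERIODSTART()

Each line item is Boolean formatted and dimensioned by time. Because CURRENTPERIODSTART() follows the model's current period setting, filters built on these line items roll forward automatically when the current period is updated.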
Dimension Order Affects Calculation Performance

Ensuring consistency in the order of dimensions will help improve the performance of your models. This consistency is relevant for modules and for individual line items.

Why does the order matter?
Anaplan creates and uses indexes to perform calculations. Each cell in a module where dimensions intersect is given an index number. Consider two simple modules dimensioned by Customer and Product: in the first module, Product comes first and Customer second; in the second module, Customer is first and Product second. In this model, a third module calculates revenue as Prices * Volumes. Anaplan assigns indexes to the intersections in each module (a worked reconstruction of the index values appears at the end of this article). Note that some of the intersections are indexed the same in both modules (Customer 1 and Product 1, Customer 2 and Product 2, and Customer 3 and Product 3), while the remainder of the cells have different index numbers: Customer 1 and Product 2 is indexed with the value of 4 in the top module and the value of 2 in the bottom module. The calculation is Revenue = Price * Volume. To run the calculation, Anaplan matches the index values from the two modules. Since the index values are not aligned, the processor must scan the index values to find a match before performing each calculation. When the dimensions in the module are reordered, the index values for the two modules are aligned. As line items of the same dimensional structure have an identical layout, the data is laid out linearly in memory, and the calculation process accesses memory in a completely linear and predictable way. Microprocessors and memory sub-systems are optimized to recognize this pattern of access and to pre-emptively fetch the required data.

How does the dimension order become different between modules?
When you build a module, Anaplan uses the order in which you drag the lists onto the Create Module dialog. The order also depends on where the lists are added: the lists that you add to the pages area come first, then the lists that you add to the rows area, and finally the lists added to the columns area.

It is simple to re-order the lists and ensure consistency. Follow these steps:
1. On the Modules pane (Model Settings > Modules), look for lists that are out of order in the Applies To column.
2. Click the Applies To row that you want to re-order, then click the ellipsis.
3. In the Select Lists dialog, click OK.
4. In the Confirm dialog, click OK. The lists will now be in the order that they appear in General Lists.
When you have finished checking the list order in the modules, click the Line Items tab and check the line items, following steps 1 through 3 to re-order their lists.

Subsets and Line Item Subsets
One word of caution about subsets and line item subsets. If a module contains a subset and a line item subset, clicking the ellipsis re-orders the dimensions so that the general lists come first, in order, followed by subsets and then line item subsets. You can still re-order the dimensions by double-clicking in the Applies To column and manually copying or typing the dimensions in the correct order.

Other Dimensions
The calculation performance relates to the common lists between the source(s) and the target. The order of lists that appear in only one or the other has no bearing on the calculation speed.
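Since the original screenshots are not reproduced here, the following worked illustration (assuming three customers and three products, with indexes assigned in the order the dimensions are declared) shows how the same intersections receive different index numbers:

Module 1 (Product, Customer): P1/C1=1, P1/C2=2, P1/C3=3, P2/C1=4, P2/C2=5, P2/C3=6, P3/C1=7, P3/C2=8, P3/C3=9
Module 2 (Customer, Product): C1/P1=1, C1/P2=2, C1/P3=3, C2/P1=4, C2/P2=5, C2/P3=6, C3/P1=7, C3/P2=8, C3/P3=9

The diagonal intersections (C1/P1, C2/P2, C3/P3) share index values 1, 5, and 9 in both modules, but Customer 1 with Product 2 is index 4 in the first module and index 2 in the second: exactly the mismatch described above, which forces the engine to scan for matches rather than stream through memory.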
Thinking through the results of a modeling decision is a key part of ensuring good model performance—in other words, making sure the calculation engine isn't overtaxed. This article highlights some ideas for how to lessen the load on the calculation engine.

Formulas should be simple; a formula that is nested or uses multiple combinations uses valuable processing time. Writing a long, involved formula makes the engine work hard, and seconds count when the user is staring at the screen. Simple is better. Breaking up formulas and using other options helps keep processing speeds fast. You must keep a balance when using these techniques in your models, so the guidance is as follows:
- Break up the most commonly changed formula
- Break up the most complex formula
- Break up any formula you can't explain the purpose of in one sentence

Formulas with many calculated components
The structure of a formula can have a significant bearing on the amount of calculation that happens when inputs in the model are changed. Consider the following example of a calculation for the Total Profit in an application. There are five elements that make up the calculation: Product Sales, Service Sales, Cost of Goods Sold (COGS), Operating Expenditure (Op Ex), and Rent and Utilities. Each element is calculated in a separate module, and a reporting module pulls the results together into a Total Profit line item whose formula references all five source modules.

What happens when one of the components of COGS changes? Since all the source components are included in the formula, when anything within any of the components changes, this formula is recalculated. If there are a significant number of component expressions, this can put a larger overhead on the calculation engine than is necessary. There is a simple way to structure the module to lessen the demand on the calculation engine: separate the input lines in the reporting module by creating a line item for each of the components and adding the Total Profit formula as a separate line item. This way, changes to the source data only cause the relevant line item to recalculate. For example, a change in the Product Sales calculation only affects the Product Sales and Total Profit line items in the reporting module; Service Sales, Op Ex, COGS, and Rent & Utilities are unchanged. Similarly, a change in COGS only affects COGS and Total Profit in the reporting module. Keep the general guidelines in mind: it is not practical to have every downstream formula broken out into individual line items.

Plan to provide early exits from formulas
Conditional formulas (IF/THEN) present a challenge for the model builder: what is the optimal construction for the formula, without making it overly complicated and difficult to read or understand? The basic principle is to avoid making the calculation engine do more work than necessary. Try to set up the formula to finish the calculations as soon as possible, and always put the condition that is most likely to occur first; that way the calculation engine can quit processing the expression at the earliest opportunity. Here is an example that evaluates seasonal marketing promotions. The summer promotion runs for three months and the winter promotion for two months. There are more months with no promotion than with one, so a formula that tests the promotion conditions first is not optimal and will take longer to calculate. Testing the most frequent case first is better, as the formula will then exit after the first condition more often. There is an even better way to do this.
Following the principles from above, add another line item for No Promo, and the formula can then test that first. This is even better because No Promo has already been calculated, and Summer Promo occurs more frequently than Winter Promo. (A hedged reconstruction of these promotion formulas appears at the end of this article.) It is not always clear which condition will occur more frequently than others, but here are a few more examples of how to optimize formulas:

FINDITEM formula
The FINDITEM element of a formula works its way through the whole list looking for the text item; if it does not find the referenced text, it returns blank. If the referenced text is blank, it also returns blank. Inserting a conditional expression at the beginning of the formula keeps the calculation engine from being overtaxed:

IF ISNOTBLANK(TEXT) THEN FINDITEM(LIST, TEXT) ELSE BLANK

or

IF NOT ISNOTBLANK(TEXT) THEN BLANK ELSE FINDITEM(LIST, TEXT)

Use the first expression if most of the referenced text contains data, and the second if there are more blanks than data.

LAG, OFFSET, POST, etc.
In some situations there is no need to lag or offset data; for example, if the lag or offset parameter is 0, the value of the calculation is the same as the period in question. Adding a conditional at the beginning of the formula helps eliminate unnecessary calculations:

IF lag_parameter = 0 THEN Lineitem ELSE LAG(Lineitem, lag_parameter, 0)

or

IF lag_parameter <> 0 THEN LAG(Lineitem, lag_parameter, 0) ELSE Lineitem

The choice between the two depends on whether 0s are the more likely occurrence in the lag parameter.

Booleans
Avoid adding unnecessary clutter to line items formatted as Booleans. There is no need to include the TRUE or FALSE expression, as the condition itself evaluates to TRUE or FALSE. Use:

Sales > 0

instead of:

IF Sales > 0 THEN TRUE ELSE FALSE
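The promotion formulas referenced above were shown as screenshots in the original article. As a hedged reconstruction (the line item names Summer Promo?, Winter Promo?, No Promo?, and the value line items are illustrative, not from the source), the progression might look like this:

Sub-optimal, testing the rarer conditions first:
IF Summer Promo? THEN Summer Promo Value ELSE IF Winter Promo? THEN Winter Promo Value ELSE 0

Better, exiting early in the most common months:
IF NOT Summer Promo? AND NOT Winter Promo? THEN 0 ELSE IF Summer Promo? THEN Summer Promo Value ELSE Winter Promo Value

Best, with No Promo? held in its own Boolean line item so the most common branch is both pre-calculated and tested first:
IF No Promo? THEN 0 ELSE IF Summer Promo? THEN Summer Promo Value ELSE Winter Promo Value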
General recommendations
First, the bigger your model is, the more performance issues you are likely to experience. A best practice, therefore, is to use all the available tools and features to make the model as small and dense as possible. This includes:
- Line item checks: summary calculations and the dimensionality used
- Line item duplication
- Granularity of hierarchies
- Use of subsets and line item subsets
- Numbered lists
More information on eliminating sparsity can be found in Learning Center courses 309 and 310.

Customer requirements
General recommendations also include, whenever possible, challenging your customer's business requirements when they call for very large lists (>1M items), long data history, and a high number of dimensions used at the same time on a line item (>5).

Other practices
Once these general and basic sparsity recommendations have been applied, you can further improve performance in different areas. The articles below expand on each subject:

Imports and exports and their effects on model performance
- Rule 1: Carefully decide if you let end users import (and export) during business hours
- Rule 2: Mapping objective = zero errors or warnings
- Rule 3: Watch the formulas recalculated during the import
- Rule 4: Import list properties
- Rule 5: Get your data hub
- Rule 6: Incremental import/export

Dashboard settings that can help improve model performance
- Rule 1: Large lists: filter on a Boolean, not on text
- Rule 2: Use the default sort
- Rule 3: Reduce the number of dashboard components
- Rule 4: Watch large page drop-downs

Formulas and their effect on model performance
Model load, model save, model rollback and their effect on model performance
User roles and their effect on model performance
Overview: Imports are blocking operations. The model is locked for the duration of an import; concurrent imports run by end users queue one after the other and block the model for everyone else.

Rule 1: Carefully decide if you let end users import (and export) during business hours
Imports executed by end users should be carefully considered and, if possible, executed once or twice a day. Customers readily accept a model freeze at scheduled hours for a predefined time, even if it takes 10+ minutes, but are frustrated when imports are run randomly during business hours by anyone. Your first optimization is to adjust the process so that these imports are run by an admin at scheduled times, and to let the user base know the schedule.

Rule 2: Mapping objective = zero errors or warnings
Make sure your import returns no errors or warnings; every error takes processing time. The time to import into a medium to large list (>50k items) is significantly reduced if no errors need to be processed. Tips to reduce errors:
- Always import from a saved view (never from the default view) and use the naming convention for easy maintenance.
- Hide the line items that are not needed for the import; do not bring extra columns that are not needed.
- In the import definition, always map all displayed line items (source → target) or use the "ignore" setting; don't leave any line item unmapped.

Rule 3: Watch the formulas recalculated during the import
If your end users encounter poor performance when clicking a button that triggers an import or a process, it is likely due to the recalculation triggered by the import, especially if the action creates or moves items within a hierarchy. You will likely need the help of Anaplan support (L3) to identify which formulas are triggered after the import is done and to run a performance check to identify which ones take the most time. Formulas that fetch many cells, such as SUM, ANY, or FINDITEM, are the usual suspects. To solve such situations, challenge the need to recalculate the identified formulas every time a user calls the action. Often, for actions such as creations, moves, or assignments done in workforce planning or territory planning, many calculations used for reporting are triggered in real time after the hierarchy is modified by the import, and are not necessarily needed immediately by users. The recommendation is to challenge your customer and see whether these formulas could be calculated only once a day instead of every time a user runs the action. If so, you will need to rearchitect your modules so that these heavy formulas run through a different process, run daily by an admin, and not by each end user.

Rule 4: Import list properties
Importing list properties takes more time than importing them as module line items. Review the lists impacted by imports and consider replacing list properties with module line items where possible. Also refer to the data hub best practices, where we recommend uploading all list properties into a data hub module rather than into the list properties themselves.

Rule 5: Get your data hub
Hub and spoke: set up a data hub model, which will feed the other production models used by stakeholders. See the white paper on how to build a data hub. The performance benefit: it prevents production models from being blocked by a large import from an external data source.
Since data hub to production model imports are still blocking operations, carefully filter what you import and apply the best practice rules listed above. All import and mapping/transformation modules required to prepare the data to be loaded into planning modules can now live in a dedicated data hub model rather than in the planning model, which will then be smaller and work more efficiently. A reminder of the other benefits, not linked to performance:
- Better structure, easier maintenance: a data hub helps keep all the data organized in a central location.
- Better governance: whenever possible, put the data hub on a different workspace. That eases the separation of duties between production models and metadata management, at least for actual data and production lists. The IT department will love the idea of owning the data hub, with no one else as an admin in that workspace.
- Lower implementation costs: a data hub reduces the implementation time of new projects. Assuming IT can load the data needed by a new project into the data hub, business users do not have to integrate with complex source systems; they integrate with the Anaplan data hub instead.

Rule 6: Incremental import/export
This can be the magic bullet in some cases. If you export on a frequent basis (daily or more) from an Anaplan model into a reporting system, write back to the source system, or simply transfer data from one Anaplan model to another, there are ways to import/export only the data that has changed since the last run. Use the concatenation + change Boolean technique explained in the data hub white paper.
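As a minimal sketch of that technique (the line item names are illustrative, not from the white paper): in the source model, build a unique key per record and a Boolean that flags rows whose values differ from the last exported snapshot, then filter the export view on that Boolean.

Key = CODE(ITEM(Customers)) & "|" & CODE(ITEM(Products))
Changed? = Current Value <> Last Exported Value

After each successful run, copy Current Value into Last Exported Value (for example, via a model-to-model import as a final process step) so that only genuinely new changes are flagged on the next run.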
This article outlines the requirements for Anaplan Technology Partners who want to integrate with Anaplan using Anaplan v2.0 REST APIs.

Use Cases
The following use cases are covered:
- Allow users to run integrations from the partner technology or application, with or without an external integration tool, to move data to and from Anaplan.
- Provide the ability to import data into Anaplan for planning and dashboarding and to extract the planning results from Anaplan into the partner technology or application.
- Provide the ability to extract data from Anaplan modules and lists or import data into Anaplan modules and lists.
- Provide the ability to extract data from Anaplan into the partner technology or application to run specific planning scenarios or calculations.

Requirements
To integrate with Anaplan:
- Users must have a license for the partner technology and credentials to log in to Anaplan. Basic authentication and certificate authentication are supported.
- Users must have Import and/or Export actions configured in Anaplan, or the ability to create these actions in Anaplan.

Assumptions
- Technology partners are familiar with Anaplan modeling concepts and Anaplan APIs. Information can be found on anaplan.com, help.anaplan.com, Anaplan Academy, and the Anaplan API reference material.
- Anaplan supports the deletion of items from very long lists using the Delete from List using Selection action, which can be invoked via a REST API.
- Import chunks are between 1 MB and 50 MB in size. Export chunks are 10 MB in size.
- Anaplan data exports and imports run in batch mode.
- All Anaplan exports are generated as .csv or .txt (tab-delimited) files. Anaplan imports similarly accept .csv or .txt formatted data.
- All data movements follow the format and rules defined in Anaplan actions.

Constraints
Users can create an Anaplan Process to chain multiple Import/Export actions together and execute them in sequence. However, some functionality is not supported; for example, files are not output to the UI for Export actions.

Not in Scope
- Process action support is not required.
- OAuth.
- File types other than .csv and tab-delimited files.
- Changes to the Anaplan UI, login mechanism, or Anaplan APIs.

Guidelines

Authentication
- Support Basic authentication (user name and password).
- Support certificate authentication (uploading an x509 certificate).
- A custom header should be sent with every API call to Anaplan to uniquely identify the partner technology and its version, in the format "{Partner Prod name} {version}".

Behavior
The partner technology or application must allow users to log in to Anaplan with credentials and present a list of Export or Import actions for the user to select from:
- Get the workspaces that the user has access to (present the workspace name, not the ID).
- Get the models that the user has access to (present the model name, not the ID).
- Workspace and model are used in the URL for other endpoints.

Export and Import actions
Based on the workspace and model selected, present the Export/Import actions, by name, to the user for selection. This list of actions will match what is presented in the Anaplan UI. Each action is associated with a module or list in Anaplan. Execute the Export/Import action by posting a task against the action; this runs the action.

Export action: getting the file
Assuming that the task succeeded, pull down the file (in some cases, in chunks). If the file is in chunks, partner code will need to concatenate the chunks together.
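As a hedged illustration of this flow (the IDs in braces are placeholders, and the endpoint paths should be verified against the current Anaplan API reference), the run-export-then-download sequence might look like this with cURL:

# Run the export action (the response includes a taskId to poll)
curl -X POST "https://api.anaplan.com/2/0/workspaces/{workspaceId}/models/{modelId}/exports/{exportId}/tasks" -H "Authorization: AnaplanAuthToken {token}" -H "Content-Type: application/json" -d '{"localeName": "en_US"}'

# Poll the task until its state is COMPLETE
curl "https://api.anaplan.com/2/0/workspaces/{workspaceId}/models/{modelId}/exports/{exportId}/tasks/{taskId}" -H "Authorization: AnaplanAuthToken {token}"

# List the chunks of the resulting file, then download and concatenate them in order
curl "https://api.anaplan.com/2/0/workspaces/{workspaceId}/models/{modelId}/files/{exportId}/chunks" -H "Authorization: AnaplanAuthToken {token}"
curl "https://api.anaplan.com/2/0/workspaces/{workspaceId}/models/{modelId}/files/{exportId}/chunks/0" -H "Authorization: AnaplanAuthToken {token}" >> export.csv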
Export action: parse the exported file
The file should be in .csv or .txt format. Invoke the Anaplan Export API endpoint "GET https://api.anaplan.com/.../exports/<export id>" to get the fields for the Export action.

Export action: analyze exported data
Most users will want to analyze multiple modules and lists. Each export is for one module or list, so users will need to be able to execute more than one export in order to populate their partner technology environment.

Export action: multiple exports
In Anaplan, a Process is a wrapper that executes multiple actions in sequential order. It is not possible to pull the export files using a Process, so individual exports are required. The partner technology must allow more than one export to be selected by the user. The calls will need to be made independently, as each export needs its own task ID (assuming the exports run on different modules or lists).

Export action: get the files from multiple exports
This is the same as pulling files from a single export call, except that the code needs to ensure that it is pulling the correct file after each export is called. Files for all defined exports should already exist in the system, so requesting them will not fail; however, requesting them without executing a new export task, or before the export task completes successfully, can lead to downloading outdated information. If tasks are created against a single model in parallel, the actions are queued and run in sequence. Check that each task completes successfully before pulling the related file.

Import action: uploading data
The technology partner will split the data to be uploaded into chunks of a certain size; the Anaplan APIs support upload chunk sizes from 1 MB to 50 MB. These chunks are uploaded to Anaplan in sequential order. Once all chunks are uploaded, the Import action is triggered by a separate REST API call. (A sketch of this sequence appears after the definitions below.)

Error handling
The Anaplan API is REST, so expect standard HTTP error codes for API failures. Import action failures are found by doing a GET on the task endpoint. The JSON response will have a summary and, for error conditions, a dump file that can be pulled to get more details. The partner technology or application will need to fetch the dump file via a REST API call, save the file, and then process it. Export dump files are unusual; they are more common for imports. Ensuring that the task completes successfully before retrieving the file will avoid receiving outdated information from Anaplan. If a task fails, report the errors back to the user. Any automatic restarts should be very limited in scope and user configurable, to prevent infinite loops and performance degradation of the model.

Labeling
Labels should follow Anaplan naming conventions: Export, Workspace, Model, File. For example, executing an Export action should be called "Export", not "Read".

Definitions

Workspace
Each company (or autonomous workgroup) has its own workspace. A workspace has its own set of users and may contain any number of models.

Model
A structure where a user can conduct planning. It contains all the objects needed for planning, such as modules and lists, as well as the data values.

Module
A component of an Anaplan model, built up using line items, timescales, list dimensions, and pages. A module contains the metrics for planning.

Lists
Groups of similar items, such as people, products, and regions. They can be combined into modules.
Actions
Operations defined by users to execute certain functions, such as imports, exports, or processes. Actions must be defined in Anaplan before they can be called via the API.

Process
Groups actions and executes them in sequential order.

Data Source Definition
The configuration of an action that details how the data is handled.

Task
The job that executes actions; it contains metadata about the job itself.
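Referring back to the "Import action: uploading data" section above, here is a hedged cURL sketch of a chunked upload followed by the import trigger (the IDs are placeholders, and the chunk-count and completion semantics should be checked against the current API reference):

# Declare the number of chunks for the target file
curl -X POST "https://api.anaplan.com/2/0/workspaces/{workspaceId}/models/{modelId}/files/{fileId}" -H "Authorization: AnaplanAuthToken {token}" -H "Content-Type: application/json" -d '{"chunkCount": 2}'

# Upload each chunk in order (0, 1, ...)
curl -X PUT "https://api.anaplan.com/2/0/workspaces/{workspaceId}/models/{modelId}/files/{fileId}/chunks/0" -H "Authorization: AnaplanAuthToken {token}" -H "Content-Type: application/octet-stream" --data-binary @part0.csv

# Trigger the Import action once all chunks are uploaded, then poll its task for status and any dump file
curl -X POST "https://api.anaplan.com/2/0/workspaces/{workspaceId}/models/{modelId}/imports/{importId}/tasks" -H "Authorization: AnaplanAuthToken {token}" -H "Content-Type: application/json" -d '{"localeName": "en_US"}'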
Note: This article is meant to be a guide on converting an existing Anaplan security certificate to PEM format for the purpose of testing its functionality via cURL commands. Please work with your developers on any more in-depth application of this process. The current production API version is v1.3.

Using a certificate to authenticate eliminates the need to update your script when you have to change your Anaplan password. To use a certificate for authentication with the API, it first has to be converted into a Base64-encoded string recognizable by Anaplan. Information on how to obtain a certificate can be found in Anapedia. This article assumes that you already have a valid certificate tied to your user name.

Steps:

1. To convert your Anaplan certificate for use with the API, you will first need OpenSSL (https://www.openssl.org/). Once you have that, you will need to convert the certificate to PEM format. The PEM format uses the header and footer lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".

2. If your certificate is not in PEM format, you can convert it using the following OpenSSL command, where "certificate-(certnumber).cer" is the name of the source certificate and "certtest.pem" is the name of the target PEM certificate:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

View the PEM file in a text editor. It should be a Base64 string starting with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----".

3. View the PEM file to find the CN (Common Name) using the following command:

openssl x509 -text -in certtest.pem

It should look something like "Subject: CN=(Anaplan login email)". Copy the Anaplan login email.

4. Use a Base64 encoder (e.g., https://www.base64encode.org/) to encode the CN and the PEM string, separated by a colon. For example, paste this in:

(Anaplan login email):-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----

5. You now have the encoded string necessary to authenticate API calls. For example, using cURL to GET a list of the Anaplan workspaces for the user that the certificate belongs to:

curl -H "Authorization: AnaplanCertificate (encoded string)" https://api.anaplan.com/1/3/workspaces
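If you would rather build the string on the command line than in a web encoder, a minimal sketch (assuming GNU coreutils; on macOS, drop the -w 0 flag, as base64 there does not wrap by default) is:

echo -n "user@example.com:$(cat certtest.pem)" | base64 -w 0

The -n keeps echo from appending a trailing newline, and the command substitution splices the PEM contents in after the colon, matching the format described in step 4. Note that this preserves the PEM's internal line breaks; if your paste into a web encoder would have been a single line instead, strip them first (e.g., with tr -d '\n').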
Overview
When changes occur to the primary model that need to be copied to the other models, careful coordination is necessary. There are several time-saving techniques that can make model changes across distributed models simple and quick. The right one depends on the complexity of the change, but generally changes are merely to fix an issue or to add very small things, such as views or reports. Some of the model change techniques are:

Module update via export/import
- The primary module is updated
- Export the module blueprint to CSV format
- Import the new line items into the receiving module blueprint
- Import the new formulas/dimensionality into the receiving module

Model blueprint update
Model blueprints can also be updated on a batch basis where required.

Simple copy and paste
Anaplan supports full copy and paste from other applications where minor changes to model structure are needed.

List/dimension additions
You can export new lists or dimensions to a CSV file from one model to another, or you can carry out a direct API model-to-model import to add new lists to multiple models.

Changes to data or metadata happen in a different way. Item changes within existing lists or hierarchies occur via an import, which may take place in a specific model or models, or ideally within a master data hub. It is a best practice to use an Anaplan model as a master data hub, which stores the common lists and hierarchies and is the unique point of maintenance. Model builders then implement automated data imports from the master data hub to every single model, including primary models and satellite models. It is important to carefully consider the business processes and rules that surround changes to the primary model, the coordination of the satellite models, and clear governance.

ALM application: when changes occur
We highly recommend that clients utilize ALM if metadata changes, such as to any dimension, may be required at any time during implementation or even after the deployment phase of Anaplan. ALM allows clients to add or remove metadata from models, and to test the effects, in a safe environment without running the risk of losing data or altering functionality in a live production model.
This is step four of the model design process. Next, your focus shifts to the inputs available. Remember that sometimes a dashboard is used to add information. Using the information gathered in steps 1 through 3:
- Identify the systems that will supply the data
- Identify the lists and hierarchies, especially the hierarchies needed to parse out information for the needed dashboards/exports
- Determine which data hub types are needed: master data, transactional

Why do this step?
During this step, you should be thinking about the data needed to get to your defined output modules. Not all of the data in the source systems or in lists may be needed. In addition, some hierarchies needed for the output modules may not exist yet and may need to be created.

Results of this step:
- Lists needed in the model
- Hierarchies needed in the model
- Data and where it is coming from
"Back to the Future"

Imagine this scenario: You are in the middle of making changes in your development model and have been doing so for the last few weeks. The changes are not complete and are not ready to synchronize. However, you just received a request for an urgent fix from the user community that is critical for the forthcoming monthly submission. What do you do?

What you don't want to do is take the model out of deployed mode! You also don't want to lose all the development work you have been doing. Don't worry: following the two-stage procedure below will ensure you can apply the hotfix quickly and keep your development work.

Stage 1: Roll the development model back to a version that doesn't contain any changes (i.e., is the same as production) and apply the hotfix to that version.
1. Add a new revision tag to the development model as a temporary placeholder. (Note the history ID of the last structural change; you'll need it later.)
2. On the development model, use History to restore to a point where development and production were identical (before any changes were made in development).
3. Apply the hotfix.
4. Save a new revision of the development model.
5. Sync the development model with the production model. Production now has its hotfix.

Stage 2: Restore the changes to development and apply the hotfix there too.
1. On the development model, use the history ID from Stage 1, step 1 to restore to the version containing all of the development work (minus the hotfix).
2. Reapply the hotfix to this version of development.
3. Create a new revision of the development model. Development is now back to where it was, with the hotfix applied.

When your development work is complete, you can promote the new version to production using ALM best practice. The procedure is documented here: https://community.anaplan.com/t5/Anapedia-Model-Building/Fixing-Production-Issues/ta-p/4839
Imagine the following scenario: You need to make regular structural changes to a deployed model (for example, weekly changes to the switchover date, or changing the current week). You can make these changes by setting revision tags in the development model. However, you also have a development cycle that spans the structural changes. What do you do?

What you don't want to do is take the model out of deployed mode. You also don't want to lose all the development work you have been doing or synchronize partially developed changes. Don't worry: following the procedure below will ensure you can manage both. It's about planning ahead.

Before starting development activities:
1. Make the relevant structural change and set the revision tag.
2. Create the next revision tag for the next structural change.
3. Repeat for as many revision tags as necessary. Give yourself enough breathing space to cover the normal development activities, and probably allow for a couple more just in case.

Now start developing:
1. When needed, you can synchronize to the relevant revision tag without promoting the partial development changes.
2. When the development activities are ready, ensure that the correct structural setting is made (e.g., the correct switchover period), create the revision tag, and synchronize the model.
3. Repeat the steps above to set up the next "batch" of revision tags to cover the development window.
Master data hubs
Master data hubs are used within the Anaplan platform to house an organization's data in a single model. This hub imports data from the corporation's data warehouse. If no single source such as a data warehouse is available, the master data hub collects data from the individual source systems instead. Once all data is consolidated into a single master data hub, it can be distributed to multiple models throughout an organization's workspace.

Architecture best practices
One or more Anaplan models may make up the data hub. It is good practice to separate the master data (hierarchies, lists, and properties) from the transactional data. The business Anaplan applications are synchronized from these data hub models using Anaplan's native model-to-model internal imports. As a best practice, implement incremental synchronization wherever possible, which synchronizes only the data in the application that has changed since the last sync from the data hub; this usually provides very fast synchronization.

Another best practice when building a master data hub is to import a list with properties into a module rather than directly into a list. Using this method, line items are created to correspond with the properties and are imported using the text data type. This imports all of the data without errors or warnings, and allows for very smart dashboards, made of sorts and filters, to highlight integration issues (see the sketch at the end of this article). Once imported, the data in the master data hub module can then be imported into a list in the required model.

Data hub best practices
The following are best practices for establishing data architecture:
- Rationalize the metadata. Balanced hierarchies (not ragged) ease reporting and security settings.
- Be driver-based. Identify your metrics and KPIs and what drives them. Do not try to reconcile disconnected targets with bottom-up plans entered at line item level. Example: use cost per trip and number of trips for travel expenses, as opposed to inputting every line of travel expense.
- Simplify the process. Reduce the number of approval levels (threshold-based), implement rolling forecasts, and report within the planning tool, keeping immediacy where needed. Think outcome and options, not input. Transform your existing process; do not re-implement existing Excel®-based processes in Anaplan.
- Granularity. Aggregate transactions to SKU level or customer ID. Plan at a higher level and cascade down. Plan the number of to-be-hired (TBH) employees by role for TBH headcount expenses, as opposed to inputting every TBH employee. Sales: plan at sub-region level and cascade to rep level. Plan at profit center level and allocate at cost center level based on drivers.

The Anaplan Way
Always follow the phases of The Anaplan Way when establishing a master data hub, even in a federated approach: Pre-Release, Foundation, Implementation, Testing, and Deployment.
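As a minimal sketch of the validation idea described above (the module and line item names are illustrative): load each list property into a text-formatted line item in a staging module dimensioned by the incoming list, then add checks that dashboards can sort and filter on.

Parent (text): imported as text from the source file
Parent Item = FINDITEM(Org L2, Parent (text))
Parent OK? = ISNOTBLANK(Parent Item)

Rows where Parent OK? is FALSE point directly at integration issues (codes missing from the hierarchy, typos in the feed) without the import itself ever failing.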
The components involved in a Center of Excellence combine to promote self-sufficiency within a business. This may start as early as a business' first release and can continue on throughout each new release. There are eight key components that each business should expect to benefit from with the establishment of a Center of Excellence:

1. Skills and expertise
The Center of Excellence provides an entire organization with the skills and expertise needed to develop the Anaplan platform within the business and provide training to the team. It creates functional subject matter experts (SMEs); provides solution design, architecting, and technical model building skills; and offers project management capabilities. Furthermore, it provides ongoing training for the organization, including instructor-led (classroom) training and on-demand eLearning courses.

2. An implementation approach
The Center of Excellence creates a known and understood approach to delivering and evolving solutions within Anaplan for an organization. Utilizing the benefits of the Anaplan Way Agile methodology, a Center of Excellence encourages collaboration between all parties involved with the Anaplan platform, successful iterations of new and updated releases, and accurate visualization of each project and release.

3. Direction and governance
The Center of Excellence creates a governance framework that is used to steer and prioritize the Anaplan roadmap within an organization and drive the ROI of each release. This includes identifying the organization's steering committee, executive sponsors, and the sign-off/approval approach and process. The Center of Excellence may also act as the project management office (PMO) attached to each release.

4. Data governance and integration
Establishing a Center of Excellence helps to utilize the master data hub concept within an organization. The Center of Excellence will generally be responsible for the master data hub, which will feed into most, if not all, models within the organization. This creates a single point of data reference for all departments and regions to refer to. The Center of Excellence also ensures adherence to the conventions, policies, and corporate definitions used with the Anaplan platform.

5. Access to knowledge and best practices
The Center of Excellence is responsible for providing a knowledge base and internal community to support an organization's efforts in Anaplan. These internal resources should provide functional use case and technical model building best practices, as well as shared practical knowledge surrounding the platform and the organization's specific use of it.

6. An Anaplan "savvy"
The Center of Excellence constantly maintains an awareness of the "power of the platform." This awareness includes what the platform is currently doing for the organization and what it could be used for in the future, with platform updates and improvements in mind. Additionally, the Center of Excellence maintains a practical understanding of the Anaplan App Hub and how apps can be leveraged for rapid prototyping and deployment of releases.

7. Access to support
The Center of Excellence acts as a 24/7 customer support desk for the organization and offers customized support when necessary.

8. Change management
The Center of Excellence provides a support system to handle all change management surrounding the Anaplan platform.
This includes clear and appropriate communications to drive and support user adoption, and alignment of upstream and downstream business processes.
The Anaplan platform can be configured and deployed in a variety of ways. Two configurations that should be examined prior to each organization's implementation of Anaplan are the Central Governance-Central Ownership configuration and the Central Governance-Federated Ownership configuration.

Central Governance-Central Ownership configuration
This configuration focuses on using Agile methodology to develop and deploy the Anaplan platform within an organization. Development centers around a central delivery team that is responsible for maintaining a master data hub, as well as all models desired within the organization, such as sales forecasting, T&Q planning, etc.

Central delivery team
In this configuration, the central delivery team is also responsible for many other steps and requirements, or business user inputs, which are carried out in Anaplan and delivered to the rest of the organization. These include:
- Building the central model
- Communicating release expectations throughout development
- Creating and managing hierarchies in data
- Data loads (data imports and inputs)
- Defect and bug fixes in all models
- Solution enhancements
- New use case project development

Agile methodology—The Anaplan Way
As previously mentioned, this configuration focuses on releasing, developing, and deploying new and improved releases using the Agile methodology. This strategy begins with the sprint planning step and moves through to the final deployment step. Once a project reaches deployment, the process begins again for either the next release of the project or the first release of a new project. Following this methodology increases stakeholder engagement in releases, promotes project transparency, and shows project results in shorter timeframes.

Central Governance-Federated Ownership configuration
This configuration depends on a central delivery team to first produce a master data hub and/or master model, and then allows the individual departments within an organization to develop and deploy their own applications in Anaplan. These releases are small subsets of the master model that allow departments to perform "what-if" modeling and control their own models or the independent applications needed for specific local business needs.

Central delivery team
In this configuration, the central delivery team is only responsible for the following:
- Creating and managing hierarchies in data
- Data loads (data imports and inputs) and defect fixes
- Capturing and sharing modeling best practices with the rest of the teams

Federated model ownership
In this model, each department and/or region is responsible for its own development. This includes:
- Small subsets of the master model for flexible "what-if" modeling
- Custom or in-depth analysis/metrics
- Independent use case models
- Loose or no integration with the master model
- One-way, on-demand data integration
- Optional data hub integration

Pros and cons
Both of these configurations carry significant pros and cons:

Central Governance-Central Ownership pros
- Modeling practices: modeling practices within an organization become standardized for all new and updated releases.
- Request process: the request process for new projects becomes standardized. One single priority list of enhancement requests is maintained and openly communicated.
- Clear communication: communication of platform releases, new build releases, downtime, and more comes from one source and is presented in a clear and consistent manner.
- Workspace and licenses: this configuration requires the fewest workspaces, which saves on data used in Anaplan, as well as the fewest workspace admin licenses.

Central Governance-Central Ownership cons
- Request queue: all build requests, including new use cases, enhancements, and defect fixes, go into a queue to be prioritized by the central delivery team.
- Time commitment: this configuration requires a significant weekly time commitment from the central delivery team to prioritize all platform requirements.

Central Governance-Federated Ownership pros
- Business user development: this configuration allows for true business development capabilities without compromising the integrity of the core solution developed by the central delivery team.
- Anaplan releases: maximizes the return on investment and reduces shadow IT processes by enabling the quick spread of the Anaplan platform across an organization, as multiple parties develop simultaneously.
- Request queue: reduces or completely eliminates queue wait times for new use cases and/or functionality.
- Speed of implementation: having the central team take care of all data integration work via the data hub speeds up application design, by enabling federated teams to take their actuals and master data from an Anaplan data hub model, as opposed to having to build their own data integration with source systems.

Central Governance-Federated Ownership cons
- Workspace and licenses: more workspaces and workspace admin licenses may be necessary.
- Best practices: in this configuration it is challenging to ensure that model building architecture procedures and best practices are followed in each model. It requires the central Center of Excellence team to organize recurring meetings with each application builder to share experience and best practices.
- Build delays: business users without model building skills may have a difficult time building and maintaining their requirements.
Dynamic Cell Access (DCA) controls the access levels for line items within modules. It is simple to implement and provides modelers with a flexible way of controlling user inputs. Here are a few tips and tricks to help you implement DCA effectively.

Access control modules
Any line item can be controlled by any other applicable Boolean line item. To avoid confusion over which line item(s) to use, it is recommended that you add a separate functional area and create specific modules to hold the driver line items. These modules should be named appropriately (e.g., Access – Customers > Products, or Access – Time). The advantage of this approach is that the access driver can be used for multiple line items or modules, and the calculation logic lives in one place. In most cases you will probably want both read and write access, so within each module it is recommended that you add two line items (Write? and Read?). If the logic is being set for Write?, then set the formula for the Read? line item to NOT Write? (or vice versa). It may be necessary to add multiple line items for different target line items, but start with this as the default.

Start simple
You may not need to create a module that mirrors the dimensionality of the line item you wish to control. For example, if you have a line item dimensioned by customer, product, and time, and you wish to make actual months read-only, you can use an access module dimensioned by time alone. Think about which dimension the control needs to apply to and create an access module accordingly.

What settings do I need?
There are three states of access that can be applied: READ, WRITE, and INVISIBLE (hidden). There are two blueprint controls (Read Access Driver and Write Access Driver), and a driver can have two states (TRUE or FALSE). The combination of these determines which state is applied to the target line item.

Only the read access driver is set: driver status TRUE gives READ; driver status FALSE gives INVISIBLE.

Only the write access driver is set: driver status TRUE gives WRITE; driver status FALSE gives INVISIBLE.

Both read and write access drivers are set: the write access driver takes precedence, with write access granted if its status is TRUE. If the write access driver is FALSE, the cell access reverts to the read access driver status (READ if TRUE, INVISIBLE if FALSE).

The settings can also be expressed in the following matrix (read driver status down the side, write driver status across the top):

Read \ Write | TRUE  | FALSE     | NOT SET
TRUE         | Write | Read      | Read
FALSE        | Write | Invisible | Invisible
NOT SET      | Write | Invisible | Write

Note: if you want both read and write access, it is necessary to set both access drivers within the module blueprint.

Totals
Think about how you want the totals to appear. When you create a Boolean line item, the default summary option is None. This means that if you use that line item as an access driver, any totals within the target will be invisible. In most cases you will probably want the totals to be read-only, so setting the access driver line item's summary to Any will provide this. If you are using the Invisible setting to "hide" certain items and you do not want end users to be able to compute the hidden values, then it is best to use the All summary setting for the access driver line item.
With that setting, the totals show only if all values in the list are visible; otherwise the totals are hidden from view.
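As a minimal sketch of the pattern above (the module and line item names are illustrative): an Access – Time module dimensioned by time only might hold:

Write? = START() >= CURRENTPERIODSTART()
Read? = NOT Write?

Past (actual) months evaluate Write? to FALSE and Read? to TRUE, so any target line item pointing its Write Access Driver and Read Access Driver at these two line items becomes read-only for actuals and writable for current and future periods. Because CURRENTPERIODSTART() tracks the model's current period, the access rolls forward automatically at each period change.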
If you have a multi-year model where the data ranges for different parts of the model vary (for example, history covering two years, current year forecast, and three planning years), then Time Ranges should deliver significant gains in terms of model size and performance. But before you rush headlong into implementing Time Ranges across all of your models, let me share a few considerations to ensure you maximize the value of the feature and avoid any unwanted pitfalls.

Naming convention for Time Ranges
As with all Anaplan models, there is no set naming convention, but we do advocate consistency and simplicity. As with lists and modules, short names are good. I like to describe the naming convention thus: "as short as practical", meaning you need to understand what it means, but don't write an essay! We recommend the convention FYyy-FYyy, for example FY16-FY18, or FY18 for a single year. Time Ranges can span 1981 to 2079, so the "19" or "20" prefixes are not strictly necessary. Keeping the name this short has a couple of advantages:
- A clear indication of the boundaries of the Time Range
- The name of the Time Range remains visible in the module and line item blueprints
The aggregations available can differ for each Time Range and can also differ from the main model calendar. If you take advantage of this and have aggregations that differ from the model calendar, add a suffix to the name. For example:
- FY16-FY19 Q (to signify Quarter totals)
- FY16-FY19 QHY (Quarter and Half Year totals)
- FY16-FY19 HY (Half Year totals only)

Time Ranges are static
Time Ranges can span from 1981 to 2079. As a result, they can exist entirely outside, within, or overlapping the model calendar. This means there will likely be some manual maintenance to perform when the year changes. Let's review a simple example. Assume the model calendar is FY18 with two previous years and two future years, so the model calendar spans FY16-FY20. We have set up Time Ranges for historic data (FY16-FY17) and plan data (FY19-FY20), and we also have modules that use the model calendar to pull the history, forecast, and plan data together. At year end, when we "roll over" the model, we amend the model calendar simply by changing the current year. The history and plan Time Ranges are now out of sync with the model calendar. How you change the history Time Range depends on how much historic data you need or want to keep but, assuming you don't need more than two years' history, the Time Range should be renamed FY17-FY18 and its start period advanced to FY17 (from FY16). Similarly, the plan Time Range should be renamed FY20-FY21 and advanced to FY20 (from FY19). FY18 is then available to be populated with history, and FY21 is available for plan data entry.

Time Range pitfalls

Potential data loss
Time Ranges can bring massive space and calculation savings to your models, but be careful. In the example above, changing the start period of FY16-FY17 to FY17 would delete the FY16 data for all line items using FY16-FY17 as a Time Range. Before you implement a Time Range that is shorter than or lies outside the current model calendar, and especially when implementing Time Ranges for the first time, ensure that the data currently stored in the model is not needed.
If in doubt, do some or all of the following:
Export the data to a file.
Copy the existing data on the line item(s) to other line items that use the model calendar.
Back up the whole model.

Formula References

The majority of formulae will update automatically when you update a Time Range. However, if you have any hard-coded SELECT statements referencing years or months within the Time Range, you will have to amend or remove those formulae before amending the Time Range. Hard-coded SELECT statements go against best practice for exactly this reason: they cause additional maintenance. We recommend replacing the SELECT with a LOOKUP against a Time Settings module. There are other cases where formulae may need to be removed or amended before the Time Range can be adjusted; see the Anapedia documentation for more details.

When to Use the Model Calendar

This is a good question, and one that we at Anaplan pondered during the development of the feature: do Time Ranges make the model calendar redundant? I think the answer is "no," but as with so many constructs in Anaplan, the answer probably is "it depends!" For me, a big advantage of the model calendar is that it is dynamic for the current year and the years on either side. Change the current year and the model updates automatically, along with any filters and calculations you have set up to reference current periods, historic periods, future periods, and so on. (You are using a central Time Settings module, aren't you?) Time Ranges don't have that dynamism, so any change to the year must be made for each Time Range individually.

So, before implementing Time Ranges for the first time, review each module and:
Assess the scope of the calculations.
Weigh the space and calculation savings Time Ranges will give against the annual maintenance they add.
For example, if you have a two-year model with one history year (FY17) and the current year (FY18), you could set up a one-year Time Range for FY17 and another for FY18 and use these for the respective data sets. However, both Time Ranges would then need updating every year. We advocate building models logically, so it is likely that you will have groups of modules where Time Ranges fall naturally. The majority of modules should still reflect the model calendar; indeed, once Time Ranges are implemented, you may be able to reduce the scope of the model calendar itself. If a potential Time Range matches either the current or a future model calendar, leave the timescale as the default for those modules and line items; why make extra work?

SELECT Statements

As outlined above, we don't advocate hard-coded time SELECTs for the majority of time items because of their negative impact on maintenance (the exceptions being All Periods, YTD, YTG, and CurrentPeriod). When implementing Time Ranges for the first time, take the opportunity to review line item formulae that use time SELECTs; these can be replaced with LOOKUPs against a Time Settings module, as sketched below.

Application Lifecycle Management (ALM) Considerations

As with most Time settings, Time Ranges are treated as structural data. If you are using ALM, all changes must be made in the Development model and synchronised to Production. This makes it all the more important to heed the pitfalls noted above so that data is not inadvertently deleted.
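As a minimal sketch of the SELECT-to-LOOKUP replacement recommended above (the names REP01, REV01.Revenue, and 'Time Settings'.'Current Fiscal Year' are hypothetical):

Hard-coded, and must be amended before the Time Range can change:
REP01.Current Year Revenue = REV01.Revenue[SELECT: TIME.'FY18']

Dynamic, assuming a central Time Settings module holding a year-formatted Current Fiscal Year line item:
REP01.Current Year Revenue = REV01.Revenue[LOOKUP: 'Time Settings'.'Current Fiscal Year']

When the current year changes, the LOOKUP follows the Time Settings module automatically and no formula maintenance is needed.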
Best of luck! Refer to the Anapedia documentation for more detail, ask if you have any further questions, and let us and your fellow Anaplanners know of the impact Time Ranges have had on your model(s).
View full article
Reducing the number of calculations will lead to quicker calculations and improved performance. This doesn't mean combining all your calculations into fewer line items; breaking calculations into smaller parts has major benefits for performance. Learn more about this in the Formula Structure article.

How is it possible to reduce the number of calculations? Here are three easy methods:
Turn off unnecessary Summary method calculations.
Avoid formula repetition by creating modules to hold formulas that are used multiple times.
Ensure that you are not including more dimensions than necessary in your calculations.

Turn off Summary method calculations

Model builders often include summaries in a model without fully considering whether they are necessary. In many cases the summaries can be eliminated. Before we get to how to eliminate them, let's recap how the Anaplan engine calculates. In the following example we have a Sales Volume line item that varies by the following hierarchies:

Region Hierarchy: City, Country, Region, All Regions
Product Hierarchy: SKU, Product, All Products
Channel Hierarchy: Channel, All Channels

This means that from the detail values at SKU, City, and Channel level, Anaplan calculates and holds all 23 aggregate combinations as well as the detail, 24 blocks in total. With the Summary option set to Sum, when a detail item is amended, all the other aggregations in the hierarchies are recalculated. Selecting the None summary option means that no calculations happen when the detail item changes. The intermediate levels of hierarchies are often there only to ease navigation, and their roll-up calculations are not actually needed, so a number of redundant calculations may be being performed. The native summing of Anaplan is a faster option, but if not all the levels are needed, it can be better to turn off the summary calculations and use a SUM formula instead.

For example, from the structure above, let's assume we have a detailed calculation by SKU, City, and Channel (SALES06.Final Volume). Let's also assume we need a summary report by Region and Product, held in a module (REP01) with a line item (Volume) dimensioned as such. The formula

REP01.Volume = SALES06.Final Volume

is replaced with

REP01.Volume = SALES06.Final Volume[SUM:H01 SKU Details.Product, SUM:H02 City Details.Region]

The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy (a sketch of the mapping line items this formula relies on follows at the end of this section).

How do you know if you need the summary calculations? Look for the following:

Is the calculation or module user-facing?
If it is presented on a dashboard, then the summaries are likely to be needed. However, look at the dashboard views used. A summary module is often included on a dashboard with a detail module below it; the hierarchy sub-totals are shown in the summary module, so the detail module doesn't need all of the summary calculations.

Detail to detail
Is the line item referenced by another detailed calculation line item? This is very common, and in that case the summary option is usually not required. Check the Referenced by column to see whether anything references the line item.

Calculation and staging modules
If you have used the DISCO module design, you should have calculation/staging modules. These are often not user-facing and contain many detailed calculations. They also often have large cell counts, which will be reduced if the summary options are turned off.
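As a sketch of the mapping line items that the SUM formula above relies on: the modules H01 SKU Details and H02 City Details come from the example, but how the mappings are built is an assumption here, using PARENT(ITEM()) on a composite hierarchy in which SKU rolls up to Product and City rolls up through Country to Region:

H01 SKU Details.Product = PARENT(ITEM(SKU)) (a Product-formatted line item, dimensioned by SKU)
H02 City Details.Region = PARENT(PARENT(ITEM(City))) (a Region-formatted line item, dimensioned by City)

With these in place, REP01.Volume aggregates straight from the detail to the two levels actually required, and every other summary level can be switched off.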
Can you have different summaries for time and lists?

Yes. The default is for Time Summaries to mirror the list summaries, but you may only need the totals for the hierarchies, or only for the timescales. Again, look at the downstream formulas. The best-practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a Calculation module (from the DISCO design principles).

Avoid Formula Repetition

An optimal model performs a specific calculation only once; repeating the same formula expression in several places means the calculation is performed several times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (DISCO) and hold all the relevant calculations in one logical place. Then, when you need the calculation, you will know where to find it, rather than adding another line item in several modules to perform the same calculation.

If a formula construct always starts with the same condition evaluation, evaluate it once and refer to the result. This is especially true where the condition refers to a single dimension but sits in a line item that spans multiple dimension intersections. In the example reviewed, START() <= CURRENTPERIODSTART() appeared five times, and START() > CURRENTPERIODSTART() appeared twice. To correct this, include these time-related formulas in their own module and refer to them as needed from your other modules. Remember: calculate once, reference many times!

Taking a closer look at the example, not only is the condition evaluation repeated, but the line items also carry more dimensionality than required. The calculation only changes by day, yet the Applies To also contains Organization, Hour Scale, and Call Center Type, so the same daily evaluation is recalculated for every combination of those dimensions, and it is repeated in many other line items. Sometimes model builders even use the same expression multiple times within a single line item.

To remove this overcalculation, reference the expression from a more appropriate module; for example, a Days of Week module dimensioned solely by day. The two different formula expressions are then held in two line items and are only calculated by day; the irrelevant dimensions are not calculated at all (see the sketch after this section). Substitute the original expressions with references to these line items. In this example, making these changes to the remaining lines in the module reduces the calculation cell count from 1.5 million to 1,500.

Check the Applies To of your formulas, and if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality.
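A minimal sketch of the pattern; the Days of Week module name comes from the example, while the line item names and the consuming Actuals and Forecast modules are hypothetical:

In Days of Week (dimensioned by Time at the Day level only):
Current or Past? = START() <= CURRENTPERIODSTART()
Future? = START() > CURRENTPERIODSTART()

In the consuming module (dimensioned by Day, Organization, Hour Scale, and Call Center Type):
Volume = IF 'Days of Week'.'Current or Past?' THEN Actuals.Volume ELSE Forecast.Volume

The Boolean conditions are now evaluated once per day, and the larger line item simply reads the result for each intersection.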
View full article