General recommendations
First, the bigger your model is, the more performance issues you are likely to experience. A best practice, therefore, is to use all the tools and features available to make the model as small and dense as possible. This includes:
- Line item checks: summary calculations, dimensionality used
- Line item duplication
- Granularity of hierarchies
- Use of subsets and line item subsets
- Numbered lists
More information on eliminating sparsity can be found in Learning Center courses 309 and 310.

Customer requirements
General recommendations also include, whenever possible, challenging your customer's business requirements when they call for large lists (>1M items), long data history, and a high number of dimensions (>5) used at the same time on a line item.

Other practices
Once these general sparsity recommendations have been applied, you can improve performance further in several areas. The articles below expand on each subject:
Imports and exports and their effects on model performance
- Rule 1: Carefully decide if you let end users import (and export) during business hours
- Rule 2: Mapping objective = zero errors or warnings
- Rule 3: Watch the formulas recalculated during the import
- Rule 4: Import list properties
- Rule 5: Get your data hub
- Rule 6: Incremental import/export
Dashboard settings that can help improve model performance
- Rule 1: Large lists = filter on a Boolean, not on text
- Rule 2: Use the default sort
- Rule 3: Reduce the number of dashboard components
- Rule 4: Watch large page drop-downs
Formulas and their effect on model performance
Model load, model save, model rollback and their effect on model performance
User roles and their effect on model performance
Overview: Imports are blocking operations. The model is locked for the duration of the import; concurrent imports run by end users queue one after the other, and each one blocks the model for everyone else.

Rule 1: Carefully decide if you let end users import (and export) during business hours
Imports executed by end users should be carefully considered and, if possible, executed once or twice a day. Customers readily accept a model freeze at scheduled hours for a predefined time, even if it takes 10+ minutes, but are frustrated when these imports run randomly during business hours. Your first optimization is to adjust the process: have an admin run these imports at scheduled times, and let the user base know the schedule.

Rule 2: Mapping objective = zero errors or warnings
Make sure your import completes with no errors or warnings; every error takes processing time. The time to import into a medium-to-large list (>50k items) is significantly reduced if no errors need to be processed. Tips to reduce errors:
- Always import from a saved view, NEVER from the default view, and use the naming convention for easy maintenance.
- Hide the line items that are not needed for the import; do not bring in extra columns that are not needed.
- In the import definition, always map all displayed line items (source→target) or use the "ignore" setting; don't leave any line item unmapped.

Rule 3: Watch the formulas recalculated during the import
If your end users encounter poor performance when clicking a button that triggers an import or a process, it is likely due to the recalculation triggered by the import, especially if the action creates or moves items within a hierarchy. You will likely need help from Anaplan support (L3) to identify which formulas are triggered after the import completes, and to run a performance check on these formulas to identify which ones take the most time. Formulas that fetch many cells, such as SUM, ANY, or FINDITEM(), are the most likely causes of the performance impact. To resolve such situations, challenge whether the identified formulas need to recalculate every time a user runs the action. Often, for actions such as creations, moves, and assignments done in workforce planning or territory planning, many calculations used for reporting are triggered in real time after the hierarchy is modified by the import, and are not necessarily needed immediately by users. The recommendation is to challenge your customer and see whether these formulas could be calculated only once a day, instead of every time a user runs the action. If so, rearchitect your modules so that these heavy formulas run through a separate process executed daily by an admin, not by each end user.

Rule 4: Import list properties
Importing list properties takes more time than importing the same data as module line items. Review which lists in your model are affected by imports, and consider replacing list properties with module line items where possible. Also refer to the data hub best practices, which recommend uploading all list properties into a data hub module rather than into the list properties themselves.

Rule 5: Get your data hub
Hub and spoke: Set up a hub data model, which will feed the other production models used by stakeholders. See the white paper on how to build a data hub. Performance benefits: it prevents production models from being blocked by a large import from an external data source. But since data hub-to-production-model imports are still blocking operations, carefully filter what you import, and apply the best practice rules listed above. All import, mapping, and transformation modules required to prepare the data to be loaded into planning modules can then live in a dedicated data hub model instead of the planning model. The planning model will be smaller and will work more efficiently.
Reminder of the other benefits not linked to performance:
- Better structure, easier maintenance: a data hub keeps all the data organized in a central location.
- Better governance: whenever possible, put the data hub in a different workspace. That eases the separation of duties between production models and metadata management, at least for actual data and production lists. IT departments like the idea of owning the data hub and having no one else as an admin in that workspace.
- Lower implementation costs: a data hub is a way to reduce the implementation time of new projects. Assuming IT can load the data needed by the new project into the data hub, business users do not have to integrate with complex source systems; they integrate with the Anaplan data hub instead.

Rule 6: Incremental import/export
This can be the magic bullet in some cases. If you export on a frequent basis (daily or more) from an Anaplan model into a reporting system, write back to the source system, or simply transfer data from one Anaplan model to another, there are ways to import/export only the data that has changed since the last run. Use the concatenation + change Boolean technique explained in the data hub white paper, sketched below.
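As a rough sketch of that technique (the line item names here are illustrative, not taken from the white paper): concatenate the exported values into a text line item, compare it with the value captured at the last export, and filter the export view on the resulting Boolean.

Current Snapshot (Text) = TEXT(Units) & "|" & TEXT(Price) & "|" & Status
Changed? (Boolean) = Current Snapshot <> Last Exported Snapshot

After each export, run an internal import that copies Current Snapshot into Last Exported Snapshot, so only rows that change afterwards are flagged. The export action then uses a saved view filtered on Changed? = TRUE, so only changed rows leave the model.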
This article outlines the requirements for Anaplan Technology Partners who want to integrate with Anaplan using Anaplan v2.0 REST APIs.

Use Cases
The following use cases are covered:
- Allow users to run integrations from the partner technology or application, with or without an external integration tool, to move data to and from Anaplan.
- Provide the ability to import data into Anaplan for planning and dashboarding, and extract the planning results from Anaplan into the partner technology or application.
- Provide the ability to extract data from Anaplan modules and lists, or import data into Anaplan modules and lists.
- Provide the ability to extract data from Anaplan into the partner technology or application to run specific planning scenarios or calculations.

Requirements
To integrate with Anaplan:
- Users must have a license for the partner technology and credentials to log in to Anaplan. Basic authentication and certificate authentication methods are supported.
- Users must have Import and/or Export actions configured in Anaplan, or have the ability to create these actions in Anaplan.

Assumptions
- Technology partners are familiar with Anaplan modelling concepts and Anaplan APIs. Information can be found on anaplan.com, help.anaplan.com, Anaplan Academy, and the Anaplan API reference material.
- Anaplan supports the deletion of items from very long lists using the Delete from List using Selection action, which can be invoked via a REST API.
- Import chunks are between 1 MB and 50 MB in size. Export chunks are 10 MB in size.
- Anaplan data exports and imports run in batch mode.
- All Anaplan exports are generated as .csv or .txt (tab-delimited) files. Anaplan imports similarly accept .csv or .txt formatted data.
- All data movements follow the format and rules defined in the Anaplan actions.

Constraints
Users can create an Anaplan Process to chain multiple Import/Export actions together and execute them in sequence. However, some functionality is not supported; for example, files will not be output to the UI for Export actions.

Not in Scope
- Process action support is not required.
- OAuth.
- File types other than .csv and tab-delimited files.
- Changes to the Anaplan UI, login mechanism, or Anaplan APIs.

Guidelines

Authentication
- Support Basic Authentication (user name and password).
- Support Certificate Authentication (uploading an x509 cert).
- A custom header must be sent with every API call to Anaplan to uniquely identify the partner technology and its version, in the format "{Partner Prod name} {version}".

Behavior
The partner technology or application must allow users to log in to Anaplan with their credentials and present a list of Export or Import actions for the user to select from:
- Get the workspaces that the user has access to (present the workspace name, not the ID).
- Get the models that the user has access to (present the model name, not the ID).
Workspace and model are used in the URL for other endpoints.

Export and Import actions
Based on the workspace and model selected, present the Export/Import actions, by name, to the user for selection. This list of actions matches what is presented in the Anaplan UI. Each action is associated with a module or list in Anaplan. Execute the Export/Import action by posting a task against the action; this runs the action.

Export action: getting the file
Assuming that the task succeeded, pull down the file (in some cases, in chunks). If the file is in chunks, partner code will need to concatenate the chunks together.
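To make the flow concrete, here is a minimal cURL sketch of the export sequence and the matching import upload. The exact v2.0 path segments, the IDs, and the credentials are assumptions for illustration; verify every endpoint against the Anaplan API reference before use.

# Assumed v2.0 base path; workspace and model IDs come from the discovery calls above.
# Every call should also carry the partner-identifying custom header described under Guidelines.
BASE="https://api.anaplan.com/2/0/workspaces/$WS_ID/models/$MODEL_ID"
AUTH="Authorization: Basic $(printf '%s' 'user@example.com:password' | base64)"

# Run the Export action by posting a task against it
curl -X POST -H "$AUTH" -H "Content-Type: application/json" \
     -d '{"localeName":"en_US"}' "$BASE/exports/$EXPORT_ID/tasks"

# Poll the task until it reports successful completion
curl -H "$AUTH" "$BASE/exports/$EXPORT_ID/tasks/$TASK_ID"

# List the chunks, then download each one in order and concatenate locally
curl -H "$AUTH" "$BASE/files/$EXPORT_ID/chunks"
curl -H "$AUTH" "$BASE/files/$EXPORT_ID/chunks/0" >> export.csv

# Import side: upload chunks sequentially, then trigger the Import action
curl -X PUT -H "$AUTH" -H "Content-Type: application/octet-stream" \
     --data-binary @part0.csv "$BASE/files/$FILE_ID/chunks/0"
curl -X POST -H "$AUTH" -H "Content-Type: application/json" \
     -d '{"localeName":"en_US"}' "$BASE/imports/$IMPORT_ID/tasks"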
Export action: parse the exported file
The file should be in .csv or .txt format. Invoke the Anaplan Export API endpoint "GET https://api.anaplan.com/.../exports/<export id>" to get the fields for the Export action.

Export action: analyze exported data
Most users will want to analyze multiple modules and lists. Each export is for one module or list, so users will need to be able to execute more than one export in order to populate their partner technology environment.

Export action: multiple exports
In Anaplan, a Process is a wrapper for multiple actions that are executed in sequential order. It is not possible to pull the export files using a Process, so individual exports are required. The partner technology must allow the user to select more than one export. The calls must be made independently, as each export needs its own task ID. (This assumes that the exports run on different modules or lists.)

Export action: get the files from multiple exports
This is the same as pulling files from a single export call, except that the code needs to ensure that it is pulling the correct file after the export is called. Files for all defined exports should already exist in the system, so calling them will not result in a failure. However, calling them without executing a new export task, or before the export task completes successfully, can lead to downloading outdated information. If tasks are created against a single model in parallel, the actions are queued and run in sequence. Check that the task completes successfully before pulling the related file.

Import action: uploading data
The technology partner splits the data to be uploaded into chunks of a certain size; Anaplan APIs support upload chunk sizes from 1 MB to 50 MB. The chunks are uploaded to Anaplan in sequential order. Once all chunks are uploaded, the Import action is triggered by a separate REST API call.

Error handling
The Anaplan API is REST, so expect standard HTTP error codes for API failures. Import action failures are found by doing a GET on the task endpoint. The JSON response will have a summary and, for error conditions, a dump file that can be pulled to get more details. The partner technology or application will need to fetch the dump file via a REST API call, save it, and then process it. Dump files are unusual for exports; they are more common for imports. Ensuring that the task completes successfully before retrieving the file avoids receiving outdated information from Anaplan. If a task fails, report the errors back to the user. Any automatic restarts should be very limited in scope and user-configurable, to prevent infinite loops and performance degradation of the model.

Labeling
Labels should follow Anaplan naming conventions: Export, Workspace, Model, File. For example, executing an Export action should be called 'Export', not 'Read'.

Definitions

Workspace
Each company (or autonomous workgroup) has its own workspace. A workspace has its own set of users and may contain any number of models.

Model
A structure where a user can conduct planning. It contains all the objects needed for planning, such as modules and lists, as well as the data values.

Module
A component of an Anaplan model, built up using line items, timescales, list dimensions, and pages. A module contains the metrics for planning.

Lists
Groups of similar items, such as people, products, and regions. They can be combined into modules.
Actions
Operations defined by users to execute certain functions, such as imports, exports, or processes. Actions must be defined in Anaplan before they can be called via the API.

Process
Groups actions and executes them in sequential order.

Data Source Definition
The configuration of an action that details how the data is handled.

Task
The job that executes actions; it contains metadata regarding the job itself.
Note: This article is meant to be a guide on converting an existing Anaplan security certificate to PEM format for the purpose of testing its functionality via cURL commands. Please work with your developers on any more in-depth application of this process. The current production API version is v1.3.

Using a certificate to authenticate eliminates the need to update your script when you have to change your Anaplan password. To use a certificate for authentication with the API, it first has to be converted into a Base64-encoded string recognizable by Anaplan. Information on how to obtain a certificate can be found in Anapedia. This article assumes that you already have a valid certificate tied to your user name.

Steps:
1. To convert your Anaplan certificate for use with the API, you first need OpenSSL (https://www.openssl.org/). Once you have that, you need the certificate in PEM format. The PEM format uses the header and footer lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".

2. If your certificate is not in PEM format, you can convert it using the following OpenSSL command, where "certificate-(certnumber).cer" is the name of the source certificate and "certtest.pem" is the name of the target PEM certificate:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

View the PEM file in a text editor. It should be a Base64 string starting with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----".

3. View the PEM file to find the CN (Common Name) using the following command:

openssl x509 -text -in certtest.pem

It should look something like "Subject: CN=(Anaplan login email)". Copy the Anaplan login email.

4. Use a Base64 encoder (e.g. https://www.base64encode.org/) to encode the CN and the PEM string, separated by a colon. For example, paste this in:

(Anaplan login email):-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----

5. You now have the encoded string necessary to authenticate API calls. For example, using cURL to GET a list of the Anaplan workspaces for the user that the certificate belongs to:

curl -H "Authorization: AnaplanCertificate (encoded string)" https://api.anaplan.com/1/3/workspaces
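If you prefer the command line over a web-based encoder for step 4, the same string can be produced with OpenSSL. This is a minimal sketch assuming a POSIX shell; certtest.pem is the file created in step 2, and the email address is illustrative:

CN="user@example.com"   # the Anaplan login email found in step 3
printf '%s:%s' "$CN" "$(cat certtest.pem)" | openssl base64 -A

The -A flag writes the Base64 output on a single line, ready to paste into the AnaplanCertificate authorization header shown in step 5.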
Overview
When changes occur to the primary model that need to be copied to the other models, careful coordination is necessary. There are several time-saving techniques that can make model changes across distributed models simple and quick. The right choice depends on the complexity of the change, but generally changes are small: fixing an issue or adding minor things such as views or reports. Some of the model change techniques are:

Module update via export/import
- The primary module is updated
- Export the module blueprint to CSV format
- Import the new line items into the receiving module blueprint
- Import the new formulas/dimensionality into the receiving module

Model blueprint update
Model blueprints can also be updated on a batch basis where required.

Simple copy and paste
Anaplan supports full copy and paste from other applications where minor changes to model structure are needed.

List/dimension additions
You can export new lists or dimensions to a CSV file from one model to another, or you can carry out a direct API model-to-model import to add new lists to multiple models.

Changes to data or metadata happen in a different way. Item changes within existing lists or hierarchies occur via an import, which may take place in a specific model or models, or ideally within a master data hub. It is a best practice to use an Anaplan model as a master data hub, which stores the common lists and hierarchies and is the unique point of maintenance. Model builders then implement automated data imports from the master data hub to every single model, including primary models and satellite models. It is important to carefully consider the business processes and rules that surround changes to the primary model, the coordination of the satellite models, and clear governance.

ALM application: when changes occur
We highly recommend that clients utilize ALM if metadata changes, such as changes to any dimension, may be required at any time during implementation or even after the deployment phase of Anaplan. ALM allows clients to add or remove metadata from models, and to test the effects, in a safe environment without running the risk of losing data or altering functionality in a live production model.
This is step four of the model design process. Next, your focus shifts to the inputs available. Remember that sometimes a dashboard is used to add information. Using the information gathered in steps 1 through 3:
- Identify the systems that will supply the data
- Identify the lists and hierarchies, especially the hierarchies needed to parse out information for the needed dashboards/exports
- Identify which data hub types are needed: master data, transactional

Why do this step?
During this step, you should be thinking about the data needed to arrive at your defined output modules. Not all of the data in the system or in lists may be needed. In addition, some hierarchies needed for the output modules may not exist yet and may need to be created.

Results of this step:
- Lists needed in the model
- Hierarchies needed in the model
- Data and where it is coming from
“Back to the Future”
Imagine this scenario: You are in the middle of making changes in your development model and have been doing so for the last few weeks. The changes are not complete and are not ready to synchronize. However, you just received a request for an urgent fix from the user community that is critical for the forthcoming monthly submission. What do you do?

What you don’t want to do is take the model out of deployed mode! You also don’t want to lose all the development work you have been doing. Don’t worry: following the procedure below will ensure you can apply the hotfix quickly and keep your development work.

It’s a two-stage process:

Stage 1: Roll the development model back to a version that doesn’t contain any changes (i.e. is the same as production) and apply the hotfix to that version.
1. Add a new revision tag to the development model as a temporary placeholder. (Note the History ID of the last structural change; you'll need it later.)
2. On the development model, use History to restore to a point where development and production were identical (before any changes were made in development).
3. Apply the hotfix.
4. Save a new revision of the development model.
5. Sync the development model with the production model. Production now has its hotfix.

Stage 2: Restore the changes to development and apply the hotfix.
1. On the development model, use the History ID from Stage 1, Step 1 to restore to the version containing all of the development work (minus the hotfix).
2. Reapply the hotfix to this version of development.
3. Create a new revision of the development model.
Development is now back to where it was, with the hotfix applied. When your development work is complete, you can promote the new version to production using ALM best practice.

The procedure is documented here: https://community.anaplan.com/t5/Anapedia-Model-Building/Fixing-Production-Issues/ta-p/4839
Imagine the following scenario: You need to make regular structural changes to a deployed model (for example, weekly changes to the switchover date, or changing the current week). You can make these changes by setting revision tags in the development model. However, you also have a development cycle that spans the structural changes. What do you do?

What you don’t want to do is take the model out of deployed mode. You also don’t want to lose all the development work you have been doing, or synchronize partially developed changes. Don’t worry: following the procedure below will ensure you can manage both.

It’s about planning ahead. Before starting development activities:
1. Make the relevant structural change and set the revision tag.
2. Create the next revision tag for the next structural change.
3. Repeat for as many revision tags as necessary. Give yourself enough breathing space to cover the normal development activities, and probably allow for a couple more just in case.

Now start developing:
1. When needed, you can synchronize to the relevant revision tag without promoting the partial development changes.
2. When the development activities are ready, ensure that the correct structural setting is made (e.g. the correct switchover period), create the revision tag, and synchronize the model.
3. Repeat the planning steps above to set up the next “batch” of revision tags to cover the next development window.
Master data hubs
Master data hubs are used within the Anaplan platform to house an organization’s data in a single model. This hub imports data from the corporation’s data warehouse. If no single source, such as a data warehouse, is available, then the master data hub collects data from the individual source systems instead. Once all data is consolidated into a single master data hub, it can be distributed to multiple models throughout an organization’s workspace.

Architecture best practices
One or more Anaplan models may make up the data hub. It is good practice to separate the master data (hierarchies, lists, and properties) from the transactional data. The business Anaplan applications are synchronized from these data hub models using Anaplan’s native model-to-model internal imports. As a best practice, implement incremental synchronization wherever possible, which synchronizes only the data that has changed since the last sync from the data hub. Doing this usually makes synchronization very fast.

Another best practice when building a master data hub is to import a list with properties into a module rather than directly into a list. Using this method, line items are created to correspond with the properties and are imported using the text data type. This imports all of the data without errors or warnings, and allows for very smart dashboards, built with sorts and filters, to highlight integration issues. Once imported, the data in the master data hub module can then be imported to a list in the required model.

Data hub best practices
The following are best practices for establishing data architecture:

Rationalize the metadata
- Balanced hierarchies (not ragged) will ease reporting and security settings

Be driver-based
- Identify your metrics and KPIs and what drives them
- Do not try to reconcile disconnected targets to bottom-up plans entered at line item level. Example: use cost per trip and number of trips for travel expenses, as opposed to inputting every line of travel expense

Simplify the process
- Reduce the number of approval levels (threshold-based)
- Implement rolling forecasts
- Report within the planning tool; keep immediacy where needed
- Think outcome and options, not input
- Transform your existing process; do not re-implement existing Excel-based processes in Anaplan

Granularity
- Aggregate transactions to SKU level or customer ID
- Plan at a higher level and cascade down
- Plan the number of TBH (to-be-hired) positions by role for TBH headcount expenses, as opposed to inputting every TBH employee
- Sales: plan at sub-region level, cascade to rep level
- Plan at profit center level, allocate at cost center level based on drivers

The Anaplan Way
Always follow the phases of The Anaplan Way when establishing a master data hub, even in a federated approach: Pre-Release, Foundation, Implementation, Testing, and Deployment.
The components involved in a Center of Excellence combine to promote self-sufficiency within a business. This may start as early as a business’ first release and can continue on throughout each new release. There are eight key components that each business should expect to benefit from with the establishment of a Center of Excellence:

1. Skills and expertise
The Center of Excellence provides an entire organization with the skills and expertise needed to develop the Anaplan platform within the business and provide training to the team. It creates functional Subject Matter Experts (SMEs); provides solution design, architecting, and technical model building skills; and offers project management capabilities. Furthermore, it provides ongoing training for an organization, including instructor-led (classroom) training and on-demand eLearning courses.

2. An implementation approach
The Center of Excellence creates a known and understood approach to delivering and evolving solutions within Anaplan for an organization. Utilizing the benefits of the Anaplan Way Agile methodology, a Center of Excellence encourages collaboration between all parties involved with the Anaplan platform, successful iterations of new and updated releases, and accurate visualization of each project and release.

3. Direction and governance
The Center of Excellence creates a governance framework that is used to steer and prioritize the Anaplan roadmap within an organization and drive the ROI of each release. This includes identifying an organization’s steering committee, executive sponsors, and the sign-off/approval approach and process. The Center of Excellence may also act as the project management office (PMO) attached to each release.

4. Data governance and integration
Establishing a Center of Excellence helps to utilize the master data hub concept within an organization. The Center of Excellence will generally be responsible for the master data hub, which feeds into most, if not all, models within the organization. This creates a single point of data reference for all departments and regions. The Center of Excellence also ensures adherence to the conventions, policies, and corporate definitions used with the Anaplan platform.

5. Access to knowledge and best practices
The Center of Excellence is responsible for providing a knowledge base and internal community to support an organization’s efforts in Anaplan. These internal resources should provide functional use case and technical model building best practices, as well as shared practical knowledge of the platform and the organization’s specific use of it.

6. An Anaplan “savvy”
The Center of Excellence constantly maintains an awareness of the “power of the platform.” This awareness includes what the platform is currently doing for the organization and what it could be used for in the future, with platform updates and improvements in mind. Additionally, the Center of Excellence maintains a practical understanding of the Anaplan App Hub and how apps can be leveraged for rapid prototyping and deployment of releases.

7. Access to support
The Center of Excellence acts as a 24/7 customer support desk for the organization and offers customized support when necessary.

8. Change management
The Center of Excellence provides a support system to handle all change management surrounding the Anaplan platform.
This includes clear and appropriate communications to drive and support user adoption, and alignment of upstream and downstream business processes.
The Anaplan platform can be configured and deployed in a variety of ways. Two configurations that should be examined prior to each organization’s implementation of Anaplan are the Central Governance-Central Ownership configuration and the Central Governance-Federated Ownership configuration.

Central Governance-Central Ownership configuration
This configuration focuses on using Agile methodology to develop and deploy the Anaplan platform within an organization. Development centers on a central delivery team that is responsible for maintaining a master data hub, as well as all models desired within the organization, such as sales forecasting, T&Q planning, etc.

Central delivery team
In this configuration, the central delivery team is also responsible for many other steps and requirements, or business user inputs, which are carried out in Anaplan and delivered to the rest of the organization. These include:
- Building the central model
- Communicating release expectations throughout development
- Creating and managing hierarchies in data
- Data loads (data imports and inputs)
- Defect and bug fixes in all models
- Solution enhancements
- New use case project development

Agile methodology: The Anaplan Way
As previously mentioned, this configuration also focuses on releasing, developing, and deploying new and improved releases using the Agile methodology. This strategy begins with the sprint planning step and moves to the final deployment step. Once a project reaches deployment, the process begins again for either the next release of the project or the first release of a new project. Following this methodology increases stakeholder engagement in releases, promotes project transparency, and shows project results in shorter timeframes.

Central Governance-Federated Ownership configuration
This configuration depends on a central delivery team to first produce a master data hub and/or master model, and then allows the individual departments within an organization to develop and deploy their own applications in Anaplan. These releases are small subsets of the master model that allow departments to perform “what-if” modeling and control their own models or the independent applications needed for specific local business needs.

Central delivery team
In this configuration, the central delivery team is only responsible for the following:
- Creating and managing hierarchies in data
- Data loads (data imports and inputs) and defect fixes
- Capturing and sharing modeling best practices with the rest of the teams

Federated model ownership
In this model, each department and/or region is responsible for its own development. This includes:
- Small subsets of the master model for flexible “what-if” modeling
- Custom or in-depth analysis/metrics
- Independent use case models
- Loose or no integration with the master model
- One-way, on-demand data integration
- Optional data hub integration

Pros and cons
Both of these configurations carry significant pros and cons for an organization:

Central Governance-Central Ownership pros
- Modeling practices: Modeling practices within an organization become standardized for all new and updated releases.
- Request process: The request process for new projects becomes standardized. One single priority list of enhancement requests is maintained and openly communicated.
- Clear communication: Communication of platform releases, new build releases, downtime, and more comes from one source and is presented in a clear and consistent manner.
- Workspace and licenses: This configuration requires the fewest workspaces, which saves on data used in Anaplan, as well as the fewest workspace admin licenses.

Central Governance-Central Ownership cons
- Request queue: All build requests, including new use cases, enhancements, and defect fixes, go into a queue to be prioritized by the central delivery team.
- Time commitment: This configuration requires a significant weekly time commitment from the central delivery team to prioritize all platform requirements.

Central Governance-Federated Ownership pros
- Business user development: This configuration allows for true business development capabilities without compromising the integrity of the core solution developed by the central delivery team.
- Anaplan releases: Maximizes the return on investment and reduces shadow IT processes by enabling the quick spread of the Anaplan platform across an organization, as multiple parties develop simultaneously.
- Request queue: Reduces or completely eliminates queue wait times for new use cases and/or functionality.
- Speed of implementation: Having the central team take care of all data integration work via the data hub speeds up application design by enabling federated teams to take their actuals and master data out of an Anaplan data hub model, as opposed to having to build their own data integration with source systems.

Central Governance-Federated Ownership cons
- Workspace and licenses: More workspace and workspace admin licenses may be necessary on the platform.
- Best practices: In this configuration it is challenging to ensure that model building architecture procedures and best practices are followed in each model. It requires the central Center of Excellence team to organize recurring meetings with each application builder to share experience and best practices.
- Build delays: Business users without model building skills may have a difficult time building and maintaining their requirements.
Dynamic Cell Access (DCA) controls the access levels for line items within modules. It is simple to implement and provides modelers with a flexible way of controlling user inputs. Here are a few tips and tricks to help you implement DCA effectively.

Access control modules
Any line item can be controlled by any other applicable Boolean line item. To avoid confusion over which line item(s) to use, it is recommended that you add a separate functional area and create specific modules to hold the driver line items. These modules should be named appropriately (e.g. Access - Customers > Products, or Access - Time, etc.). The advantage of this approach is that the access driver can be used for multiple line items or modules, and the calculation logic is in one place. In most cases, you will probably want both read and write access, so within each module it is recommended that you add two line items (Write? and Read?). If the logic is being set for Write?, then set the formula for the Read? line item to NOT Write? (or vice versa). It may be necessary to add multiple line items for different target line items, but start with this as a default.

Start simple
You may not need to create a module that mirrors the dimensionality of the line item you wish to control. For example, if you have a line item dimensioned by customer, product, and time, and you wish to make actual months read-only, you can use an access module dimensioned just by time. Think about which dimension the control needs to apply to, and create an access module accordingly.

What settings do I need?
There are three different states of access that can be applied: READ, WRITE, and INVISIBLE (hidden). There are two blueprint controls (read control and write control) and two states for a driver (TRUE or FALSE). The combination of these determines which state is applied to the line item. The following tables illustrate the options.

Only the read access driver is set:
  Driver status TRUE: target line item is READ
  Driver status FALSE: target line item is INVISIBLE

Only the write access driver is set:
  Driver status TRUE: target line item is WRITE
  Driver status FALSE: target line item is INVISIBLE

Both read access and write access drivers are set:
  Write driver TRUE: target line item is WRITE
  Write driver FALSE: reverts to the read access driver (READ if TRUE, INVISIBLE if FALSE)*

*When both access drivers are set, the write access driver takes precedence, with write access granted if the status of the write access driver is TRUE. If the status of the write access driver is FALSE, the cell access is taken from the read access driver status.

The settings can also be expressed in the following table:

                      Write TRUE   Write FALSE   Write NOT SET
  Read TRUE           Write        Read          Read
  Read FALSE          Write        Invisible     Invisible
  Read NOT SET        Write        Invisible     Write

Note: If you want to have both read and write access, it is necessary to set both access drivers within the module blueprint.

Totals
Think about how you want the totals to appear. When you create a Boolean line item, the default summary option is NONE. This means that if you use that line item as an access driver, any totals within the target will be invisible. In most cases you will probably want the totals to be read-only, so setting the access driver line item's summary to ANY will provide this. If you are using the INVISIBLE setting to “hide” certain items and you do not want end users to be able to compute the hidden values, then it is best to use the ALL setting for the access driver line item: the totals then show only if all values in the list are visible; otherwise the totals are hidden from view.
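For example, to make current and past periods read-only, an access module could be dimensioned by Time only, with two Boolean line items whose summary is set to ANY (the names and logic below are illustrative):

Write? = START() > CURRENTPERIODSTART()
Read? = NOT Write?

Point the write access control of the target line items at Write? and the read access control at Read?. Periods after the current period then remain editable, while the current and earlier periods revert to read-only, whatever other dimensions the target line items use.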
If you have a multi-year model where the data range for different parts of the model varies (for example, history covering two years, current year forecast, and three planning years), then Time Ranges should be able to deliver significant gains in terms of model size and performance. But before you rush headlong into implementing Time Ranges across all of your models, let me share a few considerations to ensure you maximise the value of the feature and avoid any unwanted pitfalls.

Naming convention: Time Ranges
As with all Anaplan models, there is no set naming convention; however, we do advocate consistency and simplicity. As with lists and modules, short names are good. I like to describe the naming convention as “as short as practical,” meaning you need to understand what it means, but don’t write an essay! We recommend the following convention: FYyy-FYyy. For example, FY16-FY18, or FY18 for a single year. The Time Ranges available span 1981 to 2079, so the “19” or “20” prefixes are not strictly necessary. Keeping the name this short has a couple of advantages:
- Clear indication of the boundaries of the Time Range
- It is short enough to see the name of the Time Range in the module and line item blueprints
The aggregations available for Time Ranges can differ for each Time Range and can also differ from the main model calendar. If you take advantage of this and have aggregations that differ from the model calendar, you should add a suffix to the name. For example:
- FY16-FY19 Q (to signify Quarter totals)
- FY16-FY19 QHY (Quarter and Half Year totals)
- FY16-FY19 HY (Half Year totals only), etc.

Time Ranges are static
Time Ranges can span from 1981 to 2079. As a result, they can exist entirely outside, within, or overlapping the model calendar. This means there will likely be some additional manual maintenance to perform when the year changes. Let’s review a simple example:
Assume the model calendar is FY18 with 2 previous years and 2 future years, so the model calendar spans FY16-FY20. We have set up Time Ranges for historic data (FY16-FY17) and plan data (FY19-FY20), and we also have modules that use the model calendar to pull all of the history, forecast, and plan data together. At year end, when we “roll over the model,” we amend the model calendar simply by amending the current year. The history and plan Time Ranges are now out of sync with the model calendar. How you change the history Time Range will depend on how much historic data you need or want to keep, but assuming you don’t need more than two years’ history, the Time Range should be renamed FY17-FY18 and the start period advanced to FY17 (from FY16). Similarly, the plan Time Range should be renamed FY20-FY21 and advanced to FY20 (from FY19). FY18 is then available for the history to be populated, and FY21 is available for plan data entry.

Time Ranges pitfalls

Potential data loss
Time Ranges can bring massive space and calculation savings to your model(s), but be careful. In our example above, changing the start period of FY16-FY17 to FY17 would result in the data for FY16 being deleted for all line items using FY16-FY17 as a Time Range. Before you implement a Time Range that is shorter than or lies outside the current model calendar, and especially when implementing Time Ranges for the first time, ensure that the current data stored in the model is not needed.
If in doubt, do some or all of the following:
- Export the data to a file
- Copy the existing data on the line item(s) to other line items that use the model calendar
- Back up the whole model

Formula references
The majority of formulae will update automatically when you update Time Ranges. However, if you have any hard-coded SELECT statements referencing years or months within the Time Range, you will have to amend or remove the formula before amending the Time Range. Hard-coded SELECT statements go against best practice for exactly this reason: they cause additional maintenance. We recommend replacing the SELECT with a LOOKUP formula from a Time Settings module (an example follows at the end of this article). There are other cases where formulae may need to be removed or amended before the Time Range can be adjusted; see the Anapedia documentation for more details.

When to use the model calendar
This is a good question and one that we at Anaplan pondered during the development of the feature: do Time Ranges make the model calendar redundant? Well, I think the answer is “no,” but as with so many constructs in Anaplan, the answer probably is “it depends!” For me, a big advantage of using the model calendar is that it is dynamic for the current year and the +/- years on either side. Change the current year and the model updates automatically, along with any filters and calculations you have set up to reference current-year periods, historic periods, future periods, etc. (You are using a central time settings module, aren’t you??) Time Ranges don’t have that dynamism, so any changes to the year will need to be made for each Time Range. So, our advice before implementing Time Ranges for the first time is to review each module and:
- Assess the scope of the calculations
- Weigh the reduction Time Ranges will give in terms of space and calculation savings against the annual maintenance
For example: if you have a two-year model, with one history year (FY17) and the current year (FY18), you could set up a one-year Time Range for FY17 and another for FY18 and use these for the respective data sets. However, this would mean both Time Ranges would need to be updated every year. We advocate building models logically, so it is likely that you will have groups of modules where Time Ranges fall naturally. The majority of the modules should reflect the model calendar, and once Time Ranges are implemented, it may be that you can reduce the scope of the model calendar. If a potential Time Range reflects either the current or future model calendar, leave the timescale as the default for those modules and line items; why make extra work?

SELECT statements
As outlined above, we don’t advocate hard-coded time SELECTs for the majority of time items because of the negative impact on maintenance (the exceptions being All Periods, YTD, YTG, and CurrentPeriod). When implementing Time Ranges for the first time, take the opportunity to review the line item formulae with time SELECTs. These formulae can be replaced with lookups using a Time Settings module.

Application Lifecycle Management (ALM) considerations
As with the majority of the Time settings, Time Ranges are treated as structural data. If you are using ALM, all of the changes must be made in the development model and synchronised to production. This gives added importance to the pitfalls noted above, to ensure data is not inadvertently deleted.
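For example, a hard-coded select such as the following (module and line item names are illustrative):

Report.Revenue = Sales.Revenue[SELECT: TIME.'FY18']

can be replaced with a lookup against a year-formatted line item held in a central Time Settings module:

Report.Revenue = Sales.Revenue[LOOKUP: Time Settings.Current Year]

When the model rolls over, only Time Settings.Current Year needs to be updated; the formula itself requires no maintenance.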
Best of luck! Refer to the Anapedia documentation for more detail, and please ask if you have any further questions. Let us and your fellow Anaplanners know of the impact Time Ranges have had on your model(s).
Reducing the number of calculations will lead to quicker calculations and improved performance. This doesn’t mean combining all your calculations into fewer line items, however; breaking calculations into smaller parts has major benefits for performance. Learn more about this in the Formula Structure article.

How is it possible to reduce the number of calculations? Here are three easy methods:
1. Turn off unnecessary Summary method calculations.
2. Avoid formula repetition by creating modules to hold formulas that are used multiple times.
3. Ensure that you are not including more dimensions than necessary in your calculations.

Turn off Summary method calculations
Model builders often include summaries in a model without fully thinking through whether they are necessary. In many cases the summaries can be eliminated. Before we get to how to eliminate them, let’s recap how the Anaplan engine calculates. In the following example we have a Sales Volume line item that varies by the following hierarchies:

  Region Hierarchy    Product Hierarchy    Channel Hierarchy
  City                SKU                  Channel
  Country             Product              All Channels
  Region              All Products
  All Regions

This means that from the detail values at SKU, City, and Channel level, Anaplan calculates and holds all 23 aggregate combinations on top of the detail block (24 blocks in total). With the Summary option set to Sum, when a detail item is amended, all the other aggregations in the hierarchies are also recalculated. Selecting the None summary option means that no calculations happen when the detail item changes. The varying levels of hierarchies are quite often only there to ease navigation, and the roll-up calculations are not actually needed, so a number of redundant calculations may be being performed. The native summing of Anaplan is a faster option, but if all the levels are not needed, it can be better to turn off the summary calculations and use a SUM formula instead.

For example, from the structure above, let’s assume that we have a detailed calculation for SKU, City, and Channel (SALES06.Final Volume). Let’s also assume we need a summary report by Region and Product, and we have a module (REP01) with a line item (Volume) dimensioned as such. The formula

REP01.Volume = SALES06.Final Volume

is replaced with

REP01.Volume = SALES06.Final Volume[SUM: H01 SKU Details.Product, SUM: H02 City Details.Region]

The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy.

How do you know if you need the summary calculations? Look for the following:

Is the calculation or module user-facing?
If it is presented on a dashboard, then it is likely that the summaries will be needed. However, look at the dashboard views used. A summary module is often included on a dashboard with a detail module below; effectively the hierarchy sub-totals are shown in the summary module, so the detail module doesn’t need the sum or all the summary calculations.

Detail to detail
Is the line item referenced by another detailed calculation line item? This is very common, and if the line item is referenced by another detailed calculation, the summary option is usually not required. Check the Referenced by column and see if there is anything referencing the line item.

Calculation and staging modules
If you have used the DISCO module design, you should have calculation/staging modules. These are often not user-facing and have many detailed calculations included in them.
They also often contain large cell counts, which will be reduced if the summary options are turned off.

Can you have different summaries for time and lists?
The default option for Time Summaries is to follow the lists. You may only need the totals for hierarchies, or just for the timescales. Again, look at the downstream formulas. The best practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a calculation module (from the DISCO design principles).

Avoid formula repetition
An optimal model performs a specific calculation only once. Repeating the same formula expression multiple times means the calculation is performed multiple times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (DISCO) and hold all the relevant calculations in a logical place. Then, if you need the calculation, you will know where to find it, rather than adding another line item in several modules to perform the same calculation.

If a formula construct always starts with the same condition evaluation, evaluate it once and then refer to the result in the construct. This is especially true where the condition refers to a single dimension but is part of a line item that spans multiple dimension intersections. A common example involves time conditions: in one module, START() <= CURRENTPERIODSTART() appeared five times across the line item formulas, and similarly START() > CURRENTPERIODSTART() appeared twice. To correct this, include these time-related formulas in their own module and then refer to them as needed in your modules. Remember: calculate once, reference many times!

Taking a closer look at that example, not only was the condition evaluation repeated, but the dimensionality of the line items was also greater than required. The result of the condition only changes by day, yet the Applies To also contained Organization, Hour Scale, and Call Center Type. Because the formula expression was contained within the line item formula, the condition was also being evaluated for every combination of those dimensions for each day, and it was repeated in many other line items. Sometimes model builders even use the same expression multiple times within the same line item.

To reduce this overcalculation, hold the expression in a more appropriate module, for example a Days of Week module dimensioned solely by day, and reference it from there. The two formula expressions then live in two line items that are calculated only by day; the other dimensions, which are not relevant, are not calculated (a sketch follows below). In this example, making these changes to the remaining lines in the module reduced the calculation cell count from 1.5 million to 1,500. Check the Applies To of your formulas, and if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality.
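As a sketch of that refactor (module and line item names are illustrative):

Days of Week module (dimensioned by Day only):
In Past? = START() <= CURRENTPERIODSTART()
In Future? = START() > CURRENTPERIODSTART()

Target module (dimensioned by Organization, Hour Scale, Call Center Type, and Day):
Volume = IF Days of Week.In Past? THEN Actual Volume ELSE Forecast Volume

Each condition is now evaluated once per day rather than once per cell of the larger module, which is where the reduction in calculation cell count comes from.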
Overview
There is not a switch to “turn on” ALM. ALM is based on entitlements described in your subscription agreement; discuss your subscription with your Anaplan Account Executive and Business Partner. Workspace administrators can check the feature availability:
1. Log in to Anaplan.
2. Click on your name in the top-right-hand corner.
3. Select Manage Models.
4. Look for the Compare/Sync button.
If the button is greyed out, speak to your Anaplan Account Executive regarding your subscription agreement. If the button is available, you currently have access to ALM functionality on your workspace.

Additional information is available in the 313 Application Lifecycle Management (ALM) class, located in the education section.
A revision tag is a snapshot of a model’s structural information at a point in time. Revision tags save all of the structural changes made in an application since the last revision tag was stored. By default, Anaplan allows you to add a title and description when creating a revision tag. This article covers:
- Suggestions for naming revision tags
- Creating a revisions tracking list and module
Note: For guidance on when to add revision tags, see When should I add revision tags?

Suggestions for naming revision tags
It’s best to define a standard naming convention for your revision tags early in the model-building process. You may want to check with your Anaplan Business Partner or IT group whether there is an existing naming convention that would be best to follow. The following suggestions are designed to ensure consistency when there are large numbers of changes or model builders, and to allow the team to better choose which revision tag to use when syncing a production application.

Option 1: X.0 = major revision/release; X.1, X.2, ... = minor changes within a release
In this option, 1.0 indicates the first major release. As subsequent minor changes are tagged, they are noted as 1.1, 1.2, 1.3, etc., until the next major release: 2.0.

Option 2: YYYY.X = major revision/release; YYYY.X.X = minor changes within a release
In this option, YYYY indicates the year and X indicates the release number. For example, the first major release of 2017 would be 2017.1. Subsequent minor changes would be tagged 2017.1.1, 2017.1.2, etc., until the next major release of the year: 2017.2.

Creating a revisions tracking list and module
Revision tag descriptions are only visible from within Settings. That means it can be difficult for an end user to know what changes have been made in the current release. Additionally, there may be times when you want to store additional information about revisions beyond what is in the revision tag description. To provide release visibility in a production application, consider creating a revisions list and module to store key information about revisions.

Revisions list:
- In your development application, create a list called Revisions.
- Do not set this list as production data; you want these list members to be visible in your production model.

Revisions details module:
- In your development application, create a module called Revisions Details.
- Dimension it by your Revisions list and remove Time.
- Add your line items. Since this module will be used to document release updates and changes, consider which of the following may be appropriate:
  Details: What changes were made
  Date: What date this revision tag was created
  Model History ID: What the model history ID was when this tag was created
  Requested By: Who requested these changes
  Tested By: Who tested these changes
  Tested Date: When these changes were tested
  Approved By: Who signed off on these changes

Note: Standard selective access rules apply to your production application. Consider who should be able to see this list and module as part of your application deployment.
Little and often
Would you spend weeks on your budget submission spreadsheet or your college thesis without once saving it? Probably not. The same should apply to making developments and setting revision tags. Anaplan recommends that during the development cycle, you set revision tags at least once per day. We also advise testing the revision tags against a dummy model if possible. The recommended procedure is as follows:
1. After a successful sync to your production model, create a dummy model using the ‘Create from Revision’ feature. This will create a small test model with no production list items.
2. At the end of each day (as a minimum), set a revision tag and attempt to synchronize the test model to this revision tag. The whole process should only take a couple of minutes.
3. Repeat step 2 until you are ready to promote the changes to your production model.

Why do we recommend this? There are a very small number of cases where combinations of structural changes cause a synchronization error (99 percent of synchronizations are successful). The Anaplan team is actively working to provide a resolution within the product, but in most cases, splitting changes between revision tags allows the synchronization to complete. To understand the issue when a synchronization fails, our support team needs to analyze the structural changes between the revisions. Setting revision tags frequently provides the following benefits:
- The number of changes between revisions is reduced, resulting in easier and faster issue diagnosis.
- It provides an early warning of any problems, so that someone can investigate them before they become critical.
- The last successful revision tag allows you to promote some, if not most, of the changes if appropriate.

In some cases, a synchronization may fail initially, but when the changes are applied in sequence the synchronization completes. For example, suppose synchronizations to the test model for revision tags R1, R2, and R3 were all successful, but R3 fails when synchronizing directly to production. Since the test model successfully synchronized from R2 and then R3, you can repeat this sequence for the production model. The comparison report provides clear visibility of the changes between revision tags.

Click here to watch a 7:00 video on this topic
Overview
A data hub is a separate model that holds an organization’s data. Data can be shared with all your models, making expansion easier to implement and ensuring data integrity across models. The data hub model can be placed in a different workspace, allowing for role segregation. This allows you to assign administrator rights to users to manage the data hub without allowing those users access to the production models. The method for importing to the data hub (into modules, rather than lists) allows you to reconcile properties using formulas. One type of data hub can be integrated with an organization’s data warehouse and hold ERP, CRM, HR, and other data (see the Anaplan Data Architecture diagram). But this isn’t the only type of data hub. Some organizations may require a data hub for transactional data, such as bookings, pipeline, or revenue. Whether you will be using a single data hub or multiple hubs, it is a good idea to plan your approach for importing from the organization’s systems into the data hub(s), as well as how you will synchronize the imports from the data hub to the appropriate models.
High-level best practices
When building a data hub, the best practice is to import a list with properties into a module rather than directly into a list. Using this method, you set up line items to correspond with the properties and import them using the text data type. This imports all the data without errors or warnings. The data in the data hub module can then be imported to a list in the required model. The exception to importing into a module is if you are using a numbered list without a unique code (in other words, you are using a combination of properties). In that case, you will need to import the properties into the list.
Implementation steps
Here are the steps to create the basics of a hub and spoke architecture.
1) Create a model and name it master data hub
You can create the data hub in the same workspace where all the other models are, but a better option is to put the data hub in a different workspace. The advantage is role segregation: you can assign administrator rights to users to manage the hub without providing them with access to the actual production models, which are in a different workspace. Large customers may require this segregation of duties. Note: This functionality became available in release 2016.2.
2) Import your data files into the data hub
Set up your lists. Identify the lists that are required in the data hub. Create these lists using good naming conventions. Set up any needed hierarchies, working from the top level down. Import data into the list from the source files, mapping only the unique name, the parent (if the name rolls up into a hierarchy), and the code, if available. Do not import any list properties; these will be imported into a module.
Create corresponding modules for those lists that include properties. For each list, create a module. Name the module [List Name] Properties. In the module, create a line item for each property and use the data type TEXT. Import the source file into the corresponding module. There should be no errors or warnings.
Automate the process with actions. Each time you imported, an action was created. Name your actions using the appropriate naming conventions. Note: Indicate the name of the source in the name of the import action. To automate the process, you’ll want to create one process that includes all your imports.
For hierarchies, it is important to get the actions in the correct order. Start with the highest level of the hierarchy list import, then the next level list, and so on down the hierarchy. Then add the module imports (the order of the module imports is not critical). Now, let's look at an example. You have a four-level hierarchy to load, from top to bottom: Country → Region → State → Employee.
Lists
Create lists with the right naming conventions. For this example, create these lists:
G1 Country
G2 Region
G3 State
G4 Employee
Set the parent hierarchy to create the composite hierarchy. Import into each list from the source file(s), and only map name and parent. The exception is the employee list, which includes a code (employee ID) that should be mapped. Properties will be added to the data hub later.
Properties → Modules
Create one module for each list that includes properties. Name the module [List Name] Properties. For this example, only the employee list includes properties, so create one module named Employee Properties. In each module, create as many line items as you have properties. For this example, the line items are Salary and Bonus. Open the Blueprint view of the module and, in the Format column, select Text. Pivot the module so that the line items are columns. Import the properties: in the grid view of the module, click on the property you are going to import into. Set up the source as a fixed line item. Select the appropriate line item on the Line Item tab and, on the Mapping tab, select the correct column for the data values. You’ll need to import each property (line item) separately. There should be no errors or warnings.
Actions
Each time you run an import, an action is created. You can view these actions by selecting Actions from the Model Settings tab. The previous imports into lists and modules have created one import action per list. You can combine these actions into a process that will run each action in the correct order. Name your actions following the naming conventions; note that the source is included in the action name. Create one process that includes the imports. Name your process Load [List Name]. Make sure the order is correct: put the list imports first, starting with the top hierarchy level and working down, then the module imports in any order.
3) Reconcile
These list imports should run with zero errors because the imports go into text-formatted items. If some properties should match items in lists, it is recommended to use FINDITEM formulas to match text to list items. FINDITEM simply looks at the text-formatted line item and finds the match in the list that you specify. Every time data is uploaded into Anaplan, you just need to make sure all items from the text-formatted line item are being loaded into the list. This is useful because you can always compare the "raw data" to the "Anaplan data," and you do not have to load the data more than once if there are concerns about data quality in Anaplan. If the list that a property should match is not yet included in your data hub model, first create that list. Let’s use the example of Territory. Add a line item to the module and select list as the format type, then select the name of your list, in this case Territory, from the drop-down. Add the formula FINDITEM(x, y), where x is the name of your list (Territory in our example) and y is the text line item. You can then filter this line item to show all of the blank items.
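As a minimal sketch of this reconciliation pattern, assuming a Territory list and a text-formatted line item named Territory Text (both names are illustrative):
Territory Item (format: Territory): FINDITEM(Territory, Territory Text)
Missing? (format: Boolean): ISBLANK(Territory Item)
FINDITEM returns blank when the text has no match in the list, so filtering on Missing? surfaces exactly the rows that need correcting in the source.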
Correct the data in the source system. If you will be importing frequently, you may want to set up a dashboard that lets users view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter to show the number of errors and add that information to the dashboard.
4) Split models: filter and set up saved views
If the architecture of your model includes spoke models by region, you need one master hierarchy that covers all regions and a corresponding module that stores the properties. Use that module and create as many saved views as you have spoke region models. For example, filter on Country G1 = Canada if you want to import only Canadian accounts into the spoke model. You will need to create a saved view for each hierarchy and spoke model.
5) Import to the spoke models
Use cross-workspace imports if you have decided to put your master data hub in a separate workspace. Create the lists that correspond to the hierarchy levels in each spoke model (there is currently no way to create a list via import). Create the properties in the list where needed. Keep in mind that importing properties into the data hub as line items is an exception: list properties generally do not vary, unlike line items in a module, which are often measured over time. Note: Properties can also be housed in modules, and there are some benefits to this. See Anapedia - Model Building (specifically, the "List Attributes" and "List attributes in a module" topics). If you decide to use a module to hold the properties, you will need to create a line item for each property type and then import the properties into the module. To simplify the mapping, make sure the property names in each spoke model match the line item names of the data hub model. In each spoke model, create an import from the filtered module view of the data hub model into the lists you created in step 1. In the Actions window, name your imports using naming conventions. Create a process that includes these actions (imports), beginning with the highest level in the hierarchy and working down to the lowest. Well done! You have imported your hierarchy from a data hub model.
6) Incremental list imports
When you are in the midst of your peak planning cycle and your large lists are changing frequently, you’ll want to update the data hub and push the changes to the spoke models. Running imports of several thousand list members may cause performance issues and block users during the import activity. In the best case, your data warehouse provides a date field that shows when each item was added or modified and can deliver a flat file or table that includes only the changes. Your import into the hub model will then take just a few seconds, and you can filter on this date field to export only the changes to the spoke models. In most cases, though, all you have is a full list from the data warehouse, regardless of what has changed. To mitigate this, use the following technique to export only the list items that have changed (edited, deleted, updated) since the last export, using logic in Anaplan.
Setting up the incremental loads, in the data hub model: Create a text-formatted line item in your module. Name it CHECKSUM, set the format as Text, and enter a formula that concatenates all the properties you want to track changes for. These properties form the base of the incremental import.
Example: CHECKSUM = State & Segment & Industry & Parent & Zip Code
Create a second line item, name it CHECKSUM OLD, set the format as Text, and create an import that imports CHECKSUM into CHECKSUM OLD, ignoring any other mappings. Name this import 1/2 im DELTA and put it in a process called RESET DELTA.
Create a third line item, name it DELTA, set the format as Boolean, and enter this formula: CHECKSUM <> CHECKSUM OLD (an IF ... THEN TRUE ELSE FALSE wrapper is unnecessary).
Update the filtered view that you created to export only the hierarchy for a specific region or geography: add the filter criterion DELTA = TRUE. You will then only see the list items that differ from the last time you imported into the data hub. For example, you might import into a spoke model only the list items that are in US East and that have changed since the last import.
Execute the import from the source into the data hub, and then into the spoke models: in the data hub model, upload the new files and run the import process; in the spoke models, run the import process that takes the list from the data hub's filtered view. Check the import logs and verify that only the items that have changed are actually imported. Back in the data hub model, run the RESET DELTA process (the 1/2 im DELTA import). This resets the changes, so you are ready for the next set of changes. Your source, data hub model, and spoke models are all in sync.
7) Import actuals (transaction data) into the data hub and then into the spoke models
Rather than importing actuals or transactions directly into a working model, import them into the data hub to make it easy for business users (with workspace admin rights) to select the imports they want to add to their spoke models. There is one requirement: the file must include a transaction or primary key (identification code) that uniquely identifies each transaction. If there is no transaction key, your options are as follows:
Option 1: Work with the IT team to determine if it is possible to include a transaction ID in the source. This is the best option, but not always possible.
Option 2: Create the transaction ID in Excel. Keep in mind there is a limit of 1 million rows in Excel, and be careful about how you create the transaction ID, as some methods may delete leading zeros.
Option 3: Create a numbered list in Anaplan.
Creating a numbered list and importing transaction IDs:
Add a Transactions list (follow your naming conventions!) to the data hub model. In the General Lists window, select the Numbered option to change the Transactions list to a numbered list.
In the Transactions list, create a property called Transaction ID and set the format to Text. In the General Lists window, select Transaction ID in the Display Name Property field. Open the Transactions list and add the formula CODE(ITEM(Transactions)) to the Transaction ID property. It will be used as the display name of the numbered list.
When importing into the Transactions list: map the Transaction ID of the source file to the Code, and remove any selection from the Transactions drop-down (the first source field). If duplicates on the transaction ID are found, reject the import; otherwise you will introduce corrupted data into the model. Import the transaction IDs into the Transactions list.
Import transactions
Create the Actuals module. Include the Transactions list and as many line items as you have fields (columns) in your source file. Set up the format of your line items.
They should be set up as format type Text, with the exception of columns that contain numeric values; for those, the format should be Number, with any further definitions needed (for example, decimal places or units). Add a line item called Transaction ID, set the format as Text, and enter the formula CODE(ITEM(Transactions)). This will be used when importing the numbered list into the spoke models. Run the import of the source file into the Actuals module. Name your two actions (imports): Import into Transactions (the import of the transaction IDs into the Transactions list) and Import into Actuals (the import from the source file into the Actuals module). Create a process that includes both imports: first Import into Transactions, then Import into Actuals.
Why a two-dimensional module? It is important to understand that the Actuals module is a staging module with two dimensions only: transactions and line items. You can load multiple millions of these transactions and have 50+ line items, which correspond to the properties of each transaction, including version and time. Anaplan will scale without any issues. Do not create a multidimensional module at this stage. That will be done in the spoke models, where you will carefully pick which properties become dimensions; this choice significantly impacts spoke model size if you have large lists. In the Actuals module, create a view that you will use for importing into the spoke model. Create as many saved views as required, based on how you have split the spoke models.
Reconcile
The import into the module will run without errors or warnings, but that does not mean all is clean; we have just loaded some text. Reconciliation in the data hub consists of verifying that every field of the source system matches an existing item in the list of values for that field. In the module, create a list-formatted line item that corresponds to each field, and use the FINDITEM() function to look up the actual item. If the name does not match, it returns a blank cell. These cells need to be tracked in a reconciliation dashboard, and the source file fixed until all transactions have a corresponding item in each list. If the list of values for a field is not yet included in your data hub model, first create that list. Add a line item to the module, select list as the format type, then select the name of your list of values. Add the formula FINDITEM(x, y), where x is the name of your list and y is the text line item. For example, transaction 0001 might be clean while transaction 0002 has an account code A4 that does not match any account. Set up a dashboard that lets users view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter to show the number of errors and add that information to the dashboard.
Import into the spoke models
In the spoke models: Create the transactions numbered list. Import into this list from the transaction module saved view that you created in the data hub, filtered on any property you need to limit the transactions you want to push. Map the Code of the numbered list in the spoke model to the calculated Transaction ID of the master data hub model. Create a flat transactions module and import into it from the same data hub saved view, filtered the same way.
Make sure you select the calculated Transaction ID as your source. Do not use the transaction name, as it will differ for the same transaction between the data hub model and the spoke model. Create a target multidimensional module, using SUM functions from the transactions module across the line items formatted as list or time; a simple example is a two-dimensional module by Account and Product (see the sketch at the end of this article). Use SUM functions as much as possible, as they enable users to use the drill-to-transaction feature that shows the transactions that make up an aggregated number.
8) Incremental data load
The actuals transaction file might need to be imported several times into the data hub model, and from there into the spoke models, during the planning peak cycle. If the file is large, this can create performance issues for end users. Since not all transactions change each time the data is imported, there is a strong opportunity to optimize this process. In the data hub transactions module, create the same CHECKSUM, CHECKSUM OLD, and DELTA line items. CHECKSUM should concatenate all the fields you want to track the delta on, including the values. The DELTA line item will catch new transactions as well as modified transactions. See 6) Incremental list imports above for more information.
Filter the view using DELTA to import only the changed transaction list items into the list, and only the changed actuals transactions into the module. Create an import from CHECKSUM to CHECKSUM OLD to be able to reset the delta after the imports have run; name this import 2/2 im DELTA and add it to the RESET DELTA process created for the list. In the spoke model, import into the transactions list and the transactions module from the filtered transaction view, then run the RESET DELTA process.
9) Automation
You can semi-automate this process and have it run on a frequent basis if incremental loads have been implemented. That provides immediacy of master data and actuals across all models during a planning cycle. It is semi-automatic because it requires a review of the reconciliation dashboards before pushing the data to the spoke models. There are a few ways to automate, all requiring an external tool: Anaplan Connect or the customer's ETL. The automation script needs to execute in this order (see the sketch at the end of this article):
1. Connect to the master data hub model.
2. Load the external files into the master data hub model.
3. Execute the process that imports the lists into the data hub.
4. Execute the process that imports actuals (transactions) into the data hub.
5. Manual step: Open your reconciliation dashboards and check that the data and the lists are clean. Again, these imports should run with zero errors or warnings.
6. Connect to the spoke model.
7. Execute the list import process.
8. Execute the transaction import process.
9. Repeat steps 6, 7, and 8 for all spoke models.
10. Connect to the master data hub model.
11. Run the RESET DELTA process to reset the incremental checks.
Other best practices
Create deletes for all your lists. For each list, create a module dimensioned by that list (for example, a module called Clear Lists), add a Boolean line item called CLEAR ALL, and set its formula to TRUE. In Actions, create a "delete from list using selection" action that uses this Boolean line item as its selection. Repeat this for all lists and create one process that executes all these delete actions.
Example of a maintenance/reconcile dashboard
Use a maintenance/reconcile dashboard when manual operations are required to update applications from the hub.
One method that works well is to create a module that highlights whether there are errors in each data source. In that module, create a message line item that displays on the dashboard when there are errors, for example: "There are errors that need correcting." A link on this dashboard to the error status page makes it easy for users to check on errors. A best practice is to automate the list refresh, combined with a modeling solution that only exports what has changed.
Dev-test-prod considerations
There should be two saved views: one for development and one for production. That way, the hub can feed the development models with shortened versions of the lists, while the production models get the full lists. ALM considerations: if the different-saved-views option is taken, the development (DEV) model will need the imports set up for both DEV and production (PROD). The additional ALM consideration is that the lists imported into the spoke models from the hub need to be marked as production data.
Development
DATA HUB: The data hub houses all global data needed to execute the Anaplan use case. It often houses complex calculations and readies data for downstream models.
DEVELOPMENT MODEL: The development model is built to the 80/20 rule: it is built upon a global process, and region-specific functionality is added in the deployment phase. The model is built to receive data from the data hub.
DATA INTEGRATION: During this stage, Anaplan Connect or a third-party tool is used to automate data integration. Data feeds are built from the source system into the data hub and from the data hub to downstream models.
PERFORMANCE TESTING: The application is put through rigorous performance testing, including automated and end-user testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.
Deployment
DATA HUB: The data hub is refreshed with the latest information from the source systems and readies data for downstream models.
DEPLOYMENT MODEL: The development model is copied and the appropriate data is loaded from the data hub. Region-specific functionality is added during this phase.
DATA INTEGRATION: Additional data feeds from the data hub to downstream models are finalized. The integrations are tested and timed to establish a baseline SLA. Automatic feeds are placed on timed schedules to keep the data up to date.
PERFORMANCE TESTING: The application is again put through rigorous performance testing.
Expansion
DATA HUB: The need for additional data for new use cases is often handled by splitting the data hub into regional data hubs. This helps the system perform more efficiently.
MODEL DEVELOPMENT: The models built for new use cases are developed and thoroughly tested. Additional functionality can be added to the original models deployed.
DATA INTEGRATION: Data integration is updated to reflect the new system architecture. Automatic feeds are tested and scheduled according to business needs.
PERFORMANCE TESTING: At each stage, the application is put through rigorous performance testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.
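As referenced in the spoke-model import step above, here is a minimal sketch of the target multidimensional module formula, assuming a flat Transactions module with list-formatted line items Account and Product and a numeric line item Amount (all names are illustrative):
Revenue = Transactions.Amount[SUM: Transactions.Account, SUM: Transactions.Product]
Here Revenue lives in a module dimensioned by Account and Product. Each SUM mapping aggregates the flat transactions into the matching cell, and because the mappings are SUMs, users can drill from any aggregated number down to the underlying transactions.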
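To make the automation order in section 9 concrete, here is a minimal shell sketch in the style of the Anaplan Connect v1 example scripts. All workspace, model, file, and process names are assumptions, credential handling is omitted, and the exact flags should be verified against the Anaplan Connect version you run:
#!/bin/sh
# Steps 1-4: connect to the data hub, load the source file, run the import processes.
./AnaplanClient.sh -service "https://api.anaplan.com" -auth "https://auth.anaplan.com" \
  -user "integration.user@company.com" \
  -workspace "Data Hub WS" -model "Master Data Hub" \
  -file "Transactions.csv" -put "/data/Transactions.csv" \
  -process "Load Hub Lists" -execute \
  -process "Load Hub Actuals" -execute
# Step 5 is deliberately manual: review the reconciliation dashboards before continuing.
# Steps 6-9: for each spoke model, run its list and transaction import processes.
./AnaplanClient.sh -service "https://api.anaplan.com" -auth "https://auth.anaplan.com" \
  -user "integration.user@company.com" \
  -workspace "Spoke WS" -model "US East" \
  -process "Import Lists from Hub" -execute \
  -process "Import Transactions from Hub" -execute
# Steps 10-11: back in the data hub, reset the incremental DELTA flags.
./AnaplanClient.sh -service "https://api.anaplan.com" -auth "https://auth.anaplan.com" \
  -user "integration.user@company.com" \
  -workspace "Data Hub WS" -model "Master Data Hub" \
  -process "RESET DELTA" -execute
In practice you would wrap each invocation in error handling and schedule the script with your ETL or job scheduler, pausing at the manual reconciliation step.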
If you’re familiar with Anaplan, you’ve probably heard the buzz about having a data hub and wondered why it’s considered a “best practice” within the Anaplan community. Wonder no more. Below are four reasons why you should spend the time to build a data hub before Anaplan takes your company by storm.
1. Maintain consistent hierarchies
Hierarchies are a common list structure built in Anaplan, and they come in a variety of forms depending on the use case: product hierarchies, cost center hierarchies, and management hierarchies, to name a few. These hierarchies should be consistent across the business, whether you’re doing demand planning or financial planning. With a data hub, your organization has a much better chance of keeping hierarchies consistent over time, since everyone pulls the same structure from one source of truth: the data hub.
2. Avoid sparsity
As you expand the use of Anaplan across multiple departments, you may find that you only need a segment of a list rather than the entire list. For instance, you may want the full list of employees for workforce planning purposes, but only a portion of the employees for incentive compensation calculations. With a data hub, you can distribute only the pertinent information: filter the list of employees to build the employee hierarchy in the incentive compensation model, while keeping the full list of employees in the workforce planning model. Keep them both in sync using the data hub as your source of truth.
3. Separate duties by roles and responsibilities
An increasing number of customers ask about roles and responsibilities as they expand their use of Anaplan internally. We recommend that each model have a separate owner, for example, an IT owner for the data hub, an operations owner for the demand planning model, and a finance owner for the financial planning model. The three owners combined form your Center of Excellence, but each has separate roles and responsibilities for the development and maintenance of the individual models.
4. Accelerate future builds
One of the main reasons many companies choose Anaplan is the platform’s flexibility. Its use can easily and quickly expand across an entire organization, and development rarely stops after the first implementation. Model builders are enabled and excited to continue to bring Anaplan into other areas of the business. If you start by building the data hub as your source of truth for data and metadata, you can accelerate the development of future models because you have already defined the foundation: the lists and dimensions. As you begin to implement, build, and roll out Anaplan, starting with a data hub is a key consideration, alongside the many other fundamental Anaplan best practices for rolling out a new technology and driving internal adoption.
Assume a non-composite list (a ragged hierarchy) that needs to be set as production data, where we need to refer to the parent to define the calculation logic. In this example, children of Parent 1 and Parent 3 need to return the value 100, children of Parent 2 and Child 3.1 need to return 200, and we need to show each child's proportion of its parent total.
Select Calculation:
IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 1' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 3' THEN 100 ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 2' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Child 3.1' THEN 200 ELSE 0
Select Proportion:
Select Calculation / IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 1' THEN Select Calculation[SELECT: 'Non-Composite List'.'Parent 1'] ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 2' THEN Select Calculation[SELECT: 'Non-Composite List'.'Parent 2'] ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 3' THEN Select Calculation[SELECT: 'Non-Composite List'.'Parent 3'] ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Child 3.1' THEN Select Calculation[SELECT: 'Non-Composite List'.'Child 3.1'] ELSE 0
These "hard references" (SELECT statements against specific list items) will prevent the list from being set as a production list.
SOLUTION:
Create a Parents Only list (this could be imported from the Non-Composite list).
Create a Parent Logic? module, dimensioned by the Parents Only list, with a Boolean line item for each of the "logic" types (Logic 1? checked for Parent 1 and Parent 3; Logic 2? checked for Parent 2 and Child 3.1). Then you can refer to the logic above via LOOKUP:
Lookup Calculation:
IF Parent Logic?.'Logic 1?'[LOOKUP: Parent Mapping.Parents Only List] THEN 100 ELSE IF Parent Logic?.'Logic 2?'[LOOKUP: Parent Mapping.Parents Only List] THEN 200 ELSE 0
To calculate the proportion without SELECT, a couple of intermediate modules are needed.
Parent Mapping module: maps the Non-Composite parent to the Parents Only list. In this example, the mapping is automatic because the items in the Parents Only list have the same names as those in the Non-Composite list; the mapping could be a manual entry if needed. The formulas and "applies to" are:
Non Composite Parent: PARENT(ITEM('Non-Composite List')) (Applies to: Non-Composite List)
Parents Only List: FINDITEM(Parents Only List, NAME(Non Composite Parent)) (Applies to: Parents Only List)
Parents Only Subtotals module: an intermediate module is needed to hold the subtotals.
Calculation: Parent Logic Calc.Lookup Calculation[SUM: Parent Mapping.Parents Only List]
The final piece is to reference this line item in the original module.
Lookup Proportion: Lookup Calculation / Parents Only Subtotals.Calculation[LOOKUP: Parent Mapping.Parents Only List]
The list can now be set as a production list, as there are no "hard references."
Appendix: Blueprints (the original blueprint screenshots are not reproduced here).
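In place of the screenshots, here is a sketch of the blueprints reconstructed from the formulas above (module dimensions shown in parentheses; formats are inferred, so treat them as assumptions):
Parent Logic? (Parents Only List): Logic 1? (Boolean, manual entry), Logic 2? (Boolean, manual entry)
Parent Mapping (Non-Composite List): Non Composite Parent (format: Non-Composite List), Parents Only List (format: Parents Only List)
Parents Only Subtotals (Parents Only List): Calculation (Number)
Parent Logic Calc (Non-Composite List): Lookup Calculation (Number), Lookup Proportion (Number)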