We explain here a dynamic way to filter specific levels of a hierarchy, which provides a better way to filter and visualize hierarchies.
This tutorial explains how to calculate the level of each item in a hierarchy in order to apply level-specific calculations (custom summaries) or filters.
In this example we have an organization hierarchy of 4 levels (Org L1 to Org L4). For each item in the hierarchy we want to calculate a filtering module value that returns the associated level.
Context and notes
This technique addresses a specific limitation within dashboards where a composite hierarchy's level cannot be selected if the list is synchronized to multiple module objects on the dashboard.
We show the technique of creating a static filtering module based on the levels of the composite structure.
The technique utilizes the Summary method Ratio of line items corresponding to the list levels to define the value of the filtering line items. Note that it is not a formula calculation but a use of the summary method Ratio applied to the composite hierarchy.
In this example, we defined a four-level list as follows:
Defining the level of each list item
To calculate the level of each item in the lists, we need to create a module that calculates it by:
Creating one line item per level of the hierarchy, plus one technical line item.
Changing the blueprint settings of those line items according to the following table:
Line item and its Summary method (Ratio) setting:
Technical line item: Formula
Level or L4 (lowest level): L3 / Technical
L3: L2 / Technical
L2: L1 / Technical
L1: L1 / Technical
When applying these settings, the calculation module looks like this:
*Note that the Technical line item's Summary method uses Formula. The Minimum Summary method could be used instead, but it returns an error when a level of the hierarchy does not have any children and the calculated level is blank.
We can now use the line item at the lowest level—“Level (or L4)” in the example—as the basis of filters or calculations.
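Outside Anaplan, the level value this module produces can be sketched with a simple parent lookup. The snippet below is a plain-shell illustration only (the item names are made up); it mimics the result of the technique, not Anaplan's Ratio summary mechanism itself:

```shell
# Parent lookup for a sample 4-level hierarchy (illustrative names).
parent_of() {
  case "$1" in
    "Org L2 A") echo "Org L1 A" ;;
    "Org L3 A") echo "Org L2 A" ;;
    "Org L4 A") echo "Org L3 A" ;;
    *)          echo "" ;;          # top level: no parent
  esac
}

# Level = number of ancestors + 1 (top of the hierarchy is level 1).
level_of() {
  item=$1
  depth=1
  p=$(parent_of "$item")
  while [ -n "$p" ]; do
    depth=$((depth + 1))
    p=$(parent_of "$p")
  done
  echo "$depth"
}

level_of "Org L4 A"   # prints 4
```

A filter such as "show only levels 1 and 4" then reduces to keeping the items whose computed level is 1 or 4.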
Applying a filter on specific levels when synchronization is enabled
When synchronization is enabled, the option “Select levels to show” is not available. Instead, a filter based on the level calculated can be used to show only specific levels.
In the example, we apply a filter to show only levels 1 and 4:
This gives the following result:
Personal dashboards are a great new feature enabling end users to save a personalized view of a dashboard. To get the most out of this feature, here are a few tips and tricks.
Tidy up dashboards
Any change to a master dashboard (using the Dashboard Designer) will reset all personal views of a dashboard, so before enabling personal dashboards, take some time to ensure that the current dashboards are up to date:
Implement any pending development changes (including menu options)
Turn on the Dashboard Quick Access toolbar (if applicable)
Check and amend all text box headings and comments for size, alignment, spelling and grammar
Delete or disable any redundant dashboards to ensure end users don’t create personal views of obsolete dashboards
Use filters rather than show/hide
It’s best practice to use a filter rather than show and hide for the rows and/or columns on a grid.
This is now more beneficial because amending the items shown or hidden on a master dashboard will reset the personal views. For example, suppose you want to display just the current quarter of a timescale. You could manually show/hide the relevant periods but, at quarter end, when the Current Period is updated, the dashboard will need to be amended and all those personal views will be reset. If you use a filter, referencing a time module, the filter criteria will update automatically, as will the dashboard. No changes are made to the master dashboard and all the personal views are preserved.
Create a communication and migration strategy
Inevitably, there are going to be changes that must be made to master dashboards. To minimize the disruption for end users, create a communication plan and follow a structured development program. These can include the following:
Bundle up dashboard revisions into logical sets of changes
Publish these changes at regular intervals (e.g., on a monthly cycle)
Create a regular communication channel to inform users of changes and the implications of those changes
Create a new dashboard and ask end users to migrate to the new dashboard over a period of time before switching off the old dashboard
Application Lifecycle Management (ALM)
If you are using ALM, note that any structural changes to master dashboards will reset all personal views of dashboards.
Dimension Order affects Calculation Performance
Ensuring consistency in the order of dimensions will help improve performance of your models. This consistency is relevant for modules and individual line items. Why does the order matter? Anaplan creates and uses indexes to perform calculations. Each cell in a module where dimensions intersect is given an index number.
Here are two simple modules dimensioned by Customer and Product. In the first module, Product comes first and Customer second and in the second module, Customer is first and Product second.
In this model, there is a third module that calculates revenue as Prices * Volumes.
Anaplan assigns indexes to the intersections in the module. Here are the index values for the two modules. Note that some of the intersections are indexed the same in both modules (Customer 1 and Product 1, Customer 2 and Product 2, and Customer 3 and Product 3), while the remaining cells have different index numbers. For example, Customer 1 and Product 2 is indexed with the value of 4 in the top module and the value of 2 in the bottom module.
The calculation is Revenue = Price * Volume.
To run the calculation, Anaplan performs the following operations by matching the index values from the two modules.
Since the index values are not aligned, the processor must scan the index values to find a match before performing each calculation.
When the dimensions in the module are reordered, these are the index values:
The index values for each of the modules are now aligned. As line items of the same dimensional structure have an identical layout, the data is laid out linearly in memory, and the calculation process accesses memory in a completely linear and predictable way. Modern microprocessors and memory sub-systems are optimized to recognize this pattern of access and to pre-emptively fetch the required data.
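For illustration, the flat-index arithmetic behind those tables can be sketched as follows. This is a plain-shell sketch of the idea with three customers and three products, not Anaplan's actual internal indexing:

```shell
# Module A is dimensioned Product then Customer; module B Customer then Product.
nC=3   # number of customers
nP=3   # number of products

# Args: customer index, product index (0-based); prints the 1-based cell index.
idxA() { echo $(( $2 * nC + $1 + 1 )); }   # Product outer, Customer inner
idxB() { echo $(( $1 * nP + $2 + 1 )); }   # Customer outer, Product inner

idxA 0 1   # Customer 1 x Product 2 -> 4 in module A
idxB 0 1   # same intersection      -> 2 in module B: the indexes diverge
idxA 1 1   # Customer 2 x Product 2 -> 5
idxB 1 1   # -> 5: diagonal cells match, as noted above
```

When the dimension orders match, every intersection gets the same index in both modules, so a multiplication like Price * Volume can walk both blocks of memory in step.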
How does the dimension order become different between modules? When you build a module, Anaplan uses the order in which you drag the lists onto the Create Module dialog. The order also depends on where the lists are added: the lists that you add to the pages area come first, then the lists that you add to the rows area, and finally the lists added to the columns area.
It is simple to re-order the lists and ensure consistency. Follow these steps:
On the Modules pane, (Model Settings>Modules) look for lists that are out of order in the Applies To column. Click the Applies To row that you want to re-order, then click the ellipsis.
In the Select Lists dialog, click OK.
In the Confirm dialog, click OK.
The lists will be in the order that they appear in General Lists.
When you have completed checking the list order in the modules, click the Line Items tab and check the line items. Follow steps 1 through 3 to re-order the lists.
Subsets and Line Item Subsets
One word of caution about Subsets and Line Item subsets. In the example below, we have added a subset and a Line Item Subset to the module:
The Applies To is as follows:
Clicking on the ellipsis, the dimensions are re-ordered to:
The general lists are listed in order first, followed by subsets and then line item subsets. You can still re-order the dimensions by double-clicking in the Applies To column and manually copying or typing the dimensions in the correct order.
The calculation performance relates to the common lists between the source(s) and the target. The order of separate lists in one or other doesn’t have any bearing on the calculation speed.
Note: This article is meant to be a guide on converting an existing Anaplan security certificate to PEM format for the purpose of testing its functionality via cURL commands. Please work with your developers on any more in-depth application of this process. The current Production API version is v1.3.
Using a certificate to authenticate will eliminate the need to update your script when you have to change your Anaplan password. To use a certificate for authentication with the API, it first has to be converted into a Base64 encoded string recognizable by Anaplan. Information on how to obtain a certificate can be found in Anapedia.
This article assumes that you already have a valid certificate tied to your user name.
1. To properly convert your Anaplan certificate to be usable with the API, you will first need OpenSSL (https://www.openssl.org/). Once you have that, you will need to convert the certificate to PEM format. The PEM format uses the header and footer lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".
2. If your certificate is not in PEM format, you can convert it to PEM using the following OpenSSL command, where "certificate-(certnumber).cer" is the name of the source certificate and "certtest.pem" is the name of the target PEM certificate:
openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem
View the PEM file in a text editor. It should be a Base64 string starting with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----".
3. View the PEM file to find the CN (Common Name) using the following command:
openssl x509 -text -in certtest.pem
It should look something like "Subject: CN=(Anaplan login email)". Copy the Anaplan login email.
4. Use a Base64 encoder (e.g. https://www.base64encode.org/) to encode the CN and PEM string, separated by a colon. For example, paste this in:
(Anaplan login email):-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----
5. You now have the Base64-encoded string necessary to authenticate API calls. For example, using cURL to GET a list of the Anaplan workspaces for the user that the certificate belongs to:
curl -H "Authorization: AnaplanCertificate (encoded string)" https://api.anaplan.com/1/3/workspaces
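The encoding in steps 4 and 5 can also be done from the command line instead of a web encoder. A minimal sketch, assuming the standard `base64` utility is available; the CN and certificate body below are placeholders for your own values:

```shell
# Placeholders: substitute your own CN and the contents of certtest.pem.
CN="user@example.com"
PEM="-----BEGIN CERTIFICATE-----
(PEM certificate contents)
-----END CERTIFICATE-----"

# Join CN and PEM with a colon, then Base64-encode (stripping the line
# wrapping that some base64 tools add).
AUTH=$(printf '%s:%s' "$CN" "$PEM" | base64 | tr -d '\n')

# The resulting header, as used in the cURL example above:
# curl -H "Authorization: AnaplanCertificate ${AUTH}" \
#      https://api.anaplan.com/1/3/workspaces
echo "$AUTH"
```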
“Back to the Future”
Imagine this scenario:
You are in the middle of making changes in your development model and have been doing so for the last few weeks. The changes are not complete and are not ready to synchronize. However, you just received a request for an urgent fix from the user community that is critical for the forthcoming monthly submission. What do you do?
What you don’t want to do is take the model out of deployed mode! You also don’t want to lose all the development work you have been doing.
Don’t worry. Following the procedure below will ensure you can apply the hotfix quickly and keep your development work.
The following diagram illustrates the procedure:
It’s a two-stage process:
Roll the development model back to a version that doesn’t contain any changes (is the same as production) and apply the hotfix to that version.
Add a new revision tag to the development model as a temporary placeholder. (Note the History ID of the last structural change; you'll need it later.)
On the development model, use History to restore to a point where development and production were identical (before any changes were made in development).
Apply the hotfix.
Save a new revision of the development model.
Sync the development model with the production model.
Production now has its hotfix.
Restore the changes to development and apply the hotfix.
On the development model, use the History ID from Stage 1 – Step 1 to restore to the version containing all of the development work (minus the hotfix).
Reapply the hotfix to this version of development.
Create a new revision of the development model.
Development is now back to where it was, now with the hotfix applied.
When your development work is complete, you can promote the new version to production using ALM best practice.
Master data hubs
Master data hubs are used within the Anaplan platform to house an organization’s data in a single model. This hub imports data from the corporation’s data warehouse. If no single source is available, such as a data warehouse, then the master data hub will collect data from individual source systems instead. Once all data is consolidated into a single master data hub, it may then be distributed to multiple models throughout an organization’s workspace.
Anaplan Data Architecture
Architecture best practices
One or more Anaplan models may make up the data hub. It is a good practice to separate the master data (hierarchies, lists, and properties) from the transactional data.
The Anaplan business applications are then synchronized from these data hub models using Anaplan's native model-to-model internal imports.
As a best practice, users should implement only incremental synchronization, which synchronizes only the data in the application that has changed since the last sync from the data hub. Doing this usually makes synchronization very fast.
The graphic below displays best practices for doing this:
Another best practice organizations should follow when building a master data hub is to import a list with properties into a module rather than directly into a list. Using this method, line items are created to correspond with the properties and are imported using the Text data type. This imports all of the data without errors or warnings, and allows for very smart dashboards, built from sorts and filters, that highlight integration issues.
Once imported, the data in the master data hub module can then be imported to a list in the required model.
Data hub best practices
The following list consists of best practices for establishing data architecture:
Rationalize the metadata
Balanced hierarchies (not ragged) will ease reporting and security settings
Identify your metric and KPIs and what drives them
Do not try to reconcile disconnected targets to bottom-up plans entered at line-item level.
Example: Use cost per trip and number of trips for travel expenses, as opposed to inputting every line of travel expense
Simplify the process
Reduce the number of approval levels (threshold-based)
Implement rolling forecasts
Report within the planning tool; keep immediacy where needed
Think outcome and options, not input
Transform your existing process. Do not re-implement existing Excel®-based processes in Anaplan
Aggregate transactions to SKU level, customer ID
Plan at higher level and cascade down
Plan the number of to-be-hired (TBH) positions by role for TBH headcount expenses, as opposed to inputting every TBH employee.
Sales: Plan at sub-region level, then cascade to rep level
Plan at profit center level, allocate at cost center level based on drivers
The Anaplan Way
Always follow the phases of The Anaplan Way when establishing a master data hub, even in a federated approach:
The Anaplan platform can be configured and deployed in a variety of ways. Two configurations that should be examined prior to each organization's implementation of Anaplan are the Central Governance-Central Ownership configuration and the Central Governance-Federated Ownership configuration.
Central Governance-Central Ownership configuration
This configuration focuses on using Agile methodology to develop and deploy the Anaplan platform within an organization. Development centers around a central delivery team that is responsible for maintaining a master data hub, as well as all models desired within the organization, such as sales forecasting, T&Q planning, etc.
Central delivery team
In this configuration, the central delivery team is also responsible for many other tasks and business-user inputs, which are carried out in Anaplan and delivered to the rest of the organization. These include:
Building the central model
Communicating release expectations throughout development
Creating and managing hierarchies in data
Data loads (data imports and inputs)
Defect and bug fixes in all models
New use case project development
Agile methodology—The Anaplan Way
As previously mentioned, this configuration also focuses on developing and deploying new and improved releases using the Agile methodology. This strategy begins with the sprint planning step and moves to the final deployment step. Once a project reaches deployment, the process begins again for either the next release of the project or the first release of a new project. Following this methodology increases stakeholder engagement in releases, promotes project transparency, and shows project results in shorter timeframes.
Central Governance-Federated Ownership configuration
This configuration depends on a central delivery team to first produce a master data hub and/or master model, and then allow the individual departments within an organization to develop and deploy their own applications in Anaplan. These releases are small subsets of the master model that allow departments to perform “what-if” modeling and control their own models or independent applications needed for specific local business needs.
Central delivery team
In this configuration, the central delivery team is responsible only for the following:
Creating and managing hierarchies in data
Data loads (data imports and inputs) and defect fixes
Capturing and sharing modeling best practices with the rest of the teams
Federated model ownership
In this model, each department and/or region is responsible for their own development. This includes:
Small subsets of the master model for flexible “what if” modeling
Custom or in-depth analysis/metrics
Independent use case models
Loose or no integration with master model
One-way on-demand data integration
Optional data hub integration
Pros and cons
Both of these configurations contain significant pros and cons for implementing them into an organization:
Central Governance-Central Ownership pros
Modeling practices within an organization become standardized for all new and updated releases.
The request process for new projects becomes standardized. One single priority list of enhancement requests is maintained and openly communicated.
Communication of platform releases, new build releases, downtime, and more comes from one source and is presented in a clear and consistent manner.
Workspace and licenses
This configuration requires the fewest workspaces, which saves on space used in Anaplan, as well as the fewest workspace admin licenses.
Central Governance-Central Ownership cons
All build requests, including new use cases, enhancements, and defect fixes, go into a queue to be prioritized by the central delivery team.
This configuration requires a significant weekly time commitment from the central delivery team to prioritize all platform requirements.
Central Governance-Federated Ownership pros
Business user development
This configuration allows for true business development capabilities without compromising the integrity of the core solution developed by the central delivery team.
Maximizes the return on investment and reduces shadow IT processes by enabling the quick spread of the Anaplan platform across an organization, as multiple parties develop simultaneously.
Reduces or completely eliminates queue wait times for new use cases and/or functionality.
Speed of implementation
Having the central team take care of all data integration work via the data hub speeds up application design by enabling federated teams to take their actuals and master data out of an Anaplan data hub model, as opposed to having to build their own data integration with source systems.
Central Governance-Federated Ownership cons
Workspace and licenses
More workspaces and workspace admin licenses may be necessary in the platform.
In this configuration it is challenging to ensure that model building architecture procedures and best practices are being followed in each model. It requires the central Center of Excellence team to organize recurring meetings with each application builder to share experience and best practices.
Business users without model building skills may have a difficult time building and maintaining their requirements.
Little and often
Would you spend weeks on your budget submission spreadsheet or your college thesis without once saving it?
The same should apply to making development changes and setting revision tags. Anaplan recommends that, during the development cycle, you set revision tags at least once per day. We also advise testing the revision tags against a dummy model if possible.
The recommended procedure is as follows:
After a successful sync to your production model, create a dummy model using the ‘Create from Revision’ feature. This will create a small test model with no production list items.
At the end of each day (as a minimum), set a revision tag and attempt to synchronize the test model to this revision tag. The whole process should only take a couple of minutes.
Repeat step 2 until you are ready to promote the changes to your production model.
Why do we recommend this?
There are a very small number of cases where combinations of structural changes cause a synchronization error (99 percent of synchronizations are successful). The Anaplan team is actively working to provide a resolution within the product, but in most cases, splitting changes between revision tags allows the synchronization to complete. In order to understand the issue when a synchronization fails, our support team needs to analyze the structural changes between the revisions.
Setting revision tags frequently provides the following benefits:
The number of changes between revisions is reduced, resulting in easier and faster issue diagnosis
It provides an early warning of any problems so that someone can investigate them before they become critical
The last successful revision tag allows you to promote some, if not most, of the changes if appropriate
In some cases, a synchronization may fail initially, but when applying the changes in sequence the synchronization completes. Using the example from above:
Synchronizations to the test model for R1, R2 and R3 were all successful, but R3 fails when trying to synchronize to production.
Since the test model successfully synchronized from R2 and then R3, you can repeat this process for the production model.
The new comparison report provides clear visibility of the changes between revision tags.
Recently, I used Anaplan Connect for the first time; I used it to import Workday and Jobvite data into my Anaplan model. This was my first serious data integration. After my experience I put together some tips and tricks to help other first-timers succeed.
Firstly, there are a few things you can do to set yourself up for success:
Download the most up-to-date version of Java.
Download Anaplan Connect from Anaplan's Download Center.
Make sure you can run Terminal (Mac) or the Command Prompt (Windows).
Make sure you have a plaintext editor to edit your script (TextEdit or Notepad are available by default, but I recommend Sublime Text).
Read through the Anaplan Connect User Guide in the "doc" folder of the Anaplan Connect folder you downloaded in step #2.
Once you have these items completed then you’re ready to start writing your script.
In the Anaplan Connect folder that you downloaded, there are some example script files, “example.bat” for Windows and “example.sh” for Mac. The best way to start is to copy the right example file for your operating system, then alter it.
When you first navigate the example script, you'll see a section at the top that contains what are called variables (e.g., ModelId, WorkspaceId, AnaplanUser). If you keep your variables at the top and then use them throughout your script, it's easier to edit those components because they are defined in only one place. I highly recommend adding a variable for your Anaplan certificate so you don't have to manually enter your password every time the script runs.
When you begin to piece together your own script, it will include some combination of Anaplan Connect Commands (you can check out the full list in an appendix of the Quick Start Guide for Anaplan Connect, on Anapedia). Because my script was focused on importing data from an outside source into Anaplan, it included the following components: file, put, import, execute, output. Each of these has a different function:
File identifies the File Name (i.e. Workday.csv).
Put identifies the File Path of the file you’re importing (i.e. User/Admin/Documents/Workday.csv).
Import identifies the action Anaplan is supposed to run (i.e. Workday_Import).
Execute is what runs the process; nothing needs to follow this.
Output identifies what happens to errors. If you would like those to go to a file then you include the location of the file following the output (i.e. User/Admin/Documents/ErrorLog.csv).
It’s worth noting that you can have multiple actions behind a file. For instance, I can have a command sequence like this: file-put-import-execute-output-put-import-execute-output. I found this useful when I used a single file to update multiple lists and modules; it saved me from needing to upload a file over and over again.
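Putting those pieces together, a script skeleton looks roughly like the sketch below. It is modeled on the example.sh shipped with Anaplan Connect; the IDs, paths, file names, and action names are placeholders, and your version's example file remains the authoritative template:

```shell
#!/bin/sh
# Variables kept at the top so they only need editing in one place.
# All values below are placeholders.
AnaplanUser="user@example.com"
WorkspaceId="YOUR_WORKSPACE_ID"
ModelId="YOUR_MODEL_ID"
# Tip from above: define a certificate variable here as well, so the
# script does not prompt for a password on every run.

# file -> put -> import -> execute -> output, as described in the text.
Operation="-file 'Workday.csv' -put '/Users/Admin/Documents/Workday.csv' \
 -import 'Workday_Import' -execute \
 -output '/Users/Admin/Documents/ErrorLog.csv'"

Command="./AnaplanClient.sh -u ${AnaplanUser} -workspace ${WorkspaceId} -model ${ModelId} ${Operation}"
echo "${Command}"
# Uncomment to actually run the command once AnaplanClient.sh is in place:
# exec /bin/sh -c "${Command}"
```

The echo before the (commented-out) exec is a cheap way to eyeball the assembled command while you test each chunk.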
When you are identifying the file path for the script, it is easiest to keep Terminal open. When you drag and drop a file into Terminal, it automatically populates the file path. This helps you avoid syntax errors, since you can copy and paste from Terminal into the script.
Once you assemble your commands, it’s time to start testing your script! When you start testing the script, it is helpful to break it into small pre-built test chunks that build on one another. That way if something goes wrong, it won’t take as long to find out where the error is. Additionally, it makes the script more digestible in the event that it needs to be edited in the future.
As you test each of these chunks, you may run into some errors, so here are a few troubleshooting tips to get you started.
If your terminal reports that there is a syntax error, then there is most likely a pesky apostrophe, a space, or some other special character in your script that is causing the error. Comb through the code, especially your filenames, and find the error before attempting to run it again.
Secondly, you may run into a permissions error. These typically arise when your script is not currently an executable file. When I encountered this error, changing the permissions on the file solved it.
Overall, once you know these basics of Anaplan Connect, you can build a script, even a complicated one! When in doubt, see if somebody else has asked about a similar issue in the discussion section; if you don't find anything there, you can always post your own question. Sometimes a second set of eyes is all you need, and our integrations site has some of the best in the biz contributing!
Best of luck to the other rookies out there!
Tableau Connector for Anaplan
The Tableau Anaplan native integration provides an easy way to see and understand your Anaplan data using Tableau. Using the Tableau Connector for Anaplan, you can directly connect to Anaplan in a few easy steps.
The connector is native to Tableau and built using the Anaplan API. It enables you to import Anaplan data into Tableau’s in-memory query engine using export actions created and saved in Anaplan. With a direct connection to Anaplan, people within your organization can effectively work with Tableau and get actionable insights on their data. Users can publish their Anaplan extract as a data source to Tableau Online or Tableau Server and keep their data refreshed on a regular basis.
To start using the Tableau - Anaplan connector, you need an Anaplan account with a workspace and model, and a license for Tableau Desktop. You will also need to configure, in Anaplan, the Export actions that you plan to use with Tableau. Tableau supports only extract connections for Anaplan, not live connections; you can update the data by refreshing the extract.
To try the Tableau Connector for Anaplan, visit https://www.tableau.com/products/trial.
For an introduction to the Tableau - Anaplan integration, refer to the page below:
More details about configuring the connector in Tableau are here:
Information on configuring Anaplan to use the Tableau Connector, as well as frequently asked questions, is available on Anapedia.
It is important to understand what Application Lifecycle Management, or ALM, enables clients to do within Anaplan.
In short, ALM enables clients to effectively manage the development, testing, deployment, and ongoing maintenance of applications in Anaplan. With ALM, it is possible to introduce changes without disrupting business operations: you can securely and efficiently manage and update your applications, with governance across different environments, and quickly deploy changes as you test and release development work into production, leaving more time to run "what-if" scenarios in your planning cycles.
Learn more here: Understanding model synchronization in Anaplan ALM
Training on ALM is also available in the Education section: 313 Application Lifecycle Management (ALM).
Anaplan has built several connectors to work with popular ETL (Extract, Transform, and Load) tools. These tools provide a graphical interface through which you can set up and manage your integration. Each of the tools that we connect to has a growing library of connectors, providing a wide array of possibilities for integration with Anaplan. These ETL tools require subscriptions to take advantage of all their features, making them an especially appealing option for integration if you already have a subscription.
Anaplan has a connector available in MuleSoft's community library that allows for easy connection to cloud systems such as Netsuite, Workday, and Salesforce.com as well as on-premises systems like Oracle and SAP. Any of these integrations can be scheduled to recur on any period needed, easily providing hands-off integration. MuleSoft uses the open-source AnyPoint studio and Java to manage its integrations between any of its available connectors. Anaplan has thorough documentation relating to our MuleSoft connector on the Anaplan MuleSoft GitHub.
SnapLogic has a Snap Pack for Anaplan that leverages our API to import and export data. The Anaplan Snap Pack provides components for reading data from and writing data to the Anaplan server using SnapLogic, as well as executing actions on the Anaplan server. This Snap Pack empowers you to connect your data and organization on the Anaplan Platform without missing a beat.
Anaplan has a connector available on the Boomi marketplace that will empower you to create a local Atom and transfer data to or from any other source with a Boomi connector. You can use Boomi to import or export data using any of your pre-configured actions within Anaplan. This technology removes any need to store files as an intermediate step, as well as facilitating automation.
Anaplan has partnered with Informatica to build a connector on the Informatica platform. Informatica has connectors for hundreds of applications and databases, giving you the ability to leverage their integration platform for many other applications when you integrate these applications with Anaplan. You can search for the Anaplan Connector on the Informatica marketplace or request it from your Informatica sales representative.