A large, complex model (for example, 10B cells) can take up to 10 minutes to load the first time it is used after 60 minutes of inactivity.
The most effective way to reduce load time is to identify which formulas take the most time. This requires Anaplan L3 support (ask for a Model Opening Analysis), but you can also reduce load time yourself by applying the formula best practices listed above.
List setup is another lever: text properties on a list can increase load times, and subsets on lists can disproportionately increase them. It is best practice not to use list properties but to house the attributes in a System module dimensioned by the list. See Best practice for Module design for more details.
A model saves when the number of changes made by end users exceeds a certain threshold. The save can take several minutes and is a blocking operation. Administrators have no leverage over model saves besides formula optimization and reducing model complexity. Using ALM and Deployed mode increases this threshold, so it is best to use Deployed mode whenever possible.
A model rolls back in some cases of an invalid formula, or when a model builder attempts to adjust a setting that would result in an invalid state. In some large models, the rollback takes approximately the time it takes to open the model, plus up to 10 minutes' worth of accumulated changes, followed by a model save.
The recommendation is to use ALM and have a DEV model whose size does not exceed 500M cells, with production lists limited to a few dozen items, and TEST and PROD models with the full size and large lists. Since no formula editing happens in TEST or PROD, those models will never roll back after a user action. The DEV model can roll back, but if it is small this takes only a few seconds.
Details of known issues
Performance issues with long nested formulas
This occurs when a long formula on time is built up from nested intermediate calculations.
If the model size does not prevent you from adding extra line items, it is better practice to create multiple intermediate line items and reduce the size of the formula, rather than nesting all intermediate calculations into one gigantic formula.
This applies to formulas using SUM, LOOKUP, and SELECT.
Combining SUM and LOOKUP in the same line item formula can cause performance issues in some cases. If you have noticed a drop in performance after adding a combined SUM and LOOKUP to a single line item, then split it into two line items.
RANKCUMULATE causes slowness
A current issue with the RANKCUMULATE function can make the time to open the model, including rollback, up to five times longer than it should be.
There is currently no suitable workaround; our recommendation is to stay within the constraints defined in Anapedia.
SUM/LOOKUP with large cell count
A known issue with SUM/LOOKUP combinations within a formula can lead to slow model open and calculation times, particularly if the line item has a large cell count. Separate the formula into different line items to reduce calculation time: parts of the formula that affect only a subset of the data then need to recalculate over fewer cells.
In the following example, none of the line items apply to time or versions:
Y = X[SUM: R, LOOKUP: R]
Y applies to [A, B]
X applies to [A, B]
R applies to [B], formatted as list [C]
The workaround is to add a new line item, 'intermediate', whose 'Applies To' is set to the list that 'R' is formatted as:
intermediate = X[SUM: R]
Y = intermediate[LOOKUP: R]
This issue is currently being worked on by Development, and a fix will be available in a future release.
Calculations are over non-common dimensions
Anaplan calculates more quickly if calculations are over common dimensions. Again, this is best seen in an example. Given lists W and X:
Y = A + B
Y applies to [W, X]
A applies to [W]
B applies to [W]
This performs more slowly than:
Y = Intermediate
Intermediate = A + B
Intermediate applies to [W]
with all other dimensions the same as above. Similarly, you can substitute a formula for A and B above, e.g. SUM/LOOKUP calculations.
Cell history truncated
Currently, history generation has a time limit of 60 seconds. The generation is split into three stages, with one third of the time allocated to each.
The first stage builds the list of columns required for the grid, which involves reading all the history. If this takes more than 20 seconds, the user receives the message "history truncated after x seconds - please modify the date range," where x is the number of seconds it took. No history is generated.
If the first stage completes within 20 seconds, it goes on to generate the full list of history.
In the grid only the first 1000 rows are displayed; the user must Export history to get a full history. This can take significant time depending on volume.
The same steps are taken for model and cell history: the cell history is generated by loading the entire model history and searching it for the relevant cell information. When the model history gets too large, it is truncated to prevent performance issues. Unfortunately, this can make it impossible to retrieve the cell history that is needed.
Make it real time when needed
Do not make it real time unless it needs to be.
By this we mean: do not have line items where users input data referenced by other line items unless they have to be. One way around this is to give users a data-input section that is not referenced anywhere, or as little as possible, and then, say at the end of the day when no users are in the model, run an import that updates the cells where the calculations are done. This may not always be possible if the end user needs to see the calculations resulting from their inputs, but if you can limit real-time calculation to just what the user needs to see, and use imports during quiet times for the rest, this will still help.
We often see this with reporting modules, not all of which need to be recalculated in real time; in many cases they can be calculated the day after.
Don't have line items that depend on other line items unnecessarily. This can prevent Anaplan from utilizing the maximum number of calculations it can do at once, because a line item's formula cannot be calculated while it is waiting on the results of other line items. A basic example can be seen with line items A, B, and C having the formulas:
A - no formula
B = A
C = B
Here B is calculated first, and C only after it. Whereas with the setup:
A - no formula
B = A
C = A
B and C can be calculated at the same time. This also means that if line item B is not needed, it can be removed, further reducing the number of calculations and the size of the model. This needs to be considered on a case-by-case basis; it is a tradeoff between duplicating calculations and utilizing as many threads as possible. If line item B is referenced by several other line items, it may indeed be quicker to keep it.
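The scheduling difference can be sketched with a dependency graph. The Python sketch below (using the standard-library graphlib module; the line-item names mirror the hypothetical A/B/C example above) shows which line items could be calculated concurrently in each setup:

```python
from graphlib import TopologicalSorter

def ready_batches(graph):
    """Return successive groups of line items that can be calculated
    concurrently: each batch depends only on earlier batches."""
    ts = TopologicalSorter(graph)
    ts.prepare()
    batches = []
    while ts.is_active():
        batch = sorted(ts.get_ready())
        batches.append(batch)
        ts.done(*batch)
    return batches

# Each key maps a line item to the line items its formula references.
sequential = {"A": set(), "B": {"A"}, "C": {"B"}}  # C = B forces a chain
parallel = {"A": set(), "B": {"A"}, "C": {"A"}}    # B and C both read A

print(ready_batches(sequential))  # [['A'], ['B'], ['C']] - three serial steps
print(ready_batches(parallel))    # [['A'], ['B', 'C']] - B and C run together
```

Fewer, wider batches mean more of the calculation can use multiple threads at once, which is the tradeoff described above.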
Summary cells often take processing time even if they are not actually recalculated because they must check all the lower level cells.
Reduce summaries to ‘None’ wherever possible. This not only reduces aggregations, but also the size of the model.
It is important to understand what Application Lifecycle Management (ALM) enables clients to do within Anaplan.
In short, ALM enables clients to effectively manage the development, testing, deployment, and ongoing maintenance of applications in Anaplan. With ALM, you can introduce changes without disrupting business operations: applications are managed and updated securely and efficiently, with governance across different environments, and changes can be deployed quickly, letting you run more "what-if" scenarios in your planning cycles as you test and release development changes into production.
Learn more here: Understanding model synchronization in Anaplan ALM
Training on ALM is also available in the Education section: 313 Application Lifecycle Management (ALM).
This guide assumes you have set up your runtime environment in Informatica Cloud (Anaplan Hyperconnect) and that the agent is up and running. It focuses solely on how to configure the ODBC connection and set up a simple synchronization task importing data from one table in PostgreSQL to Anaplan. Informatica Cloud has richer features that are not covered in this guide.
The built-in help is contextual and helpful as you go along should you need more information than I have included in this guide.
The intention of this guide is to help you set up a simple import from PostgreSQL to Anaplan; it is therefore kept short and does not cover all related areas.
This guide assumes you have run an import using a CSV file, as this needs to be referenced when the target connection is set up, described under section 2.2 below. To prepare, I exported the data I wanted to use for the import from PostgreSQL to a CSV file. I then mapped this CSV file to Anaplan and ran an initial import to create the import action that is needed.
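As a minimal sketch of that preparation step, the CSV only needs a header row matching the fields you will map in Anaplan. The column names and rows below are hypothetical placeholders; in practice the file would come straight out of PostgreSQL, for example via COPY my_table TO '/tmp/export.csv' WITH CSV HEADER:

```python
import csv

# Hypothetical columns and rows standing in for the PostgreSQL table contents.
header = ["code", "name", "amount"]
rows = [
    ["P001", "Product Sales", 1200],
    ["P002", "Service Revenue", 800],
]

# Write the file that will be mapped to Anaplan for the initial import.
with open("export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```

The initial import run against this file creates the import action that the Informatica target connection references later.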
1. Set up the ODBC connection for PostgreSQL
In this example I am using the 64-bit version of the ODBC connection running on my local laptop. I have set it up for User DSN rather than System DSN, but the process is very similar should you need to set up a System DSN.
You will need to download the relevant ODBC driver from PostgreSQL and install it to be able to add it to your ODBC Data Sources as per below (click the Add… button and you should be able to select the downloaded driver).
Clicking the configuration button for the ODBC Data Source opens the configuration dialogue. The configurations needed are:
Database is the name of your PostgreSQL database.
Server is the address to your server. As I am setting this up on my laptop, it’s localhost.
User Name is the username for the PostgreSQL database.
The password is the password for the PostgreSQL database.
Port is the port used by PostgreSQL (the default is 5432). You can find it in your PostgreSQL configuration.
Testing the connection should not return any errors.
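For reference, the same fields make up an ODBC connection string, so the dialogue's Test button can be replicated in code. In this Python sketch the driver name is an assumption (use whichever name your installed PostgreSQL ODBC driver registers under), and passing the string to pyodbc.connect would attempt the same connection test:

```python
def postgres_odbc_conn_str(database, server, port, user, password,
                           driver="PostgreSQL Unicode(x64)"):
    """Assemble an ODBC connection string from the configuration fields
    listed above. The default driver name is an assumption; check your
    ODBC Data Sources dialogue for the exact registered name."""
    return (f"DRIVER={{{driver}}};SERVER={server};PORT={port};"
            f"DATABASE={database};UID={user};PWD={password}")

conn_str = postgres_odbc_conn_str("mydb", "localhost", 5432, "anaplan", "secret")
print(conn_str)
# A real connection test would be pyodbc.connect(conn_str) in a try/except.
```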
2. Configuring source and target connections
After setting up the ODBC connection as described above, you will need to set up two connections, one to PostgreSQL and one to Anaplan. Follow the steps below to do this.
2.1 Source connection – PostgreSQL ODBC
Select Configure > connection in the menu bar to configure a connection.
Name your connection and add a description
Select type – ODBC
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Insert the username for the database (same as you used to set up the ODBC connection).
Insert the password for the database (same as you used to set up the ODBC connection).
Insert the data source name. This is the name of the ODBC connection you configured earlier.
Code page would need to correspond to the character set you are using.
Testing the connection should give you the confirmation below. If so, you can click Done.
2.2 Set up target connection – Anaplan
The second connection that needs to be set up is the connection from Informatica Cloud to Anaplan.
Name your connection and add a description if needed
Select type – AnaplanV2
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Auth type – I am using Basic Auth, which requires your Anaplan user credentials.
Insert the Anaplan username
Insert the Anaplan password
Certificate Path location – leave blank if you use Basic Auth.
Insert the workspace ID (open your Anaplan model and select Help > About).
Insert the model ID (find in the same way as for workspace ID)
I have left the remaining fields as per default setting.
Testing the connection should not return any errors.
3 Task wizard – Data synchronization
The next step is to set up a data synchronization task to connect the PostgreSQL source to the Anaplan target. Select Task Wizards in the menu bar and navigate to Data Synchronization, as per the screenshot below.
This opens the task wizard, starting with defining the Data Synchronization task as per below. Name the task and select the relevant task operation. In this example I have selected Insert, but other task operations are available, such as Update and Upsert.
Click Next for the next step in the workflow which is to set up the connection to the source. Start by selecting the connection you defined above under section 2.1. In this example I am using a single table as source and have therefore selected single source. With this connection you can select the source object with the Source Object drop down. This will give you a data preview so you can validate the source is defined correctly. The source object corresponds to the table you are importing from.
The next step is to define the target connection; you will be using the Anaplan connection that was set up under section 2.2 above.
The target object is the import action created when you ran the CSV import in the preparation step described above. The wizard will show a preview of the target module columns.
The next step in the process is the Data Filters that has both a Simple and an Advanced mode.
I am not using any data filters in this example and please refer to the built-in help for further information on how to use this.
In the field mapping, the fields will either be mapped automatically, if the names in the source and target correspond, or you will need to map them manually by dragging and dropping the fields from the source to the target. Once done, select Validate Mapping to check that no errors are generated by the mapping.
The last step is to define whether to use a schedule to run the connection or not. You will also have the option to insert pre-processing commands and post-processing commands and any parameters for your mapping. Please refer to the built-in help for guidance on this.
After running the task, the activity log will confirm whether the import ran without errors or warnings.
As I mentioned initially, this is a simple guide to help you to set up a simple, single source import. Informatica Cloud does have more advanced options as well, both for mappings and transformations.
The Bring Your Own Key (BYOK) application now lets you take ownership of the encryption keys for your model data. If you have access to the Anaplan Administration tool, you can encrypt and decrypt selected workspaces using your own AES-256 keys. Unlike system master keys, keys created with BYOK belong to you, and you are responsible for securing them. There is no mechanism by which Anaplan personnel can access your keys. See the Bring Your Own Key (BYOK) user guide for details. Bring Your Own Key (BYOK) is an add-on product that your organization can purchase if it has the Enterprise edition.
Making sure that production data lists are correctly marked within a model is a key step in setting up and using ALM. This guide provides a solution for making revisions to a model to allow a list to be tagged as a production data list. Please note: this solution doesn't work if there are hard-coded references on non-composite summary items. For more information on working with production lists and ragged hierarchies, please visit Production lists and ragged hierarchies logic.
The issue arises as a model administrator needs to tag a production data list, but there are hard-coded references in the model that won’t allow the person to do so. When this occurs and the model administrator tries to tag it as a production list, they will get a warning similar to this:
See Formula Protection for more details.
To fix this issue, all direct formula references to production data lists need to be changed to be indirect references to lists using either LOOKUPs or Boolean formatted conditional logic. Below, you will find a step-by-step guide to replacing these formulas.
Identify formulas with hard-coded references
There is now an easy way to identify all of the formulas which are hard-coded to production data lists.
Check the 'Referenced in Formula' column in the General Lists section.
This will show the line items where the list is used. Check the respective formula for hard-coded references. If there are no hard-coded references, then it is OK to check the list as a production data list. This is the recommended approach, as just setting the lists without prior checking may lead to a rollback error being generated, which could be time-consuming for large models (as well as frustrating).
You can also export the General Lists grid, which helps where there are multiple references to the same list, and then use formulas and filters to identify all offenders in one pass. This saves a significant amount of time if many line items need to be changed.
You are looking for direct references on the list members:
[SELECT: List Name.list member]
ITEM(List Name) = List Name.list member
The following constructs are valid, but not recommended, as any changes to the names or codes could change the result of calculations:
IF CODE(ITEM(List Name)) =
IF NAME(ITEM(List Name)) =
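If you exported the General Lists grid or the line-item formulas, a short script can flag these constructs automatically. The Python sketch below is illustrative, not exhaustive: the regular expression only approximates the patterns above and should be adjusted to your naming conventions, and ITEM(...) comparisons that already point at a List Selections line item will also match and need manual review:

```python
import re

# Approximate patterns for direct list-member references:
#   [SELECT: List Name.list member]
#   ITEM(List Name) = List Name.list member
HARDCODED = re.compile(
    r"\[SELECT:\s*[\w ]+\.[\w ]+\]"
    r"|ITEM\(\s*[\w ]+\s*\)\s*=\s*[\w ]+\.[\w ]+"
)

def find_hardcoded(formulas):
    """Return the formulas that appear to contain a hard-coded reference."""
    return [f for f in formulas if HARDCODED.search(f)]

formulas = [
    "Full PL.Amount[SELECT: Accounts.Product Sales]",
    "IF ITEM(Accounts) = Accounts.Product Sales THEN Full PL.Amount ELSE 0",
    "Full PL.Amount[LOOKUP: List Selections.Select Product Sales]",
]
print(find_hardcoded(formulas))  # flags the first two; the LOOKUP passes
```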
After following those steps, you should have a list of all the line items that need to be changed in the model so that the production data list can be checked. Please note: there may still be list properties that have hard-coded references to items. You will need to take note of these as well, but as per D.I.S.C.O. (Best practice for Module design), we recommend that list properties are replaced with line items in System modules.
Replacing model formulas:
The next step is to replace these formulas within the model. For this, there are two recommended options.
The first option (Option 1 below) is to replace your SELECT statements with a LOOKUP formula that is referencing a list drop-down. Use this option when there are 1:1 mappings between list items and your formula logic. For example, if you were building out a P&L variance report and needed to select from a specific revenue account, you might use this option.
The second option (Option 2 below) for replacing these formulas is to build a logic module that allows you to use Booleans to select list items and reference these Boolean fields in your formulas. Use this option when there is more complex modeling logic than a 1:1 mapping. For example, you might use this option if you are building a variance report by region and you have different logic for all items under Region 1 (ex: budget – actual) than the items under Region 2 (ex: budget – forecast).
(Option 1) Add List Selections module to be used in LOOKUPs for 1:1 mappings:
From here you should make a module called List Selections, with no lists applied to it and a line item for each list item reference that you previously used in the formulas that will be changed. Each of these line items will be formatted as the list that you are selecting to be production data.
Afterward, you should have a module that looks similar to this. An easy and effective way to stay organized is to partition and group line items of similar list formats into the same sections, with a section-header line item formatted as No Data and styled as "Heading 1."
After the line items have been created, the model administrator should use the list drop-downs to select the appropriate items which are being referenced. As new line items are created in a standard mode model, the model administrator will need to open the deployed model downstream to reselect or copy and paste the list formatted values in this module since this is considered production data.
Remove hard-coding and replace with LOOKUPs:
Once you have created the List Selections module with all of the correct line items, you will begin replacing old formulas, which you’ve identified in Excel, with new references.
For formulas where there is a SELECT statement, you will replace the entire SELECT section of the formula with a LOOKUP to the correct line item in the list selections.
Old formula: Full PL.Amount[SELECT: Accounts.Product Sales]
New formula: Full PL.Amount[LOOKUP: List Selections.Select Product Sales]
For formulas of the form IF ITEM(List Name) = List Name.Item, you will replace the section of the formula after the '=' with a direct reference to the correct line item in List Selections.
Old formula: IF ITEM(Accounts) = Accounts.Product Sales THEN Full PL.Amount ELSE 0
New formula: IF ITEM(Accounts) = List Selections.Select Product Sales THEN Full PL.Amount ELSE 0
(Option 2) Modeling for complex logic and many-to-many relationships:
If you are building more complex modeling logic, start by building Boolean references that you can use in your formulas. To accomplish this, create a new module with Boolean line items for each logic type you need. Sticking with the example above, if you need to build a variance report with different logic depending on the region, create a module, dimensioned by region, that has a different line item for each piece of logic you need, similar to the view below:
Once you have the Boolean module set up, you can then change your hard-coded formulas to reference these Boolean formatted line items to write your logic. The formula may look similar to this:
IF Region Logic.Logic 1 THEN logic1 ELSE IF Region Logic.Logic 2 THEN logic2 ELSE IF Region Logic.Logic 3 THEN logic3 ELSE 0
Here is a screenshot of what the end result may look like:
This method can be used across many different use cases and will provide a more efficient way of writing complex formulas while avoiding hard-coding for production data lists.
Selecting production data list:
After all of the hard-coded formulas have been changed in the model, you can navigate back to the Settings tab, and open General Lists. In the Production Data column, check the box for the list that you want to set as a production data list.
Repeat for each list in the model that needs to be a production data list:
For each list in the model that you need to make a production data list, you can repeat the steps throughout this process to successfully remove all hard-coded list references.
Anaplan has built several connectors to work with popular ETL (Extract, Transform, and Load) tools. These tools provide a graphical interface through which you can set up and manage your integration. Each of the tools we connect to has a growing library of connectors, providing a wide array of possibilities for integration with Anaplan. These ETL tools require subscriptions to take advantage of all their features, making them an especially appealing option for integration if you already have a subscription.
Anaplan has a connector available in MuleSoft's community library that allows for easy connection to cloud systems such as NetSuite, Workday, and Salesforce.com, as well as on-premises systems like Oracle and SAP. Any of these integrations can be scheduled to recur on any period needed, easily providing hands-off integration. MuleSoft uses Anypoint Studio and Java to manage its integrations between any of its available connectors. Anaplan has thorough documentation relating to our MuleSoft connector on the Anaplan MuleSoft GitHub.
SnapLogic has a Snap Pack for Anaplan that leverages our API to import and export data. The Anaplan Snap Pack provides components for reading data from and writing data to the Anaplan server using SnapLogic, as well as executing actions on the Anaplan server. This Snap Pack empowers you to connect your data and organization on the Anaplan platform without missing a beat.
Anaplan has a connector available on the Boomi marketplace that empowers you to create a local Atom and transfer data to or from any other source with a Boomi connector. You can use Boomi to import or export data using any of your pre-configured actions within Anaplan. This removes any need to store files as an intermediate step and facilitates automation.
Anaplan has partnered with Informatica to build a connector on the Informatica platform. Informatica has connectors for hundreds of applications and databases, giving you the ability to leverage their integration platform for many other applications when you integrate these applications with Anaplan. You can search for the Anaplan Connector on the Informatica marketplace or request it from your Informatica sales representative.