Recently, I used Anaplan Connect for the first time, importing Workday and Jobvite data into my Anaplan model. It was my first serious data integration, and afterward I put together some tips and tricks to help other first-timers succeed.
Firstly, there are a few things you can do to set yourself up for success:
Download the most up-to-date version of Java.
Download Anaplan Connect from Anaplan's Download Center.
Make sure you can run Terminal (Mac) or the Command Prompt (Windows).
Make sure you have a plaintext editor to edit your script (TextEdit or Notepad are available by default, but I recommend Sublime Text).
Read through the Anaplan Connect User Guide in the "doc" folder of the Anaplan Connect folder you downloaded in step #2.
Once you have completed these items, you’re ready to start writing your script.
In the Anaplan Connect folder that you downloaded, there are some example script files, “example.bat” for Windows and “example.sh” for Mac. The best way to start is to copy the right example file for your operating system, then alter it.
When you first look through the example script, you’ll see a section at the top that defines variables (e.g. ModelId, WorkspaceId, AnaplanUser). Keeping your variables at the top and referencing them throughout the script makes them easier to edit, because each value lives in only one place. I highly recommend adding a variable for your Anaplan certificate; then you don’t have to enter your password manually every time the script runs.
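As a sketch, the top of a Mac (.sh) script might define its variables like this. All of the values below (the email address, IDs, and certificate path) are made-up placeholders; substitute your own:

```shell
#!/bin/sh
# Hypothetical values -- substitute your own user, workspace, model,
# and certificate details.
AnaplanUser="jane.doe@example.com"
WorkspaceId="8a81b09d5e8c6f14"
ModelId="96339A3A48394142"
CertPath="/Users/jane/certs/anaplan_cert.pem"

# Later commands reference the variables, so each value only has to be
# edited in one place.
echo "Running as ${AnaplanUser} against workspace ${WorkspaceId}, model ${ModelId}"
```

Because the certificate path is a variable, swapping in a renewed certificate later is a one-line change.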
When you begin to piece together your own script, it will include some combination of Anaplan Connect Commands (you can check out the full list in an appendix of the Quick Start Guide for Anaplan Connect, on Anapedia). Because my script was focused on importing data from an outside source into Anaplan, it included the following components: file, put, import, execute, output. Each of these has a different function:
File identifies the file name (e.g. Workday.csv).
Put identifies the file path of the file you’re importing (e.g. User/Admin/Documents/Workday.csv).
Import identifies the action Anaplan is supposed to run (e.g. Workday_Import).
Execute is what runs the process; nothing needs to follow this.
Output identifies what happens to errors. If you would like errors to go to a file, include the location of that file after the output command (e.g. User/Admin/Documents/ErrorLog.csv).
It’s worth noting that you can have multiple actions after a file. For instance, I can have a command sequence like this: file-put-import-execute-output-put-import-execute-output. I found this useful when I used a single file to update multiple lists and modules; it saved me from uploading the same file over and over again.
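As an illustrative sketch (the file names, paths, and action names below are all invented), a chained sequence with one upload feeding two import actions might be assembled like this:

```shell
#!/bin/sh
# One -file/-put upload, followed by two import/execute/output groups.
# All names and paths are placeholders.
Operation="-file Workday.csv \
 -put /Users/Admin/Documents/Workday.csv \
 -import Workday_List_Import -execute \
 -output /Users/Admin/Documents/ErrorLog_List.csv \
 -import Workday_Module_Import -execute \
 -output /Users/Admin/Documents/ErrorLog_Module.csv"
echo "$Operation"
```

Building the argument string in a variable like this keeps the chained sequence readable and easy to edit.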
When you are identifying the file path for the script, it is easiest to keep Terminal open. When you drag and drop a file into the Terminal window, it automatically populates the file path. This helps you avoid syntax errors, since you can copy and paste the path from Terminal into your script.
Once you assemble your commands, it’s time to start testing your script! It helps to break the script into small chunks that build on one another; that way, if something goes wrong, it won’t take as long to find where the error is. It also makes the script more digestible in the event that it needs to be edited in the future.
As you test each of these chunks, you may run into some errors, so here are a few troubleshooting tips to get you started.
If your terminal reports a syntax error, the culprit is most likely a pesky apostrophe, a space, or some other special character in your script. Comb through the script, especially your file names, and fix the error before attempting to run it again.
Secondly, you may run into a permissions error. These typically arise when your script file is not executable. When I encountered this error, changing the permissions on the file to grant execute access solved it.
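On a Mac, a permissions fix along these lines is typical (the script name is a placeholder; `touch` just creates a stand-in file so the example is self-contained):

```shell
# Create a stand-in for your Anaplan Connect script, then grant the
# owner execute permission so it can be run from Terminal.
touch myscript.sh
chmod u+x myscript.sh
ls -l myscript.sh   # the owner's permission bits should now include 'x'
```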
Overall, once you know these basics of Anaplan Connect, you can build a script—even a complicated one! When in doubt, see if somebody else has asked about a similar issue in the discussion section; if you don’t find anything there, you can always post your own question. Sometimes a second set of eyes is all you need, and our integrations site has some of the best in the biz contributing!
Best of luck to the other rookies out there!
Note: This article is meant to be a guide on converting an existing Anaplan Security Certificate to PEM format for the purpose of testing its functionality via cURL commands. Please work with your developers on any more in-depth application of this process. The current Production API version is v1.3.
Using a certificate to authenticate will eliminate the need to update your script when you have to change your Anaplan password. To use a certificate for authentication with the API, it first has to be converted into a Base64 encoded string recognizable by Anaplan. Information on how to obtain a certificate can be found in Anapedia.
This article assumes that you already have a valid certificate tied to your user name.
1. To make your Anaplan certificate usable with the API, you will first need OpenSSL (https://www.openssl.org/). You will then need to convert the certificate to PEM format. The PEM format uses the header and footer lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".
2. If your certificate is not in PEM format, you can convert it using the following OpenSSL command, where "certificate-(certnumber).cer" is the name of the source certificate and "certtest.pem" is the name of the target PEM certificate.
openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem
View the PEM file in a text editor. It should be a Base64 string starting with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----".
3. View the PEM file to find the CN (Common Name) using the following command:
openssl x509 -text -in certtest.pem
It should look something like "Subject: CN=(Anaplan login email)". Copy the Anaplan login email.
4. Use a Base64 encoder (e.g. https://www.base64encode.org/) to encode the CN and PEM string, separated by a colon. For example, paste this in:
(Anaplan login email):-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----
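If you prefer the command line to a website, the same Base64 string can be produced with the standard `base64` tool. The email address and shortened PEM contents below are placeholders; paste in your real CN and the full PEM contents:

```shell
# Concatenate "CN:PEM" and Base64-encode it without line wrapping.
CN="jane.doe@example.com"   # placeholder for the Anaplan login from step 3
PEM="-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----"
TOKEN=$(printf '%s:%s' "$CN" "$PEM" | base64 | tr -d '\n')
echo "$TOKEN"
```

The resulting token is what goes after `AnaplanCertificate` in the Authorization header in step 5.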
5. You now have the encoded string necessary to authenticate API calls. For example, using cURL to GET a list of the Anaplan workspaces for the user that the certificate belongs to:
curl -H "Authorization: AnaplanCertificate (encoded string)" https://api.anaplan.com/1/3/workspaces
A large, complex model (e.g., 10 billion cells) can take up to 10 minutes to load the first time it is used after 60 minutes of inactivity.
The only way to reduce the load time is to identify which formulas take most of the time. This requires Anaplan L3 support (ask for a Model Opening Analysis), but you can also reduce the time yourself by applying the formula best practices listed above.
Another possible lever is list setup: text properties on a list can increase load times, and subsets on lists can increase load times disproportionately. Best practice is not to use list properties but to house the attributes in a System module dimensioned by the list. See Best practice for Module design for more details.
A model will save when the amount of changes made by end users exceeds a certain threshold. This can take several minutes and is a blocking operation. Administrators have no leverage over model saves besides formula optimization and reducing model complexity. Using ALM and Deployed mode increases this threshold, so it is best to use Deployed mode whenever possible.
A model will roll back in some cases of an invalid formula, or when a model builder attempts to adjust a setting that would result in an invalid state. In some large models, the rollback takes approximately the time needed to open the model, plus the reapplication of up to 10 minutes' worth of accumulated changes, followed by a model save.
The recommendation is to use ALM and have a DEV model whose size does not exceed 500M cells, with a production list limited to a few dozen items, and TEST and PROD models with the full size and large lists. Since no formula editing happens in TEST or PROD, those models will never roll back after a user action. The DEV model can still roll back, but that takes only a few seconds if the model is small.
Details of known issues
Performance issues with long nested formulas
This occurs when a long formula on time is needed as a result of nesting intermediate calculations.
If model size does not prevent you from adding extra line items, it is better practice to create multiple intermediate line items and reduce the size of the formula, rather than nesting all intermediate calculations into one gigantic formula.
This applies to summary formulae (SUM, LOOKUP, SELECT).
Combining SUM and LOOKUP in the same line item formula can cause performance issues in some cases. If you have noticed a drop in performance after adding a combined SUM and LOOKUP to a single line item, then split it into two line items.
RANKCUMULATE causes slowness
A current issue with the RANKCUMULATE formula can mean that the time to open the model, including rollback, is up to five times longer than it should be.
There is currently no suitable workaround. Our recommendation is to stay within the constraints defined in Anapedia.
SUM/LOOKUP with large cell count
A known issue with SUM/LOOKUP combinations within a formula can lead to slow model open and calculation times, particularly if the line item has a large cell count.
Separate the formula into different line items to reduce calculation time; fewer cells then need to recalculate the parts of the formula that affect only a subset of the data.
In this example, none of the line items apply to time or versions.

Y = X[SUM: R, LOOKUP: R]
Y Applies To: [A, B]
X Applies To: [A, B]
R Applies To: [B], list formatted [C]

The workaround is to add a new line item, 'intermediate', with 'Applies To' set to the 'Format' of 'R':

intermediate = X[SUM: R]
Y = intermediate[LOOKUP: R]
This issue is currently being worked on by Development, and a fix will be available in a future release.
Calculations are over non-common dimensions
Anaplan calculates more quickly when calculations are over common dimensions. Again, this is best seen in an example. If you have lists W and X:

Y = A + B
Y Applies To: W, X
A Applies To: W
B Applies To: W

This performs more slowly than:

Y = Intermediate
Intermediate = A + B
Intermediate Applies To: W

All other dimensions are the same as above. Similarly, you can substitute A and B above for a formula, e.g. SUM/LOOKUP calculations.
Cell history truncated
Currently, history generation has a 60-second time limit, split into three stages with one third of the time allocated to each.
The first stage builds a list of the columns required for the grid, which involves reading all the history. If this takes more than 20 seconds, the user receives the message "history truncated after x seconds - please modify the date range," where x is the number of seconds it took. No history is generated.
If the first stage completes within 20 seconds, it goes on to generate the full list of history.
In the grid, only the first 1,000 rows are displayed; the user must export the history to get the full history. This can take significant time depending on volume.
The same steps are taken for model history and cell history. Cell history is generated by loading the entire model history and searching it for the relevant cell information. When the model history gets too large, it is truncated to prevent performance issues. Unfortunately, this can make it impossible to retrieve the cell history that is needed.
Make it real time when needed
Do not make it real time unless it needs to be.
By this we mean: do not have line items where users input data be referenced by other line items unless they must be. One way around this is to give users a data input section that is referenced nowhere else (or as little as possible), and then, say, at the end of the day when no users are in the model, run an import that updates the cells where calculations are done. This may not always be possible if the end user needs to see the calculations resulting from their inputs, but if you can limit the real-time portion to just the calculations the user needs to see, and use imports during quiet times for the rest, it will still help.
We see this often with reporting modules, not all of which need to be recalculated in real time. In many cases, these modules can be calculated the day after.
Don't have line items that depend on other line items unnecessarily. This can prevent Anaplan from utilizing the maximum number of calculations it can perform at once: a line item's formula cannot be calculated while it is waiting on the results of other line items. A basic example, with line items A, B, and C:

A - no formula
B = A
C = B

Here B is calculated first, and C only after it. Whereas with this setup:

A - no formula
B = A
C = A

B and C can be calculated at the same time. This also means that if line item B is not needed, it can be removed, further reducing the number of calculations and the size of the model. This needs to be considered on a case-by-case basis, as it is a trade-off between duplicating calculations and utilizing as many threads as possible: if line item B is referenced by several other line items, it may indeed be quicker to keep it.
Summary cells often take processing time even when they are not actually recalculated, because they must check all the lower-level cells.
Reduce summaries to ‘None’ wherever possible. This not only reduces aggregations, but also the size of the model.
The process of designing a model will help you:
Understand the customer’s problem more completely
Bring to light any incorrect assumptions you may have made, allowing for correction before building begins
Provide the big picture view for building. (If you were working on an assembly line building fenders, wouldn’t it be helpful to see what the entire car looked like?)
Understand the requirements and the customer’s technical ecosystem when designing a model
When you begin a project, gather information and requirements using a number of tools. These include:
Statement of Work (SOW): Definition of the project scope and project objectives/high level requirements
Project Manifesto: Goal of the project – big picture view of what needs to be accomplished
IT ecosystem: Which systems will provide data to the model and which systems will receive data from the model? What is the Anaplan piece of the ecosystem?
Current business process: If the current process isn’t working, it needs to be fixed before design can start.
Business logic: What key pieces of business logic will be included in the model?
Is a distributed model needed?
High user concurrency
Security requirements that call for a separate model
Regional differences that are better handled by a separate model
Is the organization using ALM, requiring split or similar models to effectively manage development, testing, deployment, and maintenance of applications? (This functionality requires a premium subscription or above.)
User stories: These have been written by the client—more specifically, by the subject matter experts (SMEs) who will be using the model.
Why do this step?
To solve a problem, you must completely understand the current situation. Performing this step provides this information and the first steps toward the solution.
Results of this step:
Understand the goal of the project
Know the organizational structure and reporting relationships (hierarchies)
Know where data is coming from and have an idea of how much data clean-up might be needed
Know whether any of the data is organized into categories (for example, product families) and what data relationships must be carried through to the model (for example, salespeople only sell certain products)
Know what lists currently exist and where they are housed
Know which systems the model will either import from or export to
Know what security measures are expected
Know what time and version settings are needed
Document the user experience
Front to back design has been identified as the preferred method for model design. This approach puts the focus on the end user experience. We want that experience to align with the process so users can easily adapt to the model. During this step focus on:
User roles. Who are the users?
Identifying the business process that will be done in Anaplan.
Reviewing and documenting the process for each role.
Mapping the main steps. If available, utilize user stories to map the process. You can document this in any way that works for you. Here is a step-by-step process you can try:
1. What are the start and end points of the process?
2. What is the result or output of the process? What does each role need to see/do in the process?
3. What are the process inputs and where do they come from?
4. What are the activities the user needs to engage in? Verb/object—approve request, enter sales amount, etc. Do not organize during this step. Use post-its to capture them.
5. Take the activities from step 4 and put them in the correct sequence.
6. Are there different roles for any of these activities? If no, continue with step 8. If yes, assign a role to each activity.
7. Transcribe the process using PowerPoint® or Lucidchart. If there are multiple roles, use swim lanes to identify the roles.
8. Check with SMEs to ensure accuracy.
Once the user process has been mapped out, do a high-level design of the dashboards.
Include the information needed:
What data does the user need to see?
What is the user expected to do, and what decisions does the user make?
Share the dashboards with the SMEs. Does the process flow align?
Why do this step?
This is probably the most important step in the model design process. It may seem as though it is too early to think about the user experience, but ultimately the information or data that the user needs to make a good business decision is what drives the entire structure of the model.
On some projects, you may be working with a project manager or a business consultant to flesh out the business process for the user. You may have user stories, or it may be that you are working on design earlier in the process and the user stories haven’t been written. In any case, identify the user roles, the business process that will be completed in Anaplan, and create a high level design of the dashboards. Verify those dashboards with the users to ensure that you have the correct starting point for the next step.
Results of this step:
List of user roles
Process steps for each user role
High level dashboard design for each user role
Use the designed dashboards to determine what output modules are necessary
Here are some questions to help you think through the definition of your output modules:
What information (and in what format) does the user need to make a decision?
If the dashboard is for reporting purposes, what information is required?
If the module is to be used to add data, what data will be added and how will it be used?
Are there modules that will serve to move data to another system? What data and in what format is necessary?
Why do this step?
These modules are necessary for supporting the dashboards or exporting to another system. This is what should guide your design—all of the inputs and drivers added to the design are added with the purpose of providing these output modules with the information needed for the dashboards or export.
Results of this step:
List of outputs and desired format needed for each dashboard
Determine what modules are needed to transform inputs to the data needed for outputs
Typically, the data at the input stage requires some transformation. This is where business rules, logic, and/or formulas come into play:
Some modules will be used to translate data from the data hub. Data is imported into the data hub without properties, and modules are used to import the properties. Reconciliation of items takes place before importing the data into the spoke model.
Others are driver modules that contain business logic and rules.
Why do this step?
Your model must translate data from the input into what is needed for the output.
Results of this step:
Business rules/calculations needed
Create a model schema
You can whiteboard your schema, but at some point in your design process, your schema must be captured in an electronic format. It is one of the required pieces of documentation for the project and is also used during the Model Design Check-in, where a peer checks over your model and provides feedback.
Identify the inputs, outputs, and drivers for each functional area
Identify the lists used in each functional area
Show the data flow between the functional areas
Identify time and versions where appropriate
Why do this step?
It is required as part of The Anaplan Way process. You will build your model design skills by participating in a Model Design Check-in, which allows you to talk through the tougher parts of design with a peer.
More importantly, designing your model using a schema means that you must think through all of the information you have about the current situation, how it all ties together, and how you will get to that experience that meets the exact needs of the end user without fuss or bother.
Result of this step:
Model schema that provides the big picture view of the solution. It should include imports from other systems or flat files, the modules or functional areas that are needed to take the data from current state to what is needed to support the dashboards that were identified in Step 2. Time and versions should be noted where required. Include the lists that will be used in the functional areas/modules.
Your schema will be used to communicate your design to the customer, model builders, and others. While you do not need to include calculations and business logic in the schema, it is important that you understand the state of the data going into a module, the changes or calculations that are performed in the module and the state of the data leaving the module, so that you can effectively explain the schema to others.
For more information, check out 351 Schemas. This 10 to 15 minute course provides basic information about creating a model schema.
Verify that the schema aligns with basic design principles
When your schema is complete, give it a final check to ensure:
It is simple.
“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage to move in the opposite direction.” ― Ernst F. Schumacher
“Design should be easy in the sense that every step should be obviously and clearly identifiable. Simplify elements to make change simple so you can manage the technical risk.” — Kent Beck
The model aligns with the manifesto.
The business process is defined and works well within the model.
It is important to understand what Application Lifecycle Management, or ALM, enables clients to do within Anaplan.
In short, ALM enables clients to effectively manage the development, testing, deployment, and ongoing maintenance of applications in Anaplan. With ALM, you can introduce changes without disrupting business operations: applications are managed and updated securely and efficiently, with governance across different environments, and changes can be deployed quickly, letting you run more "what-if" scenarios in your planning cycles as you test and release development changes into production.
Learn more here: Understanding model synchronization in Anaplan ALM
Training on ALM is also available in the Education section: 313 Application Lifecycle Management (ALM).
Personal dashboards are a great new feature that enables end users to save a personalized view of a dashboard. To get the most out of this feature, here are a few tips and tricks.
Tidy up dashboards
Any change to a master dashboard (using the Dashboard Designer) will reset all personal views of a dashboard, so before enabling personal dashboards, take some time to ensure that the current dashboards are up to date:
Implement any pending development changes (including menu options)
Turn on the Dashboard Quick Access toolbar (if applicable)
Check and amend all text box headings and comments for size, alignment, spelling and grammar
Delete or disable any redundant dashboards to ensure end users don’t create personal views of obsolete dashboards
Use filters rather than show/hide
It’s best practice to use a filter rather than show and hide for the rows and/or columns on a grid.
This is now more beneficial because amending the items shown or hidden on a master dashboard will reset the personal views. For example, suppose you want to display just the current quarter of a timescale. You could manually show/hide the relevant periods but, at quarter end, when the Current Period is updated, the dashboard will need to be amended and all those personal views will be reset. If you use a filter, referencing a time module, the filter criteria will update automatically, as will the dashboard. No changes are made to the master dashboard and all the personal views are preserved.
Create a communication and migration strategy
Inevitably, there are going to be changes that must be made to master dashboards. To minimize the disruption for end users, create a communication plan and follow a structured development program. These can include the following:
Bundle dashboard revisions into logical sets of changes
Publish these changes at regular intervals (e.g., on a monthly cycle)
Create a regular communication channel to inform users of changes and the implications of those changes
Create a new dashboard and ask end users to migrate to the new dashboard over a period of time before switching off the old dashboard
Application Lifecycle Management (ALM)
If you are using ALM, be aware that any structural changes to master dashboards will reset all personal views of those dashboards.
This guide assumes you have set up your runtime environment in Informatica Cloud (Anaplan Hyperconnect) and that the agent is up and running. It focuses solely on how to configure the ODBC connection and set up a simple synchronization task importing data from one table in PostgreSQL to Anaplan. Informatica Cloud has richer features that are not covered in this guide.
The built-in help is contextual and helpful as you go along should you need more information than I have included in this guide.
The intention of this guide is to help you set up a simple import from PostgreSQL to Anaplan; it is therefore kept short and does not cover all related areas.
This guide assumes you have already run an import using a CSV file, as this import action must be referenced when the target connection is set up (described in section 2.2 below). To prepare, I exported the data I wanted to import from PostgreSQL to a CSV file, mapped the CSV file to Anaplan, and ran an initial import to create the import action that is needed.
1. Set up the ODBC connection for PostgreSQL
In this example I am using the 64-bit version of the ODBC connection running on my local laptop. I have set it up for User DSN rather than System DSN, but the process is very similar should you need to set up a System DSN.
You will need to download the relevant ODBC driver from PostgreSQL and install it so you can add it to your ODBC Data Sources as shown below (click the Add… button and you should be able to select the downloaded driver).
Clicking the configuration button for the ODBC Data Source opens the configuration dialogue. The configurations needed are:
Database is the name of your PostgreSQL database.
Server is the address to your server. As I am setting this up on my laptop, it’s localhost.
User Name is the username for the PostgreSQL database.
Password is the password for the PostgreSQL database.
Port is the port used by PostgreSQL. You will find this if you open PostgreSQL.
Testing the connection should not return any errors.
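For reference, on macOS or Linux with unixODBC the same settings live in a DSN entry in `~/.odbc.ini` rather than a dialog. This sketch mirrors the dialog fields above; the driver name and all values are illustrative:

```ini
; Hypothetical unixODBC User DSN equivalent of the dialog above
[PostgreSQL-Anaplan]
Driver     = PostgreSQL Unicode
Database   = mydb
Servername = localhost
UserName   = postgres
Password   = secret
Port       = 5432
```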
2. Configuring source and target connections
After setting up the ODBC connection as described above, you will need to set up two connections, one to PostgreSQL and one to Anaplan. Follow the steps below to do this.
2.1 Source connection – PostgreSQL ODBC
Select Configure > Connection in the menu bar to configure a connection.
Name your connection and add a description
Select type – ODBC
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Insert the username for the database (same as you used to set up the ODBC connection).
Insert the password for the database (same as you used to set up the ODBC connection).
Insert the data source name. This is the name of the ODBC connection you configured earlier.
Code page must correspond to the character set you are using.
Testing the connection should give you the confirmation below. If so, you can click Done.
2.2 Set up target connection – Anaplan
The second connection that needs to be set up is the connection from Informatica Cloud to Anaplan.
Name your connection and add a description if needed
Select type – AnaplanV2
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Auth type – I am using Basic Auth which will require your Anaplan user credentials
Insert the Anaplan username
Insert the Anaplan password
Certification Path location – leave blank if you use Basic Auth
Insert the workspace ID (open your Anaplan model and select help and about)
Insert the model ID (find in the same way as for workspace ID)
I have left the remaining fields as per default setting.
Testing the connection should not return any errors.
3 Task wizard – Data synchronization
The next step is to set up a data synchronization task to connect the PostgreSQL source to the Anaplan target. Select Task Wizards in the menu bar and navigate to Data Synchronization, as shown in the screenshot below.
This opens the task wizard, starting with defining the Data Synchronization task. Name the task and select the relevant task operation. In this example I have selected Insert, but other task operations such as Update and Upsert are available.
Click Next for the next step in the workflow which is to set up the connection to the source. Start by selecting the connection you defined above under section 2.1. In this example I am using a single table as source and have therefore selected single source. With this connection you can select the source object with the Source Object drop down. This will give you a data preview so you can validate the source is defined correctly. The source object corresponds to the table you are importing from.
The next step is to define the target connection; here you will use the Anaplan connection that was set up under section 2.2 above.
The target object is the import process that you ran from the csv file in the preparation step described under section 1 above. This action is referred to below as target object. The wizard will show a preview of the target module columns.
The next step in the process is Data Filters, which has both a Simple and an Advanced mode.
I am not using any data filters in this example; please refer to the built-in help for further information on how to use them.
In the field mapping, the fields are either mapped automatically (if the names in the source and target correspond) or you map them manually by dragging and dropping the fields from the source to the target. Once done, select Validate Mapping to check that no errors are generated from the mapping.
The last step is to define whether or not to run the connection on a schedule. You also have the option to insert pre-processing and post-processing commands and any parameters for your mapping. Please refer to the built-in help for guidance on these.
After running the task, the activity log will confirm whether the import ran without errors or warnings.
As I mentioned at the start, this is a basic guide to help you set up a simple, single-source import. Informatica Cloud also offers more advanced options for both mappings and transformations.
Each time a user runs an import or an export, it affects platform performance, because the action blocks all other users of the model from performing any tasks while it runs. Anaplan displays what is called a toaster message: a blue box at the top of the screen that indicates to every connected user that the platform is processing an action.
Anyone who frequently exports out of Anaplan will likely become very unpopular among the users of the model, especially if the exports last more than a few seconds. Users who are not workspace administrators can:
Export data out of a module within a dashboard
Run an import prepared by an administrator
Run a process that an administrator has prepared; a process can combine a number of imports and exports.
Bring Your Own Key (BYOK) now lets you take ownership of the encryption keys for your model data. If you have access to the Anaplan Administration tool, you can encrypt and decrypt selected workspaces with your own AES-256 keys. Unlike system master keys, the keys created with BYOK belong to you, and you are responsible for keeping them secure. No mechanism allows Anaplan personnel to access your keys. For details, see the Bring Your Own Key (BYOK) User Guide. BYOK is an add-on product that your organization can purchase if it has the Enterprise edition.
Making sure that production data lists are correctly marked within a model is a key step in setting up and using ALM. This guide provides a solution for revising your model so that a list can be tagged as a production data list. Please note: this solution doesn’t work if there are hard-coded references on non-composite summary items. For more information on working with production lists and ragged hierarchies, please visit Production lists and ragged hierarchies logic.
The issue arises when a model administrator needs to tag a production data list, but hard-coded references in the model prevent them from doing so. When this occurs and the model administrator tries to tag the list as a production list, they will get a warning similar to this:
See Formula Protection for more details.
To fix this issue, all direct formula references to production data lists need to be changed to indirect references, using either LOOKUPs or Boolean-formatted conditional logic. Below, you will find a step-by-step guide to replacing these formulas.
Identify formulas with hard-coded references
There is now an easy way to identify all of the formulas which are hard-coded to production data lists.
Check the 'Referenced in Formula' column in the General Lists section.
This shows the line items where the list is used. Check each respective formula for hard-coded references. If there are none, it is OK to check the list as a production data list. Checking first is the recommended approach: setting the lists without prior checking may trigger a rollback error, which can be time-consuming for large models (as well as frustrating).
Where there are multiple references for the same list, it can help to export the General Lists grid and then use formulas and filters to identify all offenders in one pass. This can save a significant amount of time if many line items need to be changed.
You are looking for direct references on the list members:
[SELECT: List Name.list member]
ITEM(List Name) = List Name.List member
The following constructs are valid, but not recommended, as any changes to the names or codes could change the result of calculations:
IF CODE(ITEM(List Name))=
IF NAME(ITEM(List Name))=
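If you have exported the formulas to a file, the patterns above can be found with a short script. Below is a minimal sketch in Python; the `find_hard_coded` helper, the example formulas, and the assumption that you have your line-item formulas as name-to-formula pairs are all illustrative, and you would adapt the parsing to the actual layout of your export.

```python
import re

# Regex templates for the hard-coded constructs described above.
# {list} is replaced with the (escaped) name of the candidate list.
HARD_CODED_PATTERNS = [
    r"SELECT:\s*{list}\.",            # [SELECT: List Name.list member]
    r"ITEM\(\s*{list}\s*\)\s*=",      # ITEM(List Name) = List Name.List member
    r"CODE\(ITEM\(\s*{list}\s*\)\)\s*=",  # IF CODE(ITEM(List Name)) = ...
    r"NAME\(ITEM\(\s*{list}\s*\)\)\s*=",  # IF NAME(ITEM(List Name)) = ...
]

def find_hard_coded(formulas, list_name):
    """Return (line item, formula) pairs whose formula hard-codes list_name.

    `formulas` is a dict of line-item name -> formula text, e.g. built
    from an export of the blueprint or General Lists grid.
    """
    patterns = [re.compile(p.format(list=re.escape(list_name)), re.IGNORECASE)
                for p in HARD_CODED_PATTERNS]
    return [(item, f) for item, f in formulas.items()
            if any(p.search(f) for p in patterns)]

# Made-up example formulas for illustration:
formulas = {
    "Variance": "Full PL.Amount[SELECT: Accounts.Product Sales] - Budget.Amount",
    "Is Sales": "ITEM(Accounts) = Accounts.Product Sales",
    "Clean":    "Full PL.Amount[LOOKUP: List Selections.Select Product Sales]",
}
for item, formula in find_hard_coded(formulas, "Accounts"):
    print(item)  # flags "Variance" and "Is Sales"; "Clean" already uses a LOOKUP
```

Anything flagged this way is a candidate for the formula replacements described in the next section; anything not flagged for a given list leaves that list safe to check as production data.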
After following those steps, you should have a list of all of the line items that need to be changed in the model before the production data list can be checked. Please note: there may still be list properties that have hard-coded references to items. You will need to take note of these as well; as per D.I.S.C.O. (Best practice for Module design), we recommend that list properties be replaced with line items in System Modules.
Replacing model formulas:
The next step is to replace these formulas within the model. For this, there are two recommended options.
The first option (Option 1 below) is to replace your SELECT statements with a LOOKUP formula that references a list drop-down. Use this option when there are 1:1 mappings between list items and your formula logic. For example, if you were building out a P&L variance report and needed to select from a specific revenue account, you might use this option.
The second option (Option 2 below) for replacing these formulas is to build a logic module that allows you to use Booleans to select list items and reference these Boolean fields in your formulas. Use this option when there is more complex modeling logic than a 1:1 mapping. For example, you might use this option if you are building a variance report by region and you have different logic for all items under Region 1 (ex: budget – actual) than the items under Region 2 (ex: budget – forecast).
(Option 1) Add List Selections module to be used in LOOKUPs for 1:1 mappings:
From here you should make a module called List Selections, with no lists applied to it and a line item for each list item reference that you previously used in the formulas that will be changed. Each of these line items will be formatted as the list that you are selecting to be production data.
Afterward, you should have a module that looks similar to this. An easy and effective way to stay organized is to group line items with the same list format into sections, each headed by a line item formatted as No Data with a style of Heading 1.
After the line items have been created, the model administrator should use the list drop-downs to select the appropriate items being referenced. As new line items are created in a standard-mode model, the model administrator will need to open the deployed model downstream and reselect (or copy and paste) the list-formatted values in this module, since these values are considered production data.
Remove hard-coding and replace with LOOKUPs:
Once you have created the List Selections module with all of the correct line items, you will begin replacing old formulas, which you’ve identified in Excel, with new references.
For formulas where there is a SELECT statement, you will replace the entire SELECT section of the formula with a LOOKUP to the correct line item in the list selections.
Old Formula: Full PL.Amount[SELECT: Accounts.Product Sales]
New Formula: Full PL.Amount[LOOKUP: List Selections.Select Product Sales]
For formulas of the form IF ITEM(List Name) = List Name.Item, you will replace the section of the formula after the ‘=’ with a direct reference to the correct line item in List Selections.
Old Formula: IF ITEM(Accounts) = Accounts.Product Sales THEN Full PL.Amount ELSE 0
New Formula: IF ITEM(Accounts) = List Selections.Select Product Sales THEN Full PL.Amount ELSE 0
(Option 2) Modeling for complex logic and many-to-many relationships:
If you are building more complex modeling logic, start by creating Boolean references that you can use in your formulas. To accomplish this, create a new module with a Boolean line item for each logic type that you need. Sticking with the example above, if you need to build a variance report with different logic depending on the region, create a module dimensioned by region with a different line item for each piece of logic you need, similar to the view below:
Once you have the Boolean module set up, you can then change your hard-coded formulas to reference these Boolean formatted line items to write your logic. The formula may look similar to this:
IF Region Logic.Logic 1 THEN logic1 ELSE IF Region Logic.Logic 2 THEN logic2 ELSE IF Region Logic.Logic 3 THEN logic3 ELSE 0
Here is a screenshot of what the end result may look like:
This method can be used across many different use cases and will provide a more efficient way of writing complex formulas while avoiding hard-coding for production data lists.
Selecting production data list:
After all of the hard-coded formulas have been changed in the model, you can navigate back to the Settings tab, and open General Lists. In the Production Data column, check the box for the list that you want to set as a production data list.
Repeat for each list in the model that needs to be a production data list:
For each list in the model that you need to make a production data list, you can repeat the steps throughout this process to successfully remove all hard-coded list references.
Anaplan has built several connectors to work with popular ETL (Extract, Transform, and Load) tools. These tools provide a graphical interface through which you can set up and manage your integration. Each of the tools that we connect to has a growing library of connectors, providing a wide array of possibilities for integration with Anaplan. These ETL tools require subscriptions to take advantage of all their features, making them an especially appealing option for integration if you already have a subscription.
Anaplan has a connector available in MuleSoft's community library that allows for easy connection to cloud systems such as NetSuite, Workday, and Salesforce.com, as well as on-premise systems like Oracle and SAP. Any of these integrations can be scheduled to recur at whatever interval is needed, easily providing hands-off integration. MuleSoft uses the open-source Anypoint Studio and Java to manage its integrations between any of its available connectors. Anaplan has thorough documentation for our MuleSoft connector on the Anaplan MuleSoft GitHub.
SnapLogic has a Snap Pack for Anaplan that leverages our API to import and export data. The Anaplan Snap Pack provides components for reading data from and writing data to the Anaplan server using SnapLogic, as well as executing actions on the Anaplan server. This Snap Pack empowers you to connect your data and organization on the Anaplan platform without missing a beat.
Anaplan has a connector available on the Boomi marketplace that empowers you to create a local Atom and transfer data to or from any other source with a Boomi connector. You can use Boomi to import or export data using any of your pre-configured actions within Anaplan. This technology removes the need to store files as an intermediate step and facilitates automation.
Anaplan has partnered with Informatica to build a connector on the Informatica platform. Informatica has connectors for hundreds of applications and databases, giving you the ability to leverage their integration platform for many other applications when you integrate these applications with Anaplan. You can search for the Anaplan Connector on the Informatica marketplace or request it from your Informatica sales representative.