The Planual

The definitive set of standards for Anaplan model building.

The Planual provides a systematic set of standards for model building on the Anaplan platform. Its rules are designed to produce the most efficient, usable, and scalable Anaplan models, while dramatically increasing performance for models in all contexts. We highly recommend that all model builders familiarize themselves with these standards and start incorporating them into their model-building practices. (The results will be significant!)
PLANS is the new standard for Anaplan modeling: "the way we model." It covers more than just formulas, incorporating and evolving existing best practices around user experience and data hubs. It is a set of rules on the structure and detailed design of Anaplan models, providing both a clear route to good model design for the individual Anaplanner and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.

In defining the standard, everything we do will consider or be based around:

- Performance: Use the correct structures and formulas to optimize the Hyperblock.
- Logical: Build the models and formulas more logically. See D.I.S.C.O. below.
- Auditable: Break up formulas for better understanding, performance, and maintainability.
- Necessary: Don't duplicate expressions. Store and calculate data and attributes once and reference them many times. Don't have calculations on more dimensions than needed.
- Sustainable: Build with the future in mind, thinking about process cycles and updates.

The standards are based around three axes:

- Performance: How do the structures and formulas impact the performance of the system?
- Usability/Auditability: Is the user able to understand how to interact with the functionality?
- Sustainability: Can the solution be easily maintained by model builders and support?

We will define the techniques to use that balance these three areas to ensure the optimal design of Anaplan models and architecture.

D.I.S.C.O.

As part of model and module design, we recommend categorizing modules as follows:

- Data: Data hubs, transactional modules, source data; referenced everywhere.
- Inputs: Designed for user entry; minimize the mix of calculations and outputs.
- System: Time management, filters, list attribute modules, mappings, etc.; referenced everywhere.
- Calculations: Optimized for performance (turn summaries off, combine structures).
- Outputs: Reporting modules; minimize data flow out.

Why build this way?

- Performance: Fewer repeated calculations; optimized structures and formulas.
- Logical: Data and calculations reside in logical places; model data flows can be easily understood.
- Auditable: Model structure can be easily understood; simplified formulas (no need for complex expressions).
- Necessary: Formulas and structures are not repeated; data is stored and calculated once and referenced many times, leading to efficient calculations.
- Sustainable: Models can be adapted and maintained more easily; expansion and scaling are simplified.

Recommended content:

Performance
- Dimension Order
- Formula Optimization in Anaplan
- Formula Structure for Performance
- The Truth About Sparsity: Part 1
- The Truth About Sparsity: Part 2
- Data Hubs: Purpose and Peak Performance
- To Version or Not to Version?
- Line Item Subsets Demystified

Logical
- Best Practices for Module Design
- Data Hubs: Purpose and Peak Performance

Auditable
- Formula Structure for Performance

Necessary
- Reduce Calculations for Better Performance
- Formula Optimization in Anaplan

Sustainable
- Dynamic Cell Access Tips and Tricks
- Dynamic Cell Access - Learning App
- Personal Dashboards Tips and Tricks
- Time Range Application

See also: Ask Me Anything (AMA) sessions, The Planual, The Planual Rises.
Line item subsets are one of the most powerful and efficient features in Anaplan, yet one of the least understood. The COLLECT() function is probably the only "black box" function within Anaplan, as it is not immediately apparent what it is doing or where the source values are coming from. In this article, I will explain how to understand line item subsets and cover their many uses, some of which do not need COLLECT() at all. For more information on creating line item subsets, see Line Item Subsets in Anapedia.

A line item subset is a list of items drawn from one or more line items from one or more modules. Put simply, it converts line items into a list on which calculations can be performed. There are some restrictions:

- Line item subsets can only contain numeric formatted line items.
- Only one line item subset can be used as a dimension in a module.
- Although line items can contain formulas, the items in a line item subset can only aggregate to a simple subtotal.
- Styles on the line items are not transferred over to the line item subset.

Line item subsets can be used for many different areas of functionality. The examples that follow are based on the final model from the new Level 1 training; download the model and follow the instructions to practice on the same structures. These examples are deliberately simplified, but I hope you find them insightful and easy to transfer into your own models to simplify your formulas and provide more flexibility to your users.

Calculations on Calculations

This is the classic use of line item subsets. A source module contains line items, and you subsequently need to perform additional calculations on those line items. While in some cases this can be managed through complex formulas, these workarounds normally break most of the best practice guidelines and should be avoided.

Use case example: The source module (REP03) contains forecast data with line items for the profit and loss lines in U.S. dollars. We need to convert these values into local currency based on the Country dimension.

The first step is to create the line item subset; for this report, we only want summary values.

1. In the Settings tab, choose Line Item Subsets and click Insert. We recommend prefixing the name with "LIS:", followed by the name of the module and a simple description.
2. Clicking on the Line Item Subsets header item (in Settings) will display the Line Item Subsets screen. Click on the newly created line item subset, then the "…", and select the module(s) required; in this case, REP03.
3. Select which line items you wish to include in the line item subset.

Now that the line item subset has been created, it is available to be used in a module.

4. Create a module with the following dimensions: LIS: REP03 Currency, G2 Country, Time (Years).
5. Add the following line items: Base Currency, Exchange Rate, Local Currency.
6. In the Base Currency line item, enter the formula: COLLECT()

Note that the values are the same as those in REP03 and the line items are now shown in list format (no formatting). Also note that these values are from the Forecast version: the target module does not have versions, so the Current Version is used as the source automatically.

7. Add the following formulae to the remaining line items to complete the calculation.
Exchange Rate = 'DATA02 Exchange Rates'.Rate[LOOKUP: 'SYS03 Country Details'.Currency Code]
Local Currency = Base Currency / Exchange Rate

Note that the Exchange Rate line item should be set as a subsidiary view (excluding the line item subset from the Applies To) because we are showing it on the report for clarity. If this display was not required, the calculation could be combined with the Local Currency formula.

Transformation

You can also use a line item subset to help with the transformation between source and target modules.

Use case example: We want to summarize costs (from the reporting P&L) into Centrally and Locally controlled costs.

1. Create a list (Controllable Costs) containing two members: Central and Local.
2. Create a line item subset (as before) using just REP03 as the source module.
3. Create a staging module with the following dimensions: LIS: REP03 Cost Reporting, G2 Country, Time (Years).
4. Add a line item (Data) and enter COLLECT() as the formula. Set the Summary method to None; we do not need subtotals in this module.
5. Create a mapping module dimensioned by LIS: REP03 Cost Reporting. Add a line item (Mapping) formatted as the Controllable Costs list, and map the lines as applicable.
6. Create a reporting module with the following dimensions: Controllable Costs, G2 Country, Time (Years).
7. Add a line item called Costs, with the formula:

'REP07 Cost Reporting Staging'.Data[SUM: 'SYS14 Cost Mapping'.Mapping]

We use the SUM formula because the source dimension and the mapping dimension are the same. So, "if the source is the same, it's a SUM."

Multiple Source Modules

Line item subsets can contain line items from multiple modules. There is a caveat, though: all modules must share at least one common dimension/hierarchy and/or have a Top Level set for non-matching dimensions.

Use case example: Based on user-entered settings, we want to compare the values from two time periods for metrics from three different modules and calculate the absolute and percentage variances. The source modules all share a common dimension:

- REV03 Margin Calculation: G2 Countries, P2 Products, Month
- EMP03 Employee Expenses by Country: G2 Countries, Month
- OTH01 Non-Employee Expenses: G3 Location, E1 Departments, Month (G3 Location has a G2 Country as a parent)

The metrics required are: Margin, Salary, Bonus, Rent, and Utilities.

We could solve this problem without using a line item subset:

1. Create a list (Reporting Metrics) containing the list items above.
2. Create a module with the following dimensions: Reporting Metrics, G2 Country, Users.
3. The formula for Month 1 is:

IF ITEM(Reporting Metrics) = Reporting Metrics.Margin THEN 'REV03 Margin Calculation'.Margin[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 1'] ELSE IF ITEM(Reporting Metrics) = Reporting Metrics.Salary THEN 'EMP03 Employee Expenses by Country'.Salary[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 1'] ELSE IF ITEM(Reporting Metrics) = Reporting Metrics.Bonus THEN 'EMP03 Employee Expenses by Country'.Bonus[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 1'] ELSE IF ITEM(Reporting Metrics) = Reporting Metrics.Rent THEN 'OTH01 Non Employee Expenses'.Rent[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 1'] ELSE IF ITEM(Reporting Metrics) = Reporting Metrics.Utilities THEN 'OTH01 Non Employee Expenses'.Utilities[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 1'] ELSE 0

I won't repeat the formula for Month 2, as it is effectively the same, just referencing the Month 2 line item in the source.
You can see that, even for a small set of metrics, this is a large, complex formula that goes against best practices. So, let's not do that. Instead:

1. Create the line item subset as before. For multi-module line item subsets, it is best practice to use "Multi>" in the name to represent the various modules.
2. Open the line item subset and choose the three modules.
3. Create a staging module (best practice following the D.I.S.C.O. principle) with the following dimensions: LIS: Multi>Variance Reporting, G2 Country, Time (Months).
4. Add a line item (Data) and enter COLLECT() as the formula. Set the Summary method to None; we do not need subtotals in this module.
5. Create a mapping module dimensioned by Reporting Metrics. Add a line item formatted as LIS: Multi>Variance Reporting, and map the lines accordingly.
6. In the reporting module from above, change the Month 1 and Month 2 line item formulae to:

'REP05 Variance Report Staging'.Data[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 1', LOOKUP: 'SYS12a Reporting Metrics Mapping'.Mapping]
'REP05 Variance Report Staging'.Data[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 2', LOOKUP: 'SYS12a Reporting Metrics Mapping'.Mapping]

Note that this time we are using LOOKUP rather than SUM, because the source dimension doesn't match the dimension of the mapping module. I think you'll agree that the formula is much easier to read, and it is more efficient.

However, we can do even better. Note that there are now two lookups in the formula, and the more "transformations" there are in a formula, the more work the engine needs to do. We can remove one of them by changing the target module dimensionality:

1. Copy the reporting module from above.
2. Remove the formulae for Month 1 and Month 2.
3. Replace Reporting Metrics with LIS: Multi>Variance Reporting as the dimension (Applies To).
4. Add the following formulae for Month 1 and Month 2 respectively:

Month 1 = 'REP05 Variance Report Staging'.Data[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 1']
Month 2 = 'REP05 Variance Report Staging'.Data[LOOKUP: 'SYS11 Time Variance Reporting'.'Month 2']

Note that only one lookup is now needed in the formula.
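To summarize the mapping rule of thumb used in the two patterns above (the module names here are illustrative, not from the example model): when the mapping module is dimensioned by the source's dimension, aggregate with SUM; when it is dimensioned by the target's dimension, reference with LOOKUP.

Source and mapping module share the line item subset dimension:
Costs = 'Staging Module'.Data[SUM: 'Mapping Module'.Mapping]

Target and mapping module share the Reporting Metrics dimension:
Month 1 = 'Staging Module'.Data[LOOKUP: 'Mapping Module'.Mapping]

In other words, as stated earlier: "if the source is the same, it's a SUM."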
Filters

Another use case for line item subsets is filtering, and this functionality has nothing to do with staging data or mapping modules. It is possible to filter line items, and these can be filtered based on other dimensions too.

Use case example: Based on user-entered settings, we want the reporting module (REP03) to show different line items for each year and version. We have already set up the Years to Versions filter module, and we now want to set up the user-driven parameters. To ensure that users' settings do not affect each other, we need to use the system-generated Users list.

1. Create a line item subset based on REP03 and select all line items.
2. Create a new module with the following dimensions: LIS: REP03 Filters, Users, Versions.
3. Add a single line item (Show?) formatted as a Boolean.
4. Enter values as you wish.

Note that Employee Expenses and Other Costs are not available to check. This is because, in REP03, they are a simple aggregation and are shown as parents of the other line items. So, how do we resolve this? You can "trick" the model by turning their Is Summary settings off; the subtotals then become available to check in the filter module. Be careful when doing this, though: if you are using the line item subset as a dimension in a data entry module, the totals will not calculate correctly. See Final Thoughts for more details.

To set up the filter, apply the Show? line item as a filter in REP03. The module will now filter line items and years when the version page selector is changed. Note that the subtotals work correctly in this module because it is not used for data entry.

Dynamic Cell Access

Line item subsets can be used in conjunction with Dynamic Cell Access (DCA) to provide very fine-grained control over data; again, without any mapping modules or COLLECT() statements.

Use case example: In the EMP01 module, the following rules apply:

- Bonus % is set by the central team, so it needs to be read-only.
- All metrics for Exec are not allowed to be edited.
- Car Allowances are not applicable for Production.
- Phone Allowances are not applicable for Production, Finance, or HR, and the allowances for Sales should be read-only.

To set up the access:

1. Create a line item subset based on EMP01 and select all line items.
2. Create an Access Driver module with the following dimensions: LIS: EMP01 DCA, G2 Country, E1 Departments.
3. Add two Boolean-formatted line items: Read? and Write?.
4. Enter the values implementing the rules above.
5. In EMP01, assign the Read Access and Write Access drivers to the module.

The module now masks and locks cells according to those rules.

Line Item Subsets with Line Item Subsets

I mentioned at the outset that you lose formatting when using a line item subset. However, in some cases, it is possible to keep formatting along with calculations.

Use case example: Using the values from REP03, we want to classify Sales and Costs and then calculate a cost % to sales. Yes, we could do this in the module itself as a separate line item, but we also want to be able to reclassify the source line items from a dashboard using mappings, rather than change the blueprint formula. We also want to maintain formatting. (For this example, I have just changed the styles to illustrate the point.)

1. Create a line item subset based on REP03.
2. Create a staging module with the following dimensions: LIS: REP03 Cost%, G2 Country, Time (Years).
3. Add a line item called Data, enter COLLECT() as the formula, and set the Summary method to None.
4. Create a second line item subset based on REP10 (the target module).
5. Create a mapping module dimensioned by LIS: REP03 Cost%. Add a line item formatted as LIS: REP10, and map the lines accordingly.
6. In the target module, set the following formula for both the Sales and Costs line items (yes, it is the same formula for both!):

'REP09 LISS vs LISS - Staging'.Data[SUM: 'SYS20 Cost% Mapping'.Mapping]

Note that the formatting is preserved.

Version Formula

Finally, I want to mention a piece of functionality that is not well known but very powerful: Version Formula. Utilizing line item subsets in conjunction with versions, Version Formula extends the "formula scope" functionality. It is possible to control formulae using Formula Scope, but the options there are limited.

Use case example: Let's assume that we have actuals data in one module and budget data in another, and we want the forecast to be writeable. The current version (in the Versions settings) is set to Forecast. For this example, there is only one line item in the target module, but this functionality allows the flexibility to set different rules per version for each line item.

1. Create a line item subset based on the above and select the line item(s).
2. In the blueprint view of the target module, click Edit > Add Version Formula.
3. Choose the version to which the formula applies. You will now see a different formula bar at the top of the blueprint view.
4. Enter the following formula (for Actual): 'DATA01 P&L Actuals & Budget'.Revenue
5. Repeat the above for Budget with the following formula: 'REV03 Margin Calculation'.Revenue

Note that at the top of the blueprint you can now see that a Version Formula is set.

Final Thoughts

We mentioned the aggregation behavior and the Is Summary setting earlier. Let me show you how this setting and the construction of the formulas affect the behavior of the line item subset. We will use a simple module as an example; this module is only used to set up the line item subset, so no dimensions are needed. Note that the subtotal formulae are simple aggregations. This means the subtotal lines:

1. Calculate correctly when used as a dimension in a module.
2. Are not available for data entry.

(A module dimensioned by the line item subset demonstrates points 1 and 2 above.)

If we decide we don't want the Employee Costs subtotal in the line item subset, two things happen:

- The indentation changes for the detailed cost lines, because they are no longer part of a parent hierarchy on display.
- The Costs subtotal doesn't calculate, because the Costs subtotal needs the intermediate subtotals to exist within the line item subset.

To mitigate the latter point, there are two remedies:

1. Include the subtotals and hide them. The lines are still calculating and taking space.
2. If possible, adjust the formula structure: remove the subtotal formulae, rewrite the Costs formula to use the detailed items (no intermediate totals), then re-add the subtotal formulae. With the Parent and Is Summary settings updated, the Costs subtotal now calculates correctly.

If we change the formulae to be something other than simple addition, the calculation is fine in the source module, but not in the line item subset module. Why is this? Remember the Is Summary setting we changed in the Filters section: when we adjusted the formula, Is Summary became unchecked. This means the line item subset doesn't treat the line as a calculation, hence the data entry 0 shown instead. If your costs need to be shown as positive (as in this example), it is possible to calculate correctly using a ratio formula. This works for normal line items/lists as well as line item subsets. See Changing the Sign for Aggregation for more details.
Learn how using a ratio can solve the problem of showing costs as positive numbers whilst subtracting them from totals.
General Recommendations

As model calculations increase, it is important to ensure that the calculations and structures are as optimal as possible. Easy checks include:

- Ensure the dimension order in the Applies To is consistent.
- Consider whether all of the dimensions for the calculations are necessary.
- Reduce the number of line items with the same formulas (calculate once, reference many times).
- Consider the use of subsets and line item subsets.
- Use a balanced approach to sparsity.
- Ensure "text joins" are optimal.

See the following articles for more information on the above:

- The Truth About Sparsity: Part 1
- The Truth About Sparsity: Part 2
- Best Practice for Dimension Ordering
- Reduce Calculations for Better Performance
- Formula Optimization in Anaplan

Customer Requirements

Whenever possible, challenge your customer's business requirements when they call for a large list (more than 1 million items), a large amount of data history (this should be kept out of the planning model if possible), or a high number of dimensions used at the same time for a line item (more than 5).

Other Practices

Once these general recommendations have been applied, you can optimize performance in different areas. The articles below expand on each subject:

- Imports and exports and their effects on model performance
- Dashboard settings that can help improve model performance
- Formulas and their effect on model performance
- Model load, model save, and model rollback and their effect on model performance
Imagine the following scenario: You need to make regular structural changes to a deployed model (for example, weekly changes to the switchover date). You can make these changes by setting revision tags in the development model. However, you also have a development cycle that spans the structural changes.

What do you do? Let's start with what you don't want to do: take the model out of deployed mode, lose all the development work you have been doing, or synchronize partially developed changes. Don't worry! You can manage all of this by following the procedure below (illustrated, for switchover, in the diagram from the original article). It's all about planning ahead.

Before starting development activities:

1. Make the regular structural change (e.g., the switchover period) and set a revision tag to preserve it.
2. Create the next revision tag for the next structural change.
3. Repeat for as many revision tags as necessary. Remember to leave enough breathing space to allow for the normal development activities, and probably allow for a couple more, just in case.

Start developing:

1. When needed, synchronize to the relevant revision tag without promoting the partial development changes.
2. When the development activities are ready, ensure that the correct structural setting is made (e.g., the correct switchover period), create the revision tag, and synchronize the model.
3. Repeat the steps above to set up the next "batch" of revision tags to cover the development window.
As described in the Authentication API documentation, an authentication token is needed to issue requests with API 2.0. The request for a token is made to:

https://auth.anaplan.com/token/authenticate

API 2.0 supports both Basic Authentication and CA certificates. Basic Authentication is achieved in the same manner as for API 1.3; the only difference is that it is used with the authentication request to get the token, not on each individual request. CA certificate authentication requires that the public key and private key be used in the request to obtain an authentication token. The following steps are needed to create the request using a CA certificate.

1. Add the contents of the public key to the Authorization header:

Authorization: CACertificate {your_CA_certificate}

{your_CA_certificate} should be replaced by the contents of the public key: everything between the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines, not including the lines themselves.

2. The Content-Type header should be "application/json".

3. The body of the request should contain the following JSON:

{
  "encodedData" : "{encodedString}",
  "encodedSignedData" : "{signedString}"
}

The {encodedString} value should be a randomly generated, base-64 encoded string of at least 100 bytes. The {signedString} value is the {encodedString} value signed by the private key and then base-64 encoded. Ideally, each time an application needs to connect to Anaplan, it should generate a new random string and signed string. If this is not possible, they can be generated once and reused.

Examples: The documentation has some sample Java code that can be added to an existing API implementation. A full Python library has been published in Community and can be used as needed to develop an authentication process. Below is a simple chunk of Python code, based on the library above, that outputs the strings needed. The paths to the public key and private key files need to be entered into the certfile and keyfile variables respectively. This code requires Python 3 and pyOpenSSL (the request example that follows also needs the Requests library).

from base64 import b64encode
import os

from OpenSSL import crypto

certfile = "{path_to_public_key_file}"
keyfile = "{path_to_private_key_file}"

# Load the public certificate and the private key.
st_cert = open(certfile, 'rt').read()
cert = crypto.load_certificate(crypto.FILETYPE_PEM, st_cert)
st_key = open(keyfile, 'rt').read()
key = crypto.load_privatekey(crypto.FILETYPE_PEM, st_key)

# Build the authorization header from the certificate contents,
# stripping the BEGIN/END lines and any newlines.
auth_headers = "'authorization': 'CACertificate %s'" % (
    st_cert.replace("\n", "")
           .replace("-----BEGIN CERTIFICATE-----", "")
           .replace("-----END CERTIFICATE-----", ""))
print(auth_headers, '\n')

# Generate a random 100-byte string, then sign it with the private key.
random_str = os.urandom(100)
signed_str = crypto.sign(key, random_str, "sha512")

# Base-64 encode both values for the request body.
encodedstr = b64encode(random_str)
signedstr = b64encode(signed_str)

print("{")
print("  'encodedData': %s" % encodedstr.decode("utf-8"))
print("  'encodedSignedData': %s" % signedstr.decode("utf-8"))
print("}")

This code outputs the header needed, as well as the JSON body needed for the authentication request.
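To complete the picture, here is a minimal sketch of the token request itself using the Requests library, reusing the values generated by the code above. This sketch is not from the original article; in particular, the response parsing assumes the token is returned under tokenInfo.tokenValue, so verify the exact response shape against the Authentication API documentation.

import requests

# Certificate contents without the BEGIN/END lines or newlines (as above).
cert_contents = (st_cert.replace("\n", "")
                        .replace("-----BEGIN CERTIFICATE-----", "")
                        .replace("-----END CERTIFICATE-----", ""))

response = requests.post(
    "https://auth.anaplan.com/token/authenticate",
    headers={
        "Authorization": "CACertificate %s" % cert_contents,
        "Content-Type": "application/json",
    },
    json={
        "encodedData": encodedstr.decode("utf-8"),
        "encodedSignedData": signedstr.decode("utf-8"),
    },
)
response.raise_for_status()

# Assumed response shape; check the Authentication API documentation.
token = response.json()["tokenInfo"]["tokenValue"]
print(token)

The returned token is then passed on subsequent API 2.0 requests instead of per-request credentials.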
This article covers the necessary steps to update the iPaaS connectors for HyperConnect/Informatica Cloud, Dell Boomi, MuleSoft, and SnapLogic. See the article A Guide to CA Certificates in Anaplan Integrations on Anaplan Community for the steps to process a certificate once it has been procured.

HyperConnect/Informatica Cloud

Authentication within HyperConnect/Informatica Cloud is handled at the connection level. There should be a connection for each model that is used within the integrations. HyperConnect/Informatica Cloud supports basic authentication and certificate authentication. The steps to use Certificate Authority (CA) certificates with HyperConnect/Informatica are:

1. Each connection must use the "Anaplan V2" connector.
2. A Java keystore containing both the public and private keys needs to be created and placed where the secure agent can access it.
3. In each connection:
   - Set the Auth Type to "Cert Auth".
   - Clear the "Certificate Path Location" field.
   - Set the API Major Version to 2.
   - Set the API Minor Version to 0.
   - Enter the full path to the Java keystore in the "KeyStore Path Location" field.
   - Enter the alias used when the Java keystore was created in the "KeyStore Alias" field.
   - Enter the password for the Java keystore in the "KeyStore Password" field. Note that the password is masked.
   - Test for connectivity.

Dell Boomi

Authentication within Dell Boomi is handled at the connection level. There should be a connection for each workspace that is used within the integrations. Dell Boomi supports basic authentication and certificate authentication. The steps to use CA certificates with Dell Boomi are:

1. Each connection must use the "Anaplan" version of the connector. The "Anaplan V2" and "Anaplan (legacy)" versions are not current and do not support CA certificate authentication.
2. A P12 bundle of both the public and private keys needs to be created. The file received from the CA provider is sometimes already in the P12 bundle format. To test this, use the Java keytool to run the following command:

keytool -list -v -storetype pkcs12 -keystore %path to keystore%

Within the output of the command, there should be an "Alias name" property; this value will be used in the connection. If the certificate does not contain the alias, a P12 bundle can be created using OpenSSL (see the example command after these steps, and Creating a Java Keystore for the full walkthrough; once the bundle is created, the remaining steps in that article are not needed).
3. In Dell Boomi:
   - Create a new object. Type: Certificate. Certificate Type: X.509. The name and location of the certificate are up to you.
   - Click "Create".
   - Import the P12 bundle file.
   - Edit the connection.
   - Ensure the URL points to "https://api.anaplan.com/2/0".
   - Set the Authentication Type to "Client Certificate".
   - Select the certificate created above from the "Certificate" dropdown.
   - Enter the alias used in the P12 bundle into the "Private Key Alias" field.
   - Enter the password for the P12 bundle in the "Password" field.
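For reference, a P12 bundle can typically be created from a PEM certificate and private key with an OpenSSL command along these lines (the file names and alias here are illustrative; the Creating a Java Keystore article remains the authoritative walkthrough):

openssl pkcs12 -export -in certificate.pem -inkey private_key.pem -out keystore.p12 -name myalias

The -name value becomes the "Alias name" that the connection's "Private Key Alias" field expects, and you will be prompted for an export password, which becomes the bundle password.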
MuleSoft

Authentication within MuleSoft is handled at the connection level. Typically, only a single connection is needed. MuleSoft supports basic authentication and certificate authentication. The steps to use CA certificates with MuleSoft are:

1. Create a Java keystore containing both the public and private keys.
2. Enter the full path to the Java keystore in the "Key store path" field.
3. Enter the alias used when the Java keystore was created in the "KeyStore Alias" field.
4. Enter the password for the Java keystore in the "KeyStore Password" field. Note that the password is masked.

SnapLogic

Authentication within SnapLogic is handled at the connection level. Typically, only a single connection is needed. SnapLogic supports basic authentication and certificate authentication. The steps to use CA certificates with SnapLogic are:

Public key:
1. Open the public key file in a text editor.
2. Copy everything from "-----BEGIN CERTIFICATE-----" through "-----END CERTIFICATE-----".
3. Paste the contents into the "External certificate contents" field.

Private key (note that the private key cannot be encrypted for use in SnapLogic):
1. Open the private key file in a text editor.
2. If the key information begins with "-----BEGIN RSA PRIVATE KEY-----", the key is not encrypted; continue with step 4.
3. If the key information begins with "-----BEGIN ENCRYPTED PRIVATE KEY-----", the key needs to be decrypted prior to use. Issue the following OpenSSL command to create a new, unencrypted private key file from the original:

openssl rsa -in private_key.pem -out unencrypted_private_key.pem

4. Copy everything from "-----BEGIN RSA PRIVATE KEY-----" through "-----END RSA PRIVATE KEY-----".
5. Paste the contents into the "External private key" field.
As a business operations manager on the Anaplan on Anaplan (AoA) team (an internal team focused on bringing Connected Planning to life within Anaplan), I help oversee our internal Anaplan model ecosystem and assist in the solutioning and development of Anaplan models across all of our functional business groups.

As Anaplan's largest customer, one of the numerous requirements we must address is user access and security. Utilizing Anaplan's user roles functionality typically gets the job done for granting users access to specific models. Occasionally, we must go one step further and leverage Anaplan's selective access feature. Roles and selective access are powerful tools and address our needs nearly all of the time. However, as we scale our own use of Anaplan, we have begun to encounter the need to provision users' access to lists based on multiple criteria, rather than just a single condition.

In Real Life

A real-life user provisioning challenge we've encountered is in our headcount planning model. As this model provides real-time reporting on our employees, there are inherent sensitivities around who can see information for specific employees, taking into consideration visibility to things like compensation and personally identifiable information (PII). We have multiple use cases built out within the model (recruiting capacity and analysis, attrition reporting, hiring reporting, etc.), and access to specific employee data depends on the end user of the model.

Sample employee roster: Joey manages Usain, Eluid, and Meb; Americas geo; HR cost center.

In this example model, we have our complete employee roster included. If an HR business partner accesses the model, we want them to see only employees tagged to the functional area they support (e.g., finance, sales). Additionally, if a business manager goes into the model, they should only see information for employees where they are the manager, or employees downstream in their management chain. But wait! If the HR business partner is in Europe, they shouldn't be able to see PII fields for their employees. Do you see how this could get complicated quickly? Additionally, some dashboards contain non-sensitive employee information that is perfectly fine to open up broadly to all users, while others contain sensitive data we need to provision.

What's Next

So, how do we handle this? We can't provision access by roles, because all of the aforementioned users need access to the same modules and dashboards as they relate to the employees they manage. Additionally, no single user should be able to see all data for all employees. Selective access could be considered as a solution, but given the levels of complexity and multiple logical drivers, as well as the requirement not to hide reporting of non-sensitive data, that option also has limitations.

Enter Dynamic Cell Access (DCA). Since DCA allows us to base read/write access on formula logic, it offers the ability to layer on multiple levels of logic before deciding whether someone should be able to read or write a particular item in a list. It's dynamic (who would have thought, with that name?), which means it adjusts live as data within the model changes. Additionally, it offers the flexibility to apply the provisioning logic to the exact modules we want, rather than blanket provisioning users across the model.

DCA In Action

The following is a high-level example of how to leverage the power of DCA:

1. Load employee roster data into Anaplan, ensuring the data contains the employee email, the same email used to log in to Anaplan. This allows for the mapping of Anaplan users to the employee roster.
2. Set up a System module dimensioned by the Users list. In this user meta-data staging module, rows represent model users (Joey, in this example) and the line items hold meta-data from the roster module. Within this module we can join the employee roster data and the Users list to map each employee's meta-data to their Anaplan user profile (e.g., cost center, location, management chain, etc.). Using a series of Boolean line items, we can write whatever logic we want to base our DCA on; in our example, this could include "Is HR business partner?" and "Is Euro?". Essentially, this is a staging module for all of the employee meta-data we want to leverage to create our DCA drivers.
3. Set up a second System module dimensioned by whatever list you want to apply DCA against, as well as the Users list; in our case, this is the employee roster list. Create a series of Boolean line items testing attributes of the user System module we just set up against the meta-data of the employees. An example would be: Employee Cost Center = User's Cost Center. In this DCA logic module, the line items represent the logic used to determine whether the user (Joey, in the page selector) can see each employee. The key here is to consolidate all of your logic into a single "Master" line item: daisy-chain your conditions together as desired, with the end result being a master Boolean line item that drives whether or not a particular user has read or write access to a particular item within the list (see the illustrative formula after these steps). With this in place, information is masked for employees that do not meet all of the criteria identified in the master DCA line item.
4. Select which modules you'd like to apply DCA to. The nice thing about DCA is that you can go down to the line item level when mapping the master Boolean driver.
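As an illustration of such a master driver, here is roughly what the daisy-chained Boolean might look like (the line item names are hypothetical, not taken from the actual AoA model):

Master Read? = Is On Management Chain? OR (Is HR Business Partner? AND Matches Functional Area?)

PII-sensitive line items can then point to a stricter driver that additionally tests NOT Is Euro?, since access drivers are assigned per line item.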
The incredible power of the process described above is not only the complete control over, and ability to customize, your user provisioning, but also that as new roster data is loaded into Anaplan, the DCA automatically adjusts itself to account for changes. So, if someone changes cost centers, or an employee's manager changes, the formulas set up above reference the new employee meta-data and automatically adjust the DCA drivers, allowing for a much more hands-off, sustainable approach to user provisioning.

Another inadvertent benefit we discovered with this methodology is that Anaplan treats cells that are blank as a result of DCA drivers as blank for filtering purposes. So, if you want to set up a dashboard that auto-filters employees for the end user based on the logic above, you just have to add a line item hardcoded to contain values for every list item, and then filter that line item for not-blanks on your dashboards. Then you have a dynamic filter based on the user that is viewing the model. Pretty slick! Take this one step further and filter for not-blanks on a line item that will always contain data for an employee, and you get completely custom reporting based on which end user is viewing the dashboards.
What are the benefits and drawbacks of using Versions instead of a general list?
What is Pre-Allocation in Lists?

Pre-allocation in lists is a mechanism in Anaplan that adds a buffer to list lengths. It is not added by default; it becomes enabled when a role is set on a list. (Please follow Planual rule 1.03-01, though: only add roles when needed.) When pre-allocation is enabled, a 2 percent buffer is added to the list, and this covers all line items where the list is used. This means extra space is created (in memory) for each line item, so that when a new list item is added, the line item does not need to be expanded or restructured. When the buffer is used up (the list has run out of free slots), another 2 percent buffer is created and any line items using the list are restructured.

This buffer is not shown in the list settings in Anaplan: if we had a list with 1,000 items, that is what Anaplan would show as the size, but in the background that list has an extra 20 hidden and unused items. Pre-allocation also applies to list deletions, but allows for 10 percent of the list to be deleted before any line items using the list get restructured. The purpose of pre-allocation in lists is to avoid restructuring line items that use frequently updated lists.

What Happens When We Restructure?

Restructuring the model is an expensive task in terms of performance and time. The Anaplan Hyperblock gets its efficiency by holding your data and multi-dimensional structures in memory, memory being the fastest storage space for a computer. Creating the model structures in memory (building the Hyperblock) does take significant time to complete, but once the model is in memory, access is quick. The initial model opening is when we first build those structures in memory; once there, any further model opens (by other users, for example) are quick.

Restructuring is the process of having to rebuild some parts of the model in memory. In the case of adding an item to a list, that means any line item that uses that list as a dimension. When restructuring a line item, we have to recalculate it, and this is often where we see the performance hit, because line items have references, so there is a calculation chain from any line item changed by that restructuring. Pre-allocation is there to reduce this extra calculation caused by restructuring.

An example of this was seen in a model that was adding to a list of trial products. These products would then have a set of forecast data calculated from historical data for real products. The list of these new products was small and changed reasonably frequently; it contained around 100 items. Adding an item took around two seconds, except every third item, which took two minutes. This happened because of the difference between adding to the pre-allocated buffer and running the full calculation (with re-adjustment of the buffer). Without pre-allocation, every list addition would have taken two minutes. Fortunately, we managed to optimize that calculation down from two minutes to several seconds, so the difference between adding to the pre-allocation buffer and the full calculation was around five seconds, a much more acceptable difference.

In summary, pre-allocation on lists can give us a great performance boost, but it works better with large lists than small lists.

Small, Frequently Updated Lists

As we've seen, the pre-allocation buffer size is 2 percent, so on a large list (say, one million items) we have a decent-sized buffer and can add many items.
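To make the arithmetic concrete, here is a small Python sketch of the buffer sizes implied by the 2 percent rule (Anaplan's exact internal rounding is not documented, so treat this as an approximation):

def preallocation_buffer(list_size, buffer_pct=0.02):
    # Approximate free slots created by pre-allocation, per the 2 percent rule.
    return max(1, int(list_size * buffer_pct))

for size in (100, 1000, 1000000):
    buffer = preallocation_buffer(size)
    # The buffer absorbs additions; the next addition triggers a restructure.
    print(f"{size:,} items -> ~{buffer:,} free slots; restructure on addition {buffer + 1}")

A 100-item list gets only two free slots, which is why every third addition restructures, while a one-million-item list can absorb roughly 20,000 additions before restructuring.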
When we have a small list that is frequently updated, and that list is heavily used throughout the model, the performance characteristic you will see is calculation times that swing between fast and slow as the buffer is repeatedly refilled. A list with 100 items will restructure and recalculate on every third addition, and this remains noticeable for quite some time: doubling the list size still only adds four unused items (2 percent of 200). In cases like this, it is very important to reduce and optimize the calculations as much as possible.

What Can Be Done?

There are a few options. You could always make the list bigger and increase the buffer so that it restructures less. How?

Option 1: Create a subset of "active" items, ignoring the additional list items used to bulk out the list. The problem with this is that the size of any line items using the list would increase, and so would their calculations. Changing from a 100-item list to a 10,000-item or even 1,000-item list (enough to give us a bigger buffer) could greatly increase the model size.

Option 2: Create a new list that is not used in any modules, so we avoid any restructuring costs. This would work, but it adds a lot of extra manual steps. You would have this new list used in a single data entry module, which means this data is unconnected with the rest of the model, and being connected is what gives us value. You would then need to create a manual process to push data from this unconnected module to one that is connected to the rest of the model (this way all the changes happen at once). We do lose the real-time updates and benefits of Connected Planning, though.

Option 3: Reduce the impact of restructuring by optimizing the model and the formulas. This is our best option: if we have quick calculations, the difference between buffered and unbuffered list additions can be small. The best way to achieve this is through a Model Optimization Success Accelerator, a tailored service delivered by Anaplan Professional Services experts who aim to improve model performance through calculation optimizations. Please discuss this service with your Anaplan Business Partner. You can also follow our best practice advice and reference the Planual to find ways to optimize your own models.
This article covers the necessary steps to migrate your Anaplan Connect (AC) 1.3.x.x script to Anaplan Connect 1.4.x. For additional details and examples, refer to the latest Anaplan Connect User Guide. The changes are:

- New connectivity parameters.
- References to the Anaplan certificate replaced with Certificate Authority (CA) certificates, using new parameters.
- Optional chunk size and retry parameters.
- Changes to the JDBC configuration.

New Connectivity Parameters

Add the following parameters to your Anaplan Connect 1.4.x integration scripts. These parameters provide connectivity to Anaplan and Anaplan authentication services. Note: both of the URLs listed below need to be whitelisted with your network team.

-service "https://api.anaplan.com/"
-auth "https://auth.anaplan.com"

Certificate Changes

As noted in our Anaplan-Generated Certificates to Expire at the End of 2019 blog post, new and updated Anaplan integration options support Certificate Authority (CA) certificates for authentication. Basic Authentication is still available in Anaplan Connect 1.4.x; however, the use of certificates has changed. In Anaplan Connect 1.3.x.x, the script references the full path to the Anaplan certificate file. For example:

-certificate "/Users/username/Documents/AnaplanConnect1.3/certificate.pem"

In Anaplan Connect 1.4.x, the CA certificate can be referenced via two different options. Examples of both options are included at the end of this article, as well as in the Anaplan Connect 1.4.x download.

Option 1: Direct Use of the Private Key with Anaplan Connect

Use your private key with Anaplan Connect by providing the certificate, the private key, and an optional private key passphrase. If your private key has been encrypted, use the following:

CertPath="FullPathToThePublicCertificate"
PrivateKey="FullPathToThePrivateKey:Passphrase"

If your private key has not been encrypted, the passphrase can be omitted; however, the colon is still needed at the end of the private key path:

CertPath="FullPathToThePublicCertificate"
PrivateKey="FullPathToThePrivateKey:"

To pass these values to Anaplan Connect 1.4.x, use these command line parameters:

-certificate {path to the certificate file}
-privatekey {path to the private key file:}{passphrase}

These parameters should be passed as part of the credentials in the script:

Credentials="-certificate ${CertPath} -privatekey ${PrivateKey}"

Option 2: Create a Java Keystore

A Java keystore (JKS) is a repository of security certificates and their private keys. Refer to this video for a walkthrough of getting the CA certificate into the keystore; you can also refer to the Anaplan Connect User Guide for steps to create the Java keystore. Once you have imported the key into the JKS, make note of this information:

- The path to the JKS (the directory path on the server where the JKS is saved).
- The password to the JKS.
- The alias of the certificate within the JKS.

For example:

KeyStorePath="/Users/username/Documents/AnaplanConnect1.4/my_keystore.jks"
KeyStorePass="your_password"
KeyStoreAlias="keyalias"

To pass these values to Anaplan Connect 1.4.x, use these command line parameters:

-keystore {KeystorePath}
-keystorealias {KeystoreAlias}
-keystorepass {KeystorePass}

These parameters should be passed as part of the credentials in the script:

Credentials="-keystore ${KeyStorePath} -keystorepass ${KeyStorePass} -keystorealias ${KeyStoreAlias}"

Chunk Size

Anaplan Connect 1.4.x allows for custom chunk sizes on files being imported.
The -chunksize parameter can be included in the call, with the value being the size of the chunks in megabytes. The chunk size can be any whole number between 1 and 50.

-chunksize {SizeInMBs}

Retry

Anaplan Connect 1.4.x allows the client to retry requests to the server in the event that the server is busy. The -maxretrycount parameter defines the number of times the process retries the action before exiting. The -retrytimeout parameter is the time in seconds that the process waits before the next retry.

-maxretrycount {MaxNumberOfRetries}
-retrytimeout {TimeoutInSeconds}

Changes to JDBC Configuration

With Anaplan Connect 1.3.x.x, the parameters and query for using JDBC are stored within the Anaplan Connect script itself. For example:

Operation="-file 'Sample.csv' -jdbcurl 'jdbc:mysql://localhost:3306/mysql?useSSL=false' -jdbcuser 'root:Welcome1' -jdbcquery 'SELECT * FROM py_sales' -import 'Sample.csv' -execute"

With Anaplan Connect 1.4.x, the parameters and query for using JDBC have been moved to a separate file. The name of that file is then added to the AnaplanClient call using the -jdbcproperties parameter. For example:

Operation="-auth 'https://auth.anaplan.com' -file 'Sample.csv' -jdbcproperties 'jdbc_query.properties' -import 'Sample.csv' -execute"

To run multiple JDBC calls in the same operation, a separate jdbcproperties file is needed for each query. Each set of calls in the operation should include the following parameters: -file, -jdbcproperties, -import, and -execute. For example:

Operation="-auth 'https://auth.anaplan.com' -file 'SampleA.csv' -jdbcproperties 'SampleA.properties' -import 'SampleA Load' -execute -file 'SampleB.csv' -jdbcproperties 'SampleB.properties' -import 'SampleB Load' -execute"

JDBC Properties File

Below is an example of the JDBC properties file. Refer to the Anaplan Connect User Guide for more details on the properties shown. If the query statement is long, it can be broken up over multiple lines by using the \ character at the end of each line. No \ is needed on the last line of the statement; the \ must be at the end of the line and nothing can follow it.

jdbc.connect.url=jdbc:mysql://localhost:3306/mysql?useSSL=false
jdbc.username=root
jdbc.password=Welcome1
jdbc.fetch.size=5
jdbc.isStoredProcedure=false
jdbc.query=select * \
from mysql.py_sales \
where year = ? \
and month != ?;
jdbc.params=2018,04

CA Certificate Examples

Direct Use of the Private Key

Anaplan Connect Windows BAT script example (with direct use of the private key):

@echo off
rem This example lists files in a model
set CertPath="C:\CertFile.pem"
set PrivateKey="C:\PrivateKeyFile.pem:passphrase"
set WorkspaceId="Enter WS ID Here"
set ModelId="Enter Model ID here"
set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -workspace %WorkspaceId% -model %ModelId% -F
set Credentials=-certificate %CertPath% -privatekey %PrivateKey%
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
set Command=.\AnaplanClient.bat %Credentials% %Operation%
@echo %Command%
cmd /c %Command%
pause

Anaplan Connect shell script example (with direct use of the private key):

#!/bin/sh
# This example lists files in a model
CertPath="/path/CertFile.pem"
PrivateKey="/path/PrivateKeyFile.pem:passphrase"
WorkspaceId="Enter WS ID Here"
ModelId="Enter Model Id Here"
Operation="-service 'https://api.anaplan.com' -auth 'https://auth.anaplan.com' -workspace ${WorkspaceId} -model ${ModelId} -F"
#________________ Do not edit below this line __________________
if [ "${PrivateKey}" ]; then
    Credentials="-certificate ${CertPath} -privatekey ${PrivateKey}"
fi
echo cd "`dirname "$0"`"
cd "`dirname "$0"`"
if [ ! -f AnaplanClient.sh ]; then
    echo "Please ensure this script is in the same directory as AnaplanClient.sh." >&2
    exit 1
elif [ ! -x AnaplanClient.sh ]; then
    echo "Please ensure you have executable permissions on AnaplanClient.sh." >&2
    exit 1
fi
Command="./AnaplanClient.sh ${Credentials} ${Operation}"
/bin/echo "${Command}"
exec /bin/sh -c "${Command}"

Using a Java Keystore (JKS)

Anaplan Connect Windows BAT script example (using a Java keystore):

@echo off
rem This example lists files in a model
set Keystore="C:\YourKeyStore.jks"
set KeystoreAlias="alias1"
set KeystorePassword="mypassword"
set WorkspaceId="Enter WS ID Here"
set ModelId="Enter Model ID here"
set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -workspace %WorkspaceId% -model %ModelId% -F
set Credentials=-k %Keystore% -ka %KeystoreAlias% -kp %KeystorePassword%
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
set Command=.\AnaplanClient.bat %Credentials% %Operation%
@echo %Command%
cmd /c %Command%
pause

Anaplan Connect shell script example (using a Java keystore):

#!/bin/sh
# This example lists files in a model
KeyStorePath="/path/YourKeyStore.jks"
KeyStoreAlias="alias1"
KeyStorePass="mypassword"
WorkspaceId="Enter WS ID Here"
ModelId="Enter Model Id Here"
Operation="-service 'https://api.anaplan.com' -auth 'https://auth.anaplan.com' -workspace ${WorkspaceId} -model ${ModelId} -F"
#________________ Do not edit below this line __________________
if [ "${KeyStorePath}" ]; then
    Credentials="-keystore ${KeyStorePath} -keystorepass ${KeyStorePass} -keystorealias ${KeyStoreAlias}"
fi
echo cd "`dirname "$0"`"
cd "`dirname "$0"`"
if [ ! -f AnaplanClient.sh ]; then
    echo "Please ensure this script is in the same directory as AnaplanClient.sh." >&2
    exit 1
elif [ ! -x AnaplanClient.sh ]; then
    echo "Please ensure you have executable permissions on AnaplanClient.sh." >&2
    exit 1
fi
Command="./AnaplanClient.sh ${Credentials} ${Operation}"
/bin/echo "${Command}"
exec /bin/sh -c "${Command}"
Note that this article uses a planning dashboard as an example, but many of these principles apply to other types of dashboards as well.

Methodology

User Stories

Building a useful planning dashboard always starts with getting a set of very clear user stories, which describe how a user should interact with the system. The user stories need to identify the following:

- What the user wants to do.
- What data the user needs to see to perform this action.
- What data the user wants to change.
- How the user will check that changes made have taken effect.

If one or more of the above is missing in a user story, ask the product owner to complete the description. Start the dashboard design, but use it to obtain the answers; it will likely change as more details arrive.

Product Owners Versus Designers

Modelers should align with product owners by defining concrete roles and responsibilities for each team member. Product owners should describe what data users expect to see and how they wish to interact with it, not ask for specific designs (that is the role of the modelers/designers). Product owners are responsible for change management and should be extra careful when a dashboard or its navigation is significantly different from what is currently being used (i.e., Excel).

Pre-Demo Peer Review

Have a usability committee that:

- Is made up of modeling peers outside the project and/or project team members outside of the modeling team.
- Hosts a mandatory gate-check meeting to review models before demos to product owners or users.

The committee is designed to ensure:

- The best design, by challenging modelers.
- Consistency between models.
- The function is clear.
- Exceptions/calls to action are called out.
- The best first impression.

Exception, Call to Action, Measure Impact

A planning dashboard will be successful if it allows users to highlight and analyze exceptions (issues, alerts, warnings), take action and plan to solve them, and always visually check the impact of changes against a target.

Dashboard Structure

Example: A dashboard is built for these two user stories, which complement each other.

- Story 1: Review all of my accounts for a specific region, manually adjust the goals, and enter comments.
- Story 2: Edit my account by assigning direct and overlay reps.

The dashboard structure should be made of:

- Dashboard header: A short name describing the purpose of the dashboard, at the top of the page in "Heading 1".
- Groupings: collections of dashboard widgets, each containing:
  - A call to action.
  - Main grid(s).
  - Info grid(s): specific to one item of the main grid.
  - Info charts: specific to one item of the main grid.
  - Specific action buttons: specific to one item of the main grid.
  - Main charts: cover more than one item of the main grid.
  - Individual line items: specific to one item of the main grid, usually used for commentary.
  - Light instructions.

A dashboard can have more than one of these groupings, but all elements within a grouping need to answer the needs of the user story. Use best judgement to determine the number of groupings added to one dashboard; a maximum of two to three groupings is reasonable. Past this range, consider building a new dashboard. Avoid having a "does it all" dashboard where users keep scrolling up and down to find each section. If users ask for a table of contents at the top of a dashboard, it's a sign that the dashboard has too much functionality and should be divided into multiple dashboards.
Example:

General Guidelines

Call to Action

Write a short sentence describing the task to be completed within this grouping. Use the Heading 2 format.

Main Grid(s)

The main grid is the central component of the dashboard, or of the grouping. It's where the user will spend most of their time. The main grid displays the KPIs needed for the task (usually in the columns) and one or more other dimensions in the rows.

Warning: Users may ask for 20+ KPIs and need those KPIs broken down by many dimensions, such as by product, actual/plan/variance, or by time. It's critical to keep the main grid as simple and decluttered as possible. Avoid the "data jungle" syndrome; users accept data jungles simply because that's what they are used to in Excel.

Tips to avoid data jungle syndrome:

Make a careful KPI selection (KPIs are usually the line items of a module). Display ONLY the most important KPIs, those needed for decision making, and hide the others for now.

A few criteria for electing a KPI to the main grid:
The KPI is meant to be compared across the dimension items in the rows, or across other KPIs.
Viewing the KPI values for all of the rows is required to make the decision.
The KPI is needed for sorting the rows (other than by row name).

A KPI should not be elected to the main grid (besides not matching the above criteria) when it is needed in more of a drill-down mode: it provides valid extra information, but only for the selected row of the dashboard, and does not need to be displayed for all rows. These "extra info" KPIs should be displayed in a different grid, referred to as an "info grid" in this document.

Take advantage of the row/column sync functionality to provide a large amount of data in your dashboard while only displaying it when requested or required.

Design your main grid so that it does not require the user to scroll left and right to view the KPIs:
Efficiently select KPIs.
Use the column header wrap.
Set the column size accordingly.

Vertical Scroll

It is OK to have users scroll vertically on the main grid. Only display 15 to 20 rows at a time when there are numerous rows, as well as other groupings and action buttons, to display on the same dashboard. Use sorts and a filter to display relevant data.

Sort Your Grid

Always sort your rows. Obtain the default sort criteria via user stories. If no special sort criteria are called out, use an alphanumeric sort on the row name; this requires a specific line item. Train end users to use the sort functionality.

Filter Your Grid

Ask end users or product owners what criteria to use to display the most relevant rows. It could be:

Those that make up 80 percent of a total. Use the RANKCUMULATE function.
Those that have been modified lately. This requires a process to attach a last-modified date to a list item, updated daily via a batch mode. When the main grid allows item creation, always display the newly created items first.
A status flag.

If end users need to apply their own filter values on attributes of the list items, such as showing only those that belong to EMEA or those whose status is "in progress," build pre-set user-based filters:

Use the new Users list.
Create modules dimensioned by Users with line items (formatted as lists) to hold the different criteria to be used.
Create a module dimensioned by Users and the list to be filtered. In this module, resolve the filter criteria from above against the list attributes to a single Boolean.
Apply this Boolean as the filter in the target module, as in the sketch below.
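As a minimal sketch of this user-based filter pattern (all module and line item names here are illustrative, not taken from the article): assume a Territories list, a SYS Territory Attributes module (dimensioned by Territories) with list-formatted line items Region and Status, and a User Selection module (dimensioned by Users) holding each user's choices. In a Territory Filter module dimensioned by Users and Territories, resolve everything to one Boolean line item:

Show? = SYS Territory Attributes.Region = User Selection.Region AND SYS Territory Attributes.Status = User Selection.Status

Set the Summary of Show? to None and apply it as the rows filter in the target module's view; each user then sees only the list items matching their own selections.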
Educate users to use the refresh button rather than creating an "Open Dashboard" button.

Color Code Your Grid

Use colored cells to call attention to areas of a grid, such as green for positive and red for negative. Color code the cells that specifically require data entry.

Display the Full Details

If a very large grid is required, something like 5,000 rows by 100 columns, then:

Make it available in a dedicated full-screen dashboard, reached via a button (such as an action button) on the summary dashboard.
Do not add such a grid to a dashboard where KPIs, charts, or multiple grids are used for planning. These grids are usually needed for ad-hoc analysis, data discovery, or random verification of changes, and they can create a highly cluttered dashboard.

Main Charts

The main chart goes hand-in-hand with the main grid. Use it to compare one or more of the main grid's KPIs across the different rows. If the main grid contains hundreds or thousands of items, do not attempt to compare them all in the main chart. Instead, identify the top 20 rows that really matter or that make up most of the KPI value, and compare those 20 rows for the selected KPI.

Location: directly below or to the right of the main display grid; it should be at least partially visible with no scrolling. Synchronize it with the selection of a KPI or row in the main display grid.

It should be used for:
Comparison between row values of the main display grid.
Displaying the difference when the user makes a change or restatement, or inputs data.

In cases where a chart requires two to three additional modules to be created: implement and test performance. If no performance issues are identified, keep the chart; if performance issues are identified, work with the product owners to compromise.

Info Grid(s)

These are the grids that provide more details for an item selected in the main grid. If territories are displayed as rows, use an info grid to display as many line items as necessary for the selected territory. Avoid cluttering the main grid by displaying all of these line items for all territories at once; this is not necessary and will create extra clutter and scrolling issues for end users.

Location: below or to the right of the main display grid, synced to the selection of a list item in the main display grid. It should read vertically, displaying the metrics pertaining to the selected list item.

Info Charts

Similar to info grids, an info chart is meant to compare one or more KPIs for a selected item in the rows of the main grid. These should be used for:

Comparison of multiple KPIs for a single row.
Comparison or display of KPIs that are not present on the main grid but are on info grid(s).
Comparing a single row's KPI(s) across time.

Place it to the right of the main grid, above or below an info grid.

Specific Action Buttons

Location: below the main grid, beneath the KPI that the action relates to, or to the far left/right, similar to a "checkout." The button should trigger an action performed on the selected row of the main grid. It can be used for navigation as a drill-down to a detailed view of a selected row/list item. It should NOT be used as lateral navigation between dashboards; train users to use the left panel for lateral navigation.

Individual Line Items

These serve as a call-out of important KPIs or action opportunities (e.g., a user setting container for explosion, or Container Explosion status).
If an action taken by a user requires additional collaboration with other users, it should be published outside the main grid (giving it particular emphasis by publishing the individual line item or items).

Light Instructions

Call to action: serves as a header for a grouping; a short sentence describing what the user should be performing within the grouping. Formatted in "Heading 2."
Action instructions: located directly next to a drop-down, input field, or button where the function is not quite clear. No more than five to six words. Formatted in "Instructions."
Tooltips: use tooltips on modules and line items for more detailed instructions, to avoid cluttering the dashboard.
View full article
What happens to History when I delete a user from a workspace?
View full article
How do we keep users in the Anaplan platform for work that requires a high level of advanced customization, while remaining faster and easier than their previous Excel environment? The solution is called “Smart Filters”. Check it out!
View full article
Filters can be very useful in model building and are widely used, but they can come at the expense of performance, which is often very visible to users through their use on dashboards. Poor filter performance can also affect imports and exports, which in turn may block other activity, causing a poor perception of the model. There are some very simple guidelines for designing well-performing filters.

A Single Boolean Filter Is Fastest

Filtering on a single Boolean line item that does not have Time or Versions applied, and that has its summary turned off, is the fastest option. Try to create one Boolean line item that incorporates all the filter criteria you want to apply; you can then re-use the line item and combine a series of Boolean line items into a single Boolean for use in the filter.

For example, you may want to filter on three data points: Volume, Product Category, and Active Status. Volume is numeric, Product Category is a list-formatted line item matching a user selection, and Active Status is a Boolean. Create a line item called Filter with the following formula:

Volume > Min Vol AND Product Cat = User Selection.Category AND Active Status

In a very simple example module demonstrating this, a Filter line item is added to represent all the filters we need on the view; only the Filter line item needs to be dimensioned by Users. A User Selection module, dimensioned only by Users, is created to capture user-specific filter choices. With the filter applied, the view shows only the rows that pass all three conditions.

A best practice suggestion is to create a filter module with a line item for each filter part. You may want other filters, and you can then combine the individual parts as needed from this system module; this reduces repetition and gives you control over the filters, ensuring they can all be Boolean (see the sketch at the end of this article).

What Can Make a Filter Slow?

The biggest performance hit for filters is nesting dimensions on rows. The performance loss increases significantly with the number of nested dimensions and the number of levels they contain. With a flat list versus nested dimensions (filtering on the same number of items), the nested filter will be slower. This was tested with a 10,000,000-item flat list versus two nested lists of 10 and 1,000,000 items on rows; the nested-dimension filter was 40 percent slower.

Filtering on line items with a line item summary will be slow. A numeric filter on 10,000,000 items can take less than a second, but with a summary it will take at least five seconds.

Multiple filters increase the time taken. This is especially significant if any of the preceding filters do not lower the load, because they take additional time to evaluate. If you do use multiple filter conditions, order them so the most effective filters come first. If a filter rarely matches anything, evaluate whether it is needed at all.

Hidden levels act as a filter. If you hide levels on a composite list, this acts like a filter before any other filter is applied. The hiding takes time to process, and the impact grows with the number of levels and the size of the list.

Avoid Nested Rows for Export Views

Using nested rows can be a useful way to filter a complex set of data for export but, as discussed above, filter performance here can be poor. The best way around this is to pivot the dimensions so there is only one dimension on rows, and use the Tabular Multi Column export option with "Filter Row based on Boolean."
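Returning to the filter system module suggested above, here is a minimal sketch using the example's three data points. The module names Data and User Selection, and the placement of Min Vol in User Selection, are assumptions for illustration; the article leaves them open. Each part is a Boolean line item with Summary set to None, and only the combined line item is applied to the view:

Vol Filter? = Data.Volume > User Selection.Min Vol
Cat Filter? = Data.Product Cat = User Selection.Category
Active Filter? = Data.Active Status
Filter? = Vol Filter? AND Cat Filter? AND Active Filter?

Keeping the parts separate means other views can recombine them as needed without duplicating logic, while each view still filters on a single Boolean.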
Some extra filter tips:

Filter duration affects saved views used in imports, so check the saved view's open time to see the impact. This open time is incurred on every use of the view, including imports and exports.
If you need to filter on a specific list, create a subset of those items and create a new module dimensioned by the subset to view that data.
View full article
Learn how to organize your model into logical parts to produce a well-designed model that is easy to follow, understand, and amend at a later date.
View full article
This post summarizes the steps to convert your security certificate to PEM format and test it in a cURL command with Anaplan. The current production API version is v1.3. Using a certificate to authenticate eliminates the need to update your script whenever you change your Anaplan password. To use a certificate for authentication with the API, it first has to be converted into a Base64-encoded string recognizable by Anaplan. Information on how to obtain a certificate can be found in Anapedia. This article assumes that you already have a valid certificate tied to your user name.

Steps:

1. To convert your Anaplan certificate into a form usable with the API, you will first need OpenSSL (https://www.openssl.org/). Once you have that, you will need to convert the certificate to PEM format. The PEM format uses the header and footer lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".

2. If your certificate is not in PEM format, you can convert it with the following OpenSSL command, where "certificate-(certnumber).cer" is the name of the source certificate and "certtest.pem" is the name of the target PEM certificate:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

View the PEM file in a text editor. It should be a Base64 string starting with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----".

3. View the PEM file to find the CN (Common Name) using the following command:

openssl x509 -text -in certtest.pem

It should look something like "Subject: CN=(Anaplan login email)". Copy the Anaplan login email.

4. Use a Base64 encoder (e.g., https://www.base64encode.org/) to encode the CN and the PEM string, separated by a colon. For example, paste this in:

(Anaplan login email):-----BEGIN CERTIFICATE-----(PEM certificate contents)-----END CERTIFICATE-----

5. You now have the encoded string necessary to authenticate API calls. For example, using cURL to GET a list of the Anaplan workspaces for the user the certificate belongs to:

curl -H "Authorization: AnaplanCertificate (encoded string)" https://api.anaplan.com/1/3/workspaces
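If you prefer the command line to a web encoder, the following sketch reproduces steps 4 and 5 in a Unix shell. It assumes your Anaplan login is user@example.com (a placeholder) and that certtest.pem is the PEM file produced in step 2:

# Build "login:PEM" and Base64-encode it as a single unwrapped line
AUTH=$(printf '%s:%s' "user@example.com" "$(cat certtest.pem)" | base64 | tr -d '\n')

# Use the encoded string in the Authorization header
curl -H "Authorization: AnaplanCertificate ${AUTH}" https://api.anaplan.com/1/3/workspaces

The tr -d '\n' strips the line wrapping that the base64 utility inserts, so the header value is one continuous string.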
View full article
The process of designing a model will help you:

Understand the customer's problem more completely.
Bring to light any incorrect assumptions you may have made, allowing for correction before building begins.
Provide the big-picture view for building. (If you were working on an assembly line building fenders, wouldn't it be helpful to see what the entire car looked like?)

Table of Contents:

Understand the Requirements and the Customer's Technical Ecosystem when Designing a Model

When you begin a project, gather information and requirements using a number of tools. These include:

Statement of Work (SOW): Definition of the project scope and project objectives/high-level requirements.
Project Manifesto: Goal of the project; the big-picture view of what needs to be accomplished.
IT ecosystem: Which systems will provide data to the model, and which systems will receive data from the model? What is the Anaplan piece of the ecosystem?
Current business process: If the current process isn't working, it needs to be fixed before design can start.
Business logic: What key pieces of business logic will be included in the model?
Is a distributed model needed? Reasons include:
High user concurrency.
Security needs that require a separate model.
Regional differences that are better handled by a separate model.
Is the organization using ALM, requiring split or similar models to effectively manage development, testing, deployment, and maintenance of applications? (This functionality requires a Premium subscription or above.)
User stories: These have been written by the client, more specifically by the subject matter experts (SMEs) who will be using the model.

Why do this step? To solve a problem, you must completely understand the current situation. Performing this step provides that understanding and the first steps toward the solution.

Results of this step:

Understand the goal of the project.
Know the organizational structure and reporting relationships (hierarchies).
Know where data is coming from and have an idea of how much data clean-up might be needed.
Know whether any of the data is organized into categories (for example, product families) and what data relationships exist that need to be carried through to the model (for example, salespeople only sell certain products).
Know what lists currently exist and where they are housed.
Know which systems the model will either import from or export to.
Know what security measures are expected.
Know what time and version settings are needed.

Document the User Experience

Front-to-back design has been identified as the preferred method for model design. This approach puts the focus on the end-user experience; we want that experience to align with the process so users can easily adapt to the model. During this step, focus on:

User roles. Who are the users?
Identifying the business process that will be done in Anaplan.
Reviewing and documenting the main steps of the process for each role. If available, utilize user stories to map the process.

You can document this in any way that works for you. Here is a step-by-step process you can try:

1. What are the start and end points of the process?
2. What is the result or output of the process?
3. What does each role need to see/do in the process?
4. What are the process inputs, and where do they come from?
5. What are the activities the user needs to engage in? Express each as verb/object: approve request, enter sales amount, etc. Do not organize them during this step; use post-its to capture them.
6. Take the activities from step 5 and put them in the correct sequence.
7. Are there different roles for any of these activities? If no, continue with step 8. If yes, assign a role to each activity.
8. Transcribe the process using PowerPoint® or Lucidchart. If there are multiple roles, use swim lanes to identify the roles.
9. Check with SMEs to ensure accuracy.

Once the user process has been mapped out, do a high-level design of the dashboards. Include:

The information needed. What data does the user need to see?
What the user is expected to do, or the decisions that the user makes.

Share the dashboards with the SMEs. Does the process flow align?

Why do this step? This is probably the most important step in the model-design process. It may seem too early to think about the user experience, but ultimately the information or data that the user needs to make a good business decision is what drives the entire structure of the model. On some projects, you may work with a project manager or a business consultant to flesh out the business process for the user. You may have user stories, or you may be working on design earlier in the process, before the user stories have been written. In any case, identify the user roles and the business process that will be completed in Anaplan, and create a high-level design of the dashboards. Verify those dashboards with the users to ensure that you have the correct starting point for the next step.

Results of this step:

List of user roles.
Process steps for each user role.
High-level dashboard design for each user role.

Use the Designed Dashboards to Determine What Output Modules Are Necessary

Here are some questions to help you think through the definition of your output modules:

What information (and in what format) does the user need to make a decision?
If the dashboard is for reporting purposes, what information is required?
If the module is to be used to add data, what data will be added, and how will it be used?
Are there modules that will serve to move data to another system? What data, and in what format, is necessary?

Why do this step? These modules are necessary for supporting the dashboards or exporting to another system. This is what should guide your design: all of the inputs and drivers added to the design serve the purpose of providing these output modules with the information needed for the dashboards or export.

Results of this step:

List of outputs and the desired format for each dashboard.

Determine What Modules Are Needed to Transform Inputs to the Data Needed for Outputs

Typically, the data at the input stage requires some transformation. This is where business rules, logic, and/or formulas come into play:

Some modules will be used to translate data from the data hub. Data is imported into the data hub without properties, and modules are used to import the properties.
Reconciliation of items takes place before importing the data into the spoke model.
These are driver modules that include business logic and rules.

Why do this step? Your model must translate data from the input to what is needed for the output.

Results of this step:

Business rules/calculations needed.

Create a Model Schema

You can whiteboard your schema, but at some point in your design process the schema must be captured in an electronic format. It is one of the required pieces of documentation for the project and is also used during the Model Design Check-in, where a peer checks over your model and provides feedback.

Identify the inputs, outputs, and drivers for each functional area.
Identify the lists used in each functional area.
Show the data flow between the functional areas.
Identify time and versions where appropriate.

Why do this step? It is required as part of The Anaplan Way process. You will build your model design skills by participating in a Model Design Check-in, which allows you to talk through the tougher parts of the design with a peer. More importantly, designing your model using a schema means that you must think through all of the information you have about the current situation, how it all ties together, and how you will get to an experience that meets the exact needs of the end user without fuss or bother.

Result of this step: A model schema that provides the big-picture view of the solution. It should include imports from other systems or flat files, and the modules or functional areas needed to take the data from its current state to what is needed to support the dashboards identified in Step 2. Time and versions should be noted where required. Include the lists that will be used in the functional areas/modules.

Your schema will be used to communicate your design to the customer, model builders, and others. While you do not need to include calculations and business logic in the schema, it is important that you understand the state of the data going into a module, the changes or calculations performed in the module, and the state of the data leaving the module, so that you can effectively explain the schema to others. For more information, check out 351 Schemas. This 10-to-15-minute course provides basic information about creating a model schema.

Verify That the Schema Aligns with Basic Design Principles

When your schema is complete, give it a final check to ensure:

It is simple. "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage to move in the opposite direction." (Ernst F. Schumacher) "Design should be easy in the sense that every step should be obviously and clearly identifiable. Simplify elements to make change simple so you can manage the technical risk." (Kent Beck)
The model aligns with the manifesto.
The business process is defined and works well within the model.
View full article
Learn how small changes can lead to dramatic improvements in model calculations.
View full article