Learn how small changes can lead to dramatic improvements in model calculations.
View full article
Thinking through the results of a modeling decision is a key part of ensuring good model performance—in other words, making sure the calculation engine isn't overtaxed. This article highlights some ways to lessen the load on the calculation engine.

Formulas should be simple: a formula that is nested, or uses multiple combinations, uses valuable processing time. Writing a long, involved formula makes the engine work hard, and seconds count when the user is staring at the screen. Simple is better. Breaking up formulas and using other options helps keep processing speeds fast. You must keep a balance when using these techniques in your models, so the guidance is as follows:

Break up the most commonly changed formula.
Break up the most complex formula.
Break up any formula you can't explain the purpose of in one sentence.

Formulas with many calculated components

The structure of a formula can have a significant bearing on the amount of calculation that happens when inputs in the model are changed. Consider the following example of a calculation for Total Profit in an application. There are five elements that make up the calculation: Product Sales, Service Sales, Cost of Goods Sold (COGS), Operating Expenditure (Op EX), and Rent and Utilities. Each element is calculated in a separate module, and a reporting module pulls the results together into the Total Profit line item using a single formula that references all five source modules.

What happens when one of the components of COGS changes? Since all the source components are included in the formula, when anything within any of the components changes, this formula is recalculated. If there are a significant number of component expressions, this can put a larger overhead on the calculation engine than is necessary.

There is a simple way to structure the module to lessen the demand on the calculation engine. You can separate the input lines in the reporting module by creating a line item for each of the components and adding the Total Profit formula as a separate line item. This way, changes to the source data only cause the relevant line item to recalculate. For example, a change in the Product Sales calculation only affects the Product Sales and Total Profit line items in the Reporting module; Service Sales, Op EX, COGS, and Rent & Utilities are unchanged. Similarly, a change in COGS only affects COGS and Total Profit in the Reporting module. Keep the general guidelines in mind: it is not practical to have every downstream formula broken out into individual line items.

Plan to provide early exits from formulas

Conditional formulas (IF/THEN) present a challenge for the model builder: what is the optimal construction for the formula, without making it overly complicated and difficult to read or understand? The basic principle is to avoid making the calculation engine do more work than necessary. Try to set up the formula to finish the calculations as soon as possible, and always put the condition that is most likely to occur first. That way the calculation engine can quit processing the expression at the earliest opportunity.

Here is an example that evaluates Seasonal Marketing Promotions. The summer promotion runs for three months and the winter promotion for two months. There are more months when there is no promotion, so a formula that tests the promotion conditions first is not optimal and will take longer to calculate. Reordering the conditions so the formula exits after the first condition more frequently is better, and there is an even better way to do this.
Following the principles above, add another line item for no promotion; the formula can then be rewritten to test No Promo first. This is even better, because the calculation for No Promo has already been calculated, and Summer Promo occurs more frequently than Winter Promo.

It is not always clear which condition will occur more frequently than others, but here are a few more examples of how to optimize formulas.

FINDITEM formula

The FINDITEM element of a formula works its way through the whole list looking for the text item; if it does not find the referenced text, it returns blank, and if the referenced text is blank, it also returns blank. Inserting a conditional expression at the beginning of the formula keeps the calculation engine from being overtaxed:

IF ISNOTBLANK(TEXT) THEN FINDITEM(LIST, TEXT) ELSE BLANK

or

IF ISBLANK(TEXT) THEN BLANK ELSE FINDITEM(LIST, TEXT)

Use the first expression if most of the referenced text contains data, and the second expression if there are more blanks than data.

LAG, OFFSET, POST, etc.

In some situations there is no need to lag or offset data: for example, if the lag or offset parameter is 0, the value of the calculation is the same as the period in question. Adding a conditional at the beginning of the formula will help eliminate unnecessary calculations:

IF lag_parameter = 0 THEN 0 ELSE LAG(Lineitem, lag_parameter, 0)

or

IF lag_parameter <> 0 THEN LAG(Lineitem, lag_parameter, 0) ELSE 0

Which of the two to use depends on how frequently 0s occur in the lag parameter.

Booleans

Avoid adding unnecessary clutter to line items formatted as Booleans. There is no need to include the TRUE or FALSE expression, as the condition itself evaluates to TRUE or FALSE. Use:

Sales > 0

instead of:

IF Sales > 0 THEN TRUE ELSE FALSE
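To make the early-exit pattern concrete, here is a rough sketch of what the final promotion formula could look like. The line item names are illustrative assumptions, not taken from the original article, which showed its formulas as images:

Promotion Cost = IF No Promo? THEN 0 ELSE IF Summer Promo? THEN Summer Promo Cost ELSE Winter Promo Cost

Because No Promo? is true in most months, the engine exits after the first test most of the time, and the Summer test (three months) is evaluated before the Winter test (two months).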
View full article
How do we keep users in the Anaplan platform for work that requires a high level of advanced customization, and make it faster and easier than their previous Excel environment? The solution is called "Smart Filters." Check it out!
View full article
Summary

This article describes a technique to dynamically filter specific levels of a hierarchy on a dashboard and provides a method to select and visualize hierarchies on a dashboard.

Details

This article explains how to calculate the level of a list in a hierarchy in order to apply specific calculations (custom summaries) or filters by level on a dashboard. In this example, we have an organization hierarchy of four levels (Org L1 to Org L4). For each item in the hierarchy, we want to calculate a module value that returns the associated level, to be displayed on a dashboard.

Notes and Platform Context

The technique addresses a specific limitation within dashboards: a composite hierarchy's list level cannot be selected if the list is synchronized to module objects on the dashboard. The technique uses a static module based on the levels of the composite structure used for filtering of the object on a dashboard. It relies on the Summary Method "Ratio" on line items corresponding to the list levels of the composite hierarchy to define the values of the filtering line items. Note that this is not a formula calculation, but rather a use of the Ratio summary method on each line item applied to the composite hierarchy.

Example List

In this example, a four-level composite hierarchy list is used. The hierarchy has asymmetrical leaf items per parent.

Defining the Level of Each List

In order to calculate the level of each item in lists L1 to L4, create a module that calculates the associated level of each member, as follows:

1) Create as many line items as there are levels of hierarchy, plus one technical line item.

2) Configure the blueprint settings of the line items of this filtering module as follows:

Technical line item*: Formula = 1; Applies to = (empty); Summary method = Formula
Level, or L4 (lowest level): Formula = 4; Applies to = Org L4; Summary method = Ratio*; Ratio setting = L3 / Technical
L3: Formula = 3; Applies to = Org L3; Summary method = Ratio; Ratio setting = L2 / Technical
L2: Formula = 2; Applies to = Org L2; Summary method = Ratio; Ratio setting = L1 / Technical
L1: Formula = 1; Applies to = Org L1; Summary method = Ratio; Ratio setting = L1 / Technical

*Note that the Technical line item uses the Formula summary method. Alternatively, the Minimum summary method can be used, but it will return an error when a level of the hierarchy does not have any children and the calculated level is blank.

Use the lowest-level line item, Level (or L4), as the basis of filters or calculations.

Applying a Filter on Specific Levels in Case of Synchronization

When synchronization is enabled, the option "Select levels to show" is not available. Instead, a filter based on the calculated level can be used to show only specific levels. In the example, we apply a filter that matches levels 4 and 1. The filtered dashboard result is achieved by using the composite hierarchy as a page selector.
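If you prefer to drive the dashboard filter from a Boolean line item rather than directly from the calculated level, one possible sketch (the line item name is an assumption, not from the original article) is:

Show on Dashboard? = Level = 4 OR Level = 1

Applying this Boolean as the filter criterion shows only the level 4 and level 1 items when the view is synchronized.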
View full article
Reducing the number of calculations will lead to quicker calculations and improved performance. However, this doesn't mean combining all your calculations into fewer line items; breaking calculations into smaller parts has major benefits for performance. Learn more about this in the Formula Structure article.

How is it possible to reduce the number of calculations? Here are three easy methods:

Turn off unnecessary Summary method calculations.
Avoid formula repetition by creating modules to hold formulas that are used multiple times.
Ensure that you are not including more dimensions than necessary in your calculations.

Turn off Summary method calculations

Model builders often include summaries in a model without fully thinking through whether they are necessary. In many cases, the summaries can be eliminated. Before we get to how to eliminate them, let's recap how the Anaplan engine calculates. In the following example we have a Sales Volume line item that varies by the following hierarchies:

Region Hierarchy: City → Country → Region → All Regions
Product Hierarchy: SKU → Product → All Products
Channel Hierarchy: Channel → All Channels

This means that from the detail values at SKU, City, and Channel level, Anaplan calculates and holds all 23 of the aggregate combinations—24 blocks in total. With the Summary option set to Sum, when a detail item is amended, all the other aggregations in the hierarchies are also recalculated. Selecting the None summary option means that no calculations happen when the detail item changes.

The varying levels of hierarchies are quite often only there to ease navigation, and the roll-up calculations are not actually needed, so there may be a number of redundant calculations being performed. The native summing of Anaplan is a faster option, but if all the levels are not needed it might be better to turn off the summary calculations and use a SUM formula instead. For example, from the structure above, let's assume that we have a detailed calculation for SKU, City, and Channel (SALES06.Final Volume). Let's also assume we need a summary report by Region and Product, and we have a module (REP01) and a line item (Volume) dimensioned as such.

REP01.Volume = SALES06 Volume Calculation.Final Volume

is replaced with

REP01.Volume = SALES06.Final Volume[SUM:H01 SKU Details.Product, SUM:H02 City Details.Region]

The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy.

How do you know if you need the summary calculations? Look for the following:

Is the calculation or module user-facing? If it is presented on a dashboard, then it is likely that the summaries will be needed. However, look at the dashboard views used. A summary module is often included on a dashboard with a detail module below; effectively, the hierarchy sub-totals are shown in the summary module, so the detail module doesn't need the sum or all the summary calculations.

Detail to Detail
Is the line item referenced by another detailed calculation line item? This is very common, and if the line item is referenced by another detailed calculation, the summary option is usually not required. Check the Referenced by column and see if there is anything referencing the line item.

Calculation and staging modules
If you have used the D.I.S.C.O. module design, you should have calculation/staging modules. These are often not user-facing and have many detailed calculations included in them.
They also often contain large cell counts, which will be reduced if the summary options are turned off.

Can you have different summaries for time and lists? The default option for Time Summaries is to be the same as the lists. You may only need the totals for hierarchies, or just for the timescales; again, look at the downstream formulas. The best practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a Calculation module (from the D.I.S.C.O. design principles).

Avoid Formula Repetition

An optimal model will only perform a specific calculation once. Repeating the same formula expression multiple times means that the calculation is performed multiple times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (D.I.S.C.O.) and hold all the relevant calculations in a logical place. Then, if you need the calculation, you will know where to find it, rather than adding another line item in several modules to perform the same calculation.

If a formula construct always starts with the same condition evaluation, evaluate it once and then refer to the result in the construct. This is especially true where the condition refers to a single dimension but is part of a line item that spans multiple dimension intersections. In the example reviewed, START() <= CURRENTPERIODSTART() appears five times and, similarly, START() > CURRENTPERIODSTART() appears twice. To correct this, include these time-related formulas in their own module and then refer to them as needed in your other modules. Remember: calculate once; reference many times!

Taking a closer look at the example, not only is the condition evaluation repeated, but the dimensionality of the line items is also greater than required. The calculation only changes by day, yet the Applies To also contains Organization, Hour Scale, and Call Center Type. Because the formula expression is contained within the line item formula, for each day the calculation is also performed across all of those other dimensions, and it is repeated in many other line items. Sometimes model builders even use the same expression multiple times within the same line item.

To reduce this overcalculation, reference the expression from a more appropriate module; for example, a Days of Week module dimensioned solely by day. The two different formula expressions are then contained in two line items and are only calculated by day; the other dimensions that are not relevant are not calculated. Substitute the expression by referencing these line items. In this example, making these changes to the remaining lines in the module reduces the calculation cell count from 1.5 million to 1,500.

Check the Applies To for your formulas, and if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality. A sketch of the Days of Week approach follows.
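As a rough illustration of that pattern (line item names and the downstream example are assumptions; the article's screenshots are not reproduced here), the time conditions could live in a system module dimensioned only by day:

Days of Week.In or Before Current Period? = START() <= CURRENTPERIODSTART()
Days of Week.After Current Period? = START() > CURRENTPERIODSTART()

Downstream line items then reference these Booleans, for example IF Days of Week.In or Before Current Period? THEN Actuals ELSE Forecast (where Actuals and Forecast are placeholder line items), so the comparison is evaluated once per day rather than once per combination of day, Organization, Hour Scale, and Call Center Type.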
View full article
Overview

Guide for the new Statistical Forecasting Calculation Engine models (monthly and weekly). Includes enablement videos, a practice data import exercise, model documentation, and specific steps for using the model in implementations.

1. Enablement Videos & Practice Exercise

1a. Intro and Overview Video: Model overview and review of new key features (video below).
1b. Initial Model & Data Import Steps: Steps on how to set up the model, product hierarchy, customer list, and multi-level forecast analysis (video below).
1c. Practice Exercise—Import data to set up stat forecast: Two sets of load files are included to practice setup for a single-level product set or a multi-level product set with customers, product, and brand level. Start on the "Initial App Setup" dashboard and load either the Single or Multi Level files into the model, using the import video as a guide if needed (.zip file attached).

2. Documentation

2a. Lucidchart Process Maps: The Lucidchart Process Map document includes a high-level process flow for end-user navigation and detailed tabs for each section. Details and links are also on the "Training & Enablement" dashboard (link: Process Maps).
2b. High Level Process Map PDF: High-level process map in PDF format (attached).
2c. Forecast Methods PDFs: A high-level version with the forecast algorithms list and overview, and a detailed version that includes a slide for each forecast method covering the method overview, advantages/disadvantages, equation, and graph example output. These slides are also included on the "Forecast Methods Overview & Formulas" dashboard (attached).

3. Implementation Specifics

3a. Training & Enablement Dashboard: The Training & Enablement dashboard contains details on process map navigation.
3b. Initial Model Setup: The current model is staged with chocolate data from the data hub; execute the CLEAR MODEL action prior to loading customer-specific data.
3c. Changing Model Time Scale—align Native & Dynamic Time Settings: If a Time Settings change is required, review the Initial App Setup dashboard to align Native Time with the Dynamic Time setup in the model.
3d. Monthly Update Process: After initial setup, use the Monthly Data History Upload dashboard to update prior-period actuals and settings.
3e. Single Level vs. Multi-Level Forecast Setup: Two implementation options and when to use them. Single Level Forecast: forecast at one level of the product hierarchy (i.e., all stat forecasts calculated at Item level); most use cases will leverage the single-level setup. Multi-Level Forecast: the ability to forecast at different levels of the product hierarchy (i.e., Top Item | Customers, Item, and Brand level can all have a stat forecast generated); this requires a complex forecast reconciliation process, so review the "Multi-Level Forecast Overview" dashboard if it is needed.
3f. Troubleshooting Tips: Follow the troubleshooting tips on the Training & Enablement dashboard if the stat forecast is not generating, before reaching out for support.
3g. Model Notes & Documentation: Module notes include the D.I.S.C.O. classification and the purpose of each module.
3h. "Do Not Modify" Items: Module notes contain DO NOT MODIFY for items that should not be changed during the implementation process.
3i. User Roles & Selective Access: The Demo, Demand Planner, and Demand Planning Manager roles can be adjusted. After the Selective Access process is run on the Flat List Management dashboard, users can be given access to certain product groups, brands, etc.
3j. Batch Processing: Details on daily batch processing and how to prepare a roadmap of your batch processes – files, queries, and import actions/processes in Anaplan (see attachment).

4. Videos

Intro and Overview Video
Data Import and Setup Steps

5. Model Download Links

Monthly Statistical Forecasting Calculation Engine
Weekly Statistical Forecasting Calculation Engine
View full article
Learn how to organize your model into logical parts to give you a well-designed model that is easy to follow, understand, and amend at a later date.
View full article
Overview

A data hub is a separate model that holds an organization's data. Data can be shared with all your models, making expansions easier to implement and ensuring data integrity across models. The data hub model can be placed in a different workspace, allowing for role segregation: you can assign administrator rights to users to manage the data hub without allowing those users access to the production models. The method for importing into the data hub (into modules, rather than lists) also allows you to reconcile properties using formulas.

One type of data hub can be integrated with an organization's data warehouse and hold ERP, CRM, HR, and other data (see the Anaplan Data Architecture diagram). But this isn't the only type of data hub; some organizations may require a data hub for transactional data, such as bookings, pipeline, or revenue. Whether you will be using a single data hub or multiple hubs, it is a good idea to plan your approach for importing from the organization's systems into the data hub(s), as well as how you will synchronize the imports from the data hub to the appropriate models.

High level best practices

When building a data hub, the best practice is to import a list with properties into a module rather than directly into a list. Using this method, you set up line items to correspond with the properties and import them using the text data type. This imports all the data without errors or warnings. The data in the data hub module can then be imported into a list in the required model. The exception to importing into a module is if you are using a numbered list without a unique code (in other words, you are using a combination of properties). In that case, you will need to import the properties into the list.

Implementation steps

Here are the steps to create the basics of a hub-and-spoke architecture.

1) Create a model and name it master data hub

You can create the data hub in the same workspace where all the other models are, but a better option is to put the data hub in a different workspace. The advantage is role segregation: you can assign administrator rights to users to manage the hub without providing them with access to the actual production models, which are in a different workspace. Large customers may require this segregation of duties. Note: This functionality became available in release 2016.2.

2) Import your data files into the data hub

Set up your lists. Identify the lists that are required in the data hub and create them using good naming conventions. Set up any needed hierarchies, working from the top level down. Import data into the list from the source files, mapping only the unique name, the parent (if the name rolls up into a hierarchy), and the code, if available. Do not import any list properties; these will be imported into a module.

Create corresponding modules for those lists that include properties. For each list, create a module and name it [List Name] Properties. In the module, create a line item for each property and use the data type TEXT. Import the source file into the corresponding module. There should be no errors or warnings.

Automate the process with actions. Each time you imported, an action was created. Name your actions using the appropriate naming conventions, and indicate the name of the source in the name of the import action. To automate the process, you'll want to create one process that includes all your imports.
For hierarchies, it is important to get the actions in the correct order. Start with the highest level of the hierarchy list import, then the next level down, and so on down the hierarchy. Then add the module imports (the order of the module imports is not critical). Now, let's look at an example. You have a four-level hierarchy to load, such as 1) Employee → 2) State → 3) Region → 4) Country.

Lists

Create lists with the right naming conventions. For this example, create these lists: G1 Country, G2 Region, G3 State, G4 Employee. Set the parent hierarchy to create the composite hierarchy. Import into each list from the source file(s), and only map name and parent. The exception is the employee list, which includes a code (employee ID) that should be mapped. Properties will be added to the data hub later.

Properties → Modules

Create one module for each list that includes properties and name it [List Name] Properties. For this example, only the Employees list includes properties, so create one module named Employee Properties. In each module, create as many line items as you have properties; for this example, the line items are Salary and Bonus. Open the Blueprint view of the module and, in the Format column, select Text. Pivot the module so that the line items are columns.

Import the properties. In the grid view of the module, click on the property you are going to import into. Set up the source as a fixed line item: select the appropriate line item from the Line Item tab and, on the Mapping tab, select the correct column for the data values. You'll need to import each property (line item) separately. There should be no errors or warnings.

Actions

Each time you run an import, an action is created. You can view these actions by selecting Actions from the Model Settings tab. The previous imports into lists and modules have created one import action per list. You can combine these actions into a process that will run each action in the correct order. Name your actions following the naming conventions, and include the source in the action name.

Create one process that includes the imports and name it Load [List Name]. Make sure the order is correct: put the list imports first, starting with the top hierarchy level (numbered as 1) and working down, then add the module imports in any order.

3) Reconcile

These list imports should run with zero errors because the imports go into text-formatted items. If some properties should match items in lists, it is recommended to use FINDITEM formulas to match text to list items. FINDITEM simply looks at the text-formatted line item and finds the match in the list that you specify. Every time data is uploaded into Anaplan, you just need to make sure all items from the text-formatted line item are being loaded into the list. This is useful because you can always compare the "raw data" to the "Anaplan data," and you do not have to load that data more than once if there are concerns about data quality in Anaplan.

If the list that a property refers to is not already included in your data hub model, first create that list. Let's use the example of Territory. Add a line item to the module and select List as the format type, then select the name of your list (Territory in this case) from the drop-down. Add the FINDITEM formula, FINDITEM(x, y), where x is the name of your list (Territory in our example) and y is the text line item. You can then filter this line item to show all of the blank items.
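As a small sketch of this reconciliation pattern (the line item names here are illustrative assumptions), the properties module could contain:

Territory Item = FINDITEM(Territory, Territory Text)
Missing Territory? = ISBLANK(Territory Item)

Filtering or conditionally formatting on Missing Territory? then surfaces every row whose text value has no match in the Territory list.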
Correct the data in the source system. If you will be importing frequently, you may want to set up a dashboard that lets users view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter showing the number of errors and add that information to the dashboard.

4) Split models: filter and set up saved views

If the architecture of your solution includes spoke models by region, you need one master hierarchy that covers all regions and a corresponding module that stores the properties. Use that module and create as many saved views as you have spoke region models. For example, filter on Country G1 = Canada if you want to import only Canadian accounts into the spoke model. You will need to create a saved view for each hierarchy and spoke model.

5) Import to the spoke module

Use cross-workspace imports if you have decided to put your master data hub in a separate workspace. Create the lists that correspond to the hierarchy levels in each spoke model; there is currently no way to create a list via import. Create the properties in the list where needed. Keep in mind that importing properties into the data hub as line items is an exception: list properties generally do not vary, unlike a line item in a module, which is often measured over time. Note: Properties can also be housed in modules, and there are some benefits to this; see Anapedia - Model Building (specifically, the "List Attributes" and "List attributes in a module" topics). If you decide to use a module to hold the properties, you will need to create a line item for each property type and then import the properties into the module. To simplify the mapping, make sure the property names in each spoke model match the line item names of the data hub model.

In each spoke model, create an import from the filtered module view of the data hub model into the lists you created in step 1. In the Actions window, name your imports using naming conventions. Create a process that includes these actions (imports), beginning with the highest level in the hierarchy and working down to the lowest. Well done! You have imported your hierarchy from a data hub model.

6) Incremental list imports

When you are in the midst of your peak planning cycle and your large lists are changing frequently, you'll want to update the data hub and push the changes to the spoke models. Running imports of several thousand list members may cause performance issues and block users during the import activity. In the best case, your data warehouse provides a date field that shows when an item was added or modified and can deliver a flat file or table that includes only the changes. Your import into the hub model will then take just a few seconds, and you can filter on this date field to export only the changes to the spoke models. In most cases, however, all you have is a full list from the data warehouse, regardless of what has changed. To mitigate this, use the following technique to export only the list items that have changed (edited, deleted, updated) since the last export, using logic in Anaplan.

Setting up the incremental loads, in the data hub model: create a text-formatted line item in your module. Name it CHECKSUM, set the format as Text, and enter a formula that concatenates all the properties that you want to track changes for. These properties will form the base of the incremental import.
Example: CHECKSUM = State & Segment & Industry & Parent & Zip Code

Create a second line item, name it CHECKSUM OLD, set the format as Text, and create an import that imports CHECKSUM into CHECKSUM OLD, ignoring any other mappings. Name this import 1/2 im DELTA and put it in a process called RESET DELTA.

Create a line item named DELTA, set the format as Boolean, and enter this formula: IF CHECKSUM <> CHECKSUM OLD THEN TRUE ELSE FALSE.

Update the filtered view that you created to export only the hierarchy for a specific region or geography: add the filter criterion DELTA = TRUE. You will then only see the list items that differ from the last time you imported into the data hub. In this example, we import into a spoke model only the list items that are in US East and that have changed since the last import.

Execute the import from the source into the data hub and then into the spoke models. In the data hub model, upload the new files and run the import process. In the spoke models, run the import process that takes the list from the data hub's filtered view. Check the import logs and verify that only the items that have changed were actually imported. Back in the data hub model, run the RESET DELTA process (the 1/2 im DELTA import); this resets the changes, so you are ready for the next set. Your source, data hub model, and spoke models are all in sync.

7) Incremental data load

The actuals transaction file might need to be imported several times into the data hub model, and from there into the spoke models, during the planning peak cycle. If the file is large, it can create performance issues for end users. Since not all transactions change each time the data is imported, there is a strong opportunity to optimize this process. In the data hub model transaction module, create the same CHECKSUM, CHECKSUM OLD, and DELTA line items. CHECKSUM should concatenate all the fields you want to track the delta on, including the values; the DELTA line item will then catch new transactions as well as modified transactions (see 6) Incremental list imports above for more information).

Filter the view using DELTA to import only the changed transaction list items into the list, and only the changed actuals transactions into the module. Create an import from CHECKSUM to CHECKSUM OLD to be able to reset the delta after the imports have run; name this import 2/2 im DELTA and add it to the DELTA process created for the list. In the spoke model, import into the transaction list and into the transaction module from the transaction filtered view, then run the DELTA import or process.

8) Automation

You can semi-automate this process and have it run automatically on a frequent basis if incremental loads have been implemented. That provides immediacy of master data and actuals across all models during a planning cycle. It is semi-automatic because it requires a review of the reconciliation dashboards before pushing the data to the spoke models. There are a few ways to automate, all requiring an external tool: Anaplan Connect or the customer's ETL. The automation script needs to execute in this order:

1. Connect to the master data hub model.
2. Load the external files into the master data hub model.
3. Execute the process that imports the lists into the data hub.
4. Execute the process that imports actuals (transactions) into the data hub.
Manual step: Open your reconciliation dashboards and check that the data and the lists are clean. Again, these imports should run with zero errors or warnings.
5. Connect to the spoke model.
6. Execute the list import process.
7. Execute the transaction import process.
8. Repeat steps 5, 6, and 7 for all spoke models.
9. Connect to the master data hub model.
10. Run the RESET DELTA process to reset the incremental checks.

Other best practices

Create deletes for all your lists. Create a module called Clear Lists. In the module, create a Boolean line item for the list whose items and properties you hold, call it CLEAR ALL, and set its formula to TRUE. In Actions, create a "Delete from List using Selection" action that uses this line item as the selection. Repeat this for all lists and create one process that executes all of these delete actions.

Example of a maintenance/reconcile dashboard

Use a maintenance/reconcile dashboard when manual operations are required to update applications from the hub. One method that works well is to create a module that highlights whether there are errors in each data source. In that module, create a line item message that displays on the dashboard if there are errors, for example: "There are errors that need correcting." A link on this dashboard to the error status page will make it easy for users to check on errors. A best practice is to automate the list refresh, combined with a modeling solution that only exports what has changed.

Dev-test-prod considerations

There should be two saved views: one for development and one for production. That way, the hub can feed the development models with shortened versions of the lists, while the production models get the full lists. ALM considerations: the development (DEV) model will need the imports set up for both DEV and production (PROD) if the different-saved-views option is taken. An additional ALM consideration is that the lists imported into the spoke models from the hub need to be marked as production data.

Development

DATA HUB: The data hub houses all global data needed to execute the Anaplan use case. The data hub often houses complex calculations and readies data for downstream models.
DEVELOPMENT MODEL: The development model is built to the 80/20 rule. It is built upon a global process; region-specific functionality is added in the deployment phase. The model is built to receive data from the data hub.
DATA INTEGRATION: During this stage, Anaplan Connect or a third-party tool is used to automate data integration. Data feeds are built from the source system into the data hub and from the data hub to downstream models.
PERFORMANCE TESTING: The application is put through rigorous performance testing, including automated and end-user testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.

Deployment

DATA HUB: The data hub is refreshed with the latest information from the source systems and readies data for downstream models.
DEPLOYMENT MODEL: The development model is copied and the appropriate data is loaded from the data hub. Region-specific functionality is added during this phase.
DATA INTEGRATION: Additional data feeds from the data hub to downstream models are finalized. The integrations are tested and timed to establish baseline SLAs. Automatic feeds are placed on timed schedules to keep the data up to date.
PERFORMANCE TESTING: The application is again put through rigorous performance testing.

Expansion

DATA HUB: The need for additional data for new use cases is often handled by splitting the data hub into regional data hubs. This helps the system perform more efficiently.
MODEL DEVELOPMENT: The models built for new use cases are developed and thoroughly tested. Additional functionality can be added to the original models deployed.
DATA INTEGRATION: Data integration is updated to reflect the new system architecture. Automatic feeds are tested and scheduled according to business needs.
PERFORMANCE TESTING: At each stage, the application is put through rigorous performance testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.
View full article
PLANS is the new standard for Anaplan modeling—"the way we model." It covers more than just formulas: it includes and evolves existing best practices around user experience and data hubs. It is a set of rules on the structure and detailed design of Anaplan models. This set of rules provides both a clear route to good model design for the individual Anaplanner and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.

In defining the standard, everything we do will consider or be based around:

Performance – Use the correct structures and formulas to optimize the Hyperblock.
Logical – Build the models and formulas more logically; see D.I.S.C.O. below.
Auditable – Break up formulas for better understanding, performance, and maintainability.
Necessary – Don't duplicate expressions. Store and calculate data and attributes once and reference them many times. Don't have calculations on more dimensions than needed.
Sustainable – Build with the future in mind, thinking about process cycles and updates.

The standards are based around three axes:

Performance – How do the structures and formulas impact the performance of the system?
Usability/Auditability – Is the user able to understand how to interact with the functionality?
Sustainability – Can the solution be easily maintained by model builders and support?

We will define the techniques to use that balance these three areas to ensure the optimal design of Anaplan models and architecture.

D.I.S.C.O.

As part of model and module design, we recommend categorizing modules as follows:

Data – Data hubs, transactional modules, source data; referenced everywhere.
Inputs – Designed for user entry; minimize the mix of calculations and outputs.
System – Time management, filters, list attribute modules, mappings, etc.; referenced everywhere.
Calculations – Optimized for performance (turn summaries off, combine structures).
Outputs – Reporting modules; minimize data flow out.

Why build this way?

Performance: Fewer repeated calculations; optimized structures and formulas.
Logical: Data and calculations reside in logical places; model data flows can be easily understood.
Auditable: Model structure can be easily understood; simplified formulas (no need for complex expressions).
Necessary: Formulas and structures are not repeated; data is stored and calculated once and referenced many times, leading to efficient calculations.
Sustainable: Models can be adapted and maintained more easily; expansion and scaling are simplified.

Recommended content:

Performance – Dimension Order; Formula Optimization in Anaplan; Formula Structure for Performance
Logical – Best Practices for Module Design
Auditable – Formula Structure for Performance
Necessary – Reduce Calculations for Better Performance; Formula Optimization in Anaplan
Sustainable – Dynamic Cell Access Tips and Tricks; Dynamic Cell Access - Learning App; Personal Dashboards Tips and Tricks; Time Range Application; Ask Me Anything (AMA) sessions
View full article
Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.

Getting Started

Python 3 offers many options for interacting with an API. This article explains how you can use Python 3 to automate many of the requests that are available in our apiary, which can be found at https://anaplan.docs.apiary.io/#. This article assumes you have the requests (version 2.18.4), base64, and json modules installed, as well as Python version 3.6.4. Please make sure you are installing these modules with Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python (if you are using a Python version older or newer than 3.6.4, or a requests version older or newer than 2.18.4, we cannot guarantee the validity of the article), Requests, Base Converter, and JSON (note: install instructions are not at that site but are the same as for any other Python module).

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Authentication

To start, let's talk about authentication. Every script run that connects to our API is required to supply valid authentication. There are two ways to authenticate a Python script that I will be covering:

Certificate authentication
Basic encoded authentication

Certificate authentication requires that you have a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, to convert your Anaplan certificate so it can be used with the API, you first need OpenSSL. Once you have that, convert the certificate to PEM format by running the following code in your terminal:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

If you are using certificate authorization, the scripts in this article assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following code in your terminal:

openssl x509 -text -in certtest.pem

To be used with the API, the PEM certificate string will need to be converted to base64, but the scripts we will be covering take care of that for you, so I won't cover it in this section. To use basic authentication, you will need to know the Anaplan account email that is being used, as well as the password.

All scripts in this article will have the following code near the top:

# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your pem converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
    f'{username}:{cert}').encode('utf-8')).decode('utf-8'))
# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
# ).encode('utf-8')).decode('utf-8'))

Regardless of the authentication method, you will need to set the username variable to the Anaplan account email being used.
If you are using a certificate to authenticate, you will need to have your PEM-converted certificate in the same folder or a child folder of the one you are running the scripts from. If your certificate is in a child folder, remember to include the file path when replacing cert.pem (e.g., cert/cert.pem). You can remove the password line, its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable.

Getting the Information Needed for Each Script

Most of the scripts covered in this article require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for its respective object is titled get_____.py. For example, if you want to get your file's metadata, you run getFiles.py, which writes the file metadata for each file in the selected model, in the selected workspace, in an array to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts. Tip: If you open the raw data tab of the JSON file it makes it much easier to copy the whole set of metadata.

The following are the links to download each get____.py script. Each get script uses the requests.get method to send a GET request to the proper API endpoint.

getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to.
getModels.py: Writes an array to models.json of either all the models a user has access to if wGuid is left blank, or all of the models the user has access to in a selected workspace if a workspace ID was inserted.
getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model.
getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to the Apiary for more information on private vs. default files. Generally, it is recommended that all scripts be run via the same user account.)
getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace.
getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace.
getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace.
getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace.
getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace.

Uploads

A file can be uploaded to the Anaplan API endpoint either in several chunks or as a single chunk. Per our apiary: we recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50 MB. This creates a private file.

Note: To upload a file using the API, that file must already exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API.
Multiple Chunk Uploads

The script we have for reference is built so that if the script is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will start uploading the file again, starting at the last successful chunk. For this to work, the file must first be split using a standard naming convention, using the terminal command below:

split -b [numberofBytes] [path and filename] [prefix for output files]

You can store the file chunks in any location as long as you use the proper file path when setting chunkFilePrefix. For example, chunkFilePrefix = 'upload_chunks/chunk-' will look for file chunks named chunk-aa, chunk-ab, chunk-ac, and so on up to chunk-zz, in the folder script_origin/upload_chunks/ (it is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py.

Note: The assumed naming conventions will only be standard if using Terminal, and they do not necessarily apply if the file was split using another method in Windows. If you are using Windows you will need to either create a way to standardize the naming of the chunks alphabetically, {chunkFilePrefix}(aa - zz), or run the script as detailed in the Apiary.

Note: The chunkUpload.py script keeps track of the last successful chunk by writing its name to a text file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it only if you want to start the upload from the first chunk.

Single Chunk Upload

The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame; if the upload fails, it has to start again from the beginning. If your file has a different name than its version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single PUT request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py.

Imports

The import.py script sends a POST request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information. You can download the script for running an import from this link: Import.py.

Once the import is finished, the script writes the metadata for the import task in an array to postImport.json, which you can use to verify which task you want to view the status of when running the importStatus.py script. The importStatus.py script returns a list of all tasks associated with the selected importID and their respective list indexes. If you want to check the status of the last run import, make sure you check postImport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file importStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma-delimited format to importDump.csv, which can be used to review the cause of the failure.
If the task finished with no failures, you will get a message telling you the import has completed with no failures. You can download the importStatus.py script from this link: importStatus.py.

Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist, importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone.

Exports

The export.py script sends a POST request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information. You can download the script for running an export from this link: Export.py.

Once the export is finished, the script writes the metadata for the export task in an array to postExport.json, which you can use to verify which task you want to view the status of when running the exportStatus.py script. The exportStatus.py script returns a list of all tasks associated with the selected exportID and their respective list indexes. If you want to check the status of the last run export, make sure you check postExport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file exportStatus.json. If the task is still in progress, it will print the task status and progress. It is important to note that no failure dump is generated if an export fails. You can download the exportStatus.py script from this link: exportStatus.py.

Actions

The action.py script sends a POST request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information. You can download the script for running an action from this link: actionStatus.py.

Processes

The process.py script sends a POST request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. See Getting the Information Needed for Each Script for more information. You can download the script for running a process from this link: Process.py.

Once the process is finished, the script writes the metadata for the process task in an array to postProcess.json, which you can use to verify which task you want to view the status of when running the processStatus.py script. The processStatus.py script returns a list of all tasks associated with the selected processID and their respective list indexes. If you want to check the status of the last run process, make sure you check postProcess.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file processStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma-delimited format to processDump.csv, which can be used to review the cause of the failure. It is important to note that no failure dump is generated for the process itself, only if one of the imports in the process failed. If the task finished with no failures, you will get a message telling you the process has completed with no failures. You can download the processStatus.py script from this link: processStatus.py.
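For readers who prefer to see the overall shape before downloading the scripts, here is a minimal sketch of the run-and-poll pattern that Import.py and importStatus.py (and the export and process equivalents) implement. The credentials and IDs are placeholders, basic authentication is assumed, and the exact fields in the returned JSON are not assumed here; refer to the downloadable scripts and the apiary for the full handling.

import base64
import json
import requests

username = 'user@example.com'   # placeholder Anaplan account email
password = 'password'           # placeholder password (basic auth assumed)
wGuid = 'yourWorkspaceID'       # placeholder workspace ID from workspaces.json
mGuid = 'yourModelID'           # placeholder model ID from models.json
importID = 'yourImportID'       # placeholder import ID from imports.json

user = 'Basic ' + base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('utf-8')
url = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks'

# Start the import task; the response JSON contains the task metadata
start = requests.post(url,
                      headers={'Authorization': user, 'Content-Type': 'application/json'},
                      data=json.dumps({'localeName': 'en_US'}))
print(start.json())

# List the tasks for this import; pick the task just started and check it with a
# further GET to .../tasks/{taskID}, as importStatus.py does
tasks = requests.get(url, headers={'Authorization': user})
print(tasks.json())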
Downloading a File

Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important to note that you should set the variable fileName to the name the file has in its metadata. First, the individual chunk metadata for the download is written in an array to downloadChunkData.json for reference. The script then downloads the file chunk by chunk and writes each chunk to a new local file with the same name as the 'name' listed in the file's metadata. You can download this script from this link: downloadFile.py.

Note: If a file already exists in the same folder as your script with the same name as the name value in the file's metadata, the local file will be overwritten with the file being downloaded from the server.

Deleting a File

You can delete the file contents of any file the user has access to that exists on the Anaplan server. Note: This only removes private content; default content and the import data source model object will remain. You can download this script from this link: deleteFile.py.

Standalone Requests Code and Their Required Headers

In this section, I list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I leave the content to the right of the Authorization: header blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates).

Note: requests.get will only generate a response body from the server, and no data will be saved locally unless written to a local file.

Get Workspaces List

requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization':})

Get Models List

requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization':})

or

requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization':})

Get Model Info

requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization':})

Get Files/Imports/Exports/Actions/Processes List

The GET request for files, imports, exports, actions, or processes is largely the same. Change files to imports, exports, actions, or processes to run each.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization':})
Get Chunk Data
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization':})
Post Chunk Count
requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-type': 'application/json'}, json={fileMetaData})
Upload a Chunk of a File
requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file})
Mark an Upload Complete
requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})
Upload a File in a Single Chunk
requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file})
Run an Import/Export/Process
The post request for imports, exports, and processes is largely the same. Change imports to exports or processes to run each.
requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))
Run an Action
requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))
Get Task List for an Import/Export/Action/Process
The get request for import, export, action, and process task lists is largely the same. Change imports to exports, actions, or processes to get each task list.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization':})
Get Status for an Import/Export/Action/Process Task
The get request for import, export, action, and process task statuses is largely the same. Change imports to exports, actions, or processes to get each task status.
Note: Only imports and processes will ever generate a failure dump.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization':})
Download a File
Note: You will need to get the chunk metadata for each chunk of a file you want to download.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'})
Delete a File
Note: This only removes private content. Default content and the import data source model object will remain.
requests.delete('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-type': 'application/json'})
Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information on SFDC user administration, see the apiary entry for SFDC user administration.
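To show how the file-upload calls above chain together, here is a hedged sketch of a chunked upload: it splits a local file into pieces, PUTs each piece to the chunk endpoint, and then marks the upload complete. The chunk size, IDs, file name, and fileMetaData payload are placeholders, and whether a separate chunk-count POST is required beforehand depends on the workflow documented in the Apiary, so treat this as an outline rather than a drop-in script.

# Minimal sketch of a chunked file upload using the endpoints listed above
# (placeholders throughout; not production code).
import requests

AUTH = 'Basic <encoded_username:password>'  # assumption: basic authentication
BASE = 'https://api.anaplan.com/1/3/workspaces/<wGuid>/models/<mGuid>/files/<fileID>'
CHUNK_SIZE = 10 * 1024 * 1024   # 10 MB per chunk (chunks of 1-50 MB are allowed)
file_metadata = {}              # assumption: the file's metadata from the files list

# Upload the local file one chunk at a time.
with open('local_file.csv', 'rb') as f:
    chunk_number = 0
    while True:
        data = f.read(CHUNK_SIZE)
        if not data:
            break
        requests.put(f'{BASE}/chunks/{chunk_number}',
                     headers={'Authorization': AUTH,
                              'Content-Type': 'application/octet-stream'},
                     data=data)
        chunk_number += 1

# Mark the upload as complete so Anaplan actions can use the file.
requests.put(f'{BASE}/complete',
             headers={'Authorization': AUTH, 'Content-Type': 'application/json'},
             json=file_metadata)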
View full article
Introduction
The new Anaplan APIs and integration connectors leverage Certificate Authority (CA)-issued certificates. These certificates can be obtained through your company's intermediary CA (typically issued by IT) or by purchasing one from a trusted Certificate Authority. Anaplan clients leveraging REST API v2.0 support both basic authentication and CA certificate-based authentication. Examples of these clients include Anaplan Connect 1.4, Informatica Anaplan Connector, and Mulesoft 2.0.1. If you are migrating your Anaplan Connector scripts from v1.3 to v1.4, your available options for authentication will be basic authentication or CA certificate-based authentication. This article outlines the steps to perform in preparation for CA certificate authentication.
Steps to prepare for CA certificate authentication
Obtain a certificate from a CA authority
Convert the CA certificate to either a p12 or pfx file
Import the CA certificate into Internet Explorer/Mozilla Firefox to convert it to a p12/pfx file
Export the CA certificate from Internet Explorer/Mozilla Firefox to convert it to a p12/pfx file
Optional: Install the openssl tool
Convert the p12/pfx file into a Java Keystore
Manage CA certificates in Anaplan Tenant Administrator
Validate CA certificate authentication via an Anaplan Connect 1.4 script
Obtain a certificate from a CA authority
You can obtain a certificate from a CA authority by submitting a request, or by submitting a request with a certificate signing request (CSR) generated from your private key. Contact your IT or Security Operations organization to determine if your company already has an existing relationship with a CA or intermediary CA. If your organization has an existing relationship with a CA or intermediate CA, you can request that a client certificate be issued for your integration user. If your organization does not have an existing CA relationship, you should contact a valid CA to procure a client certificate.
Convert CA certificate to either a p12 or pfx file
Import CA certificate into IE/Firefox to convert to a p12/pfx file
This section presents the steps to import the CA certificate into Internet Explorer and Mozilla Firefox. The CA certificate will be exported in the next section to either a p12 or pfx format. CA certificates may have .crt or .cer as file extensions.
Internet Explorer
Within Internet Explorer, click the Settings icon and select Internet options.
Navigate to the Content tab and then click Certificates.
Click Import to launch the Certificate Import Wizard.
Click Browse to search for and select the CA certificate file. This file may have a file extension of .crt or .cer.
If a password was used when requesting the certificate, enter it in this screen. Ensure that the “Mark this key as exportable” option is selected and click Next.
Select the certificate store in which to import the certificate and click Next.
Review the settings and click Finish.
The certificate should appear in the selected certificate store.
Mozilla Firefox
Within Firefox, select Options from the settings menu.
In the Options window, click Privacy & Security in the navigation pane on the left. Scroll to the very bottom and click the View Certificates… button.
In the Certificate Manager, click the Import… button, select the certificate to convert, and click Open.
If a password was provided when the certificate was requested, enter that password and click OK.
The certificate should now show up in the Certificate Manager.
Export CA certificate from IE/Firefox to convert to a p12/pfx file
This section presents the steps to export the CA certificate from Internet Explorer (pfx) and Mozilla Firefox (p12).
Internet Explorer (pfx)
Select the certificate imported above and click the Export… button to initiate the Certificate Export Wizard.
Select the option “Yes, export the private key” and click Next.
Select the option for Personal Information Exchange – PKCS #12 (.PFX) and click Next.
Create a password, enter it, and confirm it in the following screen. This password will be used later in the process. Click Next to continue.
Select a location to export the file and click Save.
Verify the file location and click Next.
Review the export settings and ensure that the Export Keys setting says “Yes”; if not, start the export over. If all looks good, click Next. A message will appear when the export is successful.
Mozilla Firefox (p12)
To export the certificate from Firefox, click the Backup… button in the Certificate Manager. Select a location and a name for the file. Ensure that Save as type: is “PKCS12 Files (*.p12)”. Click the Save button to continue.
Enter a password to be used later when exporting the public and private keys. Click the OK button to finish.
Install openssl tool (Optional)
If you haven't done so already, install the openssl tool for your operating system. A list of third-party binary distributions may be found on www.openssl.org. Examples in this article are shown for the Windows platform.
Convert the p12/pfx file into a Java Keystore
Execute the following to export the public and private keys exported above. In the commands listed below, the values that are customer-specific are in Bold Italics. There is a screenshot at the end of this section that shows all of the commands run in sequence and how the passwords relate between the steps. Examples in this article assume the location of the certificate as the working directory. If you are executing these commands from a different directory (e.g., ...\openssl\bin), ensure you provide the absolute directory path to all the files.
Export the public key
The public key will be exported from the certificate (p12/pfx) using the openssl tool. The result is a .pem file (public_key.pem) that will be imported into Anaplan using Anaplan's Tenant Administrator client.
NOTE: The command below will prompt for a password. This password was created in the steps above during export.
openssl pkcs12 -clcerts -nokeys -in ScottSmithExportedCert.pfx -out public_key.pem
Edit the public_key.pem file
Remove everything before ---Begin Certificate--- (section highlighted in yellow). Ensure that the emailAddress value is populated with the user that will run the integrations.
Export the Private Key
This command will prompt for a password. This password is the password created in the export above. It will then prompt for a new password for the private key and ask you to confirm that password.
openssl pkcs12 -nocerts -in ScottSmithExportedCert.pfx -out private_key.pem
Create P12 Bundle
This command will prompt for the private key password from the step above. It will then prompt for a new password for the bundle and ask you to confirm that password.
openssl pkcs12 -export -in public_key.pem -inkey private_key.pem -out bundle.p12 -name Scott -CAfile public_key.pem -caname Scott
In the command above, public_key.pem is the file that was created in the step "Export the Public Key".
This is the file that will be registered with Anaplan using Anaplan Tenant Administrator. private_key.pem is the file that was created in the step "Export the Private Key". bundle.p12 is the output file from this command, which will be used in the next step to create the Java Keystore. Scott is the keystore alias.
Add to Java Keystore (jks)
Using keytool (typically found in <Java8>/bin), create a .jks file. This file will be referenced in Anaplan Connect 1.4 scripts for authentication. The command below will prompt for a new password for the entry into the keystore and ask you to confirm that password. It will then prompt for the bundle password from the step above.
keytool -importkeystore -destkeystore my_keystore.jks -srckeystore bundle.p12 -srcstoretype PKCS12
In the command above:
my_keystore.jks is the keystore file that will be referenced in your Anaplan Connect 1.4 scripts.
bundle.p12 is the P12 bundle that was created in the last step.
Manage CA certificates in Anaplan Tenant Administrator
In this step, you will add the public_key.pem file to the list of certificates in Anaplan Tenant Administrator. This file was created and edited in the first two steps of the last section. Log on to Anaplan Tenant Administrator. Navigate to Administration --> Security --> Certificates --> Add Certificate.
Validate CA certificate authentication via Anaplan Connect 1.4 script
Since you will be migrating to CA certificate-based authentication, you will need to upgrade your Anaplan Connect installation and associated scripts from v1.3 to v1.4. The Community article Migrating from Anaplan Connect 1.3.x.x to Anaplan Connect 1.4 will guide you through the necessary steps. Follow the steps outlined in the article to edit and execute your Anaplan Connect 1.4 script. The examples provided (Windows & Linux) at the end of the article validate authentication to Anaplan using CA certificates and return a list of the user's workspaces in a tenant.
View full article
I recently posted a Python library for version 1.3 of our API. With the GA announcement of API 2.0, I'm sharing a new library that works with these endpoints. Like the previous library, it does support certificate authentication, however, it requires the private key in a particular format (documented in the code, and below). I'm pleased to announce, the use of Java keystore is now supported. Note:   While all of these scripts have been tested and found to be fully functional, due to the vast amount of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction. This library is a work in progress and will be updated with new features once they have been tested.   Getting Started The attached Python library serves as a wrapper for interacting with the Anaplan API. This article will explain how you can use the library to automate many of the requests that are available in our Apiary, which can be found at   https://anaplanbulkapi20.docs.apiary.io/#. This article assumes you have the requests and M2Crypto modules installed as well as the Python 3.7. Please make sure you are installing these modules with Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python   (If you are using a Python version older or newer than 3.7 we cannot guarantee the validity of the article)   Requests   M2Crypto Note:   Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes. Gathering the Necessary Information In order to use this library, the following information is required: Anaplan model ID Anaplan workspace ID Anaplan action ID CA certificate key-pair (private key and public certificate), or username and password There are two ways to obtain the model and workspace IDs: While the model is open, go Help>About:  Select the workspace and model IDs from the URL:  Authentication Every API request is required to supply valid authentication. There are two (2) ways to authenticate: Certificate Authentication Basic Authentication For full details about CA certificates, please refer to our Anapedia article. Basic authentication uses your Anaplan username and password. To create a connection with this library, define the authentication type and details, and the Anaplan workspace and model IDs: Certificate Files: conn = AnaplanConnection(anaplan.generate_authorization("Certificate","<path to private key>", "<path to public certificate>"), "<workspace ID>", "<model ID>") Basic: conn = AnaplanConnection(anaplan.generate_authorization("Basic","<Anaplan username>", "<Anaplan password>"), "<workspace ID>", "<model ID>")   Java Keystore: from anaplan_auth import get_keystore_pair key_pair=get_keystore_pair('/Users/jessewilson/Documents/Certificates/my_keystore.jks', '<passphrase>', '<key alias>', '<key passphrase>') privKey=key_pair[0] pubCert=key_pair[1] #Instantiate AnaplanConnection without workspace or model IDs conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "") Note: In the above code, you must import the get_keystore_pair method from the anaplan_auth module in order to pull the private key and public certificate details from the keystore. Getting Anaplan Resource Information You can use this library to get the necessary file or action IDs. 
This library builds a Python key-value dictionary, which you can search to obtain the desired information: Example: list_of_files = anaplan.get_list(conn, "files") files_dict = anaplan_resource_dictionary.build_id_dict(list_of_files) This code will build a dictionary, with the file name as the key. The following code will return the ID of the file: users_file_id = anaplan_resource_dictionary.get_id(files_dict, "file name") print(users_file_id) To build a dictionary of other resources, replace "files" with the desired resource: actions, exports, imports, processes.  You can use this functionality to easily refer to objects (workspace, model, action, file) by name, rather than ID. Example: #Fetch the name of the process to run process=input("Enter name of process to run: ") start = datetime.utcnow() with open('/Users/jessewilson/Desktop/Test results.txt', 'w+') as file: file.write(anaplan.execute_action(conn, str(ard.get_id(ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes"), process)), 1)) file.close() end = datetime.utcnow() The code above prompts for a process name, queries the Anaplan model for a list of processes, builds a key-value dictionary based on the resource name, then searches that dictionary for the user-provided name, and executes the action, and writes the results to a local file. Uploads You can upload a file of any size and define a chunk size up to 50mb. The library loops through the file or memory buffer, reading chunks of the specified size and uploads to the Anaplan model. Flat file:  upload = anaplan.file_upload(conn, "<file ID>", <chunkSize (1-50)>, "<path to file>") "Streamed" file: with open('/Users/jessewilson/Documents/countries.csv', "rt") as f: buf=f.read() f.close() print(anaplan.stream_upload(conn, "113000000000", buf)) print(anaplan.stream_upload(conn, "113000000000", "", complete=True)) The above code reads a flat file and saves the data to a buffer (this can be replaced with any data source, it does not necessarily need to read from a file). This data is then passed to the "streaming" upload method. This method does not accept the chunk size input. Instead, it simply ensures that the data in the buffer is less than 50mb before uploading. You are responsible for ensuring that the data you've extracted is appropriately split. Once you've finished uploading the data, you must make one final call to mark the file as complete and ready for use by Anaplan actions. Executing Actions You can run any Anaplan action with this script and define a number of times to retry the request if there's a problem. In order to execute an Anaplan action, the ID is required. To execute, all that is required is the following: run_job = execute_action(conn, "<action ID>", "<retryCount>") print(run_job) This will run the desired action, loop until complete, then print the results to the screen. If failure dump(s) exits, this will also be returned. Example output: Process action 112000000082 completed. Failure: True Process action 112000000079 completed. 
Failure: True Details: hierarchyName Worker Report successRowCount 0 successCreateCount 0 successUpdateCount 0 warningsRowCount 435 warningsCreateCount 0 warningsUpdateCount 435 failedCount 4 ignoredCount 0 totalRowCount 439 totalCreateCount 0 totalUpdateCount 435 invalidCount 4 updatedCount 435 renamedCount 435 createdCount 0 lineItemName Code rowCount 0 ignoredCount 435 Failure dump(s): Error dump for 112000000082 "_Status_","Employees","Parent","Code","Prop1","Prop2","_Line_","_Error_1_" "E","Test User 2","All employees","","101.1a","1.0","2","Error parsing key for this row; no values" "W","Jesse Wilson","All employees","a004100000HnINpAAN","","0.0","3","Invalid parent" "W","Alec","All employees","a004100000HnINzAAN","","0.0","4","Invalid parent" "E","Alec 2","All employees","","","0.0","5","Error parsing key for this row; no values" "W","Test 2","All employees","a004100000HnIO9AAN","","0.0","6","Invalid parent" "E","Jesse Wilson - To Delete","All employees","","","0.0","7","Error parsing key for this row; no values" "W","#1725","All employees","69001","","0.0","8","Invalid parent" [...] "W","#2156","All employees","21001","","0.0","439","Invalid parent" "E","All employees","","","","","440","Error parsing key for this row; no values" Error dump for 112000000079 "Worker Report","Code","Value 1","_Line_","_Error_1_" "Jesse Wilson","a004100000HnINpAAN","0","434","Item not located in Worker Report list: Jesse Wilson" "Alec","a004100000HnINzAAN","0","435","Item not located in Worker Report list: Alec" "Test 2","a004100000HnIO9AAN","0","436","Item not located in Worker Report list: Test 2 Downloading a File If the above code is used to execute an export action, the fill will not be downloaded automatically. To get this file, use the following: download = get_file(conn, "<file ID>", "<path to local file>") print(download) This will save the file to the desired location on the local machine (or mounted network share folder) and alert you once the download is complete, or warn you if there is an error. Get Available Workspaces and Models API 2.0 introduced a new means of fetching the workspaces and models available to a given user. You can use this library to build a key-value dictionary (as above) for these resources. #Instantiate AnaplanConnection without workspace or model IDs conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "") #Setting session variables uid = anaplan.get_user_id(conn) #Fetch models and workspaces the account may access workspaces = ard.build_id_dict(anaplan.get_workspaces(conn, uid), "workspaces") models = ard.build_id_dict(anaplan.get_models(conn, uid), "models") #Select workspace and model to use while True: workspace_name=input("Enter workspace name to use (Enter ? to list available workspaces): ") if workspace_name == '?': for key in workspaces: print(key) else: break while True: model_name=input("Enter model name to use (Enter ? to list available models): ") if model_name == '?': for key in models: print(key) else: break #Extract workspace and model IDs from dictionaries workspace_id = ard.get_id(workspaces, workspace_name) model_id = ard.get_id(models, model_name) #Updating AnaplanConnection object conn.modelGuid=model_id conn.workspaceGuid=workspace_id The above code will create an AnaplanConnection instance with only the user authentication defined. It queries the API to return the ID of the user in question, then queries for the available workspaces and models and builds a dictionary with these results. 
You can then enter the name of the workspace and model you wish to use (or print to screen all available), then finally update the AnaplanConnection instance to be used in all future requests.
View full article
Filters can be very useful in model building and are widely used, but they can come at the expense of performance, and that cost is often very visible to users through their use on dashboards. Poor filter performance can also hit imports and exports, which in turn may lead to the blocking of other activity, causing a poor perception of the model. There are some very simple guidelines to designing well-performing filters:
Using a single Boolean filter on a line item that does not have time or versions applied and does not have a summary is fastest.
Try to create a Boolean line item that incorporates all the filter criteria you want to apply. This allows you to re-use the line item and combine a series of Boolean line items into a single Boolean for use in the filter. For example, you may want to filter on three data points: Volume, Product Category, and Active Status. Volume is numeric, Product Category is a list-formatted line item matching a user selection, and Active Status is a Boolean. Create a line item called Filter with the following formula:
Volume > Min Vol AND Product Cat = User Selection.Category AND Active Status
Here’s a very simple example module to demonstrate this. A Filter line item is added to represent all the filters we need on the view. Only the Filter line item needs to be dimensioned by Users. A User Selection module dimensioned only by Users is created to capture user-specific filter choices: Here’s the data before we apply the filter: With the filter applied:
A best practice suggestion would be to create a filter module and line items for each filter part. You may want other filters, and you can then combine each filter as needed from this system module. This should reduce repetition and give you control over the filters to ensure they can all be Boolean.
What can make a filter slow?
The biggest performance hit for filters is nesting dimensions on rows. The performance loss is significantly increased by the number of nested dimensions and the number of levels they contain. With a flat list versus nested dimensions (filtering on the same number of items), the nested filter will be slower. This was tested with a 10,000,000-item flat list versus two nested lists of 10 and 1,000,000 items as rows; the nested dimension filter was 40% slower.
Filtering on line items with a line item summary will be slow. A numeric filter on 10,000,000 items can take less than a second, but with a summary it will take at least 5 seconds.
Multiple filters will increase time
This is especially significant if any of the preceding filters do not lower the load, because they will take additional time to evaluate. If you do use multiple filter conditions, try to order them so the most effective filters are first. If a filter doesn’t often match on anything, evaluate whether it's even needed.
Hidden levels act as a filter
If you hide levels on a composite list, this acts like a filter before any other filter is applied. The hiding does take time to process, and the impact grows with the number of levels and the size of the list.
How to avoid nested rows for export views
Using nested rows can be a useful way to filter a complex set of data for export but, as discussed above, the filter performance here can be poor. The best way around this is to pivot the dimensions so there is only one dimension on rows and use the Tabular Multi Column export option with a Filter Row based on Boolean option.
Some extra filter tips…
Filter duration will affect saved views used in imports, so check the saved view open time to see the impact.
This view open time applies to every use of the view, including imports and exports. If you need to filter on a specific list, create a subset of those items and create a new module dimensioned by the subset to view that data.
View full article
“Back to the Future” Imagine this scenario: You are in the middle of making changes in your development model and have been doing so for the last few weeks. The changes are not complete and are not ready to synchronize. However, you just received a request for an urgent fix from the user community that is critical for the forthcoming monthly submission. What do you do? What you don’t want to do is take the model out of deployed mode! You also don’t want to lose all the development work you have been doing.  Don’t worry! Following the procedure below will ensure you can apply the hotfix quickly and keep your development work. The following diagram illustrates the procedure: It’s a two-stage process: Stage 1: Roll the development model back to a version that doesn’t contain any changes (is the same as production), and apply the hotfix to that version. Add a new revision tag to the development model as a temporary placeholder. (Note the History ID of the last structural change as you'll need it later.) On the development model, use History to restore to a point where development and production were identical (before any changes were made in development). Apply the hotfix. Save a new revision of the development model. Sync the development model with the production model. Production now has its hotfix. Stage 2: Restore the changes to development and apply the hotfix. On the development model, use the History ID from Stage 1 – Step 1 to restore to the version containing all of the development work (minus the hotfix). Reapply the hotfix to this version of development. Create a new revision of the development model. Development is now back to where it was, with the hotfix now applied. When your development work is complete, you can promote the new version to production using ALM best practice. The procedure is documented here: https://community.anaplan.com/t5/Anapedia-Model-Building/Fixing-Production-Issues/ta-p/4839
View full article
This article covers the necessary steps for you to migrate your Anaplan Connect (AC) 1.3.x.x script to Anaplan Connect 1.4.x. For additional details and examples, refer to the  latest Anaplan Connect User Guide. The changes are: New connectivity parameters Replace reference to Anaplan Certificate with Certificate Authority (CA) certificates using new parameters Optional Chunksize & Retry parameters Changes to JDBC configuration   New Connectivity Parameters Add the following parameters to your Anaplan Connect 1.4.x integration scripts. These parameters provide connectivity to Anaplan and Anaplan authentication services. Note: Both of the urls listed below need to be whitelisted with your network team. -service "https://api.anaplan.com/" -auth "https://auth.anaplan.com"   Certificate Changes As noted in our   Anaplan-Generated Certificates to Expire at the End of 2019 blog post, new and updated Anaplan integration options support Certificate Authority (CA) certificates for authentication. Basic Authentication is still available in Anaplan Connect 1.4.x, however, the use of certificates has changed. In Anaplan Connect 1.3.x.x, the script references the full path to the Anaplan certificate file. For example: -certificate "/Users/username/Documents/AnaplanConnect1.3/certificate.pem" In Anaplan Connect 1.4.x the CA certificate can be referenced via two different options.  Examples of both options are included at the end of this article as well as in the Anaplan Connect 1.4.x download. Option 1: Direct use of the Private Key with Anaplan Connect Use your Private Key with Anaplan Connect by providing to certificate, private key and optional private key passphrase.  For example: If your private key has been encrypted use the following: CertPath="FullPathToThePublicCertificate" PrivateKey="FullPathToThePrivateKey:Passphrase" If your private key has not been encrypted then the passphrase can be omitted, however the colon is still needed in the path of the private key. CertPath="FullPathToThePublicCertificate" PrivateKey="FullPathToThePrivateKey:" To pass these values to Anaplan Connect 1.4.x, use these command line parameters: -certificate {path to the certificate file} -privatekey {path to the private key file:}{passphrase} These parameters should be passed as part of the credentials in the script: Credentials="-certificate ${CertPath} -privatekey ${PrivateKey}" Option 2: Create a Java Keystore A Java Keystore (JKS) is a repository of security certificates and their private keys.  Refer to   this video   for a walkthrough of the process of getting the CA certificate into the key store. You can also refer to   Anaplan Connect User Guide   for steps to create the Java key store. Once you have imported the key into the JKS,   make note of this information : Path to the JKS (directory path on server where JKS is saved) The Password to the JKS The alias of the certificate within the JKS. For example: KeyStorePath ="/Users/username/Documents/AnaplanConnect1.4/my_keystore.jks" KeyStorePass ="your_password" KeyStoreAlias ="keyalias" To pass these values to Anaplan Connect 1.4.x, use these command line parameters: -keystore {KeystorePath} -keystorealias {KeystoreAlias} -keystorepass {KeystorePass} These parameters should be passed as part of the credentials in the script: Credentials="-keystore ${KeyStorePath} -keystorepass ${KeyStorePass} -keystorealias ${KeyStoreAlias}"   Chunksize Anaplan Connect 1.4.x allows for custom chunk sizes on files being imported. 
The -chunksize parameter can be included in the call, with the value being the size of the chunks in megabytes. The chunk size can be any whole number between 1 and 50.
-chunksize {SizeInMBs}
Retry
Anaplan Connect 1.4.x allows the client to retry requests to the server in the event that the server is busy. The -maxretrycount parameter defines the number of times the process retries the action before exiting. The -retrytimeout parameter is the time in seconds that the process waits before the next retry.
-maxretrycount {MaxNumberOfRetries}
-retrytimeout {TimeoutInSeconds}
Changes to JDBC Configuration
With Anaplan Connect 1.3.x.x, the parameters and query for using JDBC are stored within the Anaplan Connect script itself. For example:
Operation="-file 'Sample.csv' -jdbcurl 'jdbc:mysql://localhost:3306/mysql?useSSL=false' -jdbcuser 'root:Welcome1' -jdbcquery 'SELECT * FROM py_sales' -import 'Sample.csv' -execute"
With Anaplan Connect 1.4.x, the parameters and query for using JDBC have been moved to a separate file. The name of that file is then added to the AnaplanClient call using the -jdbcproperties parameter. For example:
Operation="-auth 'https://auth.anaplan.com' -file 'Sample.csv' -jdbcproperties 'jdbc_query.properties' -import 'Sample.csv' -execute"
To run multiple JDBC calls in the same operation, a separate jdbcproperties file will be needed for each query. Each set of calls in the operation should include the following parameters: -file, -jdbcproperties, -import, and -execute. In the code sample below, the parameters for each call are grouped together in sequence. For example:
Operation="-auth 'https://auth.anaplan.com' -file 'SampleA.csv' -jdbcproperties 'SampleA.properties' -import 'SampleA Load' -execute -file 'SampleB.csv' -jdbcproperties 'SampleB.properties' -import 'SampleB Load' -execute"
JDBC Properties File
Below is an example of the JDBC properties file. Refer to the Anaplan Connect User Guide for more details on the properties shown below. If the query statement is long, it can be broken up over multiple lines by using the \ character at the end of each line. No \ is needed on the last line of the statement. The \ must be at the end of the line and nothing can follow it.
jdbc.connect.url=jdbc:mysql://localhost:3306/mysql?useSSL=false
jdbc.username=root
jdbc.password=Welcome1
jdbc.fetch.size=5
jdbc.isStoredProcedure=false
jdbc.query=select * \
from mysql.py_sales \
where year = ? \
and month != ?;
jdbc.params=2018,04
CA Certificate Examples
Direct use of the private key:
Anaplan Connect Windows BAT Script Example (with direct use of the private key)
@echo off
rem This example lists files in a model
set CertPath="C:\CertFile.pem"
set PrivateKey="C:\PrivateKeyFile.pem:passphrase"
set WorkspaceId="Enter WS ID Here"
set ModelId="Enter Model ID here"
set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -workspace %WorkspaceId% -model %ModelId% -F
set Credentials=-certificate %CertPath% -privatekey %PrivateKey%
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
set Command=.\AnaplanClient.bat %Credentials% %Operation%
@echo %Command%
cmd /c %Command%
pause
Anaplan Connect Shell Script Example (with direct use of the private key)
#!/bin/sh
# This example lists files in a model
CertPath="/path/CertFile.pem"
PrivateKey="/path/PrivateKeyFile.pem:passphrase"
WorkspaceId="Enter WS ID Here"
ModelId="Enter Model Id Here"
Operation="-service 'https://api.anaplan.com' -auth 'https://auth.anaplan.com' -workspace ${WorkspaceId} -model ${ModelId} -F"
#________________ Do not edit below this line __________________
if [ "${PrivateKey}" ]; then
    Credentials="-certificate ${CertPath} -privatekey ${PrivateKey}"
fi
echo cd "`dirname "$0"`"
cd "`dirname "$0"`"
if [ ! -f AnaplanClient.sh ]; then
    echo "Please ensure this script is in the same directory as AnaplanClient.sh." >&2
    exit 1
elif [ ! -x AnaplanClient.sh ]; then
    echo "Please ensure you have executable permissions on AnaplanClient.sh." >&2
    exit 1
fi
Command="./AnaplanClient.sh ${Credentials} ${Operation}"
/bin/echo "${Command}"
exec /bin/sh -c "${Command}"
Using a Java Keystore (JKS)
Anaplan Connect Windows BAT Script Example (using a Java keystore)
@echo off
rem This example lists files in a model
set Keystore="C:\YourKeyStore.jks"
set KeystoreAlias="alias1"
set KeystorePassword="mypassword"
set WorkspaceId="Enter WS ID Here"
set ModelId="Enter Model ID here"
set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -workspace %WorkspaceId% -model %ModelId% -F
set Credentials=-k %Keystore% -ka %KeystoreAlias% -kp %KeystorePassword%
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
set Command=.\AnaplanClient.bat %Credentials% %Operation%
@echo %Command%
cmd /c %Command%
pause
Anaplan Connect Shell Script Example (using a Java keystore)
#!/bin/sh
# This example lists files in a model
KeyStorePath="/path/YourKeyStore.jks"
KeyStoreAlias="alias1"
KeyStorePass="mypassword"
WorkspaceId="Enter WS ID Here"
ModelId="Enter Model Id Here"
Operation="-service 'https://api.anaplan.com' -auth 'https://auth.anaplan.com' -workspace ${WorkspaceId} -model ${ModelId} -F"
#________________ Do not edit below this line __________________
if [ "${KeyStorePath}" ]; then
    Credentials="-keystore ${KeyStorePath} -keystorepass ${KeyStorePass} -keystorealias ${KeyStoreAlias}"
fi
echo cd "`dirname "$0"`"
cd "`dirname "$0"`"
if [ ! -f AnaplanClient.sh ]; then
    echo "Please ensure this script is in the same directory as AnaplanClient.sh." >&2
    exit 1
elif [ ! -x AnaplanClient.sh ]; then
    echo "Please ensure you have executable permissions on AnaplanClient.sh." >&2
    exit 1
fi
Command="./AnaplanClient.sh ${Credentials} ${Operation}"
/bin/echo "${Command}"
exec /bin/sh -c "${Command}"
View full article
Overview These dashboards are absolutely critical to good usability of a model. Dashboards are the first contact between the end users and a model. What SHOULD NOT be done in a landing dashboard: Display detailed instructions on how to use the model. See "Instruction Dashboard" instead. Use it for global navigation, built using text boxes and navigation buttons. It will create maintenance challenges if different roles have different navigation paths. It's not helpful once users know where to go. What SHOULD be done in a landing dashboard: Display KPIs with a chart that highlights where they stand on these KPIs, and highlight gaps/errors/exceptions/warnings. A summary/aggregated view of data on a grid to support the chart. The chart should be the primary element. Short instructions on the KPIs. A link to an instruction-based dashboard that includes guidance and video links. A generic instruction to indicate that the user should open the left-side sliding panel to discover the different navigation paths. Users who perform data entry need access to the same KPIs as execs are seeing. Landing dashboard example 1:   Displays the main KPI, which the planning model allows the organization to plan. Landing dashboard example 2:   Provides a view on how the process is progressing against the calendar. Landing dashboard example 3:   Created for executives who need to focus on escalation. Provides context and a call to action (could be a planning dashboard, too).  
View full article
Note that this article uses a planning dashboard as an example, but many of these principles apply to other types of dashboards as well. Methodology User stories Building a useful planning dashboard always starts with getting a set of very clear user stories, which describe how a user should interact with the system. The user stories need to identify the following: What the user wants to do. What data the user needs to see to perform this action. What data the user wants to change. How the user will check that changes made have taken effect. If one or more of the above is missing in a user story, ask the product owner to complete the description. Start the dashboard design, but use it to obtain the answers. It will likely change as more details arrive. Product owners versus designers Modelers should align with product owners by defining concrete roles and responsibilities for each team member. Product owners should provide what data users are expecting to see and how they wish to interact with the data, not ask for specific designs (this is the role of the modelers/designers). Product owners are responsible for change management and should be extra careful when dashboard/navigation is significantly different than what is currently being used (i.e. Excel ® ). Pre-demo peer review  Have a usability committee that: Is made up of modeling peers outside the project and/or project team members outside of modeling team. Will host a mandatory gate-check meeting to review models before demos to product owners or users. Committee is designed to ensure: Best design by challenging modelers. Consistency between models. The function is clear. Exceptions/calls to action are called out. The best first impression. Exception, call to action, measure impact Building a useful planning dashboard will be successful if the dashboard allows users to highlight and analyze exceptions (issues, alerts, warning), take action and plan to solve these, and always visually check the impact of the change against a target. Dashboard structure Example: A dashboard is built for these two user stories that compliment each other. Story 1: Review all of my accounts for a specific region, manually adjust the goals and enter comments. Story 2: Edit my account by assigning direct and overlay reps. The dashboard structure should be made of: Dashboard header: Short name describing the purpose of the dashboard at the top of the page in "Heading 1." Groupings: A collection of dashboard widgets. Call to action. Main grid(s). Info grid(s) : Specific to one item of the main grid. Info charts: Specific to one item of the main grid. Specific action buttons: Specific to one item of the main grid. Main charts: Covers more than one item of the main grid. Individual line items: Specific to one item of the main grid, usually used for commentaries. Light instructions. A dashboard can have more than one of these groupings, but all elements within a grouping need to answer the needs of the user story. Use best judgements to determine the number of groupings added to one dashboard. A maximum of two-to-three groupings is reasonable. Past this range, consider building a new dashboard. Avoid having a "does it all" dashboard, where users keep scrolling up and down to find each section. If users ask for a table of contents at the top of a dashboard, it's a sign that the dashboard has too much functionality and should be divided into multiple dashboards. 
Example:   General Guidelines  Call to action Write a short sentence describing the task to be completed within this grouping. Use the Heading 2 format. Main grid(s) The main grid is the central component of the dashboard, or of the grouping. It's where the user will spend most of their time. This main grid will display the KPIs needed for the task (usually in the columns) and will display one or more other dimension in the rows. Warning: Users may ask for 20+ KPIs and need these KPIs to be broken down by many dimensions, such as by product, actual/plan/variance, or by time. It's critical to have a main grid as simple and as decluttered as possible. Avoid the "data jungle" syndrome. Users are used to "data jungles" simply because that's what they are used to with Excel. Tips to avoid data jungle syndrome: Make a careful KPI election (KPIs are usually the line items of a module). Display the most important KPIs ONLY, which are those needed for decision making. Hide the others for now. A few criteria for electing a KPI in the main grid are: The KPI is meant to be compared across the dimension items in the rows, or across other KPIs. Viewing the KPI values for all of the rows is required to make the decision. The KPI is needed for sorting the rows (except on row name). A few criteria for not electing a KPI in the main grid are (besides not matching the above criteria) when we need these KPIs in more of a drill down mode; The KPI provides valid extra info, but just for the selected row of the Dashboard and does not need to be displayed for all rows. These "extra info" KPIs should be displayed in a different grid, which will be referred to as "info grid" in this document. Take advantage of the row/column sync functionality to provide a ton of data in your dashboard but only display data when requested or required. Design your main grid in such a way that it does not require the user to scroll left and right to view the KPIs: Efficiently select KPIs. Use the column header wrap. Set the column size accordingly. Vertical scroll It is ok to have users scroll vertically on the main grid. Only display 15 to 20 rows at a time when there are numerous rows, as well as other groupings and action buttons, to display on the same dashboard. Use sorts and a filter to display relevant data. Sort your grid Always sort your rows. Obtain the default sort criteria via user stories. If no special sort criteria is called out, use the alphanumeric sort on the row name. This will require a specific line item. Train end users to use the sort functionality. Filter your grid Ask end users or product owners what criteria to use to display the most relevant rows. It could be: Those that make 80 percent of a total. Use the RankCumulate function. Those that have been modified lately. This requires a process to attach a last modified date to a list item, updated daily via a batch mode. When the main grid allows item creation, always display the newly created first. Status Flag. If end users need to apply their own filter values on some attributes of the list items, such as filter to show only those who belong to EMEA or those whose status is "in progress," build pre-set user-based filters. Use the new Users list. Create modules dimensioned by user with line items (formatted as lists) to hold the different criteria to be used. Create a module dimensioned by Users and the list to be filtered. In this module resolve the filter criteria from above against the list attributes to a single Boolean. 
Apply this filter in the target module.  Educate the users to use the refresh button, rather than create an "Open Dashboard" button. Color code your grid Use colored cells to call attention to areas of a grid, such as green for positive and red for negative. Color code cells that specifically require data entry. Display the full details If a large grid is required, something like 5k lines and 100 columns, then: Make it available in a dedicated full-screen dashboard via a button available from the summary dashboard, such as an action button. Do not add such a grid to a dashboard where KPIs, charts, or multiple grids are used for planning.  These dashboards are usually needed for ad-hoc analysis and data discovery, or random verification of changes, and can create a highly cluttered dashboard. Main charts The main chart goes hand-in-hand with the main grid. Use it to compare one or more of the KPIs of the main grid across the different rows. If the main grid contains hundreds or thousands of items, do not attempt to compare this in the main chart. Instead, identify the top 20 rows that really matter or that make most of the KPI value and compare these 20 rows for the selected KPI. Location: Directly below or to the right of main display grid; should be at least partially visible with no scrolling. Synchronization with a selection of KPI or row of the main display grid. Should be used for: Comparison between row values of the main display grid. Displaying difference when the user makes change/restatement or inputs data. In cases where a chart requires 2–3 additional modules to be created: Implement and test performance. If no performance issues are identified, keep the chart. If performance issues are identified, work with product owners to compromise. Info grid(s) These are the grids that will provide more details for an item selected on the main grid. If territories are displayed as rows, use an info grid to display as many line items as necessary for this territory. Avoid cluttering your main grid by displaying all of these line items for all territories at once. This is not necessary and will create extra clutter and scrolling issues for end users. Location: Below or to the right of the main display grid. Synced to selection of list item in the main display grid. Should read vertically to display many metrics pertaining to list item selected. Info charts Similar to info grids, an info chart is meant to compare one or more KPIs for a selected item in the rows of the main grid. These should be used for: Comparison of multiple KPIs for a single row. Comparison or display of KPIs that are not present on the main grid, but are on info grid(s). Comparing a single row's KPI(s) across time. Place it on the right of the main grid, above or below an info grid. Specific action buttons Location: Below main grid; Below the KPI that the action is related to, OR to the far left/right - similar to "checkout." Should be an action that is to be performed on the selected row of the main grid. Can be used for navigation as a drill down to a detailed view of a selected row/list item. Should NOT be used as lateral navigation between dashboards; Users should be trained to use the left panel for lateral navigation. Individual line items Serve as a call out of important KPIs or action opportunities (i.e., user setting container for explosion, Container Explosion status). 
If actions taken by users require additional collaboration with other users, it should be published outside the main grid (giving particular emphasis by publishing the individual line item/s). Light instructions Call to action. Serves as a header for a grouping. Short sentence describing what the user should be performing within the grouping. Formatted in "Heading 2." Action Instructions. Directly located next to a drop-down, input field, or button where the function is not quite clear. No more than 5–6 words. Formatted in "instructions." Tooltips. Use Tooltips on modules and line items for more detailed instructions to avoid cluttering the dashboard.
View full article
Overview
Imports are blocking operations: To maintain a consistent view of the data, the model is locked during the import, and concurrent imports run by end-users will need to run one after the other and will block the model for everyone else. Exports are blocking for data entry while the export data is retrieved, and then the model is released. During the blocking phase, users can still navigate within the model.
Rule #1 Carefully Decide If You Let End-Users Import (And Export) During Business Hours
Imports executed by end-users should be carefully considered and, if possible, executed once or twice a day. Customers more easily accept model updates at scheduled hours for a predefined time, even if it takes 10+ minutes, and are frustrated when these imports are run randomly during business hours. Your first optimization is to adjust the process so these imports are run by an administrator at a scheduled time, and then let the user base know about the schedule.
Rule #2 Use a Saved View
The first part of any import (or export) is retrieving the data. The time it takes to open the view directly affects the time of the import or export. Always import from a saved view, NEVER from the default view, and use the naming convention for easy maintenance. Ensure the view is using optimized filters with a single Boolean value per axis. Hide the line items that are not needed for import; do not bring extra columns that are not needed. If you have done all of the above and the view is still taking time to complete, consider using the Tabular Multi Column export and filtering "on the way out". This has been proven to improve some sluggish exports.
Rule #3 Mapping Objective = Zero Errors or Warnings
Make sure your import executes with no errors or warnings, as every error takes processing time. The time to import into a medium-to-large list (>50k) is significantly reduced if no errors are to be processed. In the import definition, always map all displayed line items (source→target) or use the "ignore" setting. Don't leave any line item unmapped.
Rule #4 Watch the Formulas Recalculated During the Import
If your end-users encounter poor performance when clicking a button that triggers an import or a process, it is likely due to the recalculations that are triggered by the import, especially if the action creates or moves items within a hierarchy. You will likely need the help of the Anaplan Model Optimization team to identify which formulas are triggered after the import is done and to get a performance check on these formulas to identify which ones take most of the time. Usually, those fetching many cells, such as SUM, LOOKUP, ANY, or FINDITEM, are likely to be responsible for the performance impact. Speak to your Business Partner for more details on the Model Optimization services available to you. To solve such situations, you will need to challenge the need for recalculating the identified formula each time a user calls the action. Often, for actions such as creations, moves, and assignments done in WFP or Territory Planning, many calculations used for reporting are triggered in real time after the hierarchy is modified by the import, and are not necessarily needed by users until later in the process. The recommendation is to challenge your customer and see if these formulas can be calculated only once a day, instead of each time a user runs the action.
If this is acceptable, you'll need to rearchitect your modules and/or formulas so that these heavy formulas run through a different process, run daily by an administrator and not by each end-user. If not, you will need to look at the formulas more closely to see what improvements can be made. Remember, breaking formulas up often helps performance.
Rule #5 Don't Import List Properties
Importing list properties takes more time than importing these as module line items. Review the lists in your model that are impacted by imports, and look to replace list properties with module line items when possible. Use a system module to hold these for the key hierarchies, as per D.I.S.C.O.
Rule #6 Get Your Data Hub
Hub and spoke: Set up a data hub model, which will feed the other production models used by stakeholders.
Performance benefits: It will prevent production models from being blocked by a large import from an external data source. But since data hub-to-production-model imports will still be blocking operations, carefully filter what you import, and use the best practice rules listed above. All import and mapping/transformation modules required to prepare the data to be loaded into planning modules can now be located in a dedicated data hub model and not in the planning model. This model will then be smaller and will work more efficiently. Try to keep the transaction data history in the data hub with a specific analysis dashboard made available for end users; often, the detail is not needed for planning purposes, and holding this data in the planning model has a negative impact on size, model opening times, and performance.
As a reminder of the other benefits of a data hub not linked to performance:
Better structure, easier maintenance: A data hub helps keep all the data organized in a central location.
Better governance: Whenever possible, put this data hub in a different workspace. That will ease the separation of duties between production models and metadata management, at least for actual data and production lists. IT departments will love the idea of owning the data hub and having no one else be an administrator in the workspace.
Lower implementation costs: A data hub is a way to reduce the implementation time of new projects. Assuming IT can load the data needed by the new project into the data hub, business users do not have to integrate with complex source systems but with the Anaplan data hub instead.
Rule #7 Incremental Import/Export
This can be the magic bullet in some cases. If you export on a frequent basis (daily or more) from an Anaplan model into a reporting system, or write back to the source system, or simply transfer data from one Anaplan model to another, you have ways to only import/export the data that has changed since the last export. Use a Boolean line item to identify records that have changed and only import those.
View full article
Dynamic Cell Access (DCA) controls the access levels for line items within modules. It is simple to implement and provides modelers with a flexible way of controlling user inputs. Here are a few tips and tricks to help you implement DCA effectively.
Access control modules
Any line item can be controlled by any other applicable Boolean line item. To avoid confusion over which line item(s) to use, it is recommended that you add a separate functional area and create specific modules to hold the driver line items. These modules should be named appropriately (e.g., Access – Customers > Products, or Access – Time, etc.). The advantage of this approach is that the access driver can be used for multiple line items or modules, and the calculation logic is in one place.
In most cases, you will probably want read and write access. Therefore, within each module it is recommended that you add two line items (Write? and Read?). If the logic is being set for Write?, then set the formula for the Read? line item to NOT Write? (or vice versa). It may be necessary to add multiple line items to use for different target line items, but start with this as a default.
Start simple
You may not need to create a module that mirrors the dimensionality of the line item you wish to control. For example, if you have a line item dimensioned by customer, product, and time, and you wish to make actual months read-only, you can use an access module dimensioned only by time. Think about which dimension the control needs to apply to and create an access module accordingly.
What settings do I need?
There are three different states of access that can be applied: READ, WRITE, and INVISIBLE (hidden). There are two blueprint controls (read control and write control), and there are two states for a driver (TRUE or FALSE). The combination of these determines which state is applied to the line item. The following tables illustrate the options:
Only the read access driver is set:
Read access driver TRUE: target line item is READ
Read access driver FALSE: target line item is INVISIBLE
Only the write access driver is set:
Write access driver TRUE: target line item is WRITE
Write access driver FALSE: target line item is INVISIBLE
Both read access and write access drivers are set:
Read access driver TRUE: READ
Read access driver FALSE: INVISIBLE
Write access driver TRUE: WRITE
Write access driver FALSE: Revert to Read*
*When both access drivers are set, the write access driver takes precedence, with write access granted if the status of the write access driver is true. If the status of the write access driver is false, the cell access is then taken from the read access driver status.
The settings can also be expressed in the following table:
                        WRITE ACCESS DRIVER
READ ACCESS DRIVER      TRUE      FALSE       NOT SET
TRUE                    Write     Read        Read
FALSE                   Write     Invisible   Invisible
NOT SET                 Write     Invisible   Write
Note: If you want to have both read and write access, it is necessary to set both access drivers within the module blueprint.
Totals
Think about how you want the totals to appear. When you create a Boolean line item, the default summary option is NONE. This means that if you used this access driver line item, any totals within the target would be invisible. In most cases, you will probably want the totals to be read-only, so setting the access driver line item summary to ANY will provide this setting. If you are using the Invisible setting to “hide” certain items and you do not want the end user to compute hidden values, then it is best to use the ANY setting for the access driver line item.
This means that the totals only show if all values in the list are visible; otherwise, the totals are hidden from view.
View full article
Deal with monthly dashboards
Many FP&A dashboards will need to display all 12 months in the current year, as well as Quarter, Half, and Total Year totals. Doing this is likely to create a very large grid, especially if more than one dimension is nested on the rows.
Avoid this: The grid displayed here is what may be requested when Anaplan is replacing a spreadsheet-based solution, the requirement being "at minimum, do what we could do in the spreadsheets." Avoid the trap of rebuilding this in Anaplan. Usually, it simply creates an extra requirement to export the data into Excel®, have users work offline, and then import the data back into Anaplan, which kills the value that Anaplan can bring. Instead, build the dashboard as indicated below:
Have end users view the aggregated values on the cost center (the first nested dimension), which will provide an overview of where most OPEX is spent
Have end users highlight a cost center and enter its detailed sub-accounts
Visualize the monthly trend using a line chart for the selected sub-account
View full article