Learn how small changes can lead to dramatic improvements in model calculations.
Overview guide for the new Statistical Forecasting Calculation Engine models (monthly and weekly). Includes enablement videos, a practice data import exercise, model documentation, and specific steps for using the model in implementations.

1. Enablement Videos & Practice Exercise
1a. Intro and Overview Video: model overview and review of new key features. (Video below)
1b. Initial Model & Data Import Steps: steps for setting up the model, product hierarchy, customer list, and multi-level forecast analysis. (Video below)
1c. Practice Exercise, import data to set up the stat forecast: two sets of load files are included to practice setup for a single-level product set or a multi-level product set with customer, product, and brand levels. Start on the "Initial App Setup" dashboard and load either the Single or Multi Level files into the model; use the import video as a guide if needed. (.zip file attached)

2. Documentation
2a. Lucidchart Process Maps: the Lucidchart process map document includes a high-level process flow for end-user navigation and detailed tabs for each section. Details and links are also on the "Training & Enablement" dashboard. (Process Maps)
2b. High Level Process Map PDF: the high-level process map in PDF format. (Attached)
2c. Forecast Methods PDFs: a high-level version with a list and overview of the forecast algorithms, and a detailed version with a slide for each forecast method covering the method overview, advantages/disadvantages, equation, and an example output graph. These slides are also included on the "Forecast Methods Overview & Formulas" dashboard. (Attached)

3. Implementation Specifics
3a. Training & Enablement Dashboard: contains details on process map navigation.
3b. Initial Model Setup: the model ships staged with chocolate data from the data hub; execute the CLEAR MODEL action prior to loading customer-specific data.
3c. Changing Model Time Scale (align Native & Dynamic Time settings): if a Time Settings change is required, review the Initial App Setup dashboard to align Native Time with the Dynamic Time setup in the model.
3d. Monthly Update Process: after initial setup, use the Monthly Data History Upload dashboard to update prior-period actuals and settings.
3e. Single-Level vs. Multi-Level Forecast Setup, two implementation options and when to use them:
- Single-Level Forecast: forecast at one level of the product hierarchy (i.e., all stat forecasts calculated at Item level). Most use cases will leverage the single-level setup.
- Multi-Level Forecast: forecast at different levels of the product hierarchy (e.g., Top Item | Customers, Item, and Brand level can all have stat forecasts generated). This requires a complex forecast reconciliation process; review the "Multi-Level Forecast Overview" dashboard if this is needed.
3f. Troubleshooting Tips: if the stat forecast is not generating, follow the troubleshooting tips on the Training & Enablement dashboard before reaching out for support.
3g. Model Notes & Documentation: module notes include the DISCO classification and the purpose of each module.
3h. "Do Not Modify" Items: module notes flag DO NOT MODIFY items that should not be changed during the implementation process.
3i. User Roles & Selective Access: the Demo, Demand Planner, and Demand Planning Manager roles can be adjusted. After the Selective Access process is run on the Flat List Management dashboard, users can be given access to specific product groups, brands, etc.
3j. Batch Processing: details on daily batch processing and how to prepare a roadmap of your batch processes, including files, queries, and import actions/processes in Anaplan (see attachment).

4. Videos
Intro and Overview Video
Data Import and Setup Steps

5. Model Download Links
Monthly Statistical Forecasting Calculation Engine
Weekly Statistical Forecasting Calculation Engine
Learn how to organize your model into logical parts to give you a well-designed model that is easy to follow, understand, and amend at a later date.
PLANS is the new standard for Anaplan modelling; "the way we model". This will cover more than just the formulas and will include and evolve existing best practices around user experience and data hubs. The initial focus is to develop a set of rules on the structure and detailed design of Anaplan models. This set of rules will provide both a clear route to good model design for the individual Anaplanner, and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.

In defining the standard, everything we do will consider or be based around:
Performance – Use the correct structures and formulae to optimize the Hyperblock
Logical – Build the models and formulae more logically; see D.I.S.C.O. below
Auditable – Break up formulae for better understanding, performance, and maintainability
Necessary – Don't duplicate expressions, store reference data and attributes once, no unnecessary calculations
Sustainable – Build with the future in mind; think about process cycles and updates

The standards will be based around three axes:
Performance – How do the structures and formulae impact the performance of the system?
Usability/Auditability – Is the user able to understand how to interact with the functionality?
Sustainability – Can the solution be easily maintained by model builders and support?

We will define the techniques to use that balance the three areas to ensure the optimal design of Anaplan models and architecture.

D.I.S.C.O.
As part of model and module design, we recommend categorizing modules as follows (an illustrative naming sketch appears at the end of this article):
Data – Data hubs, transactional modules, source data; reference everywhere
Inputs – Design for user entry, minimize the mix of calculations and outputs
System – Time management, filters, mappings, etc.; reference everywhere
Calculations – Optimize for performance (turn summaries off, combine structures)
Outputs – Reporting modules, minimize data flows out

Recommended Content:
Performance: Dimension Order; Formula Optimization in Anaplan; Formula Structure for Performance
Logical: Best Practices for Module Design
Auditable: Formula Structure for Performance
Necessary: Reduce Calculations for Better Performance; Formula Optimization in Anaplan
Sustainable: Dynamic Cell Access Tips and Tricks; Dynamic Cell Access - Learning App; Personal Dashboards Tips and Tricks; Time Range Application; Ask Me Anything (AMA) sessions
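To make the categorization concrete, here is an illustrative sketch of modules grouped by D.I.S.C.O. category. The prefixes and module names are assumptions for illustration, not a prescribed Anaplan convention:

Data: DAT01 Sales Transactions (data hub feed; referenced everywhere)
Inputs: INP01 Price Overrides (user entry only; no embedded calculations)
System: SYS01 Time Settings (time flags, filters, and mappings; referenced everywhere)
Calculations: CAL01 Margin Calculation (summaries turned off; structures combined for performance)
Outputs: OUT01 Regional P&L (reporting module; minimal data flowing out)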
Thinking through the results of a modeling decision is a key part of ensuring good model performance; in other words, making sure the calculation engine isn't overtaxed. This article highlights some ideas for how to lessen the load on the calculation engine.

Formulas should be simple; a formula that is nested or uses multiple combinations uses valuable processing time. Writing a long, involved formula makes the engine work hard, and seconds count when the user is staring at the screen. Simple is better. Breaking up formulas and using other options helps keep processing speeds fast. You must keep a balance when using these techniques in your models, so the guidance is as follows:
Break up the most commonly changed formulas
Break up the most complex formulas
Break up any formula you can't explain the purpose of in one sentence

Formulas with many calculated components
The structure of a formula can have a significant bearing on the amount of calculation that happens when inputs in the model are changed. Consider the following example of a calculation for Total Profit in an application. There are five elements that make up the calculation: Product Sales, Service Sales, Cost of Goods Sold (COGS), Operating Expenditure (Op Ex), and Rent and Utilities. Each element is calculated in a separate module, and a reporting module pulls the results together into a Total Profit line item with a single formula referencing all five sources, for example (module names assumed):

Total Profit = 'CALC Product Sales'.Value + 'CALC Service Sales'.Value - 'CALC COGS'.Value - 'CALC Op Ex'.Value - 'CALC Rent and Utilities'.Value

What happens when one of the components of COGS changes? Since all the source components are included in the formula, when anything within any of the components changes, this formula is recalculated. If there are a significant number of component expressions, this can put a larger overhead on the calculation engine than is necessary.

There is a simple way to structure the module to lessen the demand on the calculation engine. Separate the input lines in the reporting module by creating a line item for each component, and add the Total Profit formula as a separate line item. This way, changes to the source data only cause the relevant line items to recalculate. For example, a change in the Product Sales calculation only affects the Product Sales and Total Profit line items in the reporting module; Service Sales, Op Ex, COGS, and Rent & Utilities are unchanged. Similarly, a change in COGS only affects COGS and Total Profit in the reporting module. Keep the general guidelines in mind, though: it is not practical to have every downstream formula broken out into individual line items.

Plan to provide early exits from formulas
Conditional formulas (IF/THEN) present a challenge for the model builder: what is the optimal construction for the formula without making it overly complicated and difficult to read or understand? The basic principle is to avoid making the calculation engine do more work than necessary. Try to set up the formula to finish the calculations as soon as possible, and always put first the condition that is most likely to occur. That way the calculation engine can quit processing the expression at the earliest opportunity.

Here is an example that evaluates Seasonal Marketing Promotions. The summer promotion runs for three months and the winter promotion for two months. There are more months with no promotion than with one, so a formula that tests the promotion periods first is not optimal and will take longer to calculate; a version that exits on the no-promotion case first is better, because the formula will then exit after the first condition more frequently. A sketch of both variants follows, and there is an even better way still, described after it.
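The promotion formulas were shown as screenshots in the original article; here is a minimal plain-text reconstruction of the two variants, assuming Boolean line items 'Summer Promo?' and 'Winter Promo?' and numeric line items for the promotion values (all names are assumptions):

Not optimal (the rarer promotion periods are tested first):
IF 'Summer Promo?' THEN Summer Promo Value ELSE IF 'Winter Promo?' THEN Winter Promo Value ELSE 0

Better (the most frequent case, no promotion, exits first):
IF NOT 'Summer Promo?' AND NOT 'Winter Promo?' THEN 0 ELSE IF 'Summer Promo?' THEN Summer Promo Value ELSE Winter Promo Value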
Following the principles from above, add another line item, 'No Promo?', that evaluates the no-promotion condition once. The formula can then become (continuing the sketch above):

IF 'No Promo?' THEN 0 ELSE IF 'Summer Promo?' THEN Summer Promo Value ELSE Winter Promo Value

This is even better because 'No Promo?' has already been calculated, and the summer promotion occurs more frequently than the winter promotion. It is not always clear which condition will occur more frequently than others, but here are a few more examples of how to optimize formulas.

FINDITEM formula
The FINDITEM element of a formula works its way through the whole list looking for the text item, and if it does not find the referenced text it returns blank. If the referenced text is blank, it also returns blank. Inserting a conditional expression at the beginning of the formula keeps the calculation engine from being overtaxed:

IF ISNOTBLANK(TEXT) THEN FINDITEM(LIST, TEXT) ELSE BLANK
or
IF NOT ISNOTBLANK(TEXT) THEN BLANK ELSE FINDITEM(LIST, TEXT)

Use the first expression if most of the referenced text contains data, and the second if there are more blanks than data.

LAG, OFFSET, POST, etc.
In some situations there is no need to lag or offset data; for example, if the lag or offset parameter is 0, the value is simply that of the period in question. Adding a conditional at the beginning of the formula helps eliminate unnecessary calculations:

IF lag_parameter = 0 THEN Lineitem ELSE LAG(Lineitem, lag_parameter, 0)
or
IF lag_parameter <> 0 THEN LAG(Lineitem, lag_parameter, 0) ELSE Lineitem

The choice between the two depends on how frequently 0 occurs in the lag parameter.

Booleans
Avoid adding unnecessary clutter to line items formatted as Booleans. There is no need to include the TRUE or FALSE expression, as the condition itself evaluates to TRUE or FALSE. Use:
Sales > 0
instead of:
IF Sales > 0 THEN TRUE ELSE FALSE
There are several business use cases that require the ability to compute distances between pairs of locations:
Optimizing sales territory realignment
Logistics cost optimization
Transportation industry passenger revenue or cost per mile
Franchise territory design
Brick-and-mortar market area analysis (stores, hotels, bank branches, …)
Optimizing inventory among geographic distribution centers

At their core, each of these requires knowing how far apart a pair of sites is positioned. This article provides step-by-step instructions for creating a dashboard where users select a location and set a market area radius; the dashboard then shows all population centers in that vicinity with some demographic information.

Doing the math: trig functions in Anaplan
The distance between two latitude-longitude points (lat1, lon1) and (lat2, lon2) requires solving this equation:

Radius of Earth * ACOS( COS(90 - lat1) * COS(90 - lat2) + SIN(90 - lat1) * SIN(90 - lat2) * COS(lon1 - lon2) )

This formula works quite well. We know the Earth isn't flat, but it isn't a perfect sphere either; our home world bulges a bit at the equator and is flattened a bit at the poles. But for most purposes other than true rocket science, this equation gives sufficiently accurate results.

Unfortunately, Anaplan doesn't have the functions SIN, COS, or ACOS built in, and the usual workaround, lookup modules, simply won't do in this situation because we need much higher precision than lookups can practically handle. But don't despair: it is possible to calculate trig functions to 8-decimal-place precision using nothing more sophisticated than Anaplan's POWER() function and some ingenuity. In the following demonstration model, the trig functions needed for the distance calculation have been built for you using equations called Taylor series expansions.

Step-by-Step Construction
Here's a small educational project. In our example model, the user will select one post code, enter a market area radius value, and click a button. Changing the selected post code updates rows in a filtered module, so we need to refresh the dashboard to see the result. The dashboard will identify all post codes in the vicinity of the selected location and display their population, growth rate, median age, and distance.

Step 1
Get U.S. postal code demographic and geolocation data. Our model will use Census Zip Code Tabulation Areas (ZCTAs). ZCTAs are essentially postal Zip Codes adjusted to remove unpopulated Zip Codes that exist only for PO Boxes, combining some codes where that solves practical census tallying problems. There are about 32,000 ZCTAs and 43,000 Zip Codes in the U.S.
Download the US.zip file from http://download.geonames.org/export/zip/. That file provides a full list of US Zip Codes and their county, state, latitude, and longitude. Other countries' post codes are also listed in that folder.
Download demographic data by post code from the US Census Bureau report DP05, choosing the 5-digit ZCTA geographic option for the entire US. To calculate growth rate, you will need datasets for both the most recent year available and the fifth year prior to that (2017 and 2012 at the time this was written).
Notes: The import maps in the next two steps will need some manipulation, concatenating fields to get nice-looking names (such as "Boston, MA 02134") and to get codes to match up among the lists. You'll need to either import to transaction modules or do this manipulation in Excel.

Step 2
Create a list named "Loc 3 - Post Codes".
Set a top-level member with a name like "Total Population Centers". It is generally a best practice to create a Clear action for any list, to be run before future list reloads.
Notes: For the purposes of this demonstration, a flat list of 5-digit codes is sufficient. I found it helpful to roll up ZCTAs by state (Loc 1) and county (Loc 2); this is optional. I will leave "give friendly names to your list members and assign them to parents" as an exercise for the advanced reader.

Step 3
Create a module named "DATA: Loc 3 - Post Codes" dimensionalized by the list "Loc 3 - Post Codes" (no time, no versions).
Notes: There are a LOT of data fields in the tables you downloaded, and much more data is available in other Census Bureau products (gender, households, age details, income, …). Feel free to add line items for any census fields you find useful. I found it helpful to pull the data into Excel and keep only the fields of interest to streamline the mapping process in Anaplan. Expect a few rejects due to mismatches between the Zip Code and ZCTA files: the geonames.org zip code list US.zip doesn't include Puerto Rico and other island territories, while the Census data does. As a result, Census ZCTAs that begin with 006## and 009## will report that there is no matching list member. In a real-world application, significant effort goes into ensuring that data ties out by addressing issues like this. You may either ignore the small percentage of rejects (my sincere apologies to the people of Puerto Rico) or find and add those missing zip codes to your list. Your choice.
For this exercise, the module must contain, at minimum, these line items (format Number; Applies To: Loc 3 - Post Codes):
Latitude
Longitude
Total Population
Total Population 5 yr prior
Growth Rate = POWER(Total Population / 'Total Population 5 yr prior', 0.2) - 1
Median Age
Median Age * Tot Pop = Median Age * Total Population

Set the Summary properties as follows: 'Total Population', 'Total Population 5 yr prior', and 'Median Age * Tot Pop' aggregate by Sum; 'Growth Rate' aggregates by Formula; 'Median Age' aggregates by Ratio: 'Median Age * Tot Pop' / 'Total Population'. Create import actions to load your downloaded data into "DATA: Loc 3 - Post Codes".

Step 4
Create a module named "INPUT: Globals". It holds four constants and two inputs as line items; there is no list, Time, or Versions dimension. The constants' values are entered in the Formula column so users cannot change them. Line items are:
UI (No Data):
Select a Location (format: List, Loc 3 - Post Codes)
Market Area Radius (miles) (Number)
Constants (No Data):
Earth Radius (km) = 6371
Pi = 3.141592654
km / mi = 1.609344
ACOS(2/3) = 0.588002604

Publish the "Select a Location" and "Market Area Radius (miles)" line items to a new dashboard named "Distance Demo".
Note: Distance calculations in kilometers are provided below. Feel free to adjust your model's inputs, outputs, and filters to the needs of your locale.

Step 5
Create a module named "CALC: Post Code - Nearby Population Centers" dimensionalized by only the list "Loc 3 - Post Codes". There are no Time or Versions dimensions.
The module contains the following line items (format Number and Applies To 'Loc 3 - Post Codes' unless noted; <none> means the line item has no list dimension):

Origination Location (No Data):
Selected Post Code = 'INPUT: Globals'.'Select a Location' (format: List, Loc 3 - Post Codes; Applies To: <none>)
Selected Post Code Latitude = 'DATA: Loc 3 - Post Codes'.Latitude[LOOKUP: Selected Post Code] (Applies To: <none>)
Selected Post Code Longitude = 'DATA: Loc 3 - Post Codes'.Longitude[LOOKUP: Selected Post Code] (Applies To: <none>)

Destination Location (No Data):
Population Center = ITEM('Loc 3 - Post Codes') (format: List, Loc 3 - Post Codes)
Population = IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.Total Population ELSE 0
Population 5 yr prior = IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.'Total Population 5 yr prior' ELSE 0
Growth Rate = IF 'In Market Area?' THEN POWER(Population / 'Population 5 yr prior', 0.2) - 1 ELSE 0 (format: Number, Percent)
Median Age = IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.Median Age ELSE 0
Median Age * Pop = IF 'In Market Area?' THEN Median Age * Population ELSE 0
Pop Center Latitude = 'DATA: Loc 3 - Post Codes'.Latitude
Pop Center Longitude = 'DATA: Loc 3 - Post Codes'.Longitude

Calculated Distance (No Data):
Distance (miles) = 'EarthRadius (miles)' * 'ACOS(x)'
Distance (km) = 'EarthRadius (km)' * 'ACOS(x)'

Staging (No Data):
EarthRadius (km) = 'INPUT: Globals'.'Earth Radius (km)'
EarthRadius (miles) = 'EarthRadius (km)' / 'INPUT: Globals'.'km / mi'
Pi = 'INPUT: Globals'.Pi
Radians(90 - Lat1) = 2 * Pi * (90 - Selected Post Code Latitude) / 360
COS(Radians(90 - Lat1)) = 1 - POWER('Radians(90 - Lat1)', 2) / 2 + POWER('Radians(90 - Lat1)', 4) / 24 - POWER('Radians(90 - Lat1)', 6) / 720 + POWER('Radians(90 - Lat1)', 8) / 40320 - POWER('Radians(90 - Lat1)', 10) / 3628800 + POWER('Radians(90 - Lat1)', 12) / 479001600 - POWER('Radians(90 - Lat1)', 14) / 87178291200 + POWER('Radians(90 - Lat1)', 16) / 20922789888000 - POWER('Radians(90 - Lat1)', 18) / 6402373705728000 + POWER('Radians(90 - Lat1)', 20) / 2432902008176640000
SIN(Radians(90 - Lat1)) = 'Radians(90 - Lat1)' - POWER('Radians(90 - Lat1)', 3) / 6 + POWER('Radians(90 - Lat1)', 5) / 120 - POWER('Radians(90 - Lat1)', 7) / 5040 + POWER('Radians(90 - Lat1)', 9) / 362880 - POWER('Radians(90 - Lat1)', 11) / 39916800 + POWER('Radians(90 - Lat1)', 13) / 6227020800 - POWER('Radians(90 - Lat1)', 15) / 1307674368000 + POWER('Radians(90 - Lat1)', 17) / 355687428096000 - POWER('Radians(90 - Lat1)', 19) / 121645100408832000 + POWER('Radians(90 - Lat1)', 21) / 51090942171709440000
Radians(90 - Lat2) = 2 * Pi * (90 - Pop Center Latitude) / 360
COS(Radians(90 - Lat2)) = 1 - POWER('Radians(90 - Lat2)', 2) / 2 + POWER('Radians(90 - Lat2)', 4) / 24 - POWER('Radians(90 - Lat2)', 6) / 720 + POWER('Radians(90 - Lat2)', 8) / 40320 - POWER('Radians(90 - Lat2)', 10) / 3628800 + POWER('Radians(90 - Lat2)', 12) / 479001600 - POWER('Radians(90 - Lat2)', 14) / 87178291200 + POWER('Radians(90 - Lat2)', 16) / 20922789888000 - POWER('Radians(90 - Lat2)', 18) / 6402373705728000 + POWER('Radians(90 - Lat2)', 20) / 2432902008176640000
SIN(Radians(90 - Lat2)) = 'Radians(90 - Lat2)' - POWER('Radians(90 - Lat2)', 3) / 6 + POWER('Radians(90 - Lat2)', 5) / 120 - POWER('Radians(90 - Lat2)', 7) / 5040 + POWER('Radians(90 - Lat2)', 9) / 362880 - POWER('Radians(90 - Lat2)', 11) / 39916800 + POWER('Radians(90 - Lat2)', 13) / 6227020800 - POWER('Radians(90 - Lat2)', 15) / 1307674368000 + POWER('Radians(90 - Lat2)', 17) / 355687428096000 - POWER('Radians(90 - Lat2)', 19) / 121645100408832000 + POWER('Radians(90 - Lat2)', 21) / 51090942171709440000
Radians(Long1-Long2) = 2 * Pi * (Selected Post Code Longitude - Pop Center Longitude) / 360
COS(Radians(Long1-Long2)) = 1 - POWER('Radians(Long1-Long2)', 2) / 2 + POWER('Radians(Long1-Long2)', 4) / 24 - POWER('Radians(Long1-Long2)', 6) / 720 + POWER('Radians(Long1-Long2)', 8) / 40320 - POWER('Radians(Long1-Long2)', 10) / 3628800 + POWER('Radians(Long1-Long2)', 12) / 479001600 - POWER('Radians(Long1-Long2)', 14) / 87178291200 + POWER('Radians(Long1-Long2)', 16) / 20922789888000 - POWER('Radians(Long1-Long2)', 18) / 6402373705728000 + POWER('Radians(Long1-Long2)', 20) / 2432902008176640000
X - pre adj = 'COS(Radians(90 - Lat1))' * 'COS(Radians(90 - Lat2))' + 'SIN(Radians(90 - Lat1))' * 'SIN(Radians(90 - Lat2))' * 'COS(Radians(Long1-Long2))'
X = IF ABS('X - pre adj') <= 1 / POWER(2, 0.5) THEN 'X - pre adj' ELSE IF ABS('X - pre adj') > 1 THEN SQRT(-1) ELSE POWER(1 - POWER('X - pre adj', 2), 0.5)
(SQRT(-1) deliberately produces an invalid value to flag out-of-range inputs.)
ASIN (Taylor Series) = X + 1 / 6 * POWER(X, 3) + 3 / 40 * POWER(X, 5) + 5 / 112 * POWER(X, 7) + 35 / 1152 * POWER(X, 9) + 63 / 2816 * POWER(X, 11) + 231 / 13312 * POWER(X, 13) + 143 / 10240 * POWER(X, 15) + 6435 / 557056 * POWER(X, 17) + 12155 / 1245184 * POWER(X, 19) + 46189 / 5505024 * POWER(X, 21) + 88179 / 12058624 * POWER(X, 23)
ASIN(x) = IF ABS('X - pre adj') <= 1 / SQRT(2) THEN 'ASIN (Taylor Series)' ELSE IF 'X - pre adj' > 1 / SQRT(2) AND 'X - pre adj' <= 1 THEN Pi / 2 - 'ASIN (Taylor Series)' ELSE IF 'X - pre adj' < -1 / SQRT(2) AND 'X - pre adj' > -1 THEN -Pi / 2 + 'ASIN (Taylor Series)' ELSE SQRT(-1)
ACOS(x) = Pi / 2 - 'ASIN(x)'

Filters (No Data):
In Market Area? = 'Distance (miles)' > 0 AND 'Distance (miles)' <= 'INPUT: Globals'.'Market Area Radius (miles)' (format: Boolean)

Set summary settings for the user-facing population and age line items just as you did in Step 3. The line items under Calculated Distance and Staging should not roll up, so use Summary: None (this is a best practice for conserving model size). The 'In Market Area?' Boolean should roll up using Summary: Any.

Filter the list with 'In Market Area?' = TRUE and publish the "CALC: Post Code - Nearby Population Centers" module to your dashboard. In grid view, use pivot/filter/hide in the module: 'Loc 3 - Post Codes' is the row dimension; filter on 'In Market Area?' = TRUE; line items are in the columns with only the desired line items showing; adjust column settings for heading wrap and column widths. Save the view and publish it to your dashboard.

Step 6
Create a new action that opens the dashboard and name it "Refresh Surrounding Locations". Publish it to your dashboard and position it between the two inputs and the output module. This action button is needed because the output module is filtered on 'In Market Area?' = TRUE, but that filtering is only updated when the dashboard is refreshed.

This completes the build instructions; what follows are more insights on the calculations.

The calculation logic
Take a look at the line item formulas under Staging. In those, we build the distance equation from its component parts. You might find it helpful to know that each trig operation, such as COS(90 - lat1), is its own line item:

Radius of Earth * ACOS( COS(90 - lat1) * COS(90 - lat2) + SIN(90 - lat1) * SIN(90 - lat2) * COS(lon1 - lon2) )

In overview, the line items represent these steps:
Get the constants Pi, Earth's radius, etc.
Convert latitude and longitude from degrees to radians.
Use Taylor series formulas to calculate the various SIN and COS components.
Use another Taylor series formula and a trig identity to calculate ASIN, then convert ASIN to ACOS using another trig identity.
Multiply the finished ACOS by Earth's radius.

Going Multidimensional
This example model is intentionally small; it uses a single list of locations and computes their distances from a selected location. In most real-world applications, you need to know the distance between every pairing of two lists of locations, for example Stores and Towns, or DCs and Stores. Let's call them origin and destination locations. To compute the distance between every possible pairing, you would dimensionalize the CALC module above by those two lists and replace the user selection with ITEM(<origin location list>), as sketched below. Good luck!
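A minimal sketch of that change, assuming an origin list named Stores with its own 'DATA: Stores' module holding geolocations (the Stores names are assumptions for illustration):

In a module dimensioned by both Stores and 'Loc 3 - Post Codes':
Origin Store = ITEM(Stores) (replaces the 'Selected Post Code' user selection)
Origin Latitude = 'DATA: Stores'.Latitude (Anaplan maps the shared Stores dimension automatically, so no LOOKUP is needed)
Origin Longitude = 'DATA: Stores'.Longitude

The downstream Radians, SIN/COS, and ACOS line items are unchanged; they now calculate for every Store and post code pairing, so watch the resulting cell count.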
An easy-to-use set of PowerShell wrapper scripts. This article outlines the features of the PowerShell scripts that are used as wrappers to the standard Anaplan Connect scripts. These PowerShell scripts enable the following:
A file watcher that waits for the arrival of files to start importing into Anaplan, and that can run through enterprise schedulers
Copy/move, import, and back up the source files as required after the success or failure of the import
Email notifications of the outcome of the processes
Triggering Anaplan actions that have no file operations, as required, through schedulers

The scripts are available at the links below.
GitHub repository (please contribute enhancements here): https://github.com/upaliw/anaplanconnect_ps
Releases (latest releases for AC 1.4 & AC 1.3): https://github.com/upaliw/anaplanconnect_ps/releases

Contents of the ZIP file:
exceptions — folder to hold the errors/messages generated from Anaplan Connect
java_keystore — folder to hold the Java KeyStore file for CA certificate authentication (see the complete Anaplan Connect Guide)
lib — folder that holds the required Java libraries to run Anaplan Connect
logs — folder to hold the logging information of the PowerShell scripts
AnaplanClient.bat, anaplan-connect.jar — the Anaplan Connect script and Java package
AnaplanConfig.bat — the connection details for Anaplan (i.e., Basic Authentication or CA certificate details)
Anaplan_Action.bat — the main script that runs the various types of Anaplan actions
FileInterface.ini — config file for all file-based operations
FileWatch.ps1, FileCopy.ps1, FileRun.ps1, Functions.ps1 — the main PowerShell scripts for all operations
FW.bat, FWCPY.bat, FWCPYRUN.bat, RUN.bat — Windows batch scripts that can be used to call the main PowerShell scripts through enterprise schedulers
EmailNotifications.ini — config file for email notification settings
EmailPassword.txt — config file to hold the encrypted password for SMTP authentication

Step 1 – Anaplan Connect authentication
Update AnaplanConfig.bat as required to denote the connection type to Anaplan. The connection can be one of two types:
Basic Authentication: Anaplan username and password, where the password is maintained by Anaplan and the Anaplan username is set to be an exception user for SSO workspaces. The password will need to be reset every 90 days.
CA Certificate Authentication: a client certificate procured from a Certification Authority that is attached to the Anaplan username (see the Administration: Security - Certificates article in Anapedia).

Step 2 – Email configuration
The following steps need to be completed for email notifications:
Update the EmailNotifications.ini file with the SMTP parameters.
If required, create the encrypted password file (EmailPassword.txt) for SMTP authentication. To use the default encryption of PowerShell, issue the following command at the PowerShell prompt, redirecting the output to a file:
"smtpPassword" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File ".\EmailPassword.txt"

Step 3 – File import configuration
This is the main configuration file for all the file import operations.
The FileInterface.ini file holds the following information:
Key — mandatory: the main parameter passed to the scripts that picks up all the details of the operations
Inbound filename — optional: the inbound filename as a regular expression, so that it can recognize any timestamps
Load filename — optional: the filename the Anaplan action is tied to
Backup filename — optional: the filename the file should be backed up as
Inbound location — optional: the folder the file arrives in from a source system
Load location — optional: the folder the file is moved to from the inbound location
Backup location — optional: the folder where the backups are located, in date-stamped subfolders
Command to run — mandatory: the Anaplan action
Notify — optional: one of Success, Fail, or Both
Notify email addresses — optional: the email addresses, comma (,) separated
Action Type — mandatory: one of Import, Export, Process, Action, ImportAndProcess, JDBCImport, or JDBCProcess
Export filename — optional: only for the Export action type
JDBC Properties file — optional: only for the JDBCImport and JDBCProcess action types
Workspace GUID — mandatory: the workspace ID
Model GUID — mandatory: the model ID

Calling the scripts
The scripts can be called manually or via an enterprise scheduler, with the Key passed as the argument (a sample scheduler definition appears at the end of this article). For example:
Wait for the arrival of a file, then import it into Anaplan: FWCPYRUN "Key"
Run an Anaplan action on a schedule: RUN "Key"

Email notifications
If email notification is enabled for a config entry, a message is sent with an attachment containing any exceptions generated. The email will contain one of three statuses:
Success: no issues.
Success with data errors: the import was successful, but some data items had issues. An attachment will contain the details of the exceptions generated from Anaplan.
Fail: the import failed; details will be attached to the email.

Logging
All steps of the interface processes are logged in the logs folder for each operation (i.e., FileWatch, FileCopy, and FileRun) separately. Generated exceptions are written to the exceptions folder.
Note: There is no process to clean up older log files; this should be done on a case-by-case basis.
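As an illustration (not part of the shipped scripts), a Windows Task Scheduler job could invoke the batch wrappers; the task name, install path, and Key value here are hypothetical:

schtasks /Create /TN "Anaplan Daily Demand Load" /TR "C:\AnaplanConnect_PS\FWCPYRUN.bat DemandDailyLoad" /SC DAILY /ST 02:00

This registers a daily 2:00 AM task that waits for the inbound file defined under the DemandDailyLoad key, copies it, and runs the associated Anaplan action.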
“Back to the Future”
Imagine this scenario: you are in the middle of making changes in your development model and have been doing so for the last few weeks. The changes are not complete and are not ready to synchronize. However, you just received a request for an urgent fix from the user community that is critical for the forthcoming monthly submission. What do you do?
What you don't want to do is take the model out of deployed mode! You also don't want to lose all the development work you have been doing. Don't worry: following the procedure below will ensure you can apply the hotfix quickly and keep your development work.
It's a two-stage process.
Stage 1: Roll the development model back to a version that doesn't contain any changes (i.e., is the same as production) and apply the hotfix to that version.
1. Add a new revision tag to the development model as a temporary placeholder. (Note the History ID of the last structural change; you'll need it later.)
2. On the development model, use History to restore to a point where development and production were identical (before any changes were made in development).
3. Apply the hotfix.
4. Save a new revision of the development model.
5. Sync the development model with the production model. Production now has its hotfix.
Stage 2: Restore the changes to development and reapply the hotfix.
1. On the development model, use the History ID from Stage 1, Step 1 to restore to the version containing all of the development work (minus the hotfix).
2. Reapply the hotfix to this version of development.
3. Create a new revision of the development model. Development is now back to where it was, with the hotfix applied.
When your development work is complete, you can promote the new version to production using ALM best practice.
The procedure is documented here: https://community.anaplan.com/t5/Anapedia-Model-Building/Fixing-Production-Issues/ta-p/4839
Introduction
The new Anaplan APIs and integration connectors leverage Certificate Authority (CA)-issued certificates. These certificates can be obtained through your company's intermediary CA (typically issued by IT) or by purchasing one from a trusted Certificate Authority. Anaplan clients leveraging REST API v2.0 use both basic authentication and CA-certificate-based authentication. Examples of these clients include Anaplan Connect 1.4, the Informatica Anaplan Connector, and Mulesoft 2.0.1. If you are migrating your Anaplan Connect scripts from v1.3 to v1.4, your available options for authentication are basic authentication or CA-certificate-based authentication. This article outlines the steps to perform in preparation for CA certificate authentication:
1. Obtain a certificate from a CA authority.
2. Convert the CA certificate to either a p12 or pfx file: import the CA certificate into Internet Explorer/Mozilla Firefox, then export it to a p12/pfx file.
3. Optional: install the openssl tool.
4. Convert the p12/pfx file into a Java Keystore.
5. Manage CA certificates in Anaplan Tenant Administrator.
6. Validate CA certificate authentication via an Anaplan Connect 1.4 script.

Obtain a certificate from a CA authority
You can obtain a certificate from a CA authority by submitting a request, or by submitting a request with a certificate signing request (CSR) containing your private key. Contact your IT or Security Operations organization to determine whether your company already has an existing relationship with a CA or intermediary CA. If it does, you can request that a client certificate be issued for your integration user. If it does not, you should contact a valid CA to procure a client certificate.

Convert the CA certificate to either a p12 or pfx file
Import the CA certificate into IE/Firefox
This section presents the steps to import the CA certificate into Internet Explorer and Mozilla Firefox; the certificate will be exported to p12 or pfx format in the next section. CA certificates may have .crt or .cer file extensions.

Internet Explorer
1. In Internet Explorer, click the Settings icon and select Internet options.
2. Navigate to the Content tab and click Certificates.
3. Click Import to launch the Certificate Import Wizard.
4. Click Browse to find and select the CA certificate file (extension .crt or .cer).
5. If a password was used when requesting the certificate, enter it on this screen. Ensure that the "Mark this key as exportable" option is selected and click Next.
6. Select the certificate store in which to import the certificate and click Next.
7. Review the settings and click Finish. The certificate should appear in the selected certificate store.

Mozilla Firefox
1. In Firefox, select Options from the settings menu.
2. In the Options window, click Privacy & Security in the navigation pane on the left. Scroll to the very bottom and click the View Certificates… button.
3. In the Certificate Manager, click the Import… button, select the certificate to convert, and click Open.
4. If a password was provided when the certificate was requested, enter it and click OK. The certificate should now show up in the Certificate Manager.
Export the CA certificate from IE/Firefox to convert it to a p12/pfx file
This section presents the steps to export the CA certificate from Internet Explorer (pfx) and Mozilla Firefox (p12).

Internet Explorer (pfx)
1. Select the certificate imported above and click the Export… button to initiate the Certificate Export Wizard.
2. Select the option "Yes, export the private key" and click Next.
3. Select the option Personal Information Exchange – PKCS #12 (.PFX) and click Next.
4. Create a password, then enter and confirm it on the following screen. This password will be used later in the process. Click Next to continue.
5. Select a location to export the file and click Save, then verify the file location and click Next.
6. Review the export settings and ensure that the Export Keys setting says "Yes"; if not, start the export over. If all looks good, click Next. A message will appear when the export is successful.

Mozilla Firefox (p12)
To export the certificate from Firefox, click the Backup… button in the Certificate Manager. Select a location and a name for the file, ensuring that Save as type is "PKCS12 Files (*.p12)". Click Save to continue. Enter a password to be used later when exporting the public and private keys, and click OK to finish.

Install the openssl tool (optional)
If you haven't done so already, install the openssl tool for your operating system. A list of third-party binary distributions may be found on www.openssl.org. Examples in this article are shown for the Windows platform.

Convert the p12/pfx file into a Java Keystore
Execute the following commands to export the public and private keys from the certificate exported above; replace the customer-specific values (file names and aliases) with your own. The examples assume the certificate's location is the working directory. If you are executing these commands from a different directory (e.g., ...\openssl\bin), provide the absolute path to all files.

Export the public key
The public key is exported from the certificate (p12/pfx) using the openssl tool. The result is a .pem file (public_key.pem) that will be imported into Anaplan using Anaplan's Tenant Administrator client. Note: the command below will prompt for the password created during the export above.

openssl pkcs12 -clcerts -nokeys -in ScottSmithExportedCert.pfx -out public_key.pem

Edit the public_key.pem file: remove everything before ---BEGIN CERTIFICATE---, and ensure that the emailAddress value is populated with the user that will run the integrations.

Export the private key
This command will prompt for the export password created above, then prompt for (and confirm) a new password for the private key.

openssl pkcs12 -nocerts -in ScottSmithExportedCert.pfx -out private_key.pem

Create the p12 bundle
This command will prompt for the private key password from the step above, then prompt for (and confirm) a new password for the bundle.

openssl pkcs12 -export -in public_key.pem -inkey private_key.pem -out bundle.p12 -name Scott -CAfile public_key.pem -caname Scott

In the command above, public_key.pem is the file that was created in the step "Export the public key".
This is the file that will be registered with Anaplan using Anaplan Tenant Administrator. private_key.pem is the file created in the step "Export the private key". bundle.p12 is the output file from this command, which is used in the next step to create the Java Keystore. Scott is the keystore alias.

Add to the Java Keystore (JKS)
Using keytool (typically found in <Java8>/bin), create a .jks file. This file will be referenced in your Anaplan Connect 1.4 scripts for authentication. The command below will prompt for (and confirm) a new password for the entry into the keystore, then prompt for the bundle password from the step above.

keytool -importkeystore -destkeystore my_keystore.jks -srckeystore bundle.p12 -srcstoretype PKCS12

In the command above, my_keystore.jks is the keystore file that will be referenced in your Anaplan Connect 1.4 scripts, and bundle.p12 is the p12 bundle created in the previous step.

Manage CA certificates in Anaplan Tenant Administrator
In this step, you add the public_key.pem file (created and edited in the first two steps of the previous section) to the list of certificates in Anaplan Tenant Administrator. Log on to Anaplan Tenant Administrator and navigate to Administration → Security → Certificates → Add Certificate.

Validate CA certificate authentication via an Anaplan Connect 1.4 script
Since you will be migrating to CA-certificate-based authentication, you will need to upgrade Anaplan Connect and its associated scripts from v1.3 to v1.4. The Community article "Migrating from Anaplan Connect 1.3.x.x to Anaplan Connect 1.4" will guide you through the necessary steps to edit and execute your Anaplan Connect 1.4 script. The examples provided at the end of that article (Windows & Linux) validate authentication to Anaplan using CA certificates and return the list of the user's workspaces in a tenant.
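As an optional sanity check, not part of the original procedure, you can list the keystore contents with keytool before referencing the file in your scripts (the file name follows the example above):

keytool -list -v -keystore my_keystore.jks

Enter the keystore password when prompted; the output should include a PrivateKeyEntry under the alias you chose for the bundle.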
We often see Anaplan Connect scripts created ad hoc as new actions are added, or existing scripts updated with these new actions. This works when there is a limited number of imports/exports/processes running, and when those actions are relatively quick. However, as models and actions scale up and grow in complexity, this solution becomes very inefficient: either scheduling dozens of scripts, or trying to manage large, difficult-to-read scripts. I prefer to design for scale from the outset.

My solution utilizes batch scripts that call the relevant Anaplan Connect script, passing the action to run as a variable. There are a couple of ways I've accomplished this: dedicate a script to executing processes and pass in the process name, or pass in the action type (-action, -export, etc.) and name as the variable. I generally prefer the first approach, but you want to be careful when creating your process that it doesn't become so large that it impacts model performance. Usually, I create a single script to perform all file uploads to a model, then run the processes. In my implementations, each Anaplan Connect script is model-specific, but you could pass the model ID as a variable as well.

To achieve this, I create a "controller" script that calls the Anaplan Connect script, which looks something like this:

@echo off
for /F "tokens=* delims=" %%A in (Demand-Daily-Processes.txt) do ( call "Demand - Daily.bat" "%%A" & TIMEOUT 300)
pause

This reads from a file called Demand-Daily-Processes.txt, where each line contains the name of a process as it appears in Anaplan, e.g.:

Load Master Data from Data Hub
...
Load Transactional Data from Data Hub

It then calls the Anaplan Connect script, passing this name as a variable. Once the script completes, the controller waits 300 seconds before reading the next line and calling the AC script again. This timeout gives the model time to recover after running the process and prevents potential issues executing subsequent processes.

The Anaplan Connect script itself looks mostly as it usually does, except that a variable reference takes the place of the process name:

@echo off
set AnaplanUser=""
set WorkspaceId=""
set ModelId=""
set timestamp=%date:~7,2%_%date:~3,3%_%date:~10,4%_%time:~0,2%_%time:~3,2%
set Operation=-certificate "path\certificate.cer" -process "%~1" -execute -output "C:\AnaplanConnectErrors\<Model Name>-%~1-%timestamp%"
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser%
set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /c %Command%
pause

You can see that in place of declaring a process name, the script uses %~1. This tells the script to use the value of the first parameter provided. You can pass up to nine variables this way, allowing you to pass in workspace and model IDs as well. The script also creates a timestamp variable with the current system time when executed, then uses that and the process name to create a clearly labeled folder for error dumps, e.g., "C:\AnaplanConnectErrors\Demand Planning-Load Master Data from Data Hub-<timestamp>".

By using this solution, as you add processes to your model, you can simply add them to the text file (keeping them in the order you want them executed), rather than editing or creating batch scripts.
Additionally, you need only schedule your controller script(s), making maintenance easier still. 
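As a sketch of the multi-parameter idea mentioned above (the file layout and script names are assumptions), the controller could also read a workspace and model ID from each line of the text file and pass them as the second and third parameters:

@echo off
rem Each line of the file: <process name>,<workspace ID>,<model ID>
for /F "usebackq tokens=1-3 delims=," %%A in ("Demand-Daily-Processes.txt") do (
    call "Demand - Daily.bat" "%%A" "%%B" "%%C" & TIMEOUT 300
)
pause

The Anaplan Connect script would then set its IDs from the extra parameters, e.g. set WorkspaceId="%~2" and set ModelId="%~3", instead of hard-coding them, letting one script serve several models.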
This article covers the necessary steps to migrate your Anaplan Connect (AC) 1.3.x.x scripts to Anaplan Connect 1.4. For more details and examples, refer to the Anaplan Connect User Guide v1.4. The changes are:
New connectivity parameters
Replacement of references to the Anaplan certificate with Certificate Authority (CA) certificates, using new parameters
Optional chunk size and retry parameters
Changes to the JDBC configuration

New connectivity parameters
Add the following parameters to your Anaplan Connect 1.4 integration scripts. These parameters provide connectivity to Anaplan and the Anaplan authentication services. Both of the URLs listed below need to be whitelisted with your network team.
-service "https://api.anaplan.com/" -auth "https://auth.anaplan.com"

Certificate changes
As noted in the Anaplan-generated Certificates to Expire December 10, 2018 blog post, new and updated Anaplan integration options support Certificate Authority (CA) certificates for authentication. Basic Authentication is still available in Anaplan Connect 1.4; however, the use of certificates has changed. In Anaplan Connect 1.3.x.x, the script references the full path to the certificate file. For example:
-certificate "/Users/username/Documents/AnaplanConnect1.4/certificate.pem"
In Anaplan Connect 1.4, the CA certificate must be stored in a Java Key Store (JKS). Refer to this video for a walkthrough of getting the CA certificate into the key store, or to the Anaplan Connect User Guide v1.4 for the steps to create the Java key store. Once you have imported the key into the JKS, make note of this information:
The path to the JKS (the directory path on the server where the JKS is saved)
The password for the JKS
The alias of the certificate within the JKS
For example:
KeyStorePath="/Users/username/Documents/AnaplanConnect1.4/my_keystore.jks"
KeyStorePass="your_password"
KeyStoreAlias="keyalias"
To pass these values to Anaplan Connect 1.4, use these command-line parameters:
-keystore {KeystorePath} -keystorealias {KeystoreAlias} -keystorepass {KeystorePass}

Chunk size
Anaplan Connect 1.4 allows custom chunk sizes on files being imported. The -chunksize parameter can be included in the call, with the value being the size of the chunks in megabytes:
-chunksize {SizeInMBs}

Retry
Anaplan Connect 1.4 allows the client to retry requests to the server in the event that the server is busy. The -maxretrycount parameter defines the number of times the process retries the action before exiting; the -retrytimeout parameter is the time in seconds that the process waits before the next retry:
-maxretrycount {MaxNumberOfRetries} -retrytimeout {TimeoutInSeconds}

Changes to the JDBC configuration
With Anaplan Connect 1.3.x.x, the parameters and query for using JDBC are stored within the Anaplan Connect script itself. For example:
Operation="-file 'Sample.csv' -jdbcurl 'jdbc:mysql://localhost:3306/mysql?useSSL=false' -jdbcuser 'root:Welcome1' -jdbcquery 'SELECT * FROM py_sales' -import 'Sample.csv' -execute"
With Anaplan Connect 1.4, the parameters and query for using JDBC have been moved to a separate file, whose name is then added to the AnaplanClient call using the -jdbcproperties parameter. For example:
Operation="-auth 'https://auth.anaplan.com' -file 'Sample.csv' -jdbcproperties 'jdbc_query.properties' -chunksize 20 -import 'Sample.csv' -execute"
To run multiple JDBC calls in the same operation, a separate jdbcproperties file is needed for each query.
Each set of calls in the operation should include the following parameters: -file, -jdbcproperties, -import, and -execute. For example:
Operation="-auth 'https://auth.anaplan.com' -file 'SampleA.csv' -jdbcproperties 'SampleA.properties' -chunksize 20 -import 'SampleA Load' -execute -file 'SampleB.csv' -jdbcproperties 'SampleB.properties' -chunksize 20 -import 'SampleB Load' -execute"

JDBC properties file
Below is an example of the JDBC properties file. Refer to the Anaplan Connect User Guide v1.4 for more details on the properties shown. If the query statement is long, it can be broken across multiple lines by placing a \ character at the end of each line; nothing may follow the \, and no \ is needed on the last line of the statement.

jdbc.connect.url=jdbc:mysql://localhost:3306/mysql?useSSL=false
jdbc.username=root
jdbc.password=Welcome1
jdbc.fetch.size=5
jdbc.isStoredProcedure=false
jdbc.query=select * \
from mysql.py_sales \
where year = ? and month !=?;
jdbc.params=2018,04

Anaplan Connect Windows BAT script example (with certificate authentication)

@echo off
rem This example lists a user's workspaces
set ServiceLocation="https://api.anaplan.com/"
set Keystore="C:\Your Cert Name Here.jks"
set KeystoreAlias=""
set KeystorePassword=""
set WorkspaceId="Enter WS ID Here"
set ModelId="Enter Model ID Here"
set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -W
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
set Command=.\AnaplanClient.bat -s %ServiceLocation% -k %Keystore% -ka %KeystoreAlias% -kp %KeystorePassword% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /c %Command%
pause

Anaplan Connect shell script example (with certificate authentication)

#!/bin/sh
KeyStorePath="/path/Your Cert Name.jks"
KeyStorePass=""
KeyStoreAlias=""
WorkspaceId="Enter WS ID Here"
ModelId="Enter Model Id Here"
Operation="-service 'https://api.anaplan.com' -auth 'https://auth.anaplan.com' -W"
#________________ Do not edit below this line __________________
# Build the credentials from the keystore settings above
if [ "${KeyStorePath}" ]; then
    Credentials="-keystore ${KeyStorePath} -keystorepass ${KeyStorePass} -keystorealias ${KeyStoreAlias}"
fi
echo cd "`dirname "$0"`"
cd "`dirname "$0"`"
if [ ! -f AnaplanClient.sh ]; then
    echo "Please ensure this script is in the same directory as AnaplanClient.sh." >&2
    exit 1
elif [ ! -x AnaplanClient.sh ]; then
    echo "Please ensure you have executable permissions on AnaplanClient.sh." >&2
    exit 1
fi
Command="./AnaplanClient.sh ${Credentials} ${Operation}"
/bin/echo "${Command}"
exec /bin/sh -c "${Command}"
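As an illustrative combination of the parameters described above (the file and action names are placeholders), a single operation string can apply both chunking and retry behavior alongside the new connectivity parameters:

Operation="-service 'https://api.anaplan.com' -auth 'https://auth.anaplan.com' -chunksize 10 -maxretrycount 3 -retrytimeout 30 -file 'Sales.csv' -import 'Sales Load' -execute"

Here the upload is split into 10 MB chunks, and a busy server is retried up to three times with 30 seconds between attempts.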
Reducing the number of calculations will lead to quicker calculations and improved performance. This doesn't mean combining all your calculations into fewer line items; breaking calculations into smaller parts has major benefits for performance (learn more about this in the Formula Structure article). How is it possible to reduce the number of calculations? Here are three easy methods:
1. Turn off unnecessary Summary method calculations.
2. Avoid formula repetition by creating modules to hold formulas that are used multiple times.
3. Ensure that you are not including more dimensions than necessary in your calculations.

Turn off Summary method calculations
Model builders often include summaries in a model without fully thinking through whether they are necessary; in many cases they can be eliminated. Before we get to how to eliminate them, let's recap how the Anaplan engine calculates. In the following example, we have a Sales Volume line item that varies by three hierarchies:
Region Hierarchy: City → Country → Region → All Regions
Product Hierarchy: SKU → Product → All Products
Channel Hierarchy: Channel → All Channels

This means that from the detail values at SKU, City, and Channel level, Anaplan calculates and holds all 23 of the aggregate combinations, 24 blocks in total. With the Summary option set to Sum, when a detail item is amended, all the other aggregations in the hierarchies are also recalculated. Selecting the None summary option means that no calculations happen when the detail item changes. The varying levels of hierarchies are quite often only there to ease navigation, and the roll-up calculations are not actually needed, so there may be a number of redundant calculations being performed. The native summing of Anaplan is a faster option, but if all the levels are not needed, it might be better to turn off the summary calculations and use a SUM formula instead.

For example, from the structure above, let's assume that we have a detailed calculation for SKU, City, and Channel (SALES06.Final Volume). Let's also assume we need a summary report by Region and Product, and we have a module (REP01) and a line item (Volume) dimensioned as such. Then
REP01.Volume = SALES06.Final Volume
is replaced with
REP01.Volume = SALES06.Final Volume[SUM: H01 SKU Details.Product, SUM: H02 City Details.Region]
The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy.

How do you know if you need the summary calculations? Look for the following:
User-facing modules: Is the calculation or module user-facing? If it is presented on a dashboard, then it is likely that the summaries will be needed. However, look at the dashboard views used: a summary module is often included on a dashboard with a detail module below, so the hierarchy subtotals are shown in the summary module and the detail module doesn't need the sum or all the summary calculations.
Detail to detail: Is the line item referenced by another detailed calculation line item? This is very common, and if so, the summary option is usually not required. Check the Referenced by column to see whether anything references the line item.
Calculation and staging modules: If you have used the DISCO module design, you should have calculation/staging modules. These are often not user-facing and have many detailed calculations included in them.
They also often contain large cell counts, which will be reduced if the summary options are turned off.
Different summaries for time and lists: The default for Time Summaries is to match the lists. You may only need the totals for the hierarchies, or just for the timescales; again, look at the downstream formulas.
The best-practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a Calculations module (from the DISCO design principles).

Avoid formula repetition
An optimal model performs a specific calculation only once. Repeating the same formula expression multiple times means the calculation is performed multiple times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (DISCO) and hold all the relevant calculations in a logical place. Then, if you need the calculation, you will know where to find it, rather than adding another line item in several modules to perform the same calculation.

If a formula construct always starts with the same condition evaluation, evaluate it once and then refer to the result in the construct. This is especially true where the condition refers to a single dimension but is part of a line item that spans multiple dimension intersections. A good example: START() <= CURRENTPERIODSTART() appearing five times across a module's formulas, and similarly START() > CURRENTPERIODSTART() appearing twice. To correct this, include these time-related formulas in their own module and then refer to them as needed in your other modules. Remember: calculate once, reference many times! (A sketch of such a module follows at the end of this article.)

Taking a closer look at the example, not only is the condition evaluation repeated, but the dimensionality of the line items is also greater than required. The calculation only changes by day, yet the Applies To also contains Organization, Hour Scale, and Call Center Type. Because the formula expression is contained within the line item's formula, for each day the calculation is also repeated across every combination of those other dimensions, and it is repeated in many other line items. Sometimes model builders even use the same expression multiple times within a single line item.

To reduce this overcalculation, reference the expression from a more appropriate module, for example a Days of Week module dimensioned solely by day. The two different formula expressions are then contained in two line items and calculated only by day; the other, irrelevant dimensions are not calculated. Substitute the expression by referencing these line items. In this example, making these changes to the remaining lines in the module reduced the calculation cell count from 1.5 million to 1,500.

Check the Applies To for your formulas; if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality.
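A minimal sketch of such a time-condition module, assuming a module named SYS01 Time Settings dimensioned only by Time at the Day level (the module and line item names are assumptions):

SYS01 Time Settings (Applies To: none; Time: Day):
'In Past?' (Boolean) = START() <= CURRENTPERIODSTART()
'In Future?' (Boolean) = START() > CURRENTPERIODSTART()

Downstream line items then reference 'SYS01 Time Settings'.'In Past?' rather than repeating the START() comparison across Organization, Hour Scale, and Call Center Type, so each condition is evaluated once per day instead of once per cell.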
View full article
Overview

A data hub is a separate model that holds an organization's data. Data can be shared with all your models, making expansions easier to implement and ensuring data integrity across models. The data hub model can be placed in a different workspace, allowing for role segregation. This allows you to assign administrator rights to users to manage the data hub without giving those users access to the production models. The method for importing to the data hub (into modules, rather than lists) allows you to reconcile properties using formulas.

One type of data hub can be integrated with an organization's data warehouse and hold ERP, CRM, HR, and other data, as shown in this example.

Anaplan Data Architecture

But this isn't the only type of data hub. Some organizations may require a data hub for transactional data, such as bookings, pipeline, or revenue. Whether you will be using a single data hub or multiple hubs, it is a good idea to plan your approach for importing from the organization's systems into the data hub(s), as well as how you will synchronize the imports from the data hub to the appropriate models. The graphic below shows best practices.

High level best practices

When building a data hub, the best practice is to import a list with properties into a module rather than directly into a list. Using this method, you set up line items to correspond with the properties and import them using the text data type. This imports all the data without errors or warnings. The data in the data hub module can then be imported to a list in the required model. The exception to importing into a module is if you are using a numbered list without a unique code (in other words, you are using a combination of properties). In that case, you will need to import the properties into the list.

Implementation steps

Here are the steps to create the basics of a hub and spoke architecture.

1) Create a model and name it master data hub

You can create the data hub in the same workspace where all the other models are, but a better option is to put the data hub in a different workspace. The advantage is role segregation; you can assign administrator rights to users to manage the hub without providing them with access to the actual production models, which are in a different workspace. Large customers may require this segregation of duties. Note: This functionality became available in release 2016.2.

2) Import your data files into the data hub

Set up your lists. Identify the lists that are required in the data hub. Create these lists using good naming conventions. Set up any needed hierarchies, working from the top level down. Import data into the list from the source files, mapping only the unique name, the parent (if the name rolls up into a hierarchy), and the code, if available. Do not import any list properties; these will be imported into a module.

Create corresponding modules for those lists that include properties. For each list, create a module and name it [List Name] Properties. In the module, create a line item for each property and use the data type TEXT. Import the source file into the corresponding module. There should be no errors or warnings.

Automate the process with actions. Each time you import, an action is created. Name your actions using the appropriate naming conventions. Note: Indicate the name of the source in the name of the import action. To automate the process, you'll want to create one process that includes all your imports.
For hierarchies, it is important to get the actions in the correct order. Start with the highest level of the hierarchy list import, then the next level list, and so on down the hierarchy. Then add the module imports. (The order of the module imports is not critical.)

Now, let's look at an example. You have a four-level hierarchy to load: Employee → State → Region → Country, where Country is the top level.

Lists

Create lists with the right naming conventions. For this example, create these lists:

G1 Country
G2 Region
G3 State
G4 Employee

Set the parent hierarchy to create the composite hierarchy. Import into each list from the source file(s), and only map name and parent. The exception is the employee list, which includes a code (employee ID) that should be mapped. Properties will be added to the data hub later.

Properties → Modules

Create one module for each list that includes properties. Name the module [List Name] Properties. For this example, only the Employees list includes properties, so create one module named Employee Properties. In each module, create as many line items as you have properties. For this example, the line items are Salary and Bonus. Open the Blueprint view of the module and, in the Format column, select Text. Pivot the module so that the line items are columns.

Import the properties. In the grid view of the module, click on the property you are going to import into. Set up the source as a fixed line item: select the appropriate line item on the Line Item tab and, on the Mapping tab, select the correct column for the data values. You'll need to import each property (line item) separately. There should be no errors or warnings.

Actions

Each time you run an import, an action is created. You can view these actions by selecting Actions from the Model Settings tab. The previous imports into lists and modules have created one import action each. You can combine these actions into a process that will run each action in the correct order. Name your actions following the naming conventions; note that the source is included in the action name.

Create one process that includes the imports. Name your process Load [List Name]. Make sure the order is correct: put the list imports first, starting with the top hierarchy level and working down, then add the module imports in any order.

3) Reconcile

These imports should run with zero errors, because the data is going into text-formatted items. If some properties should match items in lists, it is recommended to use FINDITEM formulas to match text to list items. FINDITEM simply looks at the text-formatted line item and finds the match in the list that you specify. Every time data is uploaded into Anaplan, you just need to make sure all items from the text-formatted line item are being loaded into the list. This is useful because you can always compare the "raw data" to the "Anaplan data," and you do not have to load the data more than once if there are concerns about data quality in Anaplan.

If the list the properties should match is not included in your data hub model, first create that list. Let's use the example of Territory. Add a line item to the module and select list as the format type, then select the name of your list of properties (in this case, Territory) from the drop-down. Add the FINDITEM formula, FINDITEM(x, y), where x is the name of your list (Territory in our example) and y is the text line item. You can then filter this line item so that it shows all of the blank items.
Correct the data in the source system. If you will be importing frequently, you may want to set up a dashboard to allow users to view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter to show the number of errors and add that information to the dashboard.

4) Split models: filter and set up saved views

If the architecture of your model includes spoke models by region, you need one master hierarchy that covers all regions and a corresponding module that stores the properties. Use that module and create as many saved views as you have spoke region models. For example, filter on G1 Country = Canada if you want to import only Canadian accounts into the spoke model. You will need to create a saved view for each hierarchy and spoke model.

5) Import to the spoke model

Use cross-workspace imports if you have decided to put your master data hub in a separate workspace. Create the lists that correspond to the hierarchy levels in each spoke model. (Currently, there is no way to create a list via import.) Create the properties in the list where needed. Keep in mind that importing properties into the data hub as line items is an exception: list properties generally do not vary, unlike line items in a module, which are often measured over time. Note: Properties can also be housed in modules, and there are some benefits to this. See Anapedia - Model Building (specifically, the "List Attributes" and "List attributes in a module" topics). If you decide to use a module to hold the properties, you will need to create a line item for each property type and then import the properties into the module. To simplify the mapping, make sure the property names in each spoke model match the line item names of the data hub model.

In each spoke model, create an import from the filtered module view of the data hub model into the lists you created in step 1. In the Actions window, name your imports using naming conventions. Create a process that includes these actions (imports). Begin with the highest level in the hierarchy and work down to the lowest. Well done! You have imported your hierarchy from a data hub model.

6) Incremental list imports

When you are in the midst of your peak planning cycle and your large lists are changing frequently, you'll want to update the data hub and push the changes to the spoke models. Running imports of several thousand list members may cause performance issues and block users during the import activity. In a best-case scenario, your data warehouse provides a date field that shows when an item was added or modified, and can deliver a flat file or table that includes only the changes. Your import into the hub model will then take just a few seconds, and you can filter on this date field to export only the changes to the spoke models. But in most cases, all you have is a full list from the data warehouse, regardless of what has changed. To mitigate this, we'll use a technique that exports only the list items that have changed (new, edited, or deleted) since the last export, using logic in Anaplan.

Setting up the incremental loads, in the data hub model: Create a text-formatted line item in your module. Name it CHECKSUM, set the format to Text, and enter a formula that concatenates all the properties that you want to track changes for. These properties will form the basis of the incremental import.
Example: CHECKSUM = State & Segment & Industry & Parent & Zip Code

Create a second line item, name it CHECKSUM OLD, set the format to Text, and create an import that imports CHECKSUM into CHECKSUM OLD, ignoring any other mappings. Name this import 1/2 im DELTA and put it in a process called "RESET DELTA".

Create a line item named DELTA and set the format to Boolean. Enter this formula: IF CHECKSUM <> CHECKSUM OLD THEN TRUE ELSE FALSE.

Update the filtered view that you created to export only the hierarchy for a specific region or geography. Add the filter criterion DELTA = TRUE. You will only see the list items that differ from the last time you imported into the data hub. In the example above, we'll import into a spoke model only the list items that are in US East and that have changed since the last import.

Execute the import from the source into the data hub and then into the spoke models: In the data hub model, upload the new files and run the import process. In the spoke models, run the import process that takes the list from the data hub's filtered view. Check the import logs and verify that only the items that have changed were actually imported. Back in the data hub model, run the RESET DELTA process (the 1/2 im DELTA import). This resets the changes, so you are ready for the next set of changes. Your source, data hub model, and spoke models are all in sync.
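Outside of Anaplan, the CHECKSUM/DELTA pattern is simply change detection on concatenated values. The Python sketch below mirrors the logic of the three line items; it is illustrative only (the dictionary previous plays the role of CHECKSUM OLD) and is not something that runs inside Anaplan.

previous = {}  # plays the role of CHECKSUM OLD: item -> checksum at the last reset

def delta_items(rows):
    # rows: item -> tuple of tracked properties (e.g. State, Segment, Industry, Parent, Zip Code)
    changed = []
    for item, properties in rows.items():
        checksum = "".join(str(p) for p in properties)  # the CHECKSUM concatenation
        if previous.get(item) != checksum:              # DELTA = TRUE
            changed.append(item)
    return changed

def reset_delta(rows):
    # equivalent of the 1/2 im DELTA import: copy CHECKSUM into CHECKSUM OLD
    for item, properties in rows.items():
        previous[item] = "".join(str(p) for p in properties)

rows = {"ACME Corp": ("CA", "Enterprise", "Tech", "US West", "94105")}  # hypothetical item
print(delta_items(rows))  # ['ACME Corp'] on the first run
reset_delta(rows)
print(delta_items(rows))  # [] until a tracked property changes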
7) Import actuals (transaction data) into the data hub and then into the spoke models

Rather than importing actuals or transactions directly into a working model, import them into the data hub to make it easier for business users (with workspace admin rights) to select the imports they want to add to their spoke models. There is one requirement: the file must include a transaction or primary key (identification code) that uniquely identifies each transaction. If there is no transaction key, your options are as follows:

Option 1: Work with the IT team to determine if it is possible to include a transaction ID in the source. This is the best option, but not always possible.
Option 2: Create the transaction ID in Excel®. Keep in mind there is a limit of 1 million rows in Excel. Also be careful about how you create the transaction ID in Excel, as some methods may delete leading zeros.
Option 3: Create a numbered list in Anaplan.

Creating a numbered list and importing transaction IDs: Add a Transactions list (follow your naming conventions!) to the data hub model. In the General Lists window, select the Numbered option to change the Transactions list to a numbered list. In the Transactions list, create a property called Transaction ID and set the format to text. In the General Lists window, select Transaction ID in the Display Name Property field. Open the Transactions list and add the formula CODE(ITEM('Transactions')) to the Transaction ID property; it will be used as the display name of the numbered list. When importing into the Transactions list, set it up as indicated below: map the Transaction ID of the source file to the Code, and remove any selection from the Transactions drop-down list (first source field). If duplicates on the transaction ID are found, reject the import; otherwise you will introduce corrupted data into the model. Import the transaction IDs into the Transactions list.

Import transactions: Create the Actuals module. Include the Transactions list and as many line items as you have fields (columns) in your source file. Set up the format of your line items: they should be format type text, with the exception of columns that contain numeric values; for those, the format should be number, with any further definitions needed (for example, decimal places or units).

Add a line item called Transaction ID and set the format as text. Enter the formula CODE(ITEM('Transactions')). This will be used when importing the numbered list into the spoke models. Run the import of the source file into the Actuals module. Name your two actions (imports): Import into Transactions (the import of the transaction IDs into the Transactions list) and Import into Actuals (the import from the source file into the Actuals module). Create a process that includes both imports: first Import into Transactions, then Import into Actuals.

Why a two-dimensional module? It is important to understand that the Actuals module is a staging module with two dimensions only: transactions and line items. You can load multiple millions of these transactions and have 50+ line items, which correspond to the properties of each transaction, including version and time. Anaplan will scale without any issues. Do not create a multi-dimensional module at this stage. That will be done in the spoke models, where you will carefully pick which properties become dimensions; this choice will significantly impact the spoke model size if you have large lists.

In the Actuals module, create a view that you will use for importing into the spoke model. Create as many saved views as required, based on how you have split the spoke models.

Reconcile

The import into the module will run without errors or warnings. That does not mean that all is clean, as we have just loaded some text. The reconciliation in the data hub consists of verifying that every field of the source system matches an existing item in the list of values for that field. In the module, create a list-formatted line item that corresponds to each field, and use the FINDITEM() function to look up the actual item. If the name does not match, it will return a blank cell. These cells need to be tracked in a reconciliation dashboard, and the source file will need to be fixed until all transactions actually have a corresponding item in a list. If the list of values for a field is not included in your data hub model, first create that list. Add a line item to the module, select list as the format type, then select the name of your list of values. Add the FINDITEM formula, FINDITEM(x, y), where x is the name of your list and y is the text line item. See the example below: transaction 0001 is clean; transaction 0002 has an account code, A4, that does not match.

Set up a dashboard to allow users to view the data so they can make corrections in the source system. Set up a saved view for the errors and add conditional formatting to highlight the missing (blank) items. You can also include a counter to show the number of errors and add that information to the dashboard.

Import into the spoke models

In the spoke models: Create the transactions numbered list. Import into this list from the transaction module saved view that you created in the data hub, filtered on any property you need to limit the transactions you want to push. Map the Code of the numbered list in the spoke model to the calculated Transaction ID of the master data hub model. Create a transaction flat module. Import into this module from the same transaction module view, filtered on any property you need to limit the transactions you want to push.
Make sure you select the calculated Transaction ID as your source. Do not use the transaction name, as it will be different for the same transaction in the data hub model and the spoke model. Create a target multi-dimensional module, using SUM functions from the transaction module across the line items formatted as list or time (for example, a simple two-dimensional module by Account and Product). Use SUM functions as much as possible, as this enables users to use the drill-to-transaction feature, which shows the transactions that make up an aggregated number.

8) Incremental data load

The actuals transaction file might need to be imported several times into the data hub model, and from there into the spoke models, during the planning peak cycle. If the file is large, it can create performance issues for end users. Since not all transactions change each time the data is imported, there is a strong opportunity to optimize this process. In the data hub model transaction module, create the same CHECKSUM, CHECKSUM OLD, and DELTA line items. CHECKSUM should concatenate all the fields you want to track the delta on, including the values. The DELTA line item will catch new transactions as well as modified transactions. See 6) Incremental list imports above for more information.

Filter the view using DELTA so that only changed transaction list items are imported into the list, and only changed actuals transactions into the module. Create an import from CHECKSUM to CHECKSUM OLD to be able to reset the delta after the imports have run; name this import 2/2 im DELTA and add it to the DELTA process created for the list. In the spoke model, import into the transaction list and into the transaction module from the transaction filtered view. Then run the DELTA import or process.

9) Automation

You can semi-automate this process and have it run on a frequent basis if incremental loads have been implemented. That provides immediacy of master data and actuals across all models during a planning cycle. It is semi-automatic because it requires a review of the reconciliation dashboards before pushing the data to the spoke models. There are a few ways to automate, all requiring an external tool: Anaplan Connect or the customer's ETL. The automation script needs to execute in this order:

1. Connect to the master data hub model.
2. Load the external files into the master data hub model.
3. Execute the process that imports the lists into the data hub.
4. Execute the process that imports actuals (transactions) into the data hub.
Manual step: Open your reconciliation dashboards and check that the data and the lists are clean. Again, these imports should run with zero errors or warnings.
5. Connect to the spoke model.
6. Execute the list import process.
7. Execute the transaction import process.
Repeat 5, 6, and 7 for all spoke models.
Then connect to the master data hub model and run the RESET DELTA process to reset the incremental checks.
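As an illustration of that execution order, here is a hedged Python sketch of a driver built on the v1.3 REST endpoints described later in this document. All of the placeholder names (workspace, model, and process IDs, and the list of spoke models) are hypothetical; in practice this role is usually filled by Anaplan Connect or the customer's ETL tool.

import requests

BASE = "https://api.anaplan.com/1/3"
AUTH = ("user@example.com", "password")  # hypothetical credentials

# Hypothetical placeholder IDs; substitute your own
HUB_WS, HUB_MODEL = "<hub workspace ID>", "<hub model ID>"
HUB_LIST_IMPORT_PROCESS = "<process ID>"
HUB_TRANSACTION_IMPORT_PROCESS = "<process ID>"
RESET_DELTA_PROCESS = "<process ID>"
SPOKE_MODELS = [("<spoke workspace ID>", "<spoke model ID>")]  # one pair per spoke
SPOKE_LIST_IMPORT_PROCESS = "<process ID>"
SPOKE_TRANSACTION_IMPORT_PROCESS = "<process ID>"

def run_process(workspace_id, model_id, process_id):
    # POST .../processes/{id}/tasks starts a process and returns a task descriptor
    url = (BASE + "/workspaces/" + workspace_id + "/models/" + model_id
           + "/processes/" + process_id + "/tasks")
    return requests.post(url, json={"localeName": "en_US"}, auth=AUTH).json()

# Steps 1-4: load the hub (the file uploads themselves are omitted for brevity)
run_process(HUB_WS, HUB_MODEL, HUB_LIST_IMPORT_PROCESS)
run_process(HUB_WS, HUB_MODEL, HUB_TRANSACTION_IMPORT_PROCESS)

# The manual step: review the reconciliation dashboards before pushing to the spokes
input("Check the reconciliation dashboards, then press Enter to continue...")

# Steps 5-7, repeated for every spoke model
for spoke_ws, spoke_model in SPOKE_MODELS:
    run_process(spoke_ws, spoke_model, SPOKE_LIST_IMPORT_PROCESS)
    run_process(spoke_ws, spoke_model, SPOKE_TRANSACTION_IMPORT_PROCESS)

# Finally, back in the hub, reset the DELTA flags for the next cycle
run_process(HUB_WS, HUB_MODEL, RESET_DELTA_PROCESS)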
Other best practices

Create deletes for all your lists. Create a module called Clear Lists. In the module, create a Boolean-formatted line item for each list you need to clear, call it CLEAR ALL, and set its formula to TRUE. In Actions, create a "delete from list using selection" action and set it up as shown below. Repeat this for all lists and create one process that executes all of these delete actions.

Example of a maintenance/reconcile dashboard

Use a maintenance/reconcile dashboard when manual operations are required to update applications from the hub. One method that works well is to create a module that highlights whether there are errors in each data source. In that module, create a line item message that displays on the dashboard if there are errors, for example: "There are errors that need correcting." A link on this dashboard to the error status page will make it easy for users to check on errors. A best practice is to automate the list refresh; combine this with a modeling solution that only exports what has changed.

Dev-test-prod considerations

There should be two saved views: one for development and one for production. That way, the hub can feed the development models with shortened versions of the lists, while the production models get the full lists. ALM considerations: the development (DEV) model will need the imports set up for both DEV and production (PROD) if the separate saved-views option is taken. An additional ALM consideration is that the lists imported into the spoke models from the hub need to be marked as production data.

Development

DATA HUB: The data hub houses all global data needed to execute the Anaplan use case. The data hub often houses complex calculations and readies data for downstream models.
DEVELOPMENT MODEL: The development model is built to the 80/20 rule. It is built upon a global process; region-specific functionality is added in the deployment phase. The model is built to receive data from the data hub.
DATA INTEGRATION: During this stage, Anaplan Connect or a third-party tool is used to automate data integration. Data feeds are built from the source system into the data hub and from the data hub to downstream models.
PERFORMANCE TESTING: The application is put through rigorous performance testing, including automated and end-user testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.

Deployment

DATA HUB: The data hub is refreshed with the latest information from the source systems and readies data for downstream models.
DEPLOYMENT MODEL: The development model is copied and the appropriate data is loaded from the data hub. Region-specific functionality is added during this phase.
DATA INTEGRATION: Additional data feeds from the data hub to downstream models are finalized. The integrations are tested and timed to establish a baseline SLA. Automatic feeds are placed on timed schedules to keep the data up to date.
PERFORMANCE TESTING: The application is again put through rigorous performance testing.

Expansion

DATA HUB: The need for additional data for new use cases is often handled by splitting the data hub into regional data hubs. This helps the system perform more efficiently.
MODEL DEVELOPMENT: The models built for new use cases are developed and thoroughly tested. Additional functionality can be added to the original models deployed.
DATA INTEGRATION: Data integration is updated to reflect the new system architecture. Automatic feeds are tested and scheduled according to business needs.
PERFORMANCE TESTING: At each stage, the application is put through rigorous performance testing. These tests mimic real-world usage and exceptionally heavy traffic to see how the system will perform.
View full article
Summary

This article explains a dynamic way to filter specific levels of a hierarchy, providing a better way to filter and visualize hierarchies.

Overview

This tutorial explains how to calculate the level of each item in a hierarchy in order to apply level-specific calculations (custom summaries) or filters. In this example we have an organization hierarchy of four levels (Org L1 to Org L4). For each item in the hierarchy, we want a filtering module value that returns the item's level.

Context and notes

This technique addresses a specific limitation within dashboards: a composite hierarchy's level cannot be selected if the list is synchronized to multiple module objects on the dashboard. We show the technique of creating a static filtering module based on the levels of the composite structure. The technique uses the Ratio summary method on line items corresponding to the list levels to define the value of the filtering line items. Note that this is not a formula calculation, but a use of the Ratio summary method applied to the composite hierarchy.

Example list

We defined in this example a four-level list, as follows:

Defining the level of each list item

In order to calculate the level of each item in the list, we need to create a module that calculates it by creating as many line items as there are levels in the hierarchy, plus one technical line item, and changing the settings in the blueprint of those line items according to the following table:

Line Item | Formula | Applies to | Summary method | Ratio setting
Technical | 1 | (empty) | Formula | -
Level (or L4, the lowest level) | 4 | Org L4 | Ratio* | L3 / Technical
L3 | 3 | Org L3 | Ratio | L2 / Technical
L2 | 2 | Org L2 | Ratio | L1 / Technical
L1 | 1 | Org L1 | Ratio | L1 / Technical

When applying these settings, the calculation module looks like this:

*Note: the Technical line item uses the Formula summary method. The Minimum summary method can be used instead, but it will return an error when a level of the hierarchy has no children and the calculated level is blank.

We can now use the line item at the lowest level ("Level (or L4)" in the example) as the basis of filters or calculations.

Applying a filter on specific levels in case of synchronization

When synchronization is enabled, the option "Select levels to show" is not available. Instead, a filter based on the calculated level can be used to show only specific levels. In the example, we apply a filter on levels 4 and 1, which gives the result shown below.
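For readers who want the underlying concept outside of Anaplan: the level of an item is simply its depth in the parent chain. A minimal Python sketch with hypothetical items follows; note that the in-model technique above uses the Ratio summary method instead, since no such recursive formula runs across a composite hierarchy.

def level(item, parent):
    # Level 1 for top-level items, increasing down the hierarchy (Org L1 = 1 ... Org L4 = 4)
    return 1 if parent[item] is None else 1 + level(parent[item], parent)

parent = {"Americas": None, "USA": "Americas", "US East": "USA"}  # hypothetical hierarchy
print({item: level(item, parent) for item in parent})
# {'Americas': 1, 'USA': 2, 'US East': 3}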
View full article
Overview: A dashboard with grids that include large lists that have been filtered and/or sorted can take time to open. The opening action can also become a blocking operation; when this happens, you'll see the blue toaster box showing "Processing..." while the dashboard is opening. This article includes some guidelines to help you avoid this situation.

Rule 1: Filter large lists by creating a Boolean line item

Avoid using filters on text or non-Boolean formatted items for large lists on the dashboard. Instead, create a line item with the format type Boolean and add calculations to the line item so that the results return the same data set as the filter would. This is especially helpful if you implement user-based filters, where the Boolean is dimensioned by user and by the list to be filtered. The memory footprint of a Boolean line item is 8x smaller than other types of line items. Warning on a known issue: on an existing dashboard where a saved view is modified to replace the filters with a Boolean line item, you must republish the view to the dashboard. Simply removing the filters from the published dashboard will not improve performance.

Rule 2: Use the default sort

Use sort carefully, especially on large lists. Opening a dashboard that has a grid where a large list is sorted on a text-formatted line item will likely take 10 seconds or more and may be a blocking operation. To avoid using the sort, structure your list so that it is sorted by default by the criteria you need. If it is not sorted, you can still make the grid usable by reducing the items shown with a user-based filter.

Rule 3: Reduce the number of dashboard components

There are times when a dashboard includes too many components, which slows performance. A reasonably large dashboard is no wider than 1.5 pages (avoiding too much horizontal scrolling) and no deeper than 3 pages. Once you exceed these limits, consider moving the components into multiple dashboards. Doing so will help both performance and usability.

Rule 4: Avoid using large lists as page selectors

If you have a large list and use it as a page selector on a dashboard, that dashboard will open slowly; it may take 10 seconds or more. The loading of the page selector takes more than 90% of the total time. Known issue / this is how Anaplan works: if a dashboard grid contains list-formatted line items, the contents of page selector drop-downs are automatically downloaded until the size of the list meets a certain threshold; once this size is exceeded, the download happens on demand, in other words when a user clicks the drop-down. The issue is that when Anaplan requests the contents of list-formatted cell drop-downs, it also requests the contents of ALL other drop-downs, INCLUDING page selectors.

Recommendation: Limit the page selectors on medium to large lists using the following tips:

a) Make the page selector available in one grid and use the synchronized paging option for all other grids and charts. There is no need to allow users to edit the page in every dashboard grid or chart.

b) Using a large list as a page selector also makes for a poor user experience, as there is no search available; it creates both a performance and a usability issue.

Solution 1: Design a dashboard dedicated to searching for a list item. From the original dashboard (where you wanted to include the large list page selector), the user clicks a custom search button that opens a dashboard where the large list is displayed as the rows of a grid. The user can then use a search to find the item needed.
If possible, implement user-based filters to help the user further reduce the list and quickly find the item. The user highlights the item found, closes the tab, and returns to the original dashboard, where all grids are set on the highlighted item.

Alternate solution: If the dashboard elements don't require the use of the list, publish them from a module that doesn't contain the list. For example, floating page selectors for time or versions, or grids that are displayed as rows/columns only, should be published from modules that do not include the list. Why? The view definitions for these elements contain all the source module's dimensions, even those not shown, and so will carry the overhead of populating the large page selector if it is present in the source.
View full article
Audience: Anaplan Internal and Customers/Partners

Workiva Wdesk Integration Is Now Available

We are excited to announce the general availability of Anaplan's integration with Workiva's product, Wdesk. Wdesk easily imports planning, analysis, and reporting data from Anaplan to deliver integrated narrative reporting, compliance, planning, and performance management on the cloud. The platform is utilized by over 3,000 organizations for SEC reporting, financial reporting, SOX compliance, and regulatory reporting. The Workiva and Anaplan partnership delivers enterprise compliance and performance management on the cloud. Workiva Wdesk, the leading narrative reporting cloud platform, and Anaplan, the leading connected-planning cloud platform, offer reliable, secure integration to address high-value use cases in the last mile of finance, financial planning and analysis, and industry-specific regulatory compliance.

GA Launch: March 5th

How does the Workiva Wdesk integration work?

Please contact Will Berger, Partnerships (william.berger@workiva.com) from Workiva to discuss how to enable the integration. Anaplan reports will feed into the Wdesk platform. Wdesk integrates with Anaplan via Wdesk Connected Sheets, a connection built and maintained by Workiva.

What use cases are supported by the Workiva Wdesk integration?

The Workiva Wdesk integration supports a number of use cases, including:

Last mile of finance: Complete regulatory reporting and filing as part of the close, consolidate, report, and file process. Workiva automates and structures the complete financial reporting cycle and pulls consolidated actuals from Anaplan.

Financial planning and analysis: Complex multi-author narrative reports that combine extensive commentary and data, such as budget books, board books, briefing books, and other FP&A management and internal reports. Workiva creates timely, reliable narrative reports pulling actuals, targets, and forecast data from Anaplan.

Industry-specific regulatory compliance and extensive support of XBRL and iXBRL: Workiva is used to solve complex compliance and regulatory reporting requirements in a range of industries. In banking, Workiva supports documentation processes such as CCAR, DFAST, and RRP, pulling banking stress-test data from Anaplan. Workiva is also the leading provider of XBRL software and services, accounting for more than 53% of XBRL facts filed with the SEC in the first quarter of 2017.
View full article
I recently posted a Python library for version 1.3 of our API. With the GA announcement of API 2.0, I'm sharing a new library that works with these endpoints. Like the previous library, it supports certificate authentication; however, it requires the private key in a particular format (documented in the code, and below). I'm also pleased to announce that the use of a Java keystore is now supported.

Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction. This library is a work in progress and will be updated with new features once they have been tested.

Getting Started

The attached Python library serves as a wrapper for interacting with the Anaplan API. This article explains how you can use the library to automate many of the requests that are available in our Apiary, which can be found at https://anaplanbulkapi20.docs.apiary.io/#. This article assumes you have the requests and M2Crypto modules installed, as well as Python 3.7. Please make sure you are installing these modules with Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python (if you are using a Python version older or newer than 3.7, we cannot guarantee the validity of this article), Requests, M2Crypto.

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Gathering the Necessary Information

In order to use this library, the following information is required:

Anaplan model ID
Anaplan workspace ID
Anaplan action ID
CA certificate key-pair (private key and public certificate), or username and password

There are two ways to obtain the model and workspace IDs: while the model is open, go to Help > About, or select the workspace and model IDs from the URL.

Authentication

Every API request is required to supply valid authentication. There are two (2) ways to authenticate: certificate authentication or basic authentication. For full details about CA certificates, please refer to our Anapedia article. Basic authentication uses your Anaplan username and password.

To create a connection with this library, define the authentication type and details, and the Anaplan workspace and model IDs:

Certificate files:
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", "<path to private key>", "<path to public certificate>"), "<workspace ID>", "<model ID>")

Basic:
conn = AnaplanConnection(anaplan.generate_authorization("Basic", "<Anaplan username>", "<Anaplan password>"), "<workspace ID>", "<model ID>")

Java keystore:
from anaplan_auth import get_keystore_pair
key_pair=get_keystore_pair('/Users/jessewilson/Documents/Certificates/my_keystore.jks', '<passphrase>', '<key alias>', '<key passphrase>')
privKey=key_pair[0]
pubCert=key_pair[1]
#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

Note: In the code above, you must import the get_keystore_pair method from the anaplan_auth module in order to pull the private key and public certificate details from the keystore.

Getting Anaplan Resource Information

You can use this library to get the necessary file or action IDs.
This library builds a Python key-value dictionary, which you can search to obtain the desired information.

Example:
list_of_files = anaplan.get_list(conn, "files")
files_dict = anaplan_resource_dictionary.build_id_dict(list_of_files, "files")

This code builds a dictionary with the file name as the key. The following code then returns the ID of the file:

users_file_id = anaplan_resource_dictionary.get_id(files_dict, "file name")
print(users_file_id)

To build a dictionary of other resources, replace "files" with the desired resource: actions, exports, imports, or processes. You can use this functionality to easily refer to objects (workspace, model, action, file) by name rather than by ID.

Example:
#Fetch the name of the process to run
process=input("Enter name of process to run: ")
start = datetime.utcnow()
with open('/Users/jessewilson/Desktop/Test results.txt', 'w+') as file:
    file.write(anaplan.execute_action(conn, str(ard.get_id(ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes"), process)), 1))
end = datetime.utcnow()

The code above prompts for a process name, queries the Anaplan model for a list of processes, builds a key-value dictionary based on the resource name, searches that dictionary for the user-provided name, executes the action, and writes the results to a local file.

Uploads

You can upload a file of any size and define a chunk size up to 50 MB. The library loops through the file or memory buffer, reading chunks of the specified size and uploading them to the Anaplan model.

Flat file:
upload = anaplan.file_upload(conn, "<file ID>", <chunkSize (1-50)>, "<path to file>")

"Streamed" file:
with open('/Users/jessewilson/Documents/countries.csv', "rt") as f:
    buf=f.read()
print(anaplan.stream_upload(conn, "113000000000", buf))
print(anaplan.stream_upload(conn, "113000000000", "", complete=True))

The code above reads a flat file and saves the data to a buffer (this can be replaced with any data source; it does not necessarily need to read from a file). This data is then passed to the "streaming" upload method. This method does not accept the chunk size input; instead, it simply ensures that the data in the buffer is less than 50 MB before uploading. You are responsible for ensuring that the data you've extracted is appropriately split. Once you've finished uploading the data, you must make one final call to mark the file as complete and ready for use by Anaplan actions.

Executing Actions

You can run any Anaplan action with this script and define a number of times to retry the request if there's a problem. In order to execute an Anaplan action, the ID is required. To execute, all that is required is the following:

run_job = execute_action(conn, "<action ID>", "<retryCount>")
print(run_job)

This runs the desired action, loops until it completes, then prints the results to the screen. If failure dump(s) exist, these will also be returned. Example output:

Process action 112000000082 completed. Failure: True
Process action 112000000079 completed.
Failure: True
Details:
hierarchyName Worker Report
successRowCount 0
successCreateCount 0
successUpdateCount 0
warningsRowCount 435
warningsCreateCount 0
warningsUpdateCount 435
failedCount 4
ignoredCount 0
totalRowCount 439
totalCreateCount 0
totalUpdateCount 435
invalidCount 4
updatedCount 435
renamedCount 435
createdCount 0
lineItemName Code
rowCount 0
ignoredCount 435

Failure dump(s):

Error dump for 112000000082
"_Status_","Employees","Parent","Code","Prop1","Prop2","_Line_","_Error_1_"
"E","Test User 2","All employees","","101.1a","1.0","2","Error parsing key for this row; no values"
"W","Jesse Wilson","All employees","a004100000HnINpAAN","","0.0","3","Invalid parent"
"W","Alec","All employees","a004100000HnINzAAN","","0.0","4","Invalid parent"
"E","Alec 2","All employees","","","0.0","5","Error parsing key for this row; no values"
"W","Test 2","All employees","a004100000HnIO9AAN","","0.0","6","Invalid parent"
"E","Jesse Wilson - To Delete","All employees","","","0.0","7","Error parsing key for this row; no values"
"W","#1725","All employees","69001","","0.0","8","Invalid parent"
[...]
"W","#2156","All employees","21001","","0.0","439","Invalid parent"
"E","All employees","","","","","440","Error parsing key for this row; no values"

Error dump for 112000000079
"Worker Report","Code","Value 1","_Line_","_Error_1_"
"Jesse Wilson","a004100000HnINpAAN","0","434","Item not located in Worker Report list: Jesse Wilson"
"Alec","a004100000HnINzAAN","0","435","Item not located in Worker Report list: Alec"
"Test 2","a004100000HnIO9AAN","0","436","Item not located in Worker Report list: Test 2"

Downloading a File

If the code above is used to execute an export action, the file will not be downloaded automatically. To get the file, use the following:

download = get_file(conn, "<file ID>", "<path to local file>")
print(download)

This will save the file to the desired location on the local machine (or a mounted network share folder) and alert you once the download is complete, or warn you if there is an error.

Get Available Workspaces and Models

API 2.0 introduced a new means of fetching the workspaces and models available to a given user. You can use this library to build a key-value dictionary (as above) for these resources.

#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

#Setting session variables
uid = anaplan.get_user_id(conn)

#Fetch models and workspaces the account may access
workspaces = ard.build_id_dict(anaplan.get_workspaces(conn, uid), "workspaces")
models = ard.build_id_dict(anaplan.get_models(conn, uid), "models")

#Select workspace and model to use
while True:
    workspace_name=input("Enter workspace name to use (Enter ? to list available workspaces): ")
    if workspace_name == '?':
        for key in workspaces:
            print(key)
    else:
        break
while True:
    model_name=input("Enter model name to use (Enter ? to list available models): ")
    if model_name == '?':
        for key in models:
            print(key)
    else:
        break

#Extract workspace and model IDs from dictionaries
workspace_id = ard.get_id(workspaces, workspace_name)
model_id = ard.get_id(models, model_name)

#Updating AnaplanConnection object
conn.modelGuid=model_id
conn.workspaceGuid=workspace_id

The code above creates an AnaplanConnection instance with only the user authentication defined. It queries the API to return the ID of the user in question, then queries for the available workspaces and models, and builds dictionaries with these results.
You can then enter the name of the workspace and model you wish to use (or print all available names to the screen), and finally update the AnaplanConnection instance to be used in all future requests.
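Putting the pieces together, a minimal end-to-end sketch using the functions shown above might look like the following. It assumes basic authentication, that the library modules import as in the earlier snippets, and that the IDs (shown as placeholders) have already been looked up.

import anaplan
from anaplan import AnaplanConnection

# Authenticate and point at a workspace and model (placeholder IDs)
conn = AnaplanConnection(anaplan.generate_authorization("Basic", "<Anaplan username>", "<Anaplan password>"), "<workspace ID>", "<model ID>")

# Upload a source file in 10 MB chunks, run an action (retrying up to 3 times),
# then download the exported result
anaplan.file_upload(conn, "<file ID>", 10, "/path/to/source.csv")
print(anaplan.execute_action(conn, "<action ID>", 3))
print(anaplan.get_file(conn, "<export file ID>", "/path/to/output.csv"))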
View full article
Introduction

Data integration is a set of processes that bring data from disparate sources into Anaplan models. These processes can include activities that help you understand the data (data profiling), cleanse and standardize data (data quality), and transform and load data (ETL). Anaplan offers the following data integration options:

Manual import
Anaplan Connect
Extract, Transform & Load (ETL)
REST API

The Anaplan Learning Center offers several on-demand courses on Anaplan's data integration options, including Anaplan Data Integration Basics (303), Anaplan Connect (301), and Hyperconnect.

This article presents step-by-step instructions for different integration tasks that can be performed using the Anaplan integration APIs. These tasks include:

Import data into Anaplan
Export data from Anaplan
Run a process
Download files
Delete files

Setup

Install & Configure Postman

Download the latest Postman application for your platform (ex: Mac, Windows, Linux) from https://www.getpostman.com/apps. Instructions to install the Postman app for your platform may be found here.

Postman account: Signing up for a Postman account is optional. However, having an account gives you the additional benefits of backing up history, collections, environments, and header presets (ex: authorization credentials). Instructions for creating a Postman account may be accessed here.

Download Files

You may follow the instructions provided in this article against your own instance of the Anaplan platform. You will need to download a set of files for these exercises.

Customers.csv: Download the .csv file to a directory on your workstation. This file contains a list of customers you will import into a list using the Anaplan integration APIs.

Anaplan Community REST API Solution.txt: This is an export (JSON) from Postman that contains the solution to the exercises outlined in this article. You may choose to import this file into Postman to review the solution. Although the file extension is .txt, it is a JSON file that can be imported into Postman.

Anaplan Setup

The Anaplan RESTful API Import call allows you to bring data into Anaplan. This is done by using the POST HTTP verb to call an import, which means an import action must exist in Anaplan prior to the API call. Initially, you will import Customers.csv into Anaplan using the application; subsequent imports into this list will be carried out via API calls.

Create a new model named Data Integration API.

Import Customers.csv: Create a list named Customers. Using the Anaplan application, import Customers.csv into the Customers list. Set File Options as shown below, map each column to a property in the list as shown below, and run the import. 31 records should be imported into the list.

Create an Export action: In this article, you will also learn how to export data from Anaplan using APIs. The Anaplan Export API calls an export action that was previously created. Therefore, create an export of the Customers list and save the export definition. This will create an export action (ex: Grid – Customers.csv). Note: Set the file type to .csv in the export action. You may choose to rename the export action under Settings ==> Actions ==> Exports.

Create a Process: Along with Import and Export, you will also learn how to leverage APIs to call an Anaplan process. Create a process named "Import & Export a List" that calls the import (ex: Import Customers from Customers.csv) first, followed by the export (ex: Grid – Customers.csv).
Anaplan Integration API Fundamentals

The Anaplan Integration APIs (v1.3) are RESTful APIs that allow requests to be made via HTTPS using the GET, PUT, POST, and DELETE verbs. Using these APIs, you can perform integration tasks such as:

Import data into a module/list
Export data from a module/list
Upload files for import
Run an Anaplan process
Download files that have been uploaded, or files that were created during an export
Delete from a list using a selection

Endpoints enable you to obtain information about workspaces, models, imports, exports, processes, and so on. Many endpoints contain a chain of parameters.

Example

We want to get a list of models in a workspace. In order to get the list of models, we first need to select the workspace the models belong to.

Obtain the base URI for the Anaplan API. The base URI for the Anaplan Integration API is https://api.anaplan.com

Select the version of the API that will be used in API calls. This article is based on version 1.3; therefore, the updated base URI is https://api.anaplan.com/1/3

Retrieve a list of workspaces you have access to:
GET <base URI>/workspaces, where <base URI> is https://api.anaplan.com/1/3
GET https://api.anaplan.com/1/3/workspaces

The GET call above returns a guid and name for each workspace the user has access to:

{
    "guid": "8a81b09d5e8c6f27015ece3402487d33",
    "name": "Pavan Marpaka"
}

Retrieve a list of models in a selected workspace by providing the {guid} as a parameter value:
https://api.anaplan.com/1/3/workspaces/{guid}/models
https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models

Chaining Parameters

Many endpoints contain a set of parameters that can be chained together in a request. For example, to get a list of import actions, we can chain together the workspaceID and modelID as parameters in a GET request. The request to get a list of import actions looks like this:

https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports

The following sequence of requests needs to be made to get a list of import actions in a selected model:

GET a list of workspaces the user has access to: https://api.anaplan.com/1/3/workspaces
Select a workspaceID (guid) from the result.
GET a list of models in the workspace, providing the workspaceID as a parameter value: https://api.anaplan.com/1/3/workspaces/{workspaceID}/models
Select a modelID from the result.
GET a list of imports from the model in the workspace: https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports

Formats

The format for most requests and responses is application/json. The exceptions are uploading files in a single chunk or in multiple chunks, and getting data in a chunk; these requests use the application/octet-stream format. These formats are specified in the header of an API request, and they are also specified in the header of a response.

Data Integration with Anaplan APIs & Postman

Background

The next few sections provide step-by-step instructions on how to perform different data integration tasks via Anaplan integration API requests. You will perform the following data integration tasks using Anaplan APIs:

Upload file(s) to Anaplan
Import data into a list
Export data from a list
Download a file that has been uploaded or exported
Run an Anaplan process
Delete uploaded file(s)

The Postman application, an HTTP client for making RESTful API calls, will be used to perform these integration tasks.
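If you prefer scripting to clicking, the same chained calls can be made in a few lines of Python with the requests library. This is a minimal sketch assuming basic authentication and that the v1.3 endpoints return JSON arrays of objects with guid/id and name fields, as shown above; the credentials are placeholders.

import requests

BASE = "https://api.anaplan.com/1/3"
AUTH = ("user@example.com", "password")  # placeholder credentials

# GET the workspaces the user can access, then chain the guid into the models call
workspaces = requests.get(BASE + "/workspaces", auth=AUTH).json()
workspace_id = workspaces[0]["guid"]

models = requests.get(BASE + "/workspaces/" + workspace_id + "/models", auth=AUTH).json()
for model in models:
    print(model["id"], model["name"])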
You should have installed and configured Postman on your workstation using the instructions provided at the beginning of this article. You may follow the steps outlined in the next few sections, or you may import the Postman collection (JSON file) provided with this article.

Navigating the Postman UI

This section presents the basics of the Postman user interface (UI). You will learn how to perform the simple tasks required to make API calls. These tasks include creating a new collection, adding a folder, adding a request, and submitting a request (selecting a request method (GET, POST, PUT, DELETE), specifying a resource URI, and specifying the Authorization, Headers, and Body (raw, binary)). You will perform these steps repeatedly for each integration task.

Create a new collection

From the orange New drop-down box, select "Collection". Provide a name for the collection (ex: Data Integration API) and click Create.

Add folders

Create the following folders in the collection: Authentication, Upload, Import, Export, Download Files, Process, Delete Files.

Add a request

You don't need to perform this step right now; the following steps outline how a request can be added to a folder, and you will use these instructions each time a new request is created. Select the folder where you want to add a new request, open the folder's options menu, and select Add Request. Provide a request name and click Save.

Submit a request

Select a request method (GET, PUT, POST, DELETE). Provide a resource URI (ex: https://api.anaplan.com/1/3/workspaces). Under the Authorization tab, select "Basic Auth" as the authorization type and provide your Anaplan credentials (username and password). Provide the necessary headers; common headers include Authorization (pre-populated from the Authorization tab) and Content-Type. Some requests may also require a body; the information for the body is available in the API documentation. Click Send.

Import data into a list using Anaplan APIs

One common data integration task is to bring data into Anaplan. A popular method of bringing data into the Anaplan platform is the Import feature in the Anaplan application. Once an import has run, an import action is created, and this import action can be executed via an API request. Earlier, you imported Customers.csv into the Customers list using the application. In this section, you will use the Anaplan integration APIs to import customer data into that list. The following sequence of requests will be made to import data into the list.

Get a list of workspaces

In Postman, under the folder "Authentication", create a new request and label it "GET List of Workspaces". Select request method GET. Type https://api.anaplan.com/1/3/workspaces as the resource URI. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials. Click Send.

The response to this request should result in the following. Status: 200 OK. Body: guid & name, where "guid" is the workspaceID. A sample result is shown below. The workspaceID for the workspace "Pavan Marpaka" is 8a81b09d5e8c6f27015ece3402487d33. This workspaceID will be passed as an input parameter in the next request, GET List of Models in a Workspace.

Get a list of models in a workspace

In Postman, under the folder "Authentication", create a new request and label it "GET List of Models in a Workspace". Select request method GET. The input parameter for this request is the workspaceID (8a81b09d5e8c6f27015ece3402487d33) retrieved in the last request.
Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models as the resource URI. Ex: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials. Click on the "Headers" tab and create the key-value pair Content-Type, application/json. Click Send.

The response to this request should result in the following. Status: 200 OK. Body: activeState, id & name, where "id" is the modelID; it will be passed as an input parameter in subsequent request calls. In the result shown below (your result may vary), "Top 15 DI API" is the model name and 92269C17A8404B7A90C536F4642E93DE is the modelID.

Get a list of files

In Postman, under the folder "Upload", create a new request and label it "GET List of Files and FileID". Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved in the previous requests. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files as the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials. Click on the "Headers" tab and create the key-value pair Content-Type, application/json. Click Send.

The response to this request should result in the following. Status: 200 OK. Body: the id & name of the files that were either previously uploaded or exported. In the result below (your result may vary), the fileID is 113000000001. This fileID will be passed as an input parameter in the next request (PUT), which will upload the file Customers.csv.

Upload a file

In Postman, under the folder "Upload", create a new request and label it "Upload File". Select request method PUT. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileID (113000000001) retrieved in the previous requests. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileID} as the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/113000000001. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials. Click on the "Headers" tab and create the key-value pair Content-Type, application/octet-stream. Click on the "Body" tab, select the "binary" radio button, and click "Choose Files" to select the Customers.csv file you downloaded earlier. Click Send.

The response to this request should result in the following. Status: 204 No Content. This is an expected response; it means the request was successful, but the response body is empty.
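The file lookup and upload can be scripted the same way. Here is a minimal Python sketch, assuming the files endpoint returns a JSON array of {id, name} objects, that the file is listed under the name Customers.csv, and that a single PUT is suitable for files under the 50 MB chunk limit; the credentials are placeholders and the IDs are the example values from the steps above.

import requests

BASE = "https://api.anaplan.com/1/3"
AUTH = ("user@example.com", "password")               # placeholder credentials
workspace_id = "8a81b09d5e8c6f27015ece3402487d33"     # example IDs from the steps above
model_id = "92269C17A8404B7A90C536F4642E93DE"

# Find the fileID for Customers.csv by name rather than by inspection
files = requests.get(BASE + "/workspaces/" + workspace_id + "/models/" + model_id
                     + "/files", auth=AUTH).json()
file_id = next(f["id"] for f in files if f["name"] == "Customers.csv")

# PUT the file contents as a single application/octet-stream body
with open("Customers.csv", "rb") as f:
    payload = f.read()
r = requests.put(BASE + "/workspaces/" + workspace_id + "/models/" + model_id
                 + "/files/" + file_id,
                 headers={"Content-Type": "application/octet-stream"},
                 data=payload, auth=AUTH)
print(r.status_code)  # expect 204 No Content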
Get a list of import actions in a model

1. In Postman, under the folder "Import", create a new request and label it "GET a List of Import Actions".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. (Note: your workspaceID and modelID may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click Send.

The response should return Status: 200 OK. In the body, "id" is the importID (112000000001). This value is passed as an input parameter to the POST request in the next step, which calls an import action to load the data from the uploaded Customers.csv into the list.

Call an import action

1. In Postman, under the folder "Import", create a new request and label it "Call an Import Action".
2. Select request method POST. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and importID (112000000001) retrieved in the previous requests. (Note: your workspaceID, modelID, and importID may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports/{importID}/tasks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports/112000000001/tasks
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click the "Body" tab, select "raw", and type the following:
{ "localeName": "en_US" }
7. Click Send.

The response should return Status: 200 OK. The "taskId" for the import is returned as a JSON object; this task ID can be used to check the status of the import.
{
    "taskId": "2D88EBAA093B4D4C9603DD9278521EBC"
}

Check the status of an import call

1. In Postman, under the folder "Import", create a new request and label it "Check Status of Import Call".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), importID (112000000001), and taskId (2D88EBAA093B4D4C9603DD9278521EBC) retrieved in the previous requests. (Note: your workspaceID, modelID, importID, and taskId may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports/{importID}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports/112000000001/tasks/2D88EBAA093B4D4C9603DD9278521EBC
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Accept, application/json.
6. Click Send.

The response should return Status: 200 OK, and should include a "Complete" status, the number of records, and a value of "true" for "successful".

Validate the import in Anaplan

In the Anaplan application, confirm that the Customers list is now populated with the list of customers.
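Putting these import steps together, here is a minimal sketch that starts the import and polls the task until it finishes. The endpoint shapes and sample IDs follow this article; the polling interval and the completion check are assumptions, since the exact status payload is not shown above.

    # Sketch: trigger the import action, then poll the task status.
    import time
    import requests

    AUTH = ("user@example.com", "your_password")  # placeholder credentials
    TASKS = ("https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33"
             "/models/92269C17A8404B7A90C536F4642E93DE/imports/112000000001/tasks")

    # POST starts the import action and returns a taskId
    task = requests.post(TASKS, auth=AUTH,
                         headers={"Content-Type": "application/json"},
                         json={"localeName": "en_US"}).json()
    task_id = task["taskId"]

    # GET the task until it reports completion (assumed check: the article
    # says the final response includes a "Complete" status)
    while True:
        status = requests.get(f"{TASKS}/{task_id}", auth=AUTH,
                              headers={"Accept": "application/json"}).json()
        if "complete" in str(status).lower():
            break
        time.sleep(5)  # arbitrary polling interval
    print(status)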
Export data using Anaplan APIs

An export definition can be saved for later use. Saved export definitions can be viewed under Settings > Actions > Exports. Earlier (Section 2), you exported the organization hierarchy and saved the export definition, which should have created an export action (for example, Grid – Customers.csv). In this section, we will use Anaplan APIs to execute the export action. The following sequence of requests exports the data.

Get a list of export definitions

1. In Postman, under the folder "Export", create a new request and label it "Get a List of Export Definitions".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier; refer to the results of the requests under the "Authentication" folder to obtain your workspaceID and modelID.
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click Send.

The response should return Status: 200 OK, with a body containing the id and name of each export action.

Run the export

1. In Postman, under the folder "Export", create a new request and label it "Run the Export".
2. Select request method POST. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and exportId (116000000001) retrieved in the previous request.
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports/{exportId}/tasks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports/116000000001/tasks
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click the "Body" tab, select the "raw" radio button, and type the following:
{ "localeName": "en_US" }
7. Click Send.

The response should return Status: 200 OK, with a body containing a taskId that can be used to determine the status of the export.
{
    "taskId": "29B4617C3D8646018B269F428AC396A3"
}

Get the status of an export task

1. In Postman, under the folder "Export", create a new request and label it "Get Status of an Export Task".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), exportId (116000000001), and taskId (29B4617C3D8646018B269F428AC396A3) retrieved in the previous requests. (Note: your workspaceID, modelID, exportId, and taskId may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports/{exportId}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports/116000000001/tasks/29B4617C3D8646018B269F428AC396A3
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click Send.

The response should return Status: 200 OK, with a body containing the status of the export task.
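The same export flow can be scripted. The sketch below looks up the export action's id and starts an export task, mirroring the first two Postman requests above; it assumes the exports endpoint returns a JSON array of id/name objects, as the sample response suggests, and uses placeholder credentials.

    # Sketch: find the export action, then start an export task.
    import requests

    AUTH = ("user@example.com", "your_password")  # placeholder credentials
    MODEL = ("https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33"
             "/models/92269C17A8404B7A90C536F4642E93DE")

    # Look up the export action (assumed: body is a JSON array of id/name pairs)
    exports = requests.get(f"{MODEL}/exports", auth=AUTH,
                           headers={"Content-Type": "application/json"}).json()
    export_id = exports[0]["id"]  # e.g. 116000000001 in this article's sample

    # Start the export task; the returned taskId feeds the status request
    task = requests.post(f"{MODEL}/exports/{export_id}/tasks", auth=AUTH,
                         headers={"Content-Type": "application/json"},
                         json={"localeName": "en_US"}).json()
    print(task["taskId"])  # poll GET .../exports/{exportId}/tasks/{taskId} for status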
Download a file using Anaplan APIs

Files that have been previously uploaded or exported can be downloaded using the Anaplan API. In the previous section, you exported the list to a CSV file via APIs. In this section, you will use APIs to download the exported file. The following sequence of requests downloads the file.

Get a list of files

1. In Postman, under the folder "Download Files", create a new request and label it "Get a List of Files".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier; refer to the results of the requests under the "Authentication" folder to obtain your workspaceID and modelID. (Your values may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click Send.

The response should return Status: 200 OK, with a body containing information about the available files in JSON format. "id" is the fileId, which is passed as an input parameter in the next request to download the file.

Get the chunkID and name of a file

1. In Postman, under the folder "Download Files", create a new request and label it "Get ChunkID and Name of a File".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. (Your workspaceID, modelID, and fileId may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}/chunks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001/chunks
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Accept, application/json.
6. Click Send.

The response should return Status: 200 OK, with a body containing the chunkID and chunk name in JSON format.

Get a chunk of data

1. In Postman, under the folder "Download Files", create a new request and label it "Get a Chunk of Data".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. (Your workspaceID, modelID, and fileId may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}/chunks/{chunkID} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001/chunks/0
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Accept, application/octet-stream.
6. Click Send.

The response should return Status: 200 OK, with a body containing the data in CSV format.

Repeat and concatenate

Repeat the previous step for each chunkID returned by the "Get ChunkID and Name of a File" call. After collecting the data from all the chunks, concatenate them into a single output file; a sketch of this loop follows.
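A minimal sketch of this chunked download, using the sample IDs from this article (your fileId will differ). It assumes each entry in the chunk list exposes an "id" field, consistent with the other list responses above, and the output filename is a placeholder.

    # Sketch: fetch every chunk of the exported file in order and
    # concatenate them into one local file.
    import requests

    AUTH = ("user@example.com", "your_password")  # placeholder credentials
    FILE = ("https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33"
            "/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001")

    # List the chunks (each entry holds the chunk's id and name)
    chunks = requests.get(f"{FILE}/chunks", auth=AUTH,
                          headers={"Accept": "application/json"}).json()

    # Fetch each chunk and append it to a single output file
    with open("export_output.csv", "wb") as out:  # placeholder filename
        for chunk in chunks:
            part = requests.get(f"{FILE}/chunks/{chunk['id']}", auth=AUTH,
                                headers={"Accept": "application/octet-stream"})
            out.write(part.content)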
CAUTION: If you would like to download the file in a single chunk, DO NOT make the following API call. It is NOT supported by Anaplan and may result in performance issues. The best practice for large files is to download them in chunks, using the steps described above.

Unsupported single-chunk download:
GET https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}

Delete a file using Anaplan APIs

Files that have been previously uploaded or exported can be deleted using the Anaplan API. In previous sections, you uploaded a file to Anaplan for import and exported a list to a CSV file via APIs. In this section, you will use APIs to delete the exported file.

1. In Postman, under the folder "Delete Files", create a new request and label it "Delete an Export File".
2. Select request method DELETE. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. (Your workspaceID, modelID, and fileId may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click Send.

The response should return Status: 204 No Content. This is an expected response; it means the request was successful, but the response body is empty.
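Scripted, the same deletion is a single DELETE request. A minimal sketch with this article's sample IDs and placeholder credentials:

    # Sketch: delete the exported file via the API.
    import requests

    AUTH = ("user@example.com", "your_password")  # placeholder credentials
    URL = ("https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33"
           "/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001")

    resp = requests.delete(URL, auth=AUTH,
                           headers={"Content-Type": "application/json"})
    print(resp.status_code)  # 204 (No Content) indicates the file was deleted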
Run a process using Anaplan APIs

A process is a sequence of actions; actions such as imports and exports can be included in a process. In an earlier section (Setup), you created a process called "Import & Export a List". In this section, we will execute this process using Anaplan APIs. The following sequence of requests executes a process.

Get a list of processes in a model

1. In Postman, under the folder "Process", create a new request and label it "Get a List of Processes in a Model".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier; refer to the results of the requests under the "Authentication" folder to obtain your workspaceID and modelID. (Your values may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click Send.

The response should return Status: 200 OK, with a body containing the processId and name of each process.

Run a process

1. In Postman, under the folder "Process", create a new request and label it "Run a Process".
2. Select request method POST. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and processId (118000000001) retrieved earlier. (Your workspaceID, modelID, and processId may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes/{processId}/tasks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes/118000000001/tasks
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click the "Body" tab, select the "raw" radio button, and type the following:
{ "localeName": "en_US" }
7. Click Send.

The response should return Status: 200 OK, with a body containing a taskId for the executed process. This taskId can be used to request the status of the process execution.
{
    "taskId": "1573150F0B3A4F9D90676E777FFFB7C1"
}

Get the status of a process task

1. In Postman, under the folder "Process", create a new request and label it "Get Status of a Process".
2. Select request method GET. The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), processId (118000000001), and taskId (1573150F0B3A4F9D90676E777FFFB7C1) retrieved earlier. (Your workspaceID, modelID, processId, and taskId may be different.)
3. Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes/{processId}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes/118000000001/tasks/1573150F0B3A4F9D90676E777FFFB7C1
4. Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
5. Click the "Headers" tab and create the key, value pair Content-Type, application/json.
6. Click Send.

The response should return Status: 200 OK.
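As with imports and exports, the process flow can be scripted end to end. A minimal sketch using this article's sample IDs, placeholder credentials, and the same assumed completion check as the import example (the exact status payload is not shown above):

    # Sketch: run the "Import & Export a List" process and poll its task.
    import time
    import requests

    AUTH = ("user@example.com", "your_password")  # placeholder credentials
    TASKS = ("https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33"
             "/models/92269C17A8404B7A90C536F4642E93DE/processes/118000000001/tasks")

    # POST starts the process and returns a taskId
    task = requests.post(TASKS, auth=AUTH,
                         headers={"Content-Type": "application/json"},
                         json={"localeName": "en_US"}).json()

    # Poll the task until it reports completion (assumed status check)
    while True:
        status = requests.get(f"{TASKS}/{task['taskId']}", auth=AUTH,
                              headers={"Content-Type": "application/json"}).json()
        if "complete" in str(status).lower():
            break
        time.sleep(5)  # arbitrary polling interval
    print(status)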
Conclusion

In this article, you learned the fundamentals of Anaplan integration APIs and their structure, along with step-by-step instructions for calling Anaplan REST APIs to perform various data integration tasks. Attached to this article is an export of the Postman collection in .json format. If you choose to, you can import it into your Postman environment as a solution to the exercises described in this article. You will need to modify various variables (for example, username/password) and endpoints specific to your environment for the solution to run successfully.
Overview

The Anaplan Optimizer aids business planning and decision making by quickly solving complex problems involving millions of combinations to provide a feasible solution. Optimization provides a solution for selected variables within your Anaplan model that matches your objective based on your defined constraints. The Anaplan model must be structured and formatted to enable Optimizer to produce the correct solution. You are welcome to read through the materials and watch the videos on this page, but Optimizer is a premium service offered by Anaplan (contact your Account Executive if you don't see Optimizer as an action on the Settings tab). This means you will not be able to do the training exercises until the feature is turned on in your system.

Training

The training involves an exercise along with documentation and videos to help you complete it. The goal of the exercise is to set up the optimization model for two use cases: network optimization and production optimization. To assist you in this process, we have created an optimization exercise guide that walks you through each of the steps. To help further, we have created three videos you can reference:

- An exercise walk-through
- A demo of each use case
- A demo of setting up dynamic time

Follow the items below in order to understand how Anaplan's optimization process works:

1. Watch the use case video, which demos the Optimizer functionality in Anaplan.
2. Watch the exercise walk-through video.
3. Review the documentation about how Optimizer works within Anaplan.
4. Attempt the Optimizer exercise.
5. Download the exercise walk-through document.
6. Download the Optimizer model into your workspace.
7. Configure Dynamic Time within Optimizer: download the Dynamic Time document and watch the Dynamic Time video.
8. Attempt the Network Optimization exercise.
9. Attempt the Production Optimization exercise.
A revision tag is a snapshot of a model's structural information at a point in time. Revision tags save all of the structural changes made in an application since the last revision tag was stored. By default, Anaplan allows you to add a title and description when creating a revision tag. This article covers:

- Suggestions for naming revision tags
- Creating a revisions tracking list and module

Note: For guidance on when to add revision tags, see When should I add revision tags?

Suggestions for naming revision tags

It's best to define a standard naming convention for your revision tags early in the model-building process. You may want to check with your Anaplan Business Partner or IT group whether there is an existing naming convention that would be best to follow. The following suggestions are designed to ensure consistency when there are a large number of changes or model builders, and to help the team choose the right revision tag when syncing a production application.

Option 1:
X.0 = Major revision/release
X.Y = Minor changes within a release

In this option, 1.0 indicates the first major release. As subsequent minor changes are tagged, they are noted as 1.1, 1.2, and so on, until the next major release: 2.0.

Option 2:
YYYY.X = Major revision/release
YYYY.X.Y = Minor changes within a release

In this option, YYYY indicates the year and X indicates the release number. For example, the first major release of 2017 would be 2017.1. Subsequent minor changes would be tagged 2017.1.1, 2017.1.2, and so on, until the next major release of the year: 2017.2.

Creating a revisions tracking list and module

Revision tag descriptions are only visible from within Settings, which makes it difficult for an end user to know what changes have been made in the current release. There may also be times when you want to store additional information about revisions beyond what is in the revision tag description. To provide release visibility in a production application, consider creating a revisions list and module to store key information about revisions.

Revisions list:
In your Development application, create a list called: Revisions. Do not set this list as Production; you want these list members to be visible in your production model.

Revisions details module:
In your Development application, create a module called: Revisions Details.
- Add your Revisions list
- Remove Time
- Add your line items

Since this module will be used to document release updates and changes, consider which of the following line items may be appropriate:

- Details: What changes were made
- Date: What date this revision tag was created
- Model History ID: What the Model History ID was when this tag was created
- Requested By: Who requested these changes
- Tested By: Who tested these changes
- Tested Date: When these changes were tested
- Approved By: Who signed off on these changes

Note: Standard Selective Access rules apply to your production application. Consider who should be able to see this list and module as part of your application deployment.