PLANS is the new standard for Anaplan modelling; "the way we model". This will cover more than just the formulas and will include and evolve existing best practices around user experience and data hubs. The initial focus is to develop a set of rules on the structure and detailed design of Anaplan models. This set of rules will provide both a clear route to good model design for the individual Anaplanner, and common guidance on which Anaplanners and reviewers can rely when passing models amongst themselves.

In defining the standard, everything we do will consider or be based around:
- Performance – Use the correct structures and formulae to optimize the Hyperblock
- Logical – Build the models and formulae more logically (see D.I.S.C.O. below)
- Auditable – Break up formulae for better understanding, performance and maintainability
- Necessary – Don't duplicate expressions, store reference data and attributes once, no unnecessary calculations
- Sustainable – Build with the future in mind, think about process cycles and updates

The standards will be based around three axes:
- Performance – How do the structures and formulae impact the performance of the system?
- Usability/Auditability – Is the user able to understand how to interact with the functionality?
- Sustainability – Can the solution be easily maintained by model builders and support?

We will define the techniques to use that balance the three areas to ensure the optimal design of Anaplan models and architecture.

D.I.S.C.O.
As part of model and module design we recommend categorizing modules as follows:
- Data – Data hubs, transactional modules, source data; reference everywhere
- Inputs – Design for user entry, minimize the mix of calculations and outputs
- System – Time management, filters, mappings, etc.; reference everywhere
- Calculations – Optimize for performance (turn summaries off, combine structures)
- Outputs – Reporting modules, minimize data flows out

Recommended content:
- Performance: Dimension Order; Formula Optimization in Anaplan; Formula Structure for Performance
- Logical: Best Practices for Module Design
- Auditable: Formula Structure for Performance
- Necessary: Reduce Calculations for Better Performance; Formula Optimization in Anaplan
- Sustainable: Dynamic Cell Access Tips and Tricks; Dynamic Cell Access - Learning App; Personal Dashboards Tips and Tricks; Time Range Application
- Ask Me Anything (AMA) sessions
Overview
Guide for the new Statistical Forecasting Calculation Engine models (monthly and weekly). Includes enablement videos, a practice data import exercise, model documentation, and specific steps for using the model in implementations.

1. Enablement Videos & Practice Exercise
- 1a Intro and Overview Video – Model overview and review of new key features. (Video below)
- 1b Initial Model & Data Import Steps – Steps on how to set up the model, product hierarchy, customer list, and multi-level forecast analysis. (Video below)
- 1c Practice Exercise: Import data to set up the stat forecast – Two sets of load files are included to practice setup for a single-level product set or a multi-level product set with customers, product, and brand levels. Start on the "Initial App Setup" dashboard and load either the Single OR Multi Level files into the model; use the Import video as a guide if needed. (.zip file attached)

2. Documentation
- 2a Lucidchart Process Maps – The Lucidchart Process Map document includes a high-level process flow for end-user navigation and detailed tabs for each section. Details & links are also on the "Training & Enablement" dashboard. (Process Maps)
- 2b High Level Process Map PDF – High-level process map in PDF format. (Attached)
- 2c Forecast Methods PDFs – A high-level version with the forecast algorithms list and overview, plus a detailed version that includes a slide for each forecast method: method overview, advantages/disadvantages, equation, and graph example output. These slides are also included on the "Forecast Methods Overview & Formulas" dashboard. (Attached)

3. Implementation Specifics
- 3a Training & Enablement Dashboard – The Training & Enablement dashboard contains details on process map navigation.
- 3b Initial Model Setup – The current model is staged with chocolate data from the data hub; execute the CLEAR MODEL action prior to loading customer-specific data.
- 3c Changing Model Time Scale (align Native & Dynamic Time Settings) – If a Time Settings change is required, review the Initial App Setup dashboard to align Native Time with the Dynamic Time setup in the model.
- 3d Monthly Update Process – After initial setup, use the Monthly Data History Upload dashboard to update prior-period actuals and settings.
- 3e Single Level vs. Multi-Level Forecast Setup – Two implementation options and when to use them. Single Level Forecast: forecast at one level of the product hierarchy (i.e. all stat forecasts calculated at the Item level); most use cases will leverage the single-level forecast setup. Multi-Level Forecast: the ability to forecast at different levels of the product hierarchy (i.e. Top Item | Customers, Item, and Brand level can all have stat forecasts generated); this requires a complex forecast reconciliation process, so review the "Multi-Level Forecast Overview" dashboard if this process is needed.
- 3f Troubleshooting Tips – Follow the troubleshooting tips on the Training & Enablement dashboard if you have issues with the stat forecast generating, before reaching out for support.
- 3g Model Notes & Documentation – Module notes include the DISCO classification and module purpose.
- 3h "Do Not Modify" Items – Module notes contain DO NOT MODIFY for items that should not be changed during the implementation process.
- 3i User Roles & Selective Access – Demo, Demand Planner, and Demand Planning Manager roles can be adjusted. After the Selective Access process is run on the Flat List Management dashboard, users can be given access to certain product groups, brands, etc.
- 3j Batch Processing – Details on daily batch processing and how to prepare a roadmap of your batch processes (files, queries, import actions/processes in Anaplan); see attachment.

4. Videos
- Intro and Overview Video
- Data Import and Setup Steps

5. Model Download Links
- Monthly Statistical Forecasting Calculation Engine
- Weekly Statistical Forecasting Calculation Engine
Learn how small changes can lead to dramatic improvements in model calculations.
There are two different types of distributed models to consider as early as possible when a client chooses to implement Anaplan:

Split models
A split model is where one model, known as the primary model, is partitioned into multiple satellite models that contain the exact same structure or metadata (such as versions and dimensions) as the primary model. The split models will be roughly 90% identical to the primary model, with about a 10% difference. The split model method is most common when a client's workspace involves multiple regions. For example, the primary model may contain three different product lines. Region 1 sells product lines A and B, while Region 2 sells only product C. In this case, a split model may provide consistency in structure across the models, but variation in the product lines, since not all product lists are applicable to each region.

ALM application: Split models
For split models, ALM allows clients to maintain the primary model as well as all satellite models in their workspace using one development model. Clients may make changes to their development model and then deploy updates to their live models without disrupting the application cycle.

Similar models
Similar models are models that vary slightly in structure or metadata. The degree of difference is usually less than 5%. If it gets to be greater than this, or there is a greater difference in user experiences, it may be impractical to use similar models. For example, you could use the similar models method if you have multiple regions that must view the same data, ideally from a master data hub.

ALM application: Similar models
For similar models, ALM requires clients to maintain one development model for each similar model in use. As with split models, each development model may be edited, tested, and then deployed to the production model without disrupting the application cycle.
You can interact with the data in your models using Anaplan's RESTful API. This enables you to securely import and export data, as well as run actions, from any programmatic client you choose. The API can be leveraged in any custom integration, allowing a wide range of integration solutions to be implemented. Completing an integration using the Anaplan API is a technical process that requires significant work by an individual with programming experience. Visit the links below to learn more:
- API Documentation
- Anaplan API Guide
You can also view demonstration videos to understand how to implement the APIs in your custom integration client. The videos below give step-by-step guides to sequencing API calls for exporting data from Anaplan, importing data into Anaplan, and running delete actions and Anaplan processes:
- API sequence for uploading a file to Anaplan and running an import action
- API sequence for running an export action and downloading a file from Anaplan
- API sequence for running an Anaplan process and a delete action
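As a rough illustration of the first sequence (upload a file, run an import action, and check the task), here is a minimal Python sketch using the requests library and the v1.3 endpoints documented in the walkthrough further down this page. The workspace, model, file, and import IDs, the credentials, and the file name are placeholders you would look up or replace, and the response fields are assumed to match the JSON samples shown in that walkthrough.

import requests

BASE = "https://api.anaplan.com/1/3"
AUTH = ("user@example.com", "password")        # placeholder Anaplan credentials
WS, MODEL = "<workspaceID>", "<modelID>"       # look these up via GET /workspaces and /models
FILE_ID, IMPORT_ID = "<fileID>", "<importID>"  # look these up via GET .../files and .../imports

# 1. Upload the source file as a single chunk (binary body)
with open("Customers.csv", "rb") as f:
    upload = requests.put(
        f"{BASE}/workspaces/{WS}/models/{MODEL}/files/{FILE_ID}",
        auth=AUTH,
        headers={"Content-Type": "application/octet-stream"},
        data=f.read(),
    )
upload.raise_for_status()  # a 204 No Content response means the upload succeeded

# 2. Trigger the import action against the uploaded file
run = requests.post(
    f"{BASE}/workspaces/{WS}/models/{MODEL}/imports/{IMPORT_ID}/tasks",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    json={"localeName": "en_US"},
)
run.raise_for_status()
task_id = run.json()["taskId"]

# 3. Check the status of the import task
status = requests.get(
    f"{BASE}/workspaces/{WS}/models/{MODEL}/imports/{IMPORT_ID}/tasks/{task_id}",
    auth=AUTH,
    headers={"Accept": "application/json"},
)
print(status.json())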
Introduction
Data integration is a set of processes that bring data from disparate sources into Anaplan models. These processes can include activities that help you understand the data (data profiling), cleanse and standardize data (data quality), and transform and load data (ETL). Anaplan offers the following data integration options:
- Manual import
- Anaplan Connect
- Extract, Transform & Load (ETL)
- REST API
The Anaplan learning center offers several on-demand courses on Anaplan's data integration options, including:
- Data Integration
- Anaplan Data Integration Basics (303)
- Anaplan Connect (301)
- Hyperconnect
This article presents step-by-step instructions on different integration tasks that can be performed using the Anaplan integration APIs. These tasks include:
- Import data into Anaplan
- Export data from Anaplan
- Run a process
- Download files
- Delete files

Setup
Install & configure Postman
- Download the latest Postman application for your platform (ex: Mac, Windows, Linux) from https://www.getpostman.com/apps. Installation instructions for your platform are available on the same site.
- Postman account: Signing up for a Postman account is optional. However, having an account gives you the additional benefits of backing up your history, collections, environments, and header presets (ex: authorization credentials).

Download files
You may follow the instructions provided in this article against your own instance of the Anaplan platform. You will need to download a set of files for these exercises:
- Customers.csv: Download the .csv file to a directory on your workstation. This file consists of a list of customers you will import into a list using the Anaplan integration APIs.
- Anaplan Community REST API Solution.txt: This is an export (json) from Postman that contains the solution to the exercises outlined in this article. You may choose to import this file into Postman to review the solution. Although the file extension is .txt, it is a json file that can be imported into Postman.

Anaplan setup
The Anaplan RESTful API Import call allows you to bring data into Anaplan. This is done by using the POST HTTP verb to call an import, which means an import action must exist in Anaplan prior to the API call. Initially, you will import Customers.csv into Anaplan using the application; subsequent imports into this list will be carried out via API calls.
- Create a new model named Data Integration API.
- Import Customers.csv: Create a list named Customers. Using the Anaplan application, import Customers.csv into the Customers list. Set the file options, map each column to a property in the list, and run the import. 31 records should be imported into the list.
- Create an export action: In this article, you will also learn how to export data from Anaplan using the APIs. The Anaplan Export API calls an export action that was previously created. Therefore, create an export of the Customers list and save the export definition. This will create an export action (ex: Grid – Customers.csv). Note: Set the file type to .csv in the export action. You may choose to rename the export action under Settings > Actions > Exports.
- Create a process: Along with import and export, you will also learn how to leverage the APIs to call an Anaplan process. Create a process that calls the import (ex: Import Customers from Customers.csv) first, followed by the export (ex: Grid – Customers.csv). Name the process "Import & Export a List".
Anaplan Integration API Fundamentals
The Anaplan integration APIs (v1.3) are RESTful APIs that allow requests to be made via HTTPS using the GET, PUT, POST, and DELETE verbs. Using these APIs, you can perform integration tasks such as:
- Import data into a module/list
- Export data from a module/list
- Upload files for import
- Run an Anaplan process
- Download files that have been uploaded or files that were created during an export
- Delete from a list using a selection
Endpoints enable you to obtain information regarding workspaces, models, imports, exports, processes, and so on. Many endpoints contain a chain of parameters.

Example
We want to get a list of models in a workspace. In order to get a list of models, we first need to select the workspace the model belongs to.
- Obtain the base URI for the Anaplan API. The base URI for the Anaplan integration API is https://api.anaplan.com
- Select the version of the API that will be used in API calls. This article is based on version 1.3, so the updated base URI is https://api.anaplan.com/1/3
- Retrieve a list of workspaces you have access to: GET <base URI>/workspaces, where <base URI> is https://api.anaplan.com/1/3
  GET https://api.anaplan.com/1/3/workspaces
  The above GET call returns a guid and name for each workspace the user has access to:
  {
    "guid": "8a81b09d5e8c6f27015ece3402487d33",
    "name": "Pavan Marpaka"
  }
- Retrieve a list of models in a selected workspace by providing {guid} as a parameter value:
  https://api.anaplan.com/1/3/workspaces/{guid}/models
  https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models

Chaining parameters
Many endpoints contain a set of parameters that can be chained together in a request. For example, to get a list of import actions we can chain together workspaceId and modelId as parameters in a GET request. A request to get a list of import actions might look like:
https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports
The following sequence of requests needs to be made to get a list of import actions in a selected model (a Python sketch of this chained sequence appears at the end of this section):
- GET a list of workspaces the user has access to: https://api.anaplan.com/1/3/workspaces
- Select a workspaceID (guid) from the result.
- GET a list of models in the workspace, providing the workspaceID as a parameter value: https://api.anaplan.com/1/3/workspaces/{workspaceID}/models
- Select a modelID from the result.
- GET a list of imports from the model in the workspace: https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports

Formats
The format for most requests and responses is application/json. The exceptions are uploading files in a single chunk or multiple chunks and getting data in a chunk; these requests use the application/octet-stream format. These formats are specified in the header of an API request, and they are also specified in the header of a response.

Data Integration with Anaplan APIs & Postman
Background
The next few sections provide step-by-step instructions on how to perform different data integration tasks via Anaplan integration API requests. You will perform the following data integration tasks using the Anaplan APIs:
- Upload file(s) to Anaplan
- Import data into a list
- Export data from a list
- Download a file that has been uploaded or exported
- Run an Anaplan process
- Delete uploaded file(s)
The Postman application, an HTTP client for making RESTful API calls, will be used to perform these integration tasks.
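For comparison with the Postman steps, here is a minimal Python sketch of the same parameter chaining using the requests library. It assumes basic authentication and that the v1.3 responses are JSON arrays shaped like the samples above; the credentials and the choice of the first list element are placeholders you would replace with your own selection logic.

import requests

BASE = "https://api.anaplan.com/1/3"
AUTH = ("user@example.com", "password")  # placeholder Anaplan credentials

# 1. Workspaces the authenticated user can access
workspaces = requests.get(f"{BASE}/workspaces", auth=AUTH).json()
workspace_id = workspaces[0]["guid"]

# 2. Models in the chosen workspace (workspaceID chained into the path)
models = requests.get(f"{BASE}/workspaces/{workspace_id}/models", auth=AUTH).json()
model_id = models[0]["id"]

# 3. Import actions in the chosen model (workspaceID and modelID chained together)
imports = requests.get(
    f"{BASE}/workspaces/{workspace_id}/models/{model_id}/imports",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
).json()

for action in imports:
    print(action["id"], action["name"])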
You should have installed and configured Postman on your workstation using the instructions provided at the beginning of this article. You may follow the steps outlined in the next few sections, or import the Postman collection (json file) provided with this article.

Navigating the Postman UI
This section presents the basics of the Postman user interface (UI). You will learn how to perform the simple tasks required to make API calls:
- Create a new collection
- Add a folder
- Add a request
- Submit a request: select a request method (GET, POST, PUT, DELETE), specify a resource URI, and specify Authorization, Headers, and Body (raw, binary)
You will perform these steps repeatedly for each integration task.

Create a new collection
- From the orange New drop-down, select "Collection".
- Provide a name for the collection (ex: Data Integration API).
- Click Create.

Add folders
- Create the following folders in the collection: Authentication, Upload, Import, Export, Download Files, Process, Delete Files.

Add a request
You don't need to perform this step right now. The following steps outline how a request can be added to a folder; you will use them each time a new request is created.
- Select the folder where you want to add a new request.
- From the folder's menu, select Add Request.
- Provide a request name and click Save.

Submit a request
- Select a request method (GET, PUT, POST, DELETE).
- Provide a resource URI (ex: https://api.anaplan.com/1/3/workspaces).
- Click the Authorization tab and select "Basic Auth" for Authorization Type. Provide your Anaplan credentials (username & password).
- Provide the necessary headers. Common headers include Authorization (pre-populated from the Authorization tab) and Content-Type.
- Some requests also require a Body. Information on the body is available in the API documentation.
- Click Send.

Import data into a list using Anaplan APIs
One of the core data integration tasks is to bring data into Anaplan. The most common method is the Import feature in the Anaplan application. Once an import has been run, an import action is created, and that import action can be executed via an API request. Earlier, you imported Customers.csv into the Customers list using the application. In this section, you will use the Anaplan integration APIs to import customer data into that list. The following sequence of requests will be made to import data into the list.

Get a list of workspaces
- In Postman, under the "Authentication" folder, create a new request and label it "GET List of Workspaces".
- Select request method GET.
- Type https://api.anaplan.com/1/3/workspaces for the resource URI.
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click Send.
The response should return Status: 200 OK with a body containing a guid and name for each workspace; "guid" is the workspaceID. In the sample result, the workspaceID for the workspace "Pavan Marpaka" is 8a81b09d5e8c6f27015ece3402487d33. This workspaceID will be passed as an input parameter in the next request, GET List of Models in a Workspace.

Get a list of models in a workspace
- In Postman, under the "Authentication" folder, create a new request and label it "GET List of Models in a Workspace".
- Select request method GET.
- The input parameter for this request is the workspaceID (8a81b09d5e8c6f27015ece3402487d33) retrieved in the last request.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 200 OK with a body containing activeState, id, and name for each model; "id" is the modelID, which will be passed as an input parameter in subsequent requests. In the result shown (your result may vary), "Top 15 DI API" is the model name and 92269C17A8404B7A90C536F4642E93DE is the modelID.

Get a list of files
- In Postman, under the "Upload" folder, create a new request and label it "GET List of Files and FileID".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved in the previous requests.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 200 OK with a body containing the id and name of the files that were either previously uploaded or exported. In the result shown (your result may vary), the fileID is 113000000001. This fileID will be passed as an input parameter in the next request (PUT), which uploads the file Customers.csv.

Upload a file
- In Postman, under the "Upload" folder, create a new request and label it "Upload File".
- Select request method PUT.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileID (113000000001) retrieved in the previous requests.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileID} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/113000000001
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/octet-stream.
- Click the "Body" tab, select the "binary" radio button, and click "Choose Files" to select the Customers.csv file you downloaded earlier.
- Click Send.
The response should return Status: 204 No Content. This is the expected response; it means the request was successful, but the response body is empty.

Get a list of import actions in a model
- In Postman, under the "Import" folder, create a new request and label it "GET a list of Import Actions".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. (Note: your workspaceID and modelID may be different.)
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 200 OK. In the body, "id" is the importID (112000000001). This value will be passed as an input parameter to a POST request in the next step. The POST request calls the import action that imports data from the uploaded Customers.csv into the list.

Call an import action
- In Postman, under the "Import" folder, create a new request and label it "Call an Import Action".
- Select request method POST.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and importID (112000000001) retrieved in the previous requests. (Note: your workspaceID, modelID, and importID may be different.)
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports/{importID}/tasks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports/112000000001/tasks
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click the "Body" tab, select "raw", and type the following:
{
  "localeName": "en_US"
}
- Click Send.
The response should return Status: 200 OK. The "taskId" for the import is returned as a json object. This task id can be used to check the status of the import.
{
  "taskId": "2D88EBAA093B4D4C9603DD9278521EBC"
}

Check the status of an import call
- In Postman, under the "Import" folder, create a new request and label it "Check Status of Import Call".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), importID (112000000001), and taskId (2D88EBAA093B4D4C9603DD9278521EBC) retrieved in the previous requests. (Note: your workspaceID, modelID, importID, and taskId may be different.)
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports/{importID}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports/112000000001/tasks/2D88EBAA093B4D4C9603DD9278521EBC
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Accept, application/json.
- Click Send.
The response should return Status: 200 OK and include a "Complete" status, the number of records, and a value of "true" for "successful".

Validate the import in Anaplan
In the Anaplan application, validate that the Customers list now contains the list of customers.

Export data using Anaplan APIs
An export definition can be saved for later use. Saved export definitions can be viewed under Settings > Actions > Exports. Earlier, in the Setup section, you exported the Customers list and saved the export definition. This should have created an export action (ex: Grid – Customers.csv). In this section, we will use the Anaplan APIs to execute that export action. The following sequence of requests will be made to export data.
Get a list of export definitions
- In Postman, under the "Export" folder, create a new request and label it "Get a list of Export Definitions".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. Refer to the results of the requests under the "Authentication" folder to obtain your workspaceId and modelId.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 200 OK with a body consisting of the id and name of each export action.

Run the export
- In Postman, under the "Export" folder, create a new request and label it "Run the export".
- Select request method POST.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and exportId (116000000001) retrieved in the previous request.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports/{exportId}/tasks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports/116000000001/tasks
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click the "Body" tab, select the "raw" radio button, and type the following:
{
  "localeName": "en_US"
}
- Click Send.
The response should return Status: 200 OK with a taskId in the body. The taskId can be used to determine the status of the export.
{
  "taskId": "29B4617C3D8646018B269F428AC396A3"
}

Get the status of an export task
- In Postman, under the "Export" folder, create a new request and label it "Get status of an export task".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), exportId (116000000001), and taskId (29B4617C3D8646018B269F428AC396A3) retrieved in the previous requests. (Note: your workspaceID, modelID, exportId, and taskId may be different.)
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports/{exportId}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports/116000000001/tasks/29B4617C3D8646018B269F428AC396A3
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 200 OK with the task status in the body.

Download a file using Anaplan APIs
Files that have been previously uploaded or exported can be downloaded using the Anaplan API. In the previous section, you exported the list to a csv file via the APIs. In this section, you will use the APIs to download the exported file. The following sequence of requests will be made to download files.
Get a list of files
- In Postman, under the "Download Files" folder, create a new request and label it "Get a list of files".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. Refer to the results of the requests under the "Authentication" folder to obtain your workspaceId and modelId. Your workspaceId and modelId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 200 OK with information about the available files in json format. "id" is the fileId, which will be passed as an input parameter in the next request to download the file.

Get the chunkID and name of a file
- In Postman, under the "Download Files" folder, create a new request and label it "Get chunkID and Name of a file".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. Your workspaceId, modelId, and fileId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}/chunks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001/chunks
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Accept, application/json.
- Click Send.
The response should return Status: 200 OK with the chunkID and chunk name in json format.

Get a chunk of data
- In Postman, under the "Download Files" folder, create a new request and label it "Get a chunk of data".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. Your workspaceId, modelId, and fileId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}/chunks/{chunkID} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001/chunks/0
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Accept, application/octet-stream.
- Click Send.
The response should return Status: 200 OK with the data in csv format in the body.

Repeat
Repeat the above step for each chunkID returned from the "Get chunkID and Name" API call.

Concatenate chunks into a single file
After collecting the data from all the chunks, concatenate the chunks into a single output file.

CAUTION: If you would like to download the file in a single chunk, DO NOT make the following API call. It is NOT supported by Anaplan and may result in performance issues.
GET https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}
Best practice for large files is to download the files in chunks using the steps described above.

Delete a file using Anaplan APIs
Files that have been previously uploaded or exported can be deleted using the Anaplan API. In previous sections, you uploaded a file to Anaplan for import and exported a list to a csv file via the APIs. In this section, you will use the APIs to delete the exported file.
- In Postman, under the "Delete Files" folder, create a new request and label it "Delete an export file".
- Select request method DELETE.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. Your workspaceId, modelId, and fileId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 204 No Content. This is the expected response; it means the request was successful, but the response body is empty.

Run a process using Anaplan APIs
A process is a sequence of actions. Actions such as imports and exports can be included in a process. In an earlier section (Setup), you created a process called "Import & Export a List". In this section, we will execute this process using the Anaplan APIs. The following sequence of requests will be made to execute a process.

Get a list of processes in a model
- In Postman, under the "Process" folder, create a new request and label it "Get a list of Processes in a model".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. Refer to the results of the requests under the "Authentication" folder to obtain your workspaceId and modelId. Your workspaceId and modelId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 200 OK with the processId and name of each process in the body.

Run a process
- In Postman, under the "Process" folder, create a new request and label it "Run a Process".
- Select request method POST.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and processId (118000000001) retrieved earlier. Your workspaceId, modelId, and processId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes/{processId}/tasks for the resource URI.
- Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes/118000000001/tasks
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click the "Body" tab, select the "raw" radio button, and type the following:
{
  "localeName": "en_US"
}
- Click Send.
The response should return Status: 200 OK with a taskId for the executed process in the body. This taskId can be used to request the status of the process execution.
{
  "taskId": "1573150F0B3A4F9D90676E777FFFB7C1"
}

Get the status of a process task
- In Postman, under the "Process" folder, create a new request and label it "Get status of a process".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), processId (118000000001), and taskId (1573150F0B3A4F9D90676E777FFFB7C1) retrieved earlier. Your workspaceId, modelId, processId, and taskId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes/{processId}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes/118000000001/tasks/1573150F0B3A4F9D90676E777FFFB7C1
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.
The response should return Status: 200 OK.

Conclusion
In this article, you learned the fundamentals of the Anaplan integration APIs and their structure. You were also presented with step-by-step instructions on how to call the Anaplan REST APIs to perform various data integration tasks. Attached to this article is an export of the Postman collection in .json format. If you choose to, you may import this export into your Postman environment for the solution to the exercises described in this article. You will need to modify various variables (ex: username/password) and endpoints specific to your environment for the solution to run successfully.
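For readers who want to script these steps rather than run them in Postman, the sketch below strings together the export, chunked download, and concatenation steps described above using Python's requests library. The endpoints match the v1.3 URIs used in this article; the credentials and IDs are placeholders, the completion check on the task response is a crude assumption you should adjust to your actual response body, and it relies on the exported file sharing the export action's ID (as in the examples above, where both are 116000000001).

import time
import requests

BASE = "https://api.anaplan.com/1/3"
AUTH = ("user@example.com", "password")   # placeholder Anaplan credentials
WS, MODEL = "<workspaceID>", "<modelID>"
EXPORT_ID = "<exportId>"                  # from GET .../exports

# 1. Run the export action
run = requests.post(
    f"{BASE}/workspaces/{WS}/models/{MODEL}/exports/{EXPORT_ID}/tasks",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    json={"localeName": "en_US"},
)
task_id = run.json()["taskId"]

# 2. Poll the export task until it reports completion
while True:
    status = requests.get(
        f"{BASE}/workspaces/{WS}/models/{MODEL}/exports/{EXPORT_ID}/tasks/{task_id}",
        auth=AUTH,
        headers={"Content-Type": "application/json"},
    ).json()
    if "COMPLETE" in str(status).upper():  # crude check; inspect your actual response body
        break
    time.sleep(2)

# 3. List the chunks of the exported file, download each, and concatenate locally
FILE_ID = EXPORT_ID  # assumption: the export writes to a file with the same ID as the action
chunks = requests.get(
    f"{BASE}/workspaces/{WS}/models/{MODEL}/files/{FILE_ID}/chunks",
    auth=AUTH,
    headers={"Accept": "application/json"},
).json()

with open("export_output.csv", "wb") as out:
    for chunk in chunks:
        part = requests.get(
            f"{BASE}/workspaces/{WS}/models/{MODEL}/files/{FILE_ID}/chunks/{chunk['id']}",
            auth=AUTH,
            headers={"Accept": "application/octet-stream"},
        )
        out.write(part.content)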
Creates the Java KeyStore required for Anaplan Connect 1.4
In most use cases, a single model provides the solution you are seeking, but there are times it makes sense to separate, or distribute, models rather than have them in a single instance. The following articles provide insight that can help you during the design process to determine if a distributed model is needed:
- What is Application Lifecycle Management (ALM)?
- What types of distributed models are there?
- When should I consider a distributed model?
- How do changes to the primary model impact distributed models?
- What should I do after building a distributed model?
I recently posted a Python library for version 1.3 of our API. With the GA announcement of API 2.0, I'm sharing a new library that works with these endpoints. Like the previous library, it supports certificate authentication; however, it requires the private key in a particular format (documented in the code, and below). I'm also pleased to announce that use of a Java keystore is now supported.

Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction. This library is a work in progress and will be updated with new features once they have been tested.

Getting Started
The attached Python library serves as a wrapper for interacting with the Anaplan API. This article explains how you can use the library to automate many of the requests that are available in our Apiary, which can be found at https://anaplanbulkapi20.docs.apiary.io/#. This article assumes you have the requests and M2Crypto modules installed, as well as Python 3.7. Please make sure you are installing these modules with Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python (if you are using a Python version older or newer than 3.7 we cannot guarantee the validity of the article), Requests, and M2Crypto.

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Gathering the Necessary Information
In order to use this library, the following information is required:
- Anaplan model ID
- Anaplan workspace ID
- Anaplan action ID
- CA certificate key-pair (private key and public certificate), or username and password
There are two ways to obtain the model and workspace IDs:
- While the model is open, go to Help > About
- Select the workspace and model IDs from the URL

Authentication
Every API request is required to supply valid authentication. There are two ways to authenticate:
- Certificate authentication
- Basic authentication
For full details about CA certificates, please refer to our Anapedia article. Basic authentication uses your Anaplan username and password.

To create a connection with this library, define the authentication type and details, and the Anaplan workspace and model IDs:

Certificate files:
conn = AnaplanConnection(anaplan.generate_authorization("Certificate","<path to private key>", "<path to public certificate>"), "<workspace ID>", "<model ID>")

Basic:
conn = AnaplanConnection(anaplan.generate_authorization("Basic","<Anaplan username>", "<Anaplan password>"), "<workspace ID>", "<model ID>")

Java keystore:
from anaplan_auth import get_keystore_pair
key_pair=get_keystore_pair('/Users/jessewilson/Documents/Certificates/my_keystore.jks', '<passphrase>', '<key alias>', '<key passphrase>')
privKey=key_pair[0]
pubCert=key_pair[1]
#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

Note: In the above code, you must import the get_keystore_pair method from the anaplan_auth module in order to pull the private key and public certificate details from the keystore.

Getting Anaplan Resource Information
You can use this library to get the necessary file or action IDs.
This library builds a Python key-value dictionary, which you can search to obtain the desired information.

Example:
list_of_files = anaplan.get_list(conn, "files")
files_dict = anaplan_resource_dictionary.build_id_dict(list_of_files)

This code builds a dictionary with the file name as the key. The following code returns the ID of the file:
users_file_id = anaplan_resource_dictionary.get_id(files_dict, "file name")
print(users_file_id)

To build a dictionary of other resources, replace "files" with the desired resource: actions, exports, imports, processes. You can use this functionality to easily refer to objects (workspace, model, action, file) by name, rather than ID.

Example:
#Fetch the name of the process to run
process=input("Enter name of process to run: ")
start = datetime.utcnow()
with open('/Users/jessewilson/Desktop/Test results.txt', 'w+') as file:
    file.write(anaplan.execute_action(conn, str(ard.get_id(ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes"), process)), 1))
file.close()
end = datetime.utcnow()

The code above prompts for a process name, queries the Anaplan model for a list of processes, builds a key-value dictionary based on the resource name, searches that dictionary for the user-provided name, executes the action, and writes the results to a local file.

Uploads
You can upload a file of any size and define a chunk size up to 50mb. The library loops through the file or memory buffer, reading chunks of the specified size and uploading them to the Anaplan model.

Flat file:
upload = anaplan.file_upload(conn, "<file ID>", <chunkSize (1-50)>, "<path to file>")

"Streamed" file:
with open('/Users/jessewilson/Documents/countries.csv', "rt") as f:
    buf=f.read()
f.close()
print(anaplan.stream_upload(conn, "113000000000", buf))
print(anaplan.stream_upload(conn, "113000000000", "", complete=True))

The above code reads a flat file and saves the data to a buffer (this can be replaced with any data source; it does not necessarily need to read from a file). This data is then passed to the "streaming" upload method. This method does not accept the chunk size input; instead, it simply ensures that the data in the buffer is less than 50mb before uploading. You are responsible for ensuring that the data you've extracted is appropriately split. Once you've finished uploading the data, you must make one final call to mark the file as complete and ready for use by Anaplan actions.

Executing Actions
You can run any Anaplan action with this script and define a number of times to retry the request if there's a problem. In order to execute an Anaplan action, the ID is required. To execute, all that is required is the following:

run_job = execute_action(conn, "<action ID>", "<retryCount>")
print(run_job)

This will run the desired action, loop until complete, then print the results to the screen. If failure dump(s) exist, they will also be returned.

Example output:
Process action 112000000082 completed. Failure: True
Process action 112000000079 completed.
Failure: True
Details:
hierarchyName: Worker Report
successRowCount: 0
successCreateCount: 0
successUpdateCount: 0
warningsRowCount: 435
warningsCreateCount: 0
warningsUpdateCount: 435
failedCount: 4
ignoredCount: 0
totalRowCount: 439
totalCreateCount: 0
totalUpdateCount: 435
invalidCount: 4
updatedCount: 435
renamedCount: 435
createdCount: 0
lineItemName: Code
rowCount: 0
ignoredCount: 435

Failure dump(s):
Error dump for 112000000082
"_Status_","Employees","Parent","Code","Prop1","Prop2","_Line_","_Error_1_"
"E","Test User 2","All employees","","101.1a","1.0","2","Error parsing key for this row; no values"
"W","Jesse Wilson","All employees","a004100000HnINpAAN","","0.0","3","Invalid parent"
"W","Alec","All employees","a004100000HnINzAAN","","0.0","4","Invalid parent"
"E","Alec 2","All employees","","","0.0","5","Error parsing key for this row; no values"
"W","Test 2","All employees","a004100000HnIO9AAN","","0.0","6","Invalid parent"
"E","Jesse Wilson - To Delete","All employees","","","0.0","7","Error parsing key for this row; no values"
"W","#1725","All employees","69001","","0.0","8","Invalid parent"
[...]
"W","#2156","All employees","21001","","0.0","439","Invalid parent"
"E","All employees","","","","","440","Error parsing key for this row; no values"
Error dump for 112000000079
"Worker Report","Code","Value 1","_Line_","_Error_1_"
"Jesse Wilson","a004100000HnINpAAN","0","434","Item not located in Worker Report list: Jesse Wilson"
"Alec","a004100000HnINzAAN","0","435","Item not located in Worker Report list: Alec"
"Test 2","a004100000HnIO9AAN","0","436","Item not located in Worker Report list: Test 2"

Downloading a File
If the above code is used to execute an export action, the file will not be downloaded automatically. To get the file, use the following:

download = get_file(conn, "<file ID>", "<path to local file>")
print(download)

This will save the file to the desired location on the local machine (or a mounted network share folder) and alert you once the download is complete, or warn you if there is an error.

Get Available Workspaces and Models
API 2.0 introduced a new means of fetching the workspaces and models available to a given user. You can use this library to build a key-value dictionary (as above) for these resources.

#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

#Setting session variables
uid = anaplan.get_user_id(conn)

#Fetch models and workspaces the account may access
workspaces = ard.build_id_dict(anaplan.get_workspaces(conn, uid), "workspaces")
models = ard.build_id_dict(anaplan.get_models(conn, uid), "models")

#Select workspace and model to use
while True:
    workspace_name=input("Enter workspace name to use (Enter ? to list available workspaces): ")
    if workspace_name == '?':
        for key in workspaces:
            print(key)
    else:
        break

while True:
    model_name=input("Enter model name to use (Enter ? to list available models): ")
    if model_name == '?':
        for key in models:
            print(key)
    else:
        break

#Extract workspace and model IDs from dictionaries
workspace_id = ard.get_id(workspaces, workspace_name)
model_id = ard.get_id(models, model_name)

#Updating AnaplanConnection object
conn.modelGuid=model_id
conn.workspaceGuid=workspace_id

The above code will create an AnaplanConnection instance with only the user authentication defined. It queries the API to return the ID of the user in question, then queries for the available workspaces and models, and builds a dictionary with these results.
You can then enter the name of the workspace and model you wish to use (or print to screen all available), then finally update the AnaplanConnection instance to be used in all future requests.
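Putting the pieces above together, here is a minimal end-to-end sketch that composes only the calls shown in this article (generate_authorization, file_upload, execute_action, and get_file). The module and import names follow the snippets above and are assumptions, as are the placeholder IDs, paths, and retry count; adjust them to match the library as you have installed it.

import anaplan
from anaplan import AnaplanConnection  # assumed import path, per the snippets above

# Placeholders: substitute your own credentials, workspace/model IDs, and file paths
conn = AnaplanConnection(
    anaplan.generate_authorization("Basic", "<Anaplan username>", "<Anaplan password>"),
    "<workspace ID>",
    "<model ID>",
)

# 1. Upload the source file in 10 MB chunks
print(anaplan.file_upload(conn, "<file ID>", 10, "/path/to/source.csv"))

# 2. Run the import action, retrying up to 3 times if there is a problem
print(anaplan.execute_action(conn, "<import action ID>", 3))

# 3. Run an export action, then download the resulting file locally
print(anaplan.execute_action(conn, "<export action ID>", 3))
print(anaplan.get_file(conn, "<export file ID>", "/path/to/output.csv"))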
Overview
The Anaplan Optimizer aids business planning and decision making by quickly solving complex problems involving millions of combinations to provide a feasible solution. Optimization provides a solution for selected variables within your Anaplan model that matches your objective based on your defined constraints. The Anaplan model must be structured and formatted to enable Optimizer to produce the correct solution. You are welcome to read through the materials and watch the videos on this page, but Optimizer is a premium service offered by Anaplan (contact your Account Executive if you don't see Optimizer as an action on the Settings tab). This means that you will not be able to actually do the training exercises until the feature is turned on in your system.

Training
The training involves an exercise along with documentation and videos to help you complete it. The goal of the exercise is to set up the optimization for two use cases: network optimization and production optimization. To assist you in this process, we have created an optimization exercise guide document that walks you through each of the steps. To further help, we have created three videos you can reference:
- An exercise walk-through
- A demo of each use case
- A demo of setting up dynamic time

Follow the order of the items listed below to assist with understanding how Anaplan's optimization process works:
- Watch the use case video, which demos the Optimizer functionality in Anaplan
- Watch the exercise walkthrough video
- Review the documentation about how Optimizer works within Anaplan
- Attempt the Optimizer exercise
- Download the exercise walkthrough document
- Download the Optimizer model into your workspace
- Learn how to configure Dynamic Time within Optimizer: download the Dynamic Time document and watch the Dynamic Time video
- Attempt the Network Optimization exercise
- Attempt the Production Optimization exercise
The Bring Your Own Key (BYOK) application now lets you take ownership of the encryption keys for your model data. If you have access to the Anaplan Administration tool, you can encrypt and decrypt selected workspaces using your own AES-256 keys. Unlike system master keys, the keys created with BYOK belong to you, and you are responsible for keeping them secure. There is no mechanism that allows Anaplan personnel to access your keys. See the Bring Your Own Key (BYOK) user guide. Bring Your Own Key (BYOK) is an add-on product that your organization can purchase if it has the Enterprise edition.
Bring Your Own Key (BYOK) is now available. This enables designated Encryption Administrators to encrypt model data using your organization's encryption keys. For more information, see Bring Your Own Key in Anapedia. Note: Bring Your Own Key is an additional product that your organization can purchase if it has the Enterprise edition.

Best practices
This section contains some best practices to follow when using BYOK.

Development practices
- Identify or create a workspace that does not contain any essential model data.
- Encrypt the workspace to practice using BYOK.
- After successfully encrypting the workspace, run the tests you want on models in the workspace.
- Follow the same procedure to encrypt your production workspace.
- If required, decrypt the development workspace.

Ensure workspaces are not in use
Workspaces can't be encrypted when they are active. Ensure that your users are no longer using any models in the workspace before starting encryption. Do not start encryption until the workspace state is "Ready".

Encrypting before loading data
The first encryption is known as encryption in place. This is an offline event. To reduce the amount of time for this encryption, we recommend encrypting a workspace when it is first created or before significant data is loaded. Data added to models within the workspace after encryption is automatically encrypted; this is known as encryption on the fly. It's likely that this is sensitive data, and it is more secure to load it after the workspace is encrypted.

Identify users for key roles
Identify users to be assigned the Encryption Admin role as early as possible. Identify users to be assigned the Tenant Auditor role.

Encryption Admin role
To maintain separation of duties, Encryption Admins should not have access to any model data. Ensure that Encryption Admins are added as members of at least one workspace with a model permission of "no access". Let your account representative know the email addresses of the Encryption Admins when you first order BYOK. Ideally, assign more than one person to the Encryption Admin role. Encryption Admin users can assign other users in their tenant the Encryption Admin role or remove it using the Access Control feature of the Administration app.
Note: Only a limited set of users are eligible to be assigned the Encryption Admin role. Only users who were submitted to Anaplan as potential Encryption Admins appear in the Access Control section of the Administration app. If any users are missing, add them to the workspace in your tenant with the role "No Access", then contact Anaplan Support and request that those users are added to the list of eligible Encryption Admins.

Tenant Auditor role
The Tenant Auditor role can access the BYOK audit logs. You might want to specify different users to the ones assigned the Encryption Admin role, but that's your choice. Your Tenant Administrator can assign users to this role. Tenant Auditors need to be a user in at least one Anaplan workspace, ideally with a model permission of "no access".

Wait
When the "BYOK" status changes following a successful encryption or decryption action in a workspace, wait two minutes before running another operation on that workspace. This enables trailing processes to complete and helps to prevent unexpected errors.

Features
As an Encryption Admin, you can use the Reassign Key button on the Encrypted Workspaces page to easily apply key rotation on your workspaces. BYOK now has audit logging.
You can use the Audit Service API to:
- Retrieve up to 30 days of logs.
- Get the BYOK history for your tenant.
- Get the BYOK history for your tenant for specific dates that you specify.
- Get information about who carried out an action in BYOK, when it was done, and what was done.
For more information, see Administration: Security - Audit in Anapedia.

Issues resolved
- As an Encryption Administrator, you can now assign or remove the Encryption Admin role.

Known issues and workarounds
- Issue: When generating a key using the required values, but without waiting before entering values, key generation fails with the "Invalid Key Name" message. Workaround: Wait a few seconds before entering data on the Generate New Encryption Key popup.
- Issue: When editing an encryption key, the Key Alias field is disabled and cannot be changed. Workaround: none.
View full article
An easy-to-use set of PowerShell wrapper scripts This article outlines the features of the PowerShell scripts that are used as wrappers to the standard Anaplan Connect scripts. These PowerShell scripts enable the following features: A file watcher that waits for the arrival of files before starting an import into Anaplan, and that can be run through enterprise schedulers Copy/move, import, and back up the source files as required after the success or failure of the import Provide email notifications of the outcome of the processes Trigger Anaplan actions that do not involve file operations, as required, through schedulers The scripts are available in the links below. GitHub Repository Please contribute enhancements here: https://github.com/upaliw/anaplanconnect_ps Releases Latest releases for AC1.4 & AC1.3: https://github.com/upaliw/anaplanconnect_ps/releases Contents of the ZIP file The contents of the ZIP file are: Object Comments exceptions Folder to hold the errors/messages generated from Anaplan Connect java_keystore Folder to hold the Java KeyStore file for CA Certificate authentication. See the complete Anaplan Connect Guide. lib Folder that holds the required Java libraries to run Anaplan Connect logs Folder to hold the logging information of the PowerShell scripts AnaplanClient.bat anaplan-connect.jar Anaplan Connect script and Java package AnaplanConfig.bat The connection details for Anaplan (i.e. Basic Authentication or CA Cert details) Anaplan_Action.bat Main script that runs the various types of Anaplan Actions FileInterface.ini Config file for all file-based operations FileWatch.ps1 FileCopy.ps1 FileRun.ps1 Functions.ps1 Main PowerShell scripts for all operations FW.bat FWCPY.bat FWCPYRUN.bat RUN.bat Windows batch scripts that can be used to call the main PowerShell scripts through Enterprise Schedulers EmailNotifications.ini Config file for email notification settings EmailPassword.txt Config file to hold the encrypted password for SMTP authentication Step 1 – Anaplan Connect Authentication The AnaplanConfig.bat file should be updated as required to denote the connection type to Anaplan. The connection can be one of two types: Basic Authentication: Anaplan username and password, where the password is maintained by Anaplan and the Anaplan username is set to be an Exception user for SSO workspaces. The password will need to be reset every 90 days. CA Certificate Authentication: A client certificate procured from a Certification Authority that is attached to the Anaplan username (see the Administration: Security - Certificates article in Anapedia). Step 2 – Email configuration The following steps need to be completed for email notifications. Update the EmailNotifications.ini file with the SMTP parameters. As required, create the encrypted password file (EmailPassword.txt) for the SMTP authentication. To use the default PowerShell encryption, issue the following command at the PowerShell prompt and redirect the output to a file: "smtpPassword" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File ".\EmailPassword.txt" Step 3 – File import configuration This is the main configuration file for all the file import operations.
The FileInterface.ini file will have the following information: Config Entry Comments Key Mandatory: The main parameter passed to the scripts that picks up all the details of the operations Inbound filename Optional: The inbound filename as a Regular Expression, so that it can recognize any timestamps Load filename Optional: The filename the Anaplan Action is tied to Backup filename Optional: The filename the file should be backed up as Inbound location Optional: The folder the file arrives in from a source system Load location Optional: The folder the file is moved to from the inbound location Backup location Optional: The folder where the backups are located, organized in date-stamped subfolders Command to run Mandatory: The Anaplan Action Notify Optional: One of Success, Fail, or Both Notify email addresses Optional: The email addresses, comma (,) separated Action Type Mandatory: One of Import, Export, Process, Action, ImportAndProcess, JDBCImport, or JDBCProcess Export filename Optional: Only for the Export Action Type JDBC Properties file Optional: Only for the JDBCImport and JDBCProcess Action Types Workspace GUID Mandatory: Workspace ID Model GUID Mandatory: Model ID Calling the scripts The scripts can be called manually or via an Enterprise Scheduler. The Key should be passed as the argument. The following scenarios are examples of these operations. Wait for the arrival of a file, then import it to Anaplan: FWCPYRUN “Key” Run an Anaplan action on a schedule: RUN “Key” Email notifications If email notification is enabled for a config entry, a notification is sent with any generated exceptions attached. Note: The email will contain one of three statuses: Success: No issues. Success with data errors: The import was successful, but some data items had issues. There will be an attachment with the details of the exceptions generated from Anaplan. Fail: The import failed; details will be attached to the email. Logging All steps of the interface processes will be logged in the logs folder for each operation (i.e. FileWatch, FileCopy, and FileRun) separately. The generated exceptions will be in the exceptions folder. Note: There is no process to clean up older log files; this should be done on a case-by-case basis.
View full article
We're pleased to announce the February 2018 release of the Anaplan Connector for Informatica Cloud. This release fixes the Success/Error row counts in the Monitor Log for Data Synchronization Tasks (DST). Exports Anaplan List exports: Success rows is the number of Anaplan list rows exported; the Error row count should be 0. Anaplan Module exports: Success rows is the number of Anaplan module rows exported; the Error row count should be 0. Imports Anaplan List imports: Success rows is the sum of the number of rows successfully updated/inserted and the number of rows updated/inserted with a warning. The Error row count is the number of failed rows. Anaplan Module imports: Success rows is the sum of the number of Anaplan cells successfully updated/inserted and the number of Anaplan cells updated/inserted with a warning. Error rows is the number of failed Anaplan cells. Note: Cells ignored by the Anaplan import action are not included in the counts above. For example, during a module import, any parent hierarchy level cells are ignored. For more information, see the Anaplan Informatica Connector Guide.
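As a worked illustration of how these counts roll up for a module import (the figures below are invented for the example, not taken from the release):
# Hypothetical figures to illustrate the Monitor Log counts for a module import
cells_inserted_ok = 950      # cells updated/inserted successfully
cells_with_warning = 30      # cells updated/inserted with a warning
cells_failed = 20            # cells rejected by Anaplan
cells_ignored = 100          # e.g. parent hierarchy level cells; not counted at all

success_rows = cells_inserted_ok + cells_with_warning   # Monitor Log shows 980
error_rows = cells_failed                               # Monitor Log shows 20
print(success_rows, error_rows)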
View full article
Anaplan Connect is a downloadable tool that empowers you to automate Anaplan actions. This lightweight tool still relies on the same types of flat files that can be manually uploaded into Anaplan. Once this tool is installed on your computer, you can package that point-and-click process in a script (.bat or .sh files). These scripts work well with external scheduling tools, enabling you to schedule and automate a data upload/download from Anaplan's cloud platform. Most often, Anaplan Connect is used in conjunction with flat files, but it can also be used to connect to any relational database with JDBC.   JDBC JDBC stands for Java Database Connectivity. It is the industry-standard API for database-independent connectivity between Java and a wide range of SQL databases, as well as other tabular data sources. A JDBC connection relies on Anaplan Connect to handle the Anaplan side of the integration; it has a separate category because this is the only type of Anaplan Connect script that will contain an SQL query. As with any non-JDBC integration using Anaplan Connect, Anaplan must already have a template file stored as a data source. Provided this data source is available within the Anaplan model, a JDBC integration differs from a flat-file Anaplan Connect script only in how the data for import is selected: with a JDBC integration, the data is the result of an SQL query rather than a flat file at a given location. The results of this query are passed directly to Anaplan without needing to store an intermediate file. Learn more about Anaplan Connect and download the Anaplan Connect Quick Start Guide in Anapedia.
View full article
There are several business use cases that require the ability to compute distances between pairs of locations. Optimizing sales territory realignment Logistics cost optimization Transportation industry passenger revenue or cost per mile Franchise territory design Brick-and-mortar market area analysis (stores, hotels, bank branches, …) Optimizing inventory among geographic Distribution Centers At their core, each of these requires knowing how far apart a pair of sites are positioned. This article provides step-by-step instructions for creating a dashboard where users select a location and set a market area radius, then the dashboard shows all population centers in that vicinity with some demographic information. Doing the Math: Trig functions in Anaplan  Distance between two latitude-longitude points (lat1, lon1) and (lat2, lon2) requires solving this equation: Radius of Earth *  ACOS(  COS(90 - lat1) * COS(90 - lat2)       + SIN(90 - lat1) * SIN(90 - lat2) * COS(lon1 - lon2)     )  This formula works quite well. We know the Earth isn’t flat, but it’s not a perfect sphere either. Our home world bulges a bit at the equator and is flattened a bit at the poles. But for most purposes other than true rocket science, this equation gives sufficiently accurate results.  Unfortunately, Anaplan doesn’t have the functions SIN, COS, or ACOS built in, and the usual workaround – lookup modules – simply won't do in this situation because we need much higher precision than lookups can practically handle. But don't despair, it is possible to calculate trig functions to 8 decimal point precision using nothing more sophisticated than Anaplan's POWER() function and some ingenuity. In the following demonstration model, the trig functions needed for distance calculation have been built for you using equations called Taylor Series expansions. Step-by-Step Construction  Here's a small educational project: In our example model, the user will select one post code, enter a market area radius value, and click a button. Changing the selected post code updates rows in a filtered module, so we need to refresh the dashboard to see the result. The dashboard will identify all post codes in the vicinity location and display their population, growth rate, median age, and distance. Step 1 Get U.S. postal code demographic and geolocation data. Our model will use Census Zip Code tabulation areas. ZCTAs are essentially postal Zip Codes adjusted to remove unpopulated Zip Codes that are only for PO Boxes and combining some codes where that solves practical census tallying problems. There are about 32,000 ZCTAs and 43,000 Zip Codes in the U.S. Download the US.zip file from http://download.geonames.org/export/zip/ That file provides a full list of US Zip Codes and their county, state, latitude, and longitude. Other countries post codes are also listed in that folder. Download demographic data by post code from the US Census Bureau report DP05, choose the 5-digit ZCTA geographic option for the entire US. To calculate growth rate, you will need datasets for both the most recent year available and for the fifth year prior to that. (2017 and 2012 at the time this was written.)  Notes: Import maps in the next two steps will need some manipulation by concatenating fields to get nice looking names (such as "Boston, MA 02134") and to get codes to match up among the lists. You'll need to either import to transaction modules or do this manipulation in Excel. Step 2 Create a list named "Loc 3 - Post Codes". 
Set a top level member with a name like “Total Population Centers”. It is generally a best practice to create a Clear action for any list, to be run before future list reloads. Notes: For the purposes of this demonstration, a flat list of 5-digit codes is sufficient. I found it helpful to roll up ZCTAs by state (Loc 1) and county (Loc 2). This is optional. I will leave “give friendly names to your list members and assign them to parents” as an exercise for the advanced reader. Step 3 Create a module named "DATA: Loc 3 - Post Codes" dimensionalized by the list "Loc 3 - Post Codes" (no time, no version). Notes: There are a LOT of data fields in the tables you downloaded, and much more data is available in other Census Bureau products (gender, households, age details, income, …). Feel free to add line items for any census fields that you find useful. I found it helpful to pull the data into Excel and keep only the fields of interest to streamline the mapping process in Anaplan. Expect a few rejects due to mismatches between Zip Code and ZCTA files. The geonames.org zip code list US.zip doesn't include Puerto Rico and other island territories. Census data does include them. As a result, Census ZCTAs that begin with 006## and 009## will report there is no matching list member. In a "real world" application, a significant effort goes into ensuring that data "ties out" by addressing issues like this. You may either ignore the small percentage of rejects (my sincere apologies to the people of Puerto Rico) or you may find and add those missing zip codes to your list. Your choice. For this exercise, the module must contain, at minimum, these line items: Formula Format Applies To DATA: Loc 3 - Post Codes     Loc 3 - Post Codes Latitude   Number - Longitude   Number - Total Population   Number - Total Population 5 yr prior   Number - Growth Rate POWER(Total Population / 'Total Population 5 yr prior', 0.2) - 1 Number - Median Age   Number - Median Age * Tot Pop Median Age * Total Population Number - Set the Summary properties as follows: ‘Total Population’, ‘Total Population 5 yr prior’, and ‘Median Age * Tot Pop’ aggregate by Sum. ‘Growth Rate’ aggregates by Formula. ‘Median Age’ aggregates by Ratio: ‘Median Age * Tot Pop’ / ‘Total Population’. Create import actions to load your downloaded data into “DATA: Loc 3 - Post Codes”. Step 4 Create a module named "INPUT: Globals". It holds four constants and two inputs as line items. There is no List, Time, or Version dimension. I put those line items’ values into the Formula so users cannot change them. Line items are: Formula Format Applies To INPUT: Globals     <none> UI   No Data   Select a Location   List: Loc 3 - Post Codes - Market Area Radius (miles)   Number - Constants   No Data - Earth Radius (km) 6371 Number - Pi 3.141592654 Number - km / mi 1.609344 Number - ACOS(2/3) 0.588002604 Number - Publish the “Select a Location” and “Market Area Radius (miles)” line items to a new dashboard with the name “Distance Demo”. Note: Distance calculations in kilometers are provided below. Feel free to adjust your model’s inputs, outputs, and filters to the needs of your locale. Step 5 Create a module named "CALC: Post Code - Nearby Population Centers" dimensionalized by only the list “Loc 3 - Post Codes”. There are no Time or Versions dimensions.
Formula Format Applies To CALC: Post Code - Nearby Population Centers     Loc 3 - Post Codes Origination Location:   No Data - Selected Post Code 'INPUT: Globals'.'Select a Location' List: Loc 3 - Post Codes <none> Selected Post Code Latitude 'DATA: Loc 3 - Post Codes'.Latitude[LOOKUP: Selected Post Code] Number <none> Selected Post Code Longitude 'DATA: Loc 3 - Post Codes'.Longitude[LOOKUP: Selected Post Code] Number <none> Destination Location:   No Data   Population Center ITEM('Loc 3 - Post Codes') List: Loc 3 - Post Codes - Population IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.Total Population ELSE 0 Number - Population 5 yr prior IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.'Total Population 5 yr prior' ELSE 0 Number - Growth Rate IF 'In Market Area?' THEN POWER(Population / 'Population 5 yr prior', 0.2) - 1 ELSE 0 Number, Percent - Median Age IF 'In Market Area?' THEN 'DATA: Loc 3 - Post Codes'.Median Age ELSE 0 Number - Median Age * Pop IF 'In Market Area?' THEN Median Age * Population ELSE 0 Number - Pop Center Latitude 'DATA: Loc 3 - Post Codes'.Latitude Number - Pop Center Longitude 'DATA: Loc 3 - Post Codes'.Longitude Number - Calculated Distance:   No Data   Distance (miles) 'EarthRadius (miles)' * 'ACOS(x)' Number - Distance (km) 'EarthRadius (km)' * 'ACOS(x)' Number - Staging   No Data   EarthRadius (km) 'INPUT: Globals'.'Earth Radius (km)' Number   EarthRadius (miles) 'EarthRadius (km)' / 'INPUT: Globals'.'km / mi' Number   Pi 'INPUT: Globals'.Pi Number   Radians(90 - Lat1) 2 * Pi * (90 - Selected Post Code Latitude) / 360 Number - COS(Radians(90 -  Lat1)) 1 - POWER('Radians(90 - Lat1)', 2) / 2 + POWER('Radians(90 - Lat1)', 4) / 24 - POWER('Radians(90 - Lat1)', 6) / 720 + POWER('Radians(90 - Lat1)', 8) / 40320 - POWER('Radians(90 - Lat1)', 10) / 3628800 + POWER('Radians(90 - Lat1)', 12) / 479001600 - POWER('Radians(90 - Lat1)', 14) / 87178291200 + POWER('Radians(90 - Lat1)', 16) / 20922789888000 - POWER('Radians(90 - Lat1)', 18) / 6402373705728000 + POWER('Radians(90 - Lat1)', 20) / 2432902008176640000 Number - SIN(Radians(90 - Lat1)) 'Radians(90 - Lat1)' - POWER('Radians(90 - Lat1)', 3) / 6 + POWER('Radians(90 - Lat1)', 5) / 120 - POWER('Radians(90 - Lat1)', 7) / 5040 + POWER('Radians(90 - Lat1)', 9) / 362880 - POWER('Radians(90 - Lat1)', 11) / 39916800 + POWER('Radians(90 - Lat1)', 13) / 6227020800 - POWER('Radians(90 - Lat1)', 15) / 1307674368000 + POWER('Radians(90 - Lat1)', 17) / 355687428096000 - POWER('Radians(90 - Lat1)', 19) / 121645100408832000 + POWER('Radians(90 - Lat1)', 21) / 51090942171709440000 Number - Radians(90 - Lat2) 2 * Pi * (90 - Pop Center Latitude) / 360 Number - COS(Radians(90 -  Lat2)) 1 - POWER('Radians(90 - Lat2)', 2) / 2 + POWER('Radians(90 - Lat2)', 4) / 24 - POWER('Radians(90 - Lat2)', 6) / 720 + POWER('Radians(90 - Lat2)', 8) / 40320 - POWER('Radians(90 - Lat2)', 10) / 3628800 + POWER('Radians(90 - Lat2)', 12) / 479001600 - POWER('Radians(90 - Lat2)', 14) / 87178291200 + POWER('Radians(90 - Lat2)', 16) / 20922789888000 - POWER('Radians(90 - Lat2)', 18) / 6402373705728000 + POWER('Radians(90 - Lat2)', 20) / 2432902008176640000 Number - SIN(Radians(90 - Lat2)) 'Radians(90 - Lat2)' - POWER('Radians(90 - Lat2)', 3) / 6 + POWER('Radians(90 - Lat2)', 5) / 120 - POWER('Radians(90 - Lat2)', 7) / 5040 + POWER('Radians(90 - Lat2)', 9) / 362880 - POWER('Radians(90 - Lat2)', 11) / 39916800 + POWER('Radians(90 - Lat2)', 13) / 6227020800 - POWER('Radians(90 - Lat2)', 15) / 1307674368000 + POWER('Radians(90 - Lat2)', 17) / 355687428096000 - POWER('Radians(90 - Lat2)', 19) / 121645100408832000 + POWER('Radians(90 - Lat2)', 21) / 51090942171709440000 Number - Radians(Long1-Long2) 2 * Pi * (Selected Post Code Longitude - Pop Center Longitude) / 360 Number - COS(RADIANS(Long1-Long2)) 1 - POWER('Radians(Long1-Long2)', 2) / 2 + POWER('Radians(Long1-Long2)', 4) / 24 - POWER('Radians(Long1-Long2)', 6) / 720 + POWER('Radians(Long1-Long2)', 8) / 40320 - POWER('Radians(Long1-Long2)', 10) / 3628800 + POWER('Radians(Long1-Long2)', 12) / 479001600 - POWER('Radians(Long1-Long2)', 14) / 87178291200 + POWER('Radians(Long1-Long2)', 16) / 20922789888000 - POWER('Radians(Long1-Long2)', 18) / 6402373705728000 + POWER('Radians(Long1-Long2)', 20) / 2432902008176640000 Number - X - pre adj 'COS(Radians(90 -  Lat1))' * 'COS(Radians(90 -  Lat2))' + 'SIN(Radians(90 - Lat1))' * 'SIN(Radians(90 - Lat2))' * 'COS(RADIANS(Long1-Long2))' Number - X IF ABS('X - pre adj') <= 1 / POWER(2, 0.5) THEN 'X - pre adj' ELSE IF ABS('X - pre adj') > 1 THEN SQRT(-1) ELSE POWER(1 - POWER('X - pre adj', 2), 0.5) Number - ASIN (Taylor Series) X + 1 / 6 * POWER(X, 3) + 3 / 40 * POWER(X, 5) + 5 / 112 * POWER(X, 7) + 35 / 1152 * POWER(X, 9) + 63 / 2816 * POWER(X, 11) + 231 / 13312 * POWER(X, 13) + 143 / 10240 * POWER(X, 15) + 6435 / 557056 * POWER(X, 17) + 12155 / 1245184 * POWER(X, 19) + 46189 / 5505024 * POWER(X, 21) + 88179 / 12058624 * POWER(X, 23) Number - ASIN(x) IF ABS('X - pre adj') <= 1 / SQRT(2) THEN 'ASIN (Taylor Series)' ELSE IF 'X - pre adj' > 1 / SQRT(2) AND 'X - pre adj' <= 1 THEN Pi / 2 - 'ASIN (Taylor Series)' ELSE IF 'X - pre adj' < -1 / SQRT(2) AND 'X - pre adj' > -1 THEN -Pi / 2 + 'ASIN (Taylor Series)' ELSE SQRT(-1) Number - ACOS(x) Pi / 2 - 'ASIN(x)' Number - Filters   No Data   In Market Area? 'Distance (miles)' > 0 AND 'Distance (miles)' <= 'INPUT: Globals'.'Market Area Radius (miles)' Boolean - Set summary settings for the user-facing population and age line items just as you did in step 2. The line items under Calculated Distance and Staging should not roll up, so use summary: None. (This is a best practice for conserving model size.) The ‘In Market Area?’ Boolean should roll up using summary: Any. Filter the list with ‘In Market Area?’ = TRUE and publish the 'CALC: Post Code - Nearby Population Centers' module to your dashboard. In grid view, use pivot / filter / hide in the module: ‘Loc 3 - Post Codes’ is the row dimension, Filter on ‘In Market Area?’ = True, Line items are in the columns and only the desired line items show, Adjust column settings for heading wrap and column widths. Save the view and publish it to your dashboard. Step 6 Create a new Action that opens the dashboard, and name it "Refresh Surrounding Locations". Publish it to your dashboard and position it between the two inputs and the output module. This action button is needed because the output module is filtered for "In Market Area?" = True, but that filtering is only updated when the dashboard is refreshed. This completes the build instructions; the following are more insights into the calculations. The calculation logic Take a look at the line item formulas under Staging. In those, we build the distance equation from its component parts. You might find it helpful to know that each trig operation, such as COS(90 - lat1), is a line item. Radius of Earth * ACOS( COS(90 - lat1) * COS(90 - lat2) + SIN(90 - lat1) * SIN(90 - lat2) * COS(lon1 - lon2) ) In overview, the line items represent these steps: Get the constants Pi, Earth’s radius, etc.
Convert latitude and longitude from degrees to radians Use Taylor Series formulas to calculate the various SIN and COS components Use another Taylor Series formula and a trig identity to calculate ASIN, then convert ASIN to ACOS using another trig identity. Multiply the finished ACOS by Earth’s radius. Going Multidimensional This example model is intentionally small; it uses a single list of locations and computes their distances from a selected location. In most "real world" applications, you need to know the distance between every pairing of two lists of locations, for example Stores and Towns, or DCs and Stores. Let’s call them origin and destination locations. To compute the distance between every possible pairing, you would dimensionalize the CALC module above by those two lists and replace the user selection with ITEM(<origin location list>). Good luck!!
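For model builders who want to sanity-check the Anaplan line items above outside the model, the following Python sketch reproduces the same spherical-law-of-cosines calculation. The cos/sin Taylor series mirror the POWER()-based line items; for brevity the final ACOS uses the math module rather than the ASIN-series-plus-identity approach used in the model. The Earth radius matches the constant above; the coordinates at the end are approximate, illustrative values for Boston 02134 and New York 10001 (roughly 300 km apart).
# A sketch for checking the Anaplan distance formulas outside the model.
import math

EARTH_RADIUS_KM = 6371.0      # same constant as 'INPUT: Globals'.'Earth Radius (km)'
KM_PER_MILE = 1.609344

def cos_taylor(x, terms=11):
    # cos(x) = 1 - x^2/2! + x^4/4! - ... ; same series as the COS(Radians(...)) line items
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

def sin_taylor(x, terms=11):
    # sin(x) = x - x^3/3! + x^5/5! - ... ; same series as the SIN(Radians(...)) line items
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def distance_km(lat1, lon1, lat2, lon2):
    # Radius of Earth * ACOS( COS(90-lat1)*COS(90-lat2)
    #                         + SIN(90-lat1)*SIN(90-lat2)*COS(lon1-lon2) )
    a1 = math.radians(90.0 - lat1)
    a2 = math.radians(90.0 - lat2)
    dlon = math.radians(lon1 - lon2)
    x = cos_taylor(a1) * cos_taylor(a2) + sin_taylor(a1) * sin_taylor(a2) * cos_taylor(dlon)
    x = max(-1.0, min(1.0, x))    # guard against rounding just outside [-1, 1]
    return EARTH_RADIUS_KM * math.acos(x)

# Illustrative coordinates (approximate): Boston, MA 02134 to New York, NY 10001
d_km = distance_km(42.35, -71.13, 40.75, -74.00)
print(round(d_km, 1), 'km /', round(d_km / KM_PER_MILE, 1), 'miles')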
View full article
  NOTE: The following information is also attached as a PDF for downloading and using off-line.   Overview The process of designing a model will help you: Understand the customer’s problem more completely Bring to light any incorrect assumptions you may have made, allowing for correction before building begins Provide the big picture view for building. (If you were working on an assembly line building fenders, wouldn’t it be helpful to see what the entire car looked like?)   Steps: Understand the requirements and the customer’s technical ecosystem when designing a model When you begin a project, gather information and requirements using a number of tools. These include: Statement of Work (SOW): Definition of the project scope and project objectives/high level requirements Project Manifesto: Goal of the project – big picture view of what needs to be accomplished IT ecosystem: Which systems will provide data to the model and which systems will receive data from the model? What is the Anaplan piece of the ecosystem? Current business process: If the current process isn’t working, it needs to be fixed before design can start. Business logic: What key pieces of business logic will be included in the model?  Is a distributed model needed? High user concurrency Security where the need is a separate model Regional differences that are better handled by a separate model Is the organization using ALM, requiring split or similar models to effectively manage development, testing, deployment, and maintenance of applications? (This functionality requires a premium subscription or above.) User stories: These have been written by the client—more specifically, by the subject matter experts (SMEs) who will be using the model.   Why do this step? To solve a problem, you must completely understand the current situation. Performing this step provides this information and the first steps toward the solution.   Results of this step: Understand the goal of the project Know the organizational structure and reporting relationships (hierarchies) Know where data is coming from and have an idea of how much data clean-up might be needed If any of the data is organized into categories (for example, product families) or what data relationships exist that need to be carried through to the model (for example, salespeople only sell certain products) What lists currently exist and where are they are housed Know which systems the model will either import from or export to Know what security measures are expected Know what time and version settings are needed   Document the user experience Front to back design has been identified as the preferred method for model design. This approach puts the focus on the end user experience. We want that experience to align with the process so users can easily adapt to the model. During this step focus on: User roles. Who are the users? Identifing the business process that will be done in Anaplan. Reviewing and documenting the process for each role. The main steps. If available, utilize user stories to map the process. You can document this in any way that works for you. Here is a step-by-step process you can try: What are the start and end points of the process? What is the result or output of the process? What does each role need to see/do in the process? What are the process inputs and where do they come from? What are the activities the user needs to engage in? Verb/object—approve request, enter sales amount, etc. Do not organize during this step. Use post-its to capture them. 
Take the activities from step 4 and put them in the correct sequence. Are there different roles for any of these activities? If no, continue with step 8. If yes, assign a role to each activity. Transcribe process using PowerPoint ®  or Lucid charts. If there are multiple roles, use swim lanes to identify the roles. Check with SMEs to ensure accuracy. Once the user process has been mapped out, do a high level design of the dashboards Include: Information needed What data does the user need to see? What the user is expected to do or decisions that the user makes Share the dashboards with the SMEs. Does the process flow align?   Why do this step?  This is probably the most important step in the model design process. It may seem as though it is too early to think about the user experience, but ultimately the information or data that the user needs to make a good business decision is what drives the entire structure of the model. On some projects, you may be working with a project manager or a business consultant to flesh out the business process for the user. You may have user stories, or it may be that you are working on design earlier in the process and the user stories haven’t been written. In any case, identify the user roles, the business process that will be completed in Anaplan, and create a high level design of the dashboards. Verify those dashboards with the users to ensure that you have the correct starting point for the next step.   Results of this step: List of user roles Process steps for each user role High level dashboard design for each user role   Use the designed dashboards to determine what output modules are necessary Here are some questions to help you think through the definition of your output modules: What information (and in what format) does the user need to make a decision? If the dashboard is for reporting purposes, what information is required? If the module is to be used to add data, what data will be added and how will it be used? Are there modules that will serve to move data to another system? What data and in what format is necessary?   Why do this step? These modules are necessary for supporting the dashboards or exporting to another system. This is what should guide your design—all of the inputs and drivers added to the design are added with the purpose of providing these output modules with the information needed for the dashboards or export.   Results of this step: List of outputs and desired format needed for each dashboard   Determine what modules are needed to transform inputs to the data needed for outputs Typically, the data at the input stage requires some transformation. This is where business rules, logic, and/or formulas come into play: Some modules will be used to translate data from the data hub. Data is imported into the data hub without properties, and modules are used to import the properties. Reconciliation of items takes place before importing the data into the spoke model. These are driver modules that include business logic, rules.    Why do this step?  Your model must translate data from the input to what is needed for the output    Results of this step: Business rules/calculations needed   Create a model schema You can whiteboard your schema, but at some point in your design process, your schema must be captured in an electronic format. It is one of the required pieces of documentation for the project and is also used during the Model Design Check-in, where a peer checks over your model and provides feedback.  
Identify the inputs, outputs, and drivers for each functional area Identify the lists used in each functional area Show the data flow between the functional areas Identify time and versions where appropriate   Why do this step?   It is required as part of The Anaplan Way process. You will build your model design skills by participating in a Model Design Check-in, which allows you to talk through the tougher parts of design with a peer. More importantly, designing your model using a schema means that you must think through all of the information you have about the current situation, how it all ties together, and how you will get to that experience that meets the exact needs of the end user without fuss or bother.    Result of this step: Model schema that provides the big picture view of the solution. It should include imports from other systems or flat files, the modules or functional areas that are needed to take the data from current state to what is needed to support the dashboards that were identified in Step 2. Time and versions should be noted where required. Include the lists that will be used in the functional areas/modules.  Your schema will be used to communicate your design to the customer, model builders, and others. While you do not need to include calculations and business logic in the schema, it is important that you understand the state of the data going into a module, the changes or calculations that are performed in the module and the state of the data leaving the module, so that you can effectively explain the schema to others.  For more information, check out 351 Schemas.  This 10 to 15 minute course provides basic information about creating a model schema. Verify that the schema aligns with basic design principles When your schema is complete, give it a final check to ensure: It is simple. “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage to move in the opposite direction.”  ― Ernst F. Schumacher “Design should be easy in the sense that every step should be obviously and clearly identifiable. Simplify elements to make change simple so you can manage the technical risk.” — Kent Beck The model aligns with the manifesto. The business process is defined and works well within the model.
View full article
Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction. Getting Started Python 3 offers many options for interacting with an API. This article will explain how you can use Python 3 to automate many of the requests that are available in our apiary, which can be found at https://anaplan.docs.apiary.io/#. This article assumes you have the requests module (version 2.18.4) installed and are running Python 3.6.4; the base64 and json modules are part of the Python standard library. Please make sure you are installing the requests module with Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites: Python (If you are using a Python version older or newer than 3.6.4 or a requests version older or newer than 2.18.4, we cannot guarantee the validity of the article), Requests, Base Converter, JSON (Note: install instructions are not at this site but will be the same as any other Python module). Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes. Authentication To start, let's talk about authentication. Every script run that connects to our API will be required to supply valid authentication. There are two ways to authenticate a Python script that I will be covering: Certificate Authentication Basic Encoded Authentication Certificate authentication will require that you have a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, to properly convert your Anaplan certificate to be usable with the API, first you will need openssl. Once you have that, you will need to convert the certificate to PEM format by running the following code in your terminal: openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem If you are using Certificate Authorization, the scripts we use in this article will assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following code in your terminal: openssl x509 -text -in certtest.pem To be used with the API, the PEM certificate string will need to be converted to base64, but the scripts we will be covering will take care of that for you, so I won't cover that in this section. To use basic authentication, you will need to know the Anaplan account email that is being used, as well as the password. All scripts in this article will have the following code near the top:
# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your pem converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((f'{username}:{cert}').encode('utf-8')).decode('utf-8'))
# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}').encode('utf-8')).decode('utf-8'))
Regardless of authentication method, you will need to set the username variable to the Anaplan account email being used.
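As a quick, self-contained illustration (separate from the downloadable scripts) of where that user string ends up, the snippet below builds a basic-authentication header and requests the workspace list; the endpoint is the same one shown in the Standalone Requests section later in this article, and the email and password values are placeholders.
import base64
import json
import requests

username = 'integration.user@example.com'   # placeholder: your Anaplan account email
password = 'your-password'                   # placeholder: basic auth shown for brevity
user = 'Basic ' + base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('utf-8')

response = requests.get('https://api.anaplan.com/1/3/workspaces/',
                        headers={'Authorization': user})
response.raise_for_status()
print(json.dumps(response.json(), indent=2))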
If you are using a certificate to authenticate, you will need to have your PEM converted certificate in the same folder or a child folder of the one you are running the scripts from. If your certificate is in a child folder, please remember to include the file path when replacing cert.pem (e.g. cert/cert.pem). You can remove the password line and its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password and you can remove the cert line, its comments, and its respective user variable. Getting the Information Needed for Each Script Most of the scripts covered in this article will require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for their respective fields is titled get_____.py. For example, if you want to get your files metadata, you'll run getFiles.py, which will write the file metadata for each file in the selected model in the selected workspace in an array to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts. TIP:   If you open the raw data tab of the JSON file it makes it much easier to copy the whole set of metadata. The following are the links to download each get____.py script. Each get script uses the requests.get method to send a get request to the proper API endpoint. getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to. getModels.py: Writes an array to models.json of either all the models a user has access to if wGuid is left blank, or all of the models the user has access to in a selected workspace if a workspace ID was inserted. getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model. getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to   the Apiary   for more information on private vs default files. Generally it is recommended that all scripts be run via the same user account.) getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace. getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace. getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace. getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace. getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace. Uploads A file can be uploaded to the Anaplan API endpoint either in chunks, or as a single chunk. Per our apiary: We recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50MB. This creates a Private File. Note: To upload a file using the, API that file must exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API. 
Multiple Chunk Uploads The script we have for reference is built so that if the script is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will start uploading the file again, starting at the last successful chunk. For this to work, the file must be initially split using a standard naming convention, using the terminal script below. split -b [numberofBytes] [path and filename] [prefix for output files] You can store the file in any location as long as you provide the proper file path when setting the chunkFilePrefix (e.g., chunkFilePrefix = 'upload_chunks/chunk-'). This will look for file chunks named chunk-aa, chunk-ab, chunk-ac, etc., up to chunk-zz in the folder script_origin/upload_chunks/. It is very unlikely that you will ever exceed chunk-zz. This will let the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py Note: The assumed naming conventions will only be standard if using Terminal, and they do not necessarily work if the file was split using another method in Windows. If you are using Windows, you will need to either create a way to standardize the naming of the chunks alphabetically {chunkFilePrefix}(aa - zz) or run the script as detailed in the Apiary. Note: The chunkUpload.py script keeps track of the last successful chunk by writing the name of the last successful chunk to a .txt file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified in between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it if you want to start the upload from the first chunk. Single Chunk Upload The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame. If the upload fails, it will have to start again from the beginning. If your local file has a different name than the version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single put request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py Imports The import.py script sends a post request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information. You can download the script for running an import from this link: Import.py Once the import is finished, the script will write the metadata for the import task in an array to postImport.json, which you can use to verify which task you want to view the status of while running the importStatus.py script. The importStatus.py script will return a list of all tasks associated with the selected importID and their respective list index. If you want to check the status of the last run import, make sure you are checking postImport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file importStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma-delimited format to importDump.csv, which can be used to review the cause of the failure.
If the task finished with no failures, you will get a message telling you the import has completed with no failures. You can download the script for importStatus.py from this link: importStatus.py Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist, importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone. Exports The export.py script sends a post request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information. You can download the script for running an export from this link: Export.py Once the export is finished, the script will write the metadata for the export task in an array to postExport.json, which you can use to verify which task you want to view the status of while running the exportStatus.py script. The exportStatus.py script will return a list of all tasks associated with the selected exportID and their respective list index. If you want to check the status of the last run export, make sure you are checking postExport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file exportStatus.json. If the task is still in progress, it will print the task status and progress. It is important to note that no failure dump will be generated if the export fails. You can download the script for exportStatus.py from this link: exportStatus.py Actions The action.py script sends a post request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information. You can download the script for running an action from this link: actionStatus.py. Processes The process.py script sends a post request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. See Getting the Information Needed for Each Script for more information. You can download the script for running a process from this link: Process.py Once the process is finished, the script will write the metadata for the process task in an array to postProcess.json, which you can use to verify which task you want to view the status of while running the processStatus.py script. The processStatus.py script will return a list of all tasks associated with the selected processID and their respective list index. If you want to check the status of the last run process, make sure you are checking postProcess.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status to an array in the file processStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma-delimited format to processDump.csv, which can be used to review the cause of the failure. It is important to note that no failure dump will be generated for the process itself, only if one of the imports in the process failed. If the task finished with no failures, you will get a message telling you the process has completed with no failures.
You can download the script for processStatus.py from this link: processStatus.py Downloading a File Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important to note that you should set the variable fileName to the name it has in the file metadata. First, the download's individual chunk metadata will be written in an array to downloadChunkData.json for reference. The script will then download the file chunk by chunk and write each chunk to a new local file with the same name as the 'name' listed in the file's metadata. You can download this script from this link: downloadFile.py Note: If a file already exists in the same folder as your script with the same name as the name value in the file's metadata, the local file will be overwritten with the file being downloaded from the server. Deleting a File You can delete the file contents of any file that the user has access to that exists on the Anaplan server. Note: This only removes private content. Default content and the import data source model object will remain. You can download this script from this link: deleteFile.py Standalone Requests Code and Their Required Headers In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will be leaving the content right of the Authorization: headers blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates). Note: requests.get will only generate a response body from the server, and no data will be locally saved unless written to a local file. Get Workspaces List requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization':}) Get Models List requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization':}) or requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization':}) Get Model Info requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization':}) Get Files/Imports/Exports/Actions/Processes List The get requests for files, imports, exports, actions, and processes are largely the same. Change files to imports, exports, actions, or processes to run each.
requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization':}) Get Chunk Data requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization':}) Post Chunk Count requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-type': 'application/json'}, json={fileMetaData}) Upload a Chunk of a File requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file}) Mark an upload complete requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData}) Upload a File in a Single Chunk requests.put('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file}) Run an Import/Export/Process The post requests for imports, exports, and processes are largely the same. Change imports to exports or processes to run each. requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'})) Run an Action requests.post('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', data={'localeName': 'en_US'}, headers={'Authorization': , 'Content-Type': 'application/json'}) Get Task list for an Import/Export/Action/Process The get requests for import, export, action, and process task lists are largely the same. Change imports to exports, actions, or processes to get each task list. requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization':}) Get Status for an Import/Export/Action/Process Task The get requests for import, export, action, and process task statuses are largely the same. Change imports to exports, actions, or processes to get each task status. Note: Only imports and processes will ever generate a failure dump. requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization':}) Download a File Note: You will need to get the chunk metadata for each chunk of a file you want to download. requests.get('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'}) Delete a File Note: This only removes private content. Default content and the import data source model object will remain. requests.delete('https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-type': 'application/json'}) Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information on SFDC user administration see the apiary entry for SFDC user administration.
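Tying a few of these calls together, the sketch below is a minimal, hypothetical end-to-end example: it kicks off an import and then polls its task status. The endpoints match the "Run an Import/Export/Process" and "Get Status" calls above, but the field names read from the response bodies ('taskId', 'taskState', 'IN_PROGRESS') are assumptions based on typical task payloads; check postImport.json and importStatus.json produced by the scripts above for the exact structure your model returns.
import base64
import json
import time
import requests

username = 'integration.user@example.com'   # placeholder: your Anaplan account
password = 'your-password'                   # placeholder
auth = 'Basic ' + base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('utf-8')
headers = {'Authorization': auth, 'Content-Type': 'application/json'}

wGuid, mGuid, importId = 'your-workspace-id', 'your-model-id', 'your-import-id'
base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}'

# Run the import
task = requests.post(f'{base}/imports/{importId}/tasks',
                     headers=headers,
                     data=json.dumps({'localeName': 'en_US'})).json()
task_id = task.get('taskId')                 # assumption: key name may differ

# Poll until the task leaves the in-progress state
while True:
    status = requests.get(f'{base}/imports/{importId}/tasks/{task_id}',
                          headers={'Authorization': auth}).json()
    if status.get('taskState') != 'IN_PROGRESS':   # assumption: key/value names
        break
    time.sleep(5)
print(json.dumps(status, indent=2))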
View full article
Introduction The new Anaplan APIs and integration connectors leverage Certificate Authority (CA)-issued certificates. These certificates can be obtained through your company's intermediate CA (typically issued by IT) or purchased from a trusted Certificate Authority. Anaplan clients leveraging REST API v2.0 use both basic authentication and CA certificate based authentication. Examples of these clients include Anaplan Connect 1.4, Informatica Anaplan Connector, and Mulesoft 2.0.1. If you are migrating your Anaplan Connector scripts from v1.3 to v1.4, your available options for authentication will be basic authentication or CA certificate based authentication. This article outlines steps to perform in preparation for CA certificate authentication. Steps to prepare for CA certificate authentication Obtain a certificate from a CA authority Convert the CA certificate to either a p12 or pfx file Import the CA certificate into Internet Explorer/Mozilla Firefox to convert to a p12/pfx file Export the CA certificate from Internet Explorer/Mozilla Firefox to convert to a p12/pfx file Optional: Install the Openssl tool Convert the p12/pfx file into a Java Keystore Manage CA certificates in Anaplan Tenant Administrator Validate CA certificate authentication via an Anaplan Connect 1.4 script. Obtain a certificate from a CA authority You can obtain a certificate from a CA by submitting a request, or by submitting a request with a certificate signing request (CSR) generated from your private key. Contact your IT or Security Operations organization to determine if your company already has an existing relationship with a CA or intermediate CA. If your organization has an existing relationship with a CA or intermediate CA, you can request that a client certificate be issued for your integration user. If your organization does not have an existing CA relationship, you should contact a valid CA to procure a client certificate. Convert CA certificate to either a p12 or pfx file Import CA certificate into IE/Firefox to convert to a p12/pfx file This section presents steps to import the CA certificate into Internet Explorer and Mozilla Firefox. The CA certificate will be exported in the next section to either a p12 or pfx format. CA certificates may have .crt or .cer as file extensions. Internet Explorer Within Internet Explorer, click the Settings icon and select Internet options. Navigate to the Content tab and then click on Certificates. Click Import to launch the Certificate Import Wizard. Click Browse to search for and select the CA Certificate file. This file may have a file extension of .crt or .cer. If a password was used when requesting the certificate, enter it in this screen. Ensure that the “Mark this key as exportable” option is selected and click Next. Select the certificate store in which to import the certificate and click Next. Review the settings and click Finish. The certificate should appear in the certificate store selected. Mozilla Firefox Within Firefox, select Options from the settings menu. In the Options window, click Privacy & Security from the navigation pane on the left. Scroll to the very bottom and click on the View Certificates… button. In the Certificate Manager, click the Import… button, select the certificate to convert, and click Open. If a password was provided when the certificate was requested, enter that password and click OK. The certificate should now show up in the Certificate Manager.
Export CA certificate from IE/Firefox to convert to a p12/pfx file This section presents steps to export the CA certificate from Internet Explorer (pfx) and Mozilla Firefox (p12). Internet Explorer (pfx) Select the certificate imported above and click the Export… button to initiate the Certificate Export Wizard. Select the option “Yes, export the private key” and click Next. Select the option for Personal Information Exchange - PKCS #12 (.PFX) and click Next. Create a password, enter it, and confirm it in the following screen. This password will be used later on in the process. Click Next to continue. Select a location to export the file and click Save. Verify the file location and click Next. Review the export settings and ensure that the Export Keys setting says “Yes”; if not, start the export over. If all looks good, click Next. A message will appear when the export is successful. Mozilla Firefox (p12) To export the certificate from Firefox, click the Backup… button in the Certificate Manager. Select a location and a name for the file. Ensure that the Save as type: is “PKCS12 Files (*.p12)”. Click the Save button to continue. Enter a password to be used later when exporting the public and private keys. Click the OK button to finish. Install openssl tool (Optional) If you haven't done so already, install the openssl tool for your operating system. A list of third-party binary distributions may be found on www.openssl.org. Examples in this article are shown for the Windows platform. Convert the p12/pfx file into a Java Keystore Execute the following to extract the public and private keys from the p12/pfx file exported above. In the commands listed below, the values that are customer specific are in Bold Italics. There is a screen shot at the end of this section that shows all of the commands run in sequence and how the passwords relate between the steps. Examples in this article assume the certificate's location is the working directory. If you are executing these commands from a different directory (ex: ...\openssl\bin), ensure you provide the absolute path to all the files. Export the public key The public key will be exported from the certificate (p12/pfx) using the openssl tool. The result is a .pem file (public_key.pem) that will be imported into Anaplan using Anaplan's Tenant Administrator client. NOTE: The command below will prompt for a password. This password was created in the export steps above. openssl pkcs12 -clcerts -nokeys -in ScottSmithExportedCert.pfx -out public_key.pem Edit the public_key.pem file Remove everything before -----BEGIN CERTIFICATE-----. Ensure that the emailAddress value is populated with the user that will run the integrations. Export the Private Key This command will prompt for a password. This password is the password created in the export above. It will then prompt for a new password for the Private Key. It will also ask to confirm that password. openssl pkcs12 -nocerts -in ScottSmithExportedCert.pfx -out private_key.pem Create P12 Bundle This command will prompt for the private key password from the step above. It will then prompt for a new password for the Bundle. It will also ask to confirm that password. openssl pkcs12 -export -in public_key.pem -inkey private_key.pem -out bundle.p12 -name Scott -CAfile public_key.pem -caname Scott In the command above, public_key.pem is the file that was created in the step "Export the Public Key".
Export the private key

This command will prompt for a password; this is the password created during the certificate export above. It will then prompt for a new password for the private key and ask you to confirm it.

openssl pkcs12 -nocerts -in ScottSmithExportedCert.pfx -out private_key.pem

Create the P12 bundle

This command will prompt for the private key password from the step above. It will then prompt for a new password for the bundle and ask you to confirm it.

openssl pkcs12 -export -in public_key.pem -inkey private_key.pem -out bundle.p12 -name Scott -CAfile public_key.pem -caname Scott

In the command above:
- public_key.pem is the file created in the step "Export the public key". This is the file that will be registered with Anaplan using Anaplan Tenant Administrator.
- private_key.pem is the file created in the step "Export the private key".
- bundle.p12 is the output file from this command, which will be used in the next step to create the Java keystore.
- Scott is the keystore alias.

Add to Java keystore (jks)

Using keytool (typically found in <Java8>/bin), create a .jks file. This file will be referenced in your Anaplan Connect 1.4 scripts for authentication. The command below will prompt for a new password for the entry into the keystore and ask you to confirm it. It will then prompt for the bundle password from the step above.

keytool -importkeystore -destkeystore my_keystore.jks -srckeystore bundle.p12 -srcstoretype PKCS12

In the command above:
- my_keystore.jks is the keystore file that will be referenced in your Anaplan Connect 1.4 scripts.
- bundle.p12 is the P12 bundle that was created in the last step.

Manage CA certificates in Anaplan Tenant Administrator

In this step, you will add the public_key.pem file to the list of certificates in Anaplan Tenant Administrator. This file was created and edited in the first two steps of the previous section. Log on to Anaplan Tenant Administrator, then navigate to Administration > Security > Certificates > Add Certificate.

Validate CA certificate authentication via an Anaplan Connect 1.4 script

Because you will be migrating to CA certificate-based authentication, you will need to upgrade Anaplan Connect and the associated scripts from v1.3 to v1.4. The Community article "Migrating from Anaplan Connect 1.3.x.x to Anaplan Connect 1.4" will guide you through the necessary steps. Follow the steps outlined in that article to edit and execute your Anaplan Connect 1.4 script. The examples provided at the end of the article (Windows and Linux) validate authentication to Anaplan using CA certificates and return the list of the user's workspaces in the tenant.
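Before plugging the keystore into your Anaplan Connect 1.4 script, it can be helpful to confirm that the private key entry was imported correctly. This is a minimal check using the file and alias names from the examples above (my_keystore.jks and Scott); keytool will prompt for the keystore password created in the "Add to Java keystore" step:

keytool -list -v -keystore my_keystore.jks

The output should include a PrivateKeyEntry for the alias used when the bundle was created (keytool may display the alias in lowercase). The keystore path, alias, and password are the values you will then reference in your Anaplan Connect 1.4 script, as described in the migration article.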
Allowing model users to export data out of an Anaplan model at a large scale (e.g., many end-user-run exports) is not a good practice. One approach is to create an "export model" in Anaplan specifically for exporting purposes. This export model has the same data set and selective access definitions as the main model, but none of the data entry or reporting dashboards; instead, it contains only dashboards with buttons that run specific exports. To ensure a good user experience, provide a hyperlink to the export model from a dashboard in the main model. For example, users start from their usual main model, see a link named "Exports," and click it. The link redirects them to the export model, where they see a set of predefined buttons that run exports. It is important to explain to the customer and model users that:
- Exports execute sequentially (first in, first served): users have to wait until previously executed exports are finished before they can run their own export.
- There will be data latency, as the export model will likely sync from the main model only once or twice a day.
- The export from the main model to the export model is a blocking operation and should ideally be run at times when it is least likely to disrupt operations. Users need to understand the schedule and plan their exports accordingly.