Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.

Getting Started

Python 3 offers many options for interacting with an API. This article explains how you can use Python 3 to automate many of the requests that are available in our apiary, which can be found at https://anaplan.docs.apiary.io/#.

This article assumes you have the requests (version 2.18.4), base64, and json modules installed, as well as Python version 3.6.4. Please make sure you are installing these modules for Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites:

- Python (if you are using a Python version older or newer than 3.6.4, or a requests version older or newer than 2.18.4, we cannot guarantee the validity of the article)
- Requests
- Base Converter
- JSON (Note: install instructions are not at this site, but installation is the same as for any other Python module)

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Authentication

To start, let's talk about authentication. Every script that connects to our API is required to supply valid authentication. There are two ways to authenticate a Python script that I will be covering:

- Certificate authentication
- Basic encoded authentication

Certificate authentication requires a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, you will need openssl to convert it into a format usable with the API. Convert the certificate to PEM format by running the following in your terminal:

openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem

If you are using certificate authentication, the scripts in this article assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following in your terminal:

openssl x509 -text -in certtest.pem

To be used with the API, the PEM certificate string needs to be converted to base64, but the scripts we will be covering take care of that for you, so I won't cover it in this section.

To use basic authentication, you will need to know the Anaplan account email being used, as well as its password. All scripts in this article have the following code near the top:

# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your PEM-converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
    f'{username}:{cert}').encode('utf-8')).decode('utf-8'))

# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
#        ).encode('utf-8')).decode('utf-8'))

Regardless of authentication method, you will need to set the username variable to the Anaplan account email being used.
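To see how the resulting user string is used, here is a minimal sketch that sends it as the Authorization header on a request to the workspaces endpoint (the same endpoint listed in the standalone requests section at the end of this article). The header construction mirrors the snippet above; the email and password values are placeholders you would replace with your own.

import base64
import requests

username = 'user@example.com'   # placeholder: your Anaplan account email
password = 'your_password'      # placeholder: basic auth shown; cert auth builds the header the same way

# Build the Authorization header value exactly as the scripts do
user = 'Basic ' + base64.b64encode(
    f'{username}:{password}'.encode('utf-8')).decode('utf-8')

# Quick connectivity test: list the workspaces the account can access
response = requests.get('https://api.anaplan.com/1/3/workspaces/',
                        headers={'Authorization': user})
print(response.status_code)
print(response.text)

A 200 response confirms the credentials are valid; a 401 usually means the header value or the account details are wrong.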
If you are using a certificate to authenticate, you will need to have your PEM-converted certificate in the same folder as, or a child folder of, the one you are running the scripts from. If your certificate is in a child folder, please remember to include the file path when replacing cert.pem (e.g. cert/cert.pem). You can then remove the password line, its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable.

Getting the Information Needed for Each Script

Most of the scripts covered in this article require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for its respective objects is titled get____.py. For example, if you want to get your files' metadata, you'll run getFiles.py, which writes the file metadata for each file in the selected model and workspace, as an array, to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts.

TIP: If you open the raw data tab of the JSON file, it is much easier to copy the whole set of metadata.

The following are the links to download each get____.py script. Each get script uses the requests.get method to send a GET request to the proper API endpoint. A minimal sketch of this pattern appears after the list of scripts below.

- getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to.
- getModels.py: Writes an array to models.json of either all the models the user has access to (if wGuid is left blank), or all the models the user has access to in a selected workspace (if a workspace ID is inserted).
- getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model.
- getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to the Apiary for more information on private vs. default files. Generally, it is recommended that all scripts be run via the same user account.)
- getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace.
- getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace.
- getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace.
- getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace.
- getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace.
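As referenced above, here is a minimal sketch of the pattern the get____.py scripts follow, using the files endpoint as the example. The endpoint and headers come from the standalone requests section at the end of this article; the wGuid and mGuid values are placeholders you would copy from workspaces.json and models.json.

import json
import requests

user = ''          # Authorization header value built as shown in the Authentication section
wGuid = ''         # workspace ID from workspaces.json
mGuid = ''         # model ID from models.json

# Request the file metadata for the selected model and workspace
response = requests.get(
    f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files',
    headers={'Authorization': user})

# Write the response body to files.json for later reference
with open('files.json', 'w') as f:
    json.dump(response.json(), f, indent=4)

Swapping files for imports, exports, actions, or processes (in both the URL and the output filename) gives you the equivalent of the other get scripts.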
Uploads

A file can be uploaded to the Anaplan API endpoint either in multiple chunks or as a single chunk. Per our apiary: "We recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50MB. This creates a Private File."

Note: To upload a file using the API, that file must already exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API.

Multiple Chunk Uploads

The reference script is built so that if the script is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will start uploading the file again, beginning at the last successful chunk. For this to work, the file must first be split using a standard naming convention, using the terminal command below.

split -b [numberofBytes] [path and filename] [prefix for output files]

You can store the file in any location as long as you include the proper file path when setting the chunkFilePrefix (e.g. chunkFilePrefix = "upload_chunks/chunk-" will look for file chunks named chunk-aa, chunk-ab, chunk-ac, etc., up to chunk-zz in the folder script_origin/upload_chunks/; it is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py. A simplified sketch of the upload loop appears at the end of this Uploads section.

Note: The assumed naming convention will only be standard if the file was split using Terminal, and it will not necessarily hold if the file was split using another method in Windows. If you are using Windows, you will need to either create a way to standardize the naming of the chunks alphabetically, {chunkFilePrefix}(aa - zz), or run the script as detailed in the Apiary.

Note: The chunkUpload.py script keeps track of the last successful chunk by writing the name of the last successful chunk to a .txt file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it if you want to start the upload from the first chunk.

Single Chunk Upload

The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame. If the upload fails, it will have to start again from the beginning. If your file has a different name than its version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single PUT request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py
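Here is the simplified sketch of the multiple chunk upload loop mentioned above. It is not the chunkUpload.py script itself, just an outline of the resume-from-last-chunk behaviour it describes; the chunk endpoint and octet-stream header come from the standalone requests section below, and the chunk file names assume the split convention shown earlier. The full script also sets the chunk count before uploading and marks the upload complete afterwards, which this sketch omits for brevity.

import os
import string
import requests

user = ''            # Authorization header value (see Authentication)
wGuid = ''           # workspace ID
mGuid = ''           # model ID
fileID = ''          # file ID from files.json
chunkFilePrefix = 'upload_chunks/chunk-'

base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}'

# Build the list of chunk files produced by split (chunk-aa, chunk-ab, ...)
suffixes = [a + b for a in string.ascii_lowercase for b in string.ascii_lowercase]
chunks = [chunkFilePrefix + s for s in suffixes if os.path.exists(chunkFilePrefix + s)]

# Resume support: skip everything up to and including the last successful chunk
start = 0
if os.path.exists('chunkStop.txt'):
    last_done = open('chunkStop.txt').read().strip()
    start = chunks.index(last_done) + 1

uploaded_all = True
for i, chunk_path in enumerate(chunks[start:], start=start):
    with open(chunk_path, 'rb') as f:
        r = requests.put(f'{base}/chunks/{i}',
                         headers={'Authorization': user,
                                  'Content-Type': 'application/octet-stream'},
                         data=f.read())
    if not r.ok:
        uploaded_all = False
        break                      # rerun the script later to resume from this chunk
    with open('chunkStop.txt', 'w') as log:
        log.write(chunk_path)      # record the last successful chunk

if uploaded_all and os.path.exists('chunkStop.txt'):
    os.remove('chunkStop.txt')     # all chunks uploaded; clear the resume marker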
Imports

The import.py script sends a POST request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information. You can download the script for running an import from this link: Import.py

Once the import is finished, the script writes the metadata for the import task, as an array, to postImport.json, which you can use to verify which task you want to view the status of when running the importStatus.py script. The importStatus.py script returns a list of all tasks associated with the selected importID and their respective list index. If you want to check the status of the last run import, make sure you check postImport.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in the file importStatus.json. If the task is still in progress, it prints the task status and progress. If the task finished and a failure dump is available, it writes the failure dump in comma-delimited format to importDump.csv, which can be used to review the cause of the failure. If the task finished with no failures, you will get a message telling you the import has completed with no failures. You can download the script for importStatus.py from this link: importStatus.py

Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist, importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone.

Exports

The export.py script sends a POST request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information. You can download the script for running an export from this link: Export.py

Once the export is finished, the script writes the metadata for the export task, as an array, to postExport.json, which you can use to verify which task you want to view the status of when running the exportStatus.py script. The exportStatus.py script returns a list of all tasks associated with the selected exportID and their respective list index. If you want to check the status of the last run export, make sure you check postExport.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in the file exportStatus.json. If the task is still in progress, it prints the task status and progress. It is important to note that no failure dump will be generated if the export fails. You can download the script for exportStatus.py from this link: exportStatus.py

Actions

The action.py script sends a POST request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information. You can download the script for running an action from this link: actionStatus.py

Processes

The process.py script sends a POST request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. See Getting the Information Needed for Each Script for more information. You can download the script for running a process from this link: Process.py

Once the process is finished, the script writes the metadata for the process task, as an array, to postProcess.json, which you can use to verify which task you want to view the status of when running the processStatus.py script. The processStatus.py script returns a list of all tasks associated with the selected processID and their respective list index. If you want to check the status of the last run process, make sure you check postProcess.json to verify you have the correct taskID. Enter the index for the task, and the script writes the task status to an array in the file processStatus.json. If the task is still in progress, it prints the task status and progress. If the task finished and a failure dump is available, it writes the failure dump in comma-delimited format to processDump.csv, which can be used to review the cause of the failure. Note that no failure dump is generated for the process itself, only if one of the imports in the process failed. If the task finished with no failures, you will get a message telling you the process has completed with no failures. You can download the script for processStatus.py from this link: processStatus.py
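The status scripts above all follow the same pattern: look up the task list for an action, then poll the task until it finishes. Here is a minimal sketch of that pattern for an import task, using the task endpoints from the standalone requests section below. The exact field names in the response body depend on the API version, so the checks marked as assumptions should be confirmed against the Apiary.

import json
import time
import requests

user = ''       # Authorization header value (see Authentication)
wGuid = ''      # workspace ID
mGuid = ''      # model ID
importID = ''   # import ID from imports.json

base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks'

# List the tasks for this import and take the most recent one
tasks = requests.get(base, headers={'Authorization': user}).json()
taskID = tasks[-1]['taskId']    # assumption: each task entry exposes a taskId field

# Poll the task until the server reports it is no longer in progress
while True:
    status = requests.get(f'{base}/{taskID}',
                          headers={'Authorization': user}).json()
    print(json.dumps(status, indent=4))
    if status.get('taskState') != 'IN_PROGRESS':   # assumption: adjust the field name to match the Apiary
        break
    time.sleep(5)

Replace imports with exports or processes in the URL to check those task types in the same way.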
Downloading a File

Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important that you set the fileName variable to the name the file has in the file metadata. First, the individual chunk metadata for the download is written, as an array, to downloadChunkData.json for reference. The script then downloads the file chunk by chunk and writes each chunk to a new local file with the same name as the 'name' listed in the file metadata. You can download this script from this link: downloadFile.py

Note: If a file already exists in the same folder as your script with the same name as the name value in the file metadata, the local file will be overwritten by the file being downloaded from the server.

Deleting a File

You can delete the contents of any file that the user has access to on the Anaplan server. Note: This only removes private content. Default content and the import data source model object will remain. You can download this script from this link: deleteFile.py
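For reference, here is a minimal sketch of the chunk-by-chunk download that downloadFile.py performs, built from the chunk endpoints listed in the standalone requests section below. The fileName value is a placeholder standing in for the name field from your file metadata.

import requests

user = ''       # Authorization header value (see Authentication)
wGuid = ''      # workspace ID
mGuid = ''      # model ID
fileID = ''     # file ID from files.json
fileName = 'export.csv'   # placeholder: the 'name' value from the file metadata

base = f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}'

# Fetch the chunk metadata so we know how many chunks to request
chunks = requests.get(f'{base}/chunks',
                      headers={'Authorization': user}).json()

# Download each chunk and append it to the local file
with open(fileName, 'wb') as out:
    for chunk in chunks:
        data = requests.get(f"{base}/chunks/{chunk['id']}",   # assumption: chunk metadata exposes an id field
                            headers={'Authorization': user,
                                     'Accept': 'application/octet-stream'})
        out.write(data.content)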
Standalone Requests Code and Their Required Headers

In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will leave the content to the right of each Authorization: header blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates).

Note: requests.get will only generate a response body from the server; no data will be saved locally unless it is written to a local file.

Get Workspaces List

requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization': })

Get Models List

requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization': })

or

requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization': })

Get Model Info

requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization': })

Get Files/Imports/Exports/Actions/Processes List

The GET requests for files, imports, exports, actions, and processes are largely the same. Change files to imports, exports, actions, or processes to run each.

requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization': })

Get Chunk Data

requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization': })

Post Chunk Count

requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a Chunk of a File

requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file})

Mark an Upload Complete

requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})

Upload a File in a Single Chunk

requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file})

Run an Import/Export/Process

The POST requests for imports, exports, and processes are largely the same. Change imports to exports or processes to run each.

requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))

Run an Action

requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', data=json.dumps({'localeName': 'en_US'}), headers={'Authorization': , 'Content-Type': 'application/json'})

Get Task List for an Import/Export/Action/Process

The GET requests for import, export, action, and process task lists are largely the same. Change imports to exports, actions, or processes to get each task list.

requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization': })

Get Status for an Import/Export/Action/Process Task

The GET requests for import, export, action, and process task statuses are largely the same. Change imports to exports, actions, or processes to get each task status. Note: Only imports and processes will ever generate a failure dump.

requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization': })

Download a File

Note: You will need to get the chunk metadata for each chunk of a file you want to download.

requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'})

Delete a File

Note: This only removes private content. Default content and the import data source model object will remain.

requests.delete(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-type': 'application/json'})

Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information, see the apiary entry for SFDC user administration.
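Finally, every snippet above repeats the same Authorization header. One small convenience, not part of the original scripts but worth noting, is to wrap the calls in a requests.Session so the header is set once and inherited by every request:

import requests

user = ''   # Authorization header value (see Authentication)

session = requests.Session()
session.headers.update({'Authorization': user})

# Subsequent calls inherit the Authorization header automatically
workspaces = session.get('https://api.anaplan.com/1/3/workspaces/').json()
print(workspaces)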
You can interact with the data in your models using Anaplan's RESTful API. This enables you to securely import and export data, as well as run actions, from any programmatic client you choose. The API can be leveraged in any custom integration, allowing a wide range of integration solutions to be implemented.

Completing an integration using the Anaplan API is a technical process that requires significant work by an individual with programming experience. Visit the links below to learn more:

- API Documentation
- Anaplan API Guide

You can also view demonstration videos to understand how to implement the APIs in your custom integration client. The videos below give step-by-step guides to sequencing API calls: exporting data from Anaplan, importing data into Anaplan, and running delete actions and Anaplan processes.

API sequence for uploading a file to Anaplan and running an import action:

API sequence for running an export action and downloading a file from Anaplan:

API sequence for running an Anaplan process and a delete action:
Introduction

Data Integration is a set of processes that bring data from disparate sources into Anaplan models. These processes can include activities that help you understand the data (data profiling), cleanse and standardize data (data quality), and transform and load data (ETL). Anaplan offers the following data integration options:

- Manual import
- Anaplan Connect
- Extract, Transform & Load (ETL)
- REST API

The Anaplan learning center offers several on-demand courses on Anaplan's data integration options, including:

- Data Integration
- Anaplan Data Integration Basics (303)
- Anaplan Connect (301)
- Hyperconnect

This article presents step-by-step instructions for different integration tasks that can be performed using Anaplan integration APIs. These tasks include:

- Import data into Anaplan
- Export data from Anaplan
- Run a process
- Download files
- Delete files

Setup

Install & Configure Postman

Download the latest Postman application for your platform (e.g. Mac, Windows, Linux) from https://www.getpostman.com/apps. Instructions to install the Postman app for your platform may be found here.

Postman account: Signing up for a Postman account is optional. However, having an account gives you the additional benefits of backing up history, collections, environments, and header presets (e.g. authorization credentials). Instructions for creating a Postman account may be accessed here.

Download Files

You may follow the instructions provided in this article against your own instance of the Anaplan platform. You will need to download a set of files for these exercises.

Customers.csv: Download the .csv file to a directory on your workstation. This file consists of a list of customers you will import into a list using Anaplan integration APIs.

Anaplan Community REST API Solution.txt: This is an export (JSON) from Postman that contains the solution to the exercises outlined in this article. You may choose to import this file into Postman to review the solution. Although the file extension is .txt, it is a JSON file that can be imported into Postman.

Anaplan Setup

The Anaplan RESTful API Import call allows you to bring data into Anaplan. This is done by using the POST HTTP verb to call an import, which means an import action must already exist in Anaplan prior to the API call. Initially, you will import Customers.csv into Anaplan using the application; subsequent imports into the list will be carried out via API calls.

Create a new model named Data Integration API.

Import Customers.csv

Create a list named Customers. Using the Anaplan application, import Customers.csv into the Customers list: set the File Options, map each column to a property in the list, and run the import. 31 records should be imported into the list.

Create an Export action

In this article, you will also learn how to export data from Anaplan using APIs. The Anaplan API Export call runs an export action that was previously created. Therefore, create an export of the Customers list and save the export definition. This will create an export action (e.g. Grid – Customers.csv). Note: Set the file type to .csv in the export action. You may choose to rename the export action under Settings ==> Actions ==> Exports.

Create a Process

Along with Import and Export, you will also learn how to leverage APIs to call an Anaplan process. Create a process that calls the Import (e.g. Import Customers from Customers.csv) first, followed by the Export (e.g. Grid – Customers.csv). Name the process "Import & Export a List".
Anaplan Integration API Fundamentals

The Anaplan Integration APIs (v1.3) are RESTful APIs that allow requests to be made over HTTPS using the GET, PUT, POST, and DELETE verbs. Using these APIs, you can perform integration tasks such as:

- Import data into a module/list
- Export data from a module/list
- Upload files for import
- Run an Anaplan process
- Download files that have been uploaded, or files that were created during an export
- Delete from a list using a selection

Endpoints enable you to obtain information regarding workspaces, models, imports, exports, processes, and so on. Many endpoints contain a chain of parameters.

Example

Suppose we want to get a list of models in a workspace. To get that list, we first need to select the workspace the models belong to.

Obtain the base URI for the Anaplan API. The base URI for the Anaplan Integration API is https://api.anaplan.com

Select the version of the API that will be used in the API calls. This article is based on version 1.3, so the updated base URI is https://api.anaplan.com/1/3

Retrieve a list of workspaces you have access to: GET <base URI>/workspaces, where <base URI> is https://api.anaplan.com/1/3

GET https://api.anaplan.com/1/3/workspaces

The above GET call returns a guid and name for each workspace the user has access to:

{
    "guid": "8a81b09d5e8c6f27015ece3402487d33",
    "name": "Pavan Marpaka"
}

Retrieve a list of models in the selected workspace by providing {guid} as a parameter value:

https://api.anaplan.com/1/3/workspaces/{guid}/models
https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models

Chaining Parameters

Many endpoints contain a set of parameters that can be chained together in a request. For example, to get a list of import actions we can chain together the workspaceID and modelID as parameters in a GET request. The request to get a list of import actions looks like this:

https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports

The following sequence of requests is needed to get a list of import actions in a selected model:

1. GET a list of workspaces the user has access to: https://api.anaplan.com/1/3/workspaces
2. Select a workspaceID (guid) from the result.
3. GET a list of models in the workspace, providing the workspaceID as a parameter value: https://api.anaplan.com/1/3/workspaces/{workspaceID}/models
4. Select a modelID from the result.
5. GET a list of imports from the model in the workspace: https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports

Formats

The format for most requests and responses is application/json. The exceptions are uploading files in a single chunk or multiple chunks, and getting the data in a chunk; these requests use the application/octet-stream format. The format is specified in the header of an API request, and it is also specified in the header of the response.
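The rest of this article drives these endpoints from Postman, but as a quick illustration of the chaining described above, here is a hedged Python sketch of the same three GET calls. The guid and id field names come from the example responses above; the credentials are placeholders.

import requests

auth = ('user@example.com', 'password')   # placeholder Anaplan credentials (Basic Auth)
base = 'https://api.anaplan.com/1/3'

# Step 1: list workspaces and pick one
workspaces = requests.get(f'{base}/workspaces', auth=auth).json()
workspaceID = workspaces[0]['guid']

# Step 2: list models in that workspace and pick one
models = requests.get(f'{base}/workspaces/{workspaceID}/models', auth=auth).json()
modelID = models[0]['id']

# Step 3: list the import actions in that model
imports = requests.get(f'{base}/workspaces/{workspaceID}/models/{modelID}/imports', auth=auth).json()
print(imports)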
Data Integration with Anaplan APIs & Postman

Background

The next few sections provide step-by-step instructions on how to perform different data integration tasks via Anaplan integration API requests. You will perform the following tasks using Anaplan APIs:

- Upload file(s) to Anaplan
- Import data into a list
- Export data from a list
- Download a file that has been uploaded or exported
- Run an Anaplan process
- Delete uploaded file(s)

The Postman application, an HTTP client for making RESTful API calls, will be used to perform these integration tasks. You should have installed and configured Postman on your workstation using the instructions provided at the beginning of this article. You may follow the steps outlined in the next few sections, or you may import the Postman collection (JSON file) provided with this article.

Navigating the Postman UI

This section presents the basics of the Postman user interface (UI). You will learn how to perform the simple tasks required to make API calls:

- Create a new collection
- Add a folder
- Add a request
- Submit a request
- Select a request method (GET, POST, PUT, DELETE)
- Specify a resource URI
- Specify Authorization, Headers, and Body (raw, binary)

You will perform these steps repeatedly for each integration task.

Create a new collection

From the orange New drop-down, select "Collection". Provide a name for the collection (e.g. Data Integration API) and click Create.

Add folders

Create the following folders in the collection: Authentication, Upload, Import, Export, Download Files, Process, Delete Files.

Add a request

You don't need to perform this step right now; the following steps outline how a request can be added to a folder, and you will use them each time a new request is created. Select the folder where you want to add a new request, open its menu, and select Add Request. Provide a request name and click Save.

Submit a request

- Select a request method (GET, PUT, POST, DELETE).
- Provide a resource URI (e.g. https://api.anaplan.com/1/3/workspaces).
- Click on Authorization, select "Basic Auth" as the authorization type, and provide your Anaplan credentials (username and password).
- Provide the necessary headers. Common headers include Authorization (pre-populated from the Authorization tab) and Content-Type.
- Some requests may also require a Body; the information for the Body is available in the API documentation.
- Click Send.

Import data into a List using Anaplan APIs

One common data integration task is bringing data into Anaplan. A popular way to do this is via the Import feature in the Anaplan application. Once you have imported through the application, an import action is created, and that import action can be executed via an API request. Earlier, you imported Customers.csv into a list. In this section, you will use the Anaplan Integration APIs to import the customer data into that list. The following sequence of requests will be made to import data into the list.

Get a list of workspaces

- In Postman, under the folder "Authentication", create a new request and label it "GET List of Workspaces".
- Select request method GET.
- Type https://api.anaplan.com/1/3/workspaces for the resource URI.
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: guid & name. "guid" is the workspaceID. A sample result is shown below; the workspaceID for the workspace "Pavan Marpaka" is 8a81b09d5e8c6f27015ece3402487d33. This workspaceID will be passed as an input parameter in the next request, GET List of Models in a Workspace.

Get a list of Models in a workspace

- In Postman, under the folder "Authentication", create a new request and label it "GET List of Models in a Workspace".
- Select request method GET.
- The input parameter for this request is the workspaceID (8a81b09d5e8c6f27015ece3402487d33) retrieved in the last request.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: activeState, id & name. "id" is the modelID, which will be passed as an input parameter in subsequent requests. In the result shown below (your result may vary), "Top 15 DI API" is the model name and 92269C17A8404B7A90C536F4642E93DE is the modelID.

Get a list of files

- In Postman, under the folder "Upload", create a new request and label it "GET List of Files and FileID".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved in the previous requests.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: the id & name of the files that were either previously uploaded or exported. In the result below (your result may vary), the fileID is 113000000001. This fileID will be passed as an input parameter in the next request (PUT), which uploads the file Customers.csv.

Upload a file

- In Postman, under the folder "Upload", create a new request and label it "Upload File".
- Select request method PUT.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileID (113000000001) retrieved in the previous requests.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileID} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/113000000001
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/octet-stream.
- Click on the "Body" tab, select the "binary" radio button, and click "Choose Files" to select the Customers.csv file you downloaded earlier.
- Click Send.

The response to this request should be:
Status: 204 No Content. This is an expected response; it just means the request was successful, but the response body is empty.

Get a list of Import actions in a model

- In Postman, under the folder "Import", create a new request and label it "GET a list of Import Actions".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. (Note: Your workspaceID and modelID may be different.)
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: "id" is the importID (112000000001). This value will be passed as an input parameter to a POST request in the next step. The POST request will call an import action that imports the data from the uploaded Customers.csv into the list.

Call an import action

- In Postman, under the folder "Import", create a new request and label it "Call an Import Action".
- Select request method POST.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and importID (112000000001) retrieved in the previous requests. (Note: Your workspaceID, modelID, and importID may be different.)
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports/{importID}/tasks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports/112000000001/tasks
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click on the "Body" tab, select "raw", and type the following:

{
  "localeName": "en_US"
}

- Click Send.

The response to this request should be:
Status: 200 OK
Body: the "taskId" for the import is returned as a JSON object. This taskId can be used to check the status of the import.

{
    "taskId": "2D88EBAA093B4D4C9603DD9278521EBC"
}

Check the status of an import call

- In Postman, under the folder "Import", create a new request and label it "Check Status of Import Call".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), importID (112000000001), and taskId (2D88EBAA093B4D4C9603DD9278521EBC) retrieved in the previous requests. (Note: Your workspaceID, modelID, importID, and taskId may be different.)
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/imports/{importID}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/imports/112000000001/tasks/2D88EBAA093B4D4C9603DD9278521EBC
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Accept, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
The response should include a "Complete" status, the number of records, and a value of "true" for "successful".

Validate the import in Anaplan

In the Anaplan application, validate that the Customers list now contains the imported list of customers.
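For readers who prefer code to Postman, the same upload-import-status sequence can be expressed with a few Python requests calls. This is only a sketch: the workspace, model, import, and file IDs below are the example values from the steps above, and you would replace them with your own.

import requests

auth = ('user@example.com', 'password')   # placeholder Anaplan credentials
workspaceID = '8a81b09d5e8c6f27015ece3402487d33'
modelID = '92269C17A8404B7A90C536F4642E93DE'
importID = '112000000001'
fileID = '113000000001'
base = f'https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}'

# Upload the source file as a single chunk (equivalent of the PUT request above)
with open('Customers.csv', 'rb') as f:
    requests.put(f'{base}/files/{fileID}',
                 headers={'Content-Type': 'application/octet-stream'},
                 data=f.read(), auth=auth)

# Run the import action
task = requests.post(f'{base}/imports/{importID}/tasks',
                     headers={'Content-Type': 'application/json'},
                     json={'localeName': 'en_US'}, auth=auth).json()

# Check the task status
status = requests.get(f"{base}/imports/{importID}/tasks/{task['taskId']}",
                      headers={'Accept': 'application/json'}, auth=auth)
print(status.json())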
Export data using Anaplan APIs

An export definition can be saved for later use; saved export definitions can be viewed under Settings > Actions > Exports. Earlier, in the Setup section, you exported the Customers list and saved the export definition, which should have created an export action (e.g. Grid – Customers.csv). In this section, we will use Anaplan APIs to execute that export action. The following sequence of requests will be made to export data.

Get a list of Export Definitions

- In Postman, under the folder "Export", create a new request and label it "Get a list of Export Definitions".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. Refer to the results of the requests under the "Authentication" folder to obtain your workspaceID and modelID.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: the id & name of each export action.

Run the export

- In Postman, under the folder "Export", create a new request and label it "Run the export".
- Select request method POST.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and exportId (116000000001) retrieved in the previous request.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports/{exportId}/tasks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports/116000000001/tasks
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click on the "Body" tab, select the "raw" radio button, and type the following:

{
  "localeName": "en_US"
}

- Click Send.

The response to this request should be:
Status: 200 OK
Body: the response returns a taskId, which can be used to determine the status of the export.

{
    "taskId": "29B4617C3D8646018B269F428AC396A3"
}

Get the status of an export task

- In Postman, under the folder "Export", create a new request and label it "Get status of an export task".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), exportId (116000000001), and taskId (29B4617C3D8646018B269F428AC396A3) retrieved in the previous requests. (Note: Your workspaceID, modelID, exportId, and taskId may be different.)
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/exports/{exportId}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/exports/116000000001/tasks/29B4617C3D8646018B269F428AC396A3
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: the task status.
Download a File using Anaplan APIs

Files that have been previously uploaded or exported can be downloaded using the Anaplan API. In the previous section, you exported the list to a .csv file via APIs; in this section, you will use APIs to download that exported file. The following sequence of requests will be made to download the file.

Get a list of files

- In Postman, under the folder "Download Files", create a new request and label it "Get a list of files".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. Refer to the results of the requests under the "Authentication" folder to obtain your workspaceID and modelID; your values may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: information about the available files, in JSON format. "id" is the fileId, which will be passed as an input parameter in the next request to download the file.

Get the chunkID and name of a file

- In Postman, under the folder "Download Files", create a new request and label it "Get chunkID and Name of a file".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. Your workspaceId, modelId, and fileId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}/chunks for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001/chunks
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Accept, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: the chunkID and chunk name, in JSON format.

Get a chunk of data

- In Postman, under the folder "Download Files", create a new request and label it "Get a chunk of data".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. Your workspaceId, modelId, and fileId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}/chunks/{chunkID} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001/chunks/0
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Accept, application/octet-stream.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: the chunk's data, in csv format.

Repeat

Repeat the above step for each chunkID returned from the "Get chunkID and Name" API call.

Concatenate the chunks into a single file

After collecting the data from all the chunks, concatenate the chunks into a single output file.

CAUTION: If you would like to download the file in a single chunk, DO NOT make the following API call. It is NOT supported by Anaplan and may result in performance issues. Best practice for large files is to download the files in chunks using the steps described above.

GET https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId}
Delete a File using Anaplan APIs

Files that have been previously uploaded or exported can be deleted using the Anaplan API. In previous sections, you uploaded a file to Anaplan for import and exported a list to a .csv file via APIs. In this section, you will use APIs to delete the exported file.

- In Postman, under the folder "Delete Files", create a new request and label it "Delete an export file".
- Select request method DELETE.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and fileId (116000000001) retrieved earlier. Your workspaceId, modelId, and fileId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/files/{fileId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/files/116000000001
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 204 No Content. This is an expected response; it just means the request was successful, but the response body is empty.

Run a Process using Anaplan APIs

A process is a sequence of actions; actions such as imports and exports can be included in a process. In the Setup section, you created a process called "Import & Export a List". In this section, we will execute this process using Anaplan APIs. The following sequence of requests will be made to execute a process.

Get a list of Processes in a model

- In Postman, under the folder "Process", create a new request and label it "Get a list of Processes in a model".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33) and modelID (92269C17A8404B7A90C536F4642E93DE) retrieved earlier. Refer to the results of the requests under the "Authentication" folder to obtain your workspaceID and modelID; your values may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK
Body: the processId and name of each process.

Run a Process

- In Postman, under the folder "Process", create a new request and label it "Run a Process".
- Select request method POST.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), and processId (118000000001) retrieved earlier. Your workspaceId, modelId, and processId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes/{processId}/tasks for the resource URI.
- Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes/118000000001/tasks
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click on the "Body" tab, select the "raw" radio button, and type the following:

{
  "localeName": "en_US"
}

- Click Send.

The response to this request should be:
Status: 200 OK
Body: the response returns a taskId for the executed process. This taskId can be used to request the status of the process execution.

{
    "taskId": "1573150F0B3A4F9D90676E777FFFB7C1"
}

Get the status of a process task

- In Postman, under the folder "Process", create a new request and label it "Get status of a process".
- Select request method GET.
- The input parameters for this request are the workspaceID (8a81b09d5e8c6f27015ece3402487d33), modelID (92269C17A8404B7A90C536F4642E93DE), processId (118000000001), and taskId (1573150F0B3A4F9D90676E777FFFB7C1) retrieved earlier. Your workspaceId, modelId, processId, and taskId may be different.
- Type https://api.anaplan.com/1/3/workspaces/{workspaceID}/models/{modelID}/processes/{processId}/tasks/{taskId} for the resource URI. Example: https://api.anaplan.com/1/3/workspaces/8a81b09d5e8c6f27015ece3402487d33/models/92269C17A8404B7A90C536F4642E93DE/processes/118000000001/tasks/1573150F0B3A4F9D90676E777FFFB7C1
- Under the "Authorization" tab, select Basic Auth and provide your Anaplan credentials.
- Click on the "Headers" tab and create the key, value pair Content-Type, application/json.
- Click Send.

The response to this request should be:
Status: 200 OK

Conclusion

In this article, you learned the fundamentals of the Anaplan integration APIs and their structure. You were also given step-by-step instructions on how to call Anaplan REST APIs to perform various data integration tasks. Attached to this article is an export of the Postman collection in .json format. If you choose to, you may import it into your Postman environment as the solution to the exercises described in this article. You will need to modify various variables (e.g. username/password) and endpoints specific to your environment for the solution to run successfully.
I recently posted a Python library for version 1.3 of our API. With the GA announcement of API 2.0, I'm sharing a new library that works with these endpoints. Like the previous library, it supports certificate authentication, but it requires the private key in a particular format (documented in the code, and below). I'm also pleased to announce that the use of a Java keystore is now supported.

Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.

This library is a work in progress, and will be updated with new features once they have been tested.

Getting Started

The attached Python library serves as a wrapper for interacting with the Anaplan API. This article explains how you can use the library to automate many of the requests that are available in our Apiary, which can be found at https://anaplanbulkapi20.docs.apiary.io/#.

This article assumes you have the requests and M2Crypto modules installed, as well as Python 3.7. Please make sure you are installing these modules for Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites:

- Python (if you are using a Python version older or newer than 3.7, we cannot guarantee the validity of the article)
- Requests
- M2Crypto

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.

Gathering the Necessary Information

In order to use this library, the following information is required:

- Anaplan model ID
- Anaplan workspace ID
- Anaplan action ID
- CA certificate key-pair (private key and public certificate), or username and password

There are two ways to obtain the model and workspace IDs: while the model is open, go to Help > About, or select the workspace and model IDs from the URL.

Authentication

Every API request is required to supply valid authentication. There are two ways to authenticate:

- Certificate authentication
- Basic authentication

For full details about CA certificates, please refer to our Anapedia article. Basic authentication uses your Anaplan username and password.

To create a connection with this library, define the authentication type and details, and the Anaplan workspace and model IDs:

Certificate files:

conn = AnaplanConnection(anaplan.generate_authorization("Certificate", "<path to private key>", "<path to public certificate>"), "<workspace ID>", "<model ID>")

Basic:

conn = AnaplanConnection(anaplan.generate_authorization("Basic", "<Anaplan username>", "<Anaplan password>"), "<workspace ID>", "<model ID>")

Java keystore:

from anaplan_auth import get_keystore_pair

key_pair = get_keystore_pair('/Users/jessewilson/Documents/Certificates/my_keystore.jks', '<passphrase>', '<key alias>', '<key passphrase>')
privKey = key_pair[0]
pubCert = key_pair[1]

#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

Note: In the above code, you must import the get_keystore_pair method from the anaplan_auth module in order to pull the private key and public certificate details from the keystore.

Getting Anaplan Resource Information

You can use this library to get the necessary file or action IDs.
This library builds a Python key-value dictionary, which you can search to obtain the desired information.

Example:

list_of_files = anaplan.get_list(conn, "files")
files_dict = anaplan_resource_dictionary.build_id_dict(list_of_files)

This code builds a dictionary with the file name as the key. The following code returns the ID of the file:

users_file_id = anaplan_resource_dictionary.get_id(files_dict, "file name")
print(users_file_id)

To build a dictionary of other resources, replace "files" with the desired resource: actions, exports, imports, or processes. You can use this functionality to refer to objects (workspace, model, action, file) by name, rather than by ID.

Example:

#Fetch the name of the process to run
process = input("Enter name of process to run: ")

start = datetime.utcnow()
with open('/Users/jessewilson/Desktop/Test results.txt', 'w+') as file:
    file.write(anaplan.execute_action(conn, str(ard.get_id(ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes"), process)), 1))
    file.close()
end = datetime.utcnow()

The code above prompts for a process name, queries the Anaplan model for a list of processes, builds a key-value dictionary keyed on the resource name, searches that dictionary for the user-provided name, executes the action, and writes the results to a local file.

Uploads

You can upload a file of any size, and define a chunk size up to 50mb. The library loops through the file or memory buffer, reading chunks of the specified size and uploading them to the Anaplan model.

Flat file:

upload = anaplan.file_upload(conn, "<file ID>", <chunkSize (1-50)>, "<path to file>")

"Streamed" file:

with open('/Users/jessewilson/Documents/countries.csv', "rt") as f:
    buf = f.read()
    f.close()
print(anaplan.stream_upload(conn, "113000000000", buf))
print(anaplan.stream_upload(conn, "113000000000", "", complete=True))

The above code reads a flat file and saves the data to a buffer (this can be replaced with any data source; it does not necessarily need to read from a file). The data is then passed to the "streaming" upload method. This method does not accept the chunk size input; instead, it simply ensures that the data in the buffer is less than 50mb before uploading. You are responsible for ensuring that the data you've extracted is appropriately split. Once you've finished uploading the data, you must make one final call to mark the file as complete and ready for use by Anaplan actions.

Executing Actions

You can run any Anaplan action with this script, and define a number of times to retry the request if there's a problem. In order to execute an Anaplan action, the ID is required. To execute, all that is required is the following:

run_job = execute_action(conn, "<action ID>", "<retryCount>")
print(run_job)

This will run the desired action, loop until it completes, then print the results to the screen. If failure dump(s) exist, these will also be returned.

Example output:

Process action 112000000082 completed. Failure: True
Process action 112000000079 completed.
Failure: True
Details:
hierarchyName Worker Report
successRowCount 0
successCreateCount 0
successUpdateCount 0
warningsRowCount 435
warningsCreateCount 0
warningsUpdateCount 435
failedCount 4
ignoredCount 0
totalRowCount 439
totalCreateCount 0
totalUpdateCount 435
invalidCount 4
updatedCount 435
renamedCount 435
createdCount 0
lineItemName Code
rowCount 0
ignoredCount 435
Failure dump(s):
Error dump for 112000000082
"_Status_","Employees","Parent","Code","Prop1","Prop2","_Line_","_Error_1_"
"E","Test User 2","All employees","","101.1a","1.0","2","Error parsing key for this row; no values"
"W","Jesse Wilson","All employees","a004100000HnINpAAN","","0.0","3","Invalid parent"
"W","Alec","All employees","a004100000HnINzAAN","","0.0","4","Invalid parent"
"E","Alec 2","All employees","","","0.0","5","Error parsing key for this row; no values"
"W","Test 2","All employees","a004100000HnIO9AAN","","0.0","6","Invalid parent"
"E","Jesse Wilson - To Delete","All employees","","","0.0","7","Error parsing key for this row; no values"
"W","#1725","All employees","69001","","0.0","8","Invalid parent"
[...]
"W","#2156","All employees","21001","","0.0","439","Invalid parent"
"E","All employees","","","","","440","Error parsing key for this row; no values"
Error dump for 112000000079
"Worker Report","Code","Value 1","_Line_","_Error_1_"
"Jesse Wilson","a004100000HnINpAAN","0","434","Item not located in Worker Report list: Jesse Wilson"
"Alec","a004100000HnINzAAN","0","435","Item not located in Worker Report list: Alec"
"Test 2","a004100000HnIO9AAN","0","436","Item not located in Worker Report list: Test 2"

Downloading a File
If the above code is used to execute an export action, the file will not be downloaded automatically. To get this file, use the following:
download = get_file(conn, "<file ID>", "<path to local file>")
print(download)
This will save the file to the desired location on the local machine (or mounted network share folder) and alert you once the download is complete, or warn you if there is an error.

Get Available Workspaces and Models
API 2.0 introduced a new means of fetching the workspaces and models available to a given user. You can use this library to build a key-value dictionary (as above) for these resources.
#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")
#Setting session variables
uid = anaplan.get_user_id(conn)
#Fetch models and workspaces the account may access
workspaces = ard.build_id_dict(anaplan.get_workspaces(conn, uid), "workspaces")
models = ard.build_id_dict(anaplan.get_models(conn, uid), "models")
#Select workspace and model to use
while True:
    workspace_name=input("Enter workspace name to use (Enter ? to list available workspaces): ")
    if workspace_name == '?':
        for key in workspaces:
            print(key)
    else:
        break
while True:
    model_name=input("Enter model name to use (Enter ? to list available models): ")
    if model_name == '?':
        for key in models:
            print(key)
    else:
        break
#Extract workspace and model IDs from dictionaries
workspace_id = ard.get_id(workspaces, workspace_name)
model_id = ard.get_id(models, model_name)
#Updating AnaplanConnection object
conn.modelGuid=model_id
conn.workspaceGuid=workspace_id
The above code will create an AnaplanConnection instance with only the user authentication defined. It queries the API to return the ID of the user in question, then queries for the available workspaces and models, and builds a dictionary with these results.
You can then enter the name of the workspace and model you wish to use (or enter ? to print all available names to the screen), then finally update the AnaplanConnection instance to be used in all future requests.
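To tie the snippets above together, here is a minimal end-to-end sketch: authenticate with certificate files, resolve a file ID and a process ID by name, upload a flat file, run the process, and download an export file. It only reuses the calls shown in this article; the import statements, module prefixes, certificate paths, file names ("Source Data.csv", "Results Export.csv"), and the process name ("Load Source Data") are placeholders or assumptions to adapt to your own model, and error handling is omitted.

# Minimal end-to-end sketch using the library calls shown above.
# The import/module names below are assumptions; adjust them to match your copy of the library.
import anaplan
import anaplan_resource_dictionary as ard
from anaplan import AnaplanConnection

# Authenticate with certificate files and point at the workspace and model
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", "<path to private key>", "<path to public certificate>"), "<workspace ID>", "<model ID>")

# Resolve IDs by name using the key-value dictionaries described above
files = ard.build_id_dict(anaplan.get_list(conn, "files"))
source_file_id = ard.get_id(files, "Source Data.csv")  # placeholder file name
process_id = ard.get_id(ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes"), "Load Source Data")  # placeholder process name

# Upload the flat file in 10 MB chunks, then run the process with one retry
print(anaplan.file_upload(conn, source_file_id, 10, "<path to file>"))
print(anaplan.execute_action(conn, str(process_id), 1))

# If the process ends with an export action, pull the result back down
export_file_id = ard.get_id(files, "Results Export.csv")  # placeholder export file name
print(anaplan.get_file(conn, export_file_id, "<path to local file>"))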
View full article
An easy-to-use set of PowerShell wrapper scripts

This article outlines the features of the PowerShell scripts that are used as wrappers to the standard Anaplan Connect scripts. These PowerShell scripts enable the following features:
A file watcher that waits for the arrival of files to start importing into Anaplan, and that can run through Enterprise Schedulers
Copy/move, import, and back up the source files as required after the Success or Failure of the import
Email notifications of the outcome of the processes
Triggering actions on Anaplan that do not have file operations, as required, through schedulers
The scripts are available in the links below.

GitHub Repository
Please contribute enhancements here: https://github.com/upaliw/anaplanconnect_ps
Releases
Latest releases for AC1.4 & AC1.3: https://github.com/upaliw/anaplanconnect_ps/releases

Contents of the ZIP file
exceptions – Folder to hold the errors/messages generated from Anaplan Connect
java_keystore – Folder to hold the Java KeyStore file for CA Certificate authentication. See the complete Anaplan Connect Guide.
lib – Folder that holds the required Java libraries to run Anaplan Connect
logs – Folder to hold the logging information of the PowerShell scripts
AnaplanClient.bat, anaplan-connect.jar – Anaplan Connect script and Java package
AnaplanConfig.bat – The connection details for Anaplan (i.e. Basic Authentication or CA Cert details)
Anaplan_Action.bat – Main script that runs the various types of Anaplan Actions
FileInterface.ini – Config file for all file-based operations
FileWatch.ps1, FileCopy.ps1, FileRun.ps1, Functions.ps1 – Main PowerShell scripts for all operations
FW.bat, FWCPY.bat, FWCPYRUN.bat, RUN.bat – Windows batch scripts that can be used to call the main PowerShell scripts through Enterprise Schedulers
EmailNotifications.ini – Config file for email notification settings
EmailPassword.txt – Config file to hold the encrypted password for SMTP authentication

Step 1 – Anaplan Connect Authentication
The connection configuration file (AnaplanConfig.bat) should be updated as required to denote the connection type to Anaplan. The connection can be either one of the two possible types:
Basic Authentication: Anaplan username and password, where the password is maintained by Anaplan and the Anaplan username is set to be an Exception user for SSO workspaces. The password will need to be reset every 90 days.
CA Certificate Authentication: A client certificate procured from a Certification Authority that is attached to the Anaplan username (see the Administration: Security - Certificates article in Anapedia).

Step 2 – Email configuration
The following steps need to be completed for email notifications.
Update the EmailNotifications.ini file with the SMTP parameters.
As required, create the encrypted password file (EmailPassword.txt) for the SMTP authentication. To use the default encryption of PowerShell, the following command can be issued at the PowerShell prompt, redirecting the output to a file:
"smtpPassword" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File ".\EmailPassword.txt"

Step 3 – File import configuration
This is the main configuration file for all the file import operations.
The FileInterface.ini file will have the following information:
Key – Mandatory: The main parameter passed to the scripts that picks up all the details of the operations
Inbound filename – Optional: The inbound filename as a Regular Expression, so that it can recognize any timestamps
Load filename – Optional: The filename the Anaplan Action is tied to
Backup filename – Optional: The filename the file should be backed up as
Inbound location – Optional: The folder the file arrives in from a source system
Load location – Optional: The folder the file moves to from the inbound location
Backup location – Optional: The folder where the backups are located, organized in date-stamped subfolders
Command to run – Mandatory: The Anaplan Action
Notify – Optional: One of Success, Fail, or Both
Notify email addresses – Optional: The email addresses, comma (,) separated
Action Type – Mandatory: One of Import, Export, Process, Action, ImportAndProcess, JDBCImport, or JDBCProcess
Export filename – Optional: Only for the Export Action Type
JDBC Properties file – Optional: Only for the JDBCImport and JDBCProcess Action Types
Workspace GUID – Mandatory: Workspace ID
Model GUID – Mandatory: Model ID

Calling the scripts
The scripts can be called manually or via an Enterprise Scheduler. The Key should be passed as the argument. The following scenarios can be provided as examples of these operations.
Wait for the arrival of a file, then import it to Anaplan: FWCPYRUN "Key"
Run an Anaplan action per schedule: RUN "Key"

Email notifications
If an email notification is enabled per config entry, the email will include an attachment of any exceptions generated. Note: The email will contain 1 of 3 statuses:
Success: No issues.
Success with data errors: Import was successful, but some data items had issues. There will be an attachment with the details of the exceptions generated from Anaplan.
Fail: The import failed; details will be attached in the email.

Logging
All steps of the interface processes will be logged in the logs folder for each operation (i.e. FileWatch, FileCopy, and FileRun) separately. The generated exceptions will be in the exceptions folder.
Note: There is no process to clean up the older log files, which should be done on a case-by-case basis.
View full article
We're pleased to announce the February 2018 release of the Anaplan Connector for Informatica Cloud. This release fixes Success/Error row counts in the Monitor Log for Data Synchronization Tasks (DST).

Exports
Anaplan List exports: Success rows is the number of Anaplan List rows exported. The Error row count should be 0.
Anaplan Module exports: Success rows is the number of Anaplan Module rows exported. The Error row count should be 0.

Imports
Anaplan List imports: Success rows is the sum of the number of rows successfully updated/inserted and the number of rows updated/inserted with a warning. The Error row count is the number of failed rows.
Anaplan Module imports: Success rows is the sum of the number of Anaplan cells successfully updated/inserted and the number of Anaplan cells updated/inserted with a warning. Error rows is the number of failed Anaplan cells.

Note: Cells ignored by the Anaplan Import action are not included in the above counts. For example, during a Module Import, any parent hierarchy level cells will be ignored. For more information, see the Anaplan Informatica Connector Guide.
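To make the import arithmetic concrete, consider a hypothetical List import (the numbers here are illustrative only): 400 rows insert or update cleanly, 35 rows are updated with warnings, and 4 rows fail. The Monitor Log would then report Success rows = 400 + 35 = 435 and Error rows = 4; any cells the import ignores (such as parent hierarchy level cells in a Module import) would not appear in either count.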
View full article
Anaplan Connect is a downloadable tool that empowers you to automate Anaplan actions. This lightweight tool still relies on the same types of flat files that can be manually uploaded into Anaplan. Once this tool is installed on your computer, you can package that point-and-click process in a script (.bat or .sh files). These scripts work well with external scheduling tools, enabling you to schedule and automate a data upload/download from Anaplan's cloud platform. Most often, Anaplan Connect is used in conjunction with flat files, but it can also be used to connect to any relational database with JDBC.

JDBC
JDBC stands for Java Database Connectivity. It is the industry-standard API for database-independent connectivity between Java and a wide range of SQL databases, as well as other tabular data sources. A JDBC connection relies on Anaplan Connect to handle the Anaplan side of the integration; it has a separate category because this is the only type of Anaplan Connect script that will contain an SQL query. As with any non-JDBC integration using Anaplan Connect, Anaplan must already have a template file stored as a data source. Provided this data source is available within the Anaplan model, a JDBC integration differs from a flat-file Anaplan Connect script only in how the data for import is supplied: it is the result of an SQL query instead of the contents of a flat file at a given location. The results of this query are passed directly to Anaplan without needing to store a file; a rough illustration appears below. Learn more about Anaplan Connect and download the Anaplan Connect Quick Start Guide in Anapedia.
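As a rough illustration only, the operation string below borrows the Anaplan Connect 1.3.x flag syntax shown in the migration article further down this page. The JDBC URL, credentials, query, and file/import names are placeholders, and your actual script would wrap this in the usual AnaplanClient call for your workspace and model.

Operation="-file 'Monthly Sales.csv' -jdbcurl 'jdbc:mysql://dbserver:3306/sales?useSSL=false' -jdbcuser 'sales_user:sales_password' -jdbcquery 'SELECT region, product, revenue FROM monthly_sales' -import 'Monthly Sales.csv' -execute"

Here 'Monthly Sales.csv' is the name of the template file that already exists in Anaplan, and the rows returned by the query are loaded into that template in place of a flat file on disk.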
View full article
Allowing model users to export data out of an Anaplan model at a large scale (e.g., many end-user-run exports) is not a good practice. One approach is to create an "export model" in Anaplan that is specifically for exporting purposes. This export model will have the same data set and selective access definitions as the main model, but will not have any of the data entry or reporting dashboards. Instead, it will only have dashboards with buttons that run specific exports. To ensure a good user experience, provide a hyperlink to the export model from a dashboard in the main model. For example, users start from their usual main model, see a link named "Exports," and click it. The link redirects them to the export model, where they see a set of predefined buttons that run exports. It is important to explain to the customer and model users that:
Exports execute sequentially (first in, first served): users have to wait until previously executed exports are finished before they can run their own export.
There will be data latency, as the export model will likely sync once or twice a day from the main model.
The export from the main model to the export model is a blocking operation and should ideally be run at times when it is least likely to disrupt operations. Users will need to understand the schedule and plan their exports accordingly.
View full article
When users run exports, there can be misalignment of data that causes issues in the business process. If users export out of a dashboard, it's most often for custom reporting purposes. During this process, the user filters, sorts, creates sums via a pivot table, uses lookups for attributes, and displays additional data. All of these are certainly needed for reporting. In a worst-case scenario, the user will create additional KPIs or ratios that they could not find in the Anaplan model. Next, this user will copy all of this data into a PowerPoint® deck, make additional formatting changes, add comments to the numbers or variances, and present this deck to their meeting attendees.

This user has spent a few days doing the tasks described above, and within these few days the model has changed: data might have changed, structures might have changed, some calculations might have changed, new calculations are now available, and maybe even user access has changed. Now, this user's deck is misaligned; they are presenting data, analysis, and conclusions that can be irrelevant or that conflict with another presenter at the same meeting who exported from the platform in a different timeframe. At this point, the executive sponsor may ask why they invested in a great platform like Anaplan if the organization is still having the same issues as before, with shadow processes frequently occurring, not to mention that the accurate and insightful comments included in the deck are now disconnected from the rest of the data and will remain buried in people's emails instead of being available to all. Then the next week, or next month, the meeting happens again, and all the work of extracting, reformatting, recalculating, copying/pasting, and commenting needs to be done again.
View full article
Each time a user runs an import or an export it affects platform performance, as it will block all other users of the model from performing any tasks while the import or export runs. This creates what is called a toaster message: basically a blue box at the top of the Anaplan screen that indicates to every connected user that the platform is processing an action. Any person who frequently exports out of Anaplan will likely become very unpopular among the users of the model, especially if exports last more than a few seconds. Users who are not workspace administrators can:
Export data out of a module within a dashboard
Run an import prepared by an administrator
Run a process that an administrator has prepared (a process can combine a number of imports and exports)
View full article
This article covers the necessary steps for you to migrate your Anaplan Connect (AC) 1.3.x.x script to Anaplan Connect 1.4. For more details and examples, refer to the Anaplan Connect User Guide v1.4. The changes are:
New connectivity parameters
Replacing references to an Anaplan Certificate with Certificate Authority (CA) certificates, using new parameters
Optional Chunksize & Retry parameters
Changes to JDBC configuration

New Connectivity Parameters
Add the following parameters to your Anaplan Connect 1.4 integration scripts. These parameters provide connectivity to Anaplan and Anaplan authentication services. Both of the URLs listed below need to be whitelisted with your network team.
-service "https://api.anaplan.com/"
-auth "https://auth.anaplan.com"

Certificate Changes
As noted in our Anaplan-generated Certificates to Expire December 10, 2018 blog post, new and updated Anaplan integration options support Certificate Authority (CA) certificates for authentication. Basic Authentication is still available in Anaplan Connect 1.4; however, the use of certificates has changed. In Anaplan Connect 1.3.x.x, the script references the full path to the certificate file. For example:
-certificate "/Users/username/Documents/AnaplanConnect1.4/certificate.pem"
In Anaplan Connect 1.4 the CA certificate must be stored in a Java Key Store (JKS). Refer to this video for a walkthrough of the process of getting the CA certificate into the key store. You can also refer to the Anaplan Connect User Guide v1.4 for steps to create the Java key store. Once you have imported the key into the JKS, make note of this information:
The path to the JKS (the directory path on the server where the JKS is saved)
The password to the JKS
The alias of the certificate within the JKS
For example:
KeyStorePath="/Users/username/Documents/AnaplanConnect1.4/my_keystore.jks"
KeyStorePass="your_password"
KeyStoreAlias="keyalias"
To pass these values to Anaplan Connect 1.4, use these command line parameters:
-keystore {KeystorePath}
-keystorealias {KeystoreAlias}
-keystorepass {KeystorePass}

Chunksize
Anaplan Connect 1.4 allows for custom chunk sizes on files being imported. The -chunksize parameter can be included in the call, with the value being the size of the chunks in megabytes.
-chunksize {SizeInMBs}

Retry
Anaplan Connect 1.4 allows the client to retry requests to the server in the event that the server is busy. The -maxretrycount parameter defines the number of times the process retries the action before exiting. The -retrytimeout parameter is the time in seconds that the process waits before the next retry.
-maxretrycount {MaxNumberOfRetries}
-retrytimeout {TimeoutInSeconds}

Changes to JDBC Configuration
With Anaplan Connect 1.3.x.x, the parameters and query for using JDBC are stored within the Anaplan Connect script itself. For example:
Operation="-file 'Sample.csv' -jdbcurl 'jdbc:mysql://localhost:3306/mysql?useSSL=false' -jdbcuser 'root:Welcome1' -jdbcquery 'SELECT * FROM py_sales' -import 'Sample.csv' -execute"
With Anaplan Connect 1.4, the parameters and query for using JDBC have been moved to a separate file. The name of that file is then added to the AnaplanClient call using the -jdbcproperties parameter. For example:
Operation="-auth 'https://auth.anaplan.com' -file 'Sample.csv' -jdbcproperties 'jdbc_query.properties' -chunksize 20 -import 'Sample.csv' -execute"
To run multiple JDBC calls in the same operation, a separate jdbcproperties file will be needed for each query.
Each set of calls in the operation should include the following parameters: -file, -jdbcproperties, -import, and -execute. In the code sample below, the two sets of calls follow one another within the same operation string. For example:
Operation="-auth 'https://auth.anaplan.com' -file 'SampleA.csv' -jdbcproperties 'SampleA.properties' -chunksize 20 -import 'SampleA Load' -execute -file 'SampleB.csv' -jdbcproperties 'SampleB.properties' -chunksize 20 -import 'SampleB Load' -execute"

JDBC Properties File
Below is an example of the jdbcproperties file. Refer to the Anaplan Connect User Guide v1.4 for more details on the properties shown below. If the query statement is long, the statement can be broken up onto multiple lines by using the \ character at the end of each line. No \ is needed on the last line of the statement. The \ must be at the end of the line and nothing can follow it.
jdbc.connect.url=jdbc:mysql://localhost:3306/mysql?useSSL=false
jdbc.username=root
jdbc.password=Welcome1
jdbc.fetch.size=5
jdbc.isStoredProcedure=false
jdbc.query=select * \
from mysql.py_sales \
where year = ? and month !=?;
jdbc.params=2018,04

Anaplan Connect Windows BAT Script Example (with Cert Auth)
@echo off
rem This example lists a user's workspaces
set ServiceLocation="https://api.anaplan.com/"
set Keystore="C:\Your Cert Name Here.jks"
set KeystoreAlias=""
set KeystorePassword=""
set WorkspaceId="Enter WS ID Here"
set ModelId="Enter Model ID here"
set Operation=-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -W
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
set Command=.\AnaplanClient.bat -s %ServiceLocation% -k %Keystore% -ka %KeystoreAlias% -kp %KeystorePassword% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /c %Command%
pause

Anaplan Connect Shell Script Example with Cert Auth
#!/bin/sh
KeyStorePath="/path/Your Cert Name.jks"
KeyStorePass=""
KeyStoreAlias=" "
WorkspaceId="Enter WS ID Here"
ModelId="Enter Model Id Here"
Operation="-service "https://api.anaplan.com" -auth "https://auth.anaplan.com" -W"
#________________ Do not edit below this line __________________
# Build the credentials string when a keystore path has been supplied
if [ "${KeyStorePath}" ]; then
    Credentials="-keystore ${KeyStorePath} -keystorepass ${KeyStorePass} -keystorealias ${KeyStoreAlias}"
fi
echo cd "`dirname "$0"`"
cd "`dirname "$0"`"
if [ ! -f AnaplanClient.sh ]; then
    echo "Please ensure this script is in the same directory as AnaplanClient.sh." >&2
    exit 1
elif [ ! -x AnaplanClient.sh ]; then
    echo "Please ensure you have executable permissions on AnaplanClient.sh." >&2
    exit 1
fi
Command="./AnaplanClient.sh ${Credentials} ${Operation}"
/bin/echo "${Command}"
exec /bin/sh -c "${Command}"
View full article
The Connect Manager is a tool that allows non-technical users to create Anaplan Connect scripts from scratch simply by walking through a step-by-step wizard.

Features include:
- Create scripts for the Import/Export of flat files
- Create scripts for importing from JDBC/ODBC sources
- Ability to choose between commonly used JDBC connections – New in v4
- Run scripts from the new Connection Manager interface – New in v4
- Ability to use certificate authentication

Please note that this program is currently only supported on Windows systems and requires .Net 4.5 or newer to run (.Net has been included in the download package). The Connect Manager is approved by Anaplan for general release; however, it is not supported by Anaplan. If there are any specific enhancements you want to see in the next version, please leave a comment or send me an email at graham.gronhoff@anaplan.com. Download the Anaplan Connect Wizard here. If you are migrating to the new Anaplan Connect 1.4 release, please check back soon, as a new version will be published that includes updated features and functionality. Keystore creation can be tricky if you are not familiar with the command line; to that end, I have created an additional program that will perform all the required steps for its creation. Use this link to go to the application: KeyStore Wizard
View full article
We often see Anaplan Connect scripts created ad hoc as new actions are added, or existing scripts updated with these new actions. This works when there is a limited number of imports/exports/processes running, and when these actions are relatively quick. However, as models and actions scale up and grow in complexity, this solution can become very inefficient: either scheduling dozens of scripts, or trying to manage large, difficult-to-read scripts. I prefer to design for scale from the outset.

My solution utilizes batch scripts that call the relevant Anaplan Connect script, passing the action to run as a variable. There are a couple of ways I've accomplished this: dedicate a script to executing processes and pass in the process name, or pass in the action type (-action, -export, etc.) and name as the variable. I generally prefer the first approach, but you want to be careful when creating your process that it doesn't become so large that it impacts model performance. Usually, I will create a single script to perform all file uploads to a model, then run the processes. In my implementations, I've written each Anaplan Connect script to be model specific, but you could pass the model ID as a variable as well.

To achieve this, I create a "controller" script that calls the Anaplan Connect script, which would look something like this:
@echo off
for /F "tokens=* delims=" %%A in (Demand-Daily-Processes.txt) do ( call "Demand - Daily.bat" %%A & TIMEOUT 300)
pause
This reads from a file called Demand-Daily-Processes.txt, where each line contains the name of a process as it appears in Anaplan, e.g.,
Load Master Data from Data Hub
...
Load Transactional Data from Data Hub
and then calls the Anaplan Connect script, passing this name as a variable. Once the script completes, the controller waits 300 seconds before reading the next line and calling the AC script again. This timeout is there to give the model time to recover after running the process and prevent any potential issues executing subsequent processes.

The Anaplan Connect script itself looks mostly as it normally does, except in place of the process name, we use a variable reference:
@echo off
set AnaplanUser=""
set WorkspaceId=""
set ModelId=""
set timestamp=%date:~7,2%_%date:~3,3%_%date:~10,4%_%time:~0,2%_%time:~3,2%
set Operation=-certificate "path\certificate.cer" -process "%~1" -execute -output "C:\AnaplanConnectErrors\<Model Name>-%~1-%timestamp%"
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser%
set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /c %Command%
pause
You can see that in place of declaring a process name, the script uses %~1. This tells the script to use the value of the first parameter provided. You can pass up to nine parameters this way, allowing you to pass in workspace and model IDs as well (a short sketch of this is shown below). The script also creates a timestamp variable with the current system time when executed, then uses that and the process name to create a clearly labeled folder for error dumps, e.g., "C:\AnaplanConnectErrors\Demand Planning-Load Master Data from Data Hub-<timestamp>". By using this solution, as you add processes to your model, you can simply add them to the text file (keeping them in the order you want them executed), rather than editing or creating batch scripts.
Additionally, you need only schedule your controller script(s), making maintenance easier still. 
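As a rough sketch of the multi-parameter variant mentioned above (the IDs and names here are hypothetical placeholders, and these lines simply extend the scripts already shown in this article), the controller could pass the workspace and model IDs along with the process name, and the Anaplan Connect script could read them as %~2 and %~3:

rem Controller call (hypothetical): process name, workspace ID, and model ID passed per call
call "Demand - Daily.bat" "Load Master Data from Data Hub" "<workspace ID>" "<model ID>"

rem Inside the Anaplan Connect script, consume the extra parameters instead of hard-coding them
set WorkspaceId="%~2"
set ModelId="%~3"
set Operation=-certificate "path\certificate.cer" -process "%~1" -execute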
View full article
Who is MuleSoft? What do they do?
MuleSoft is a market-leading Integration Platform as a Service (IPaaS) or Enterprise Service Bus (ESB) vendor. IPaaS or ETL vendors are "middleware" software vendors that connect data from source systems to target systems within an enterprise company. For Anaplan customers, a partner like MuleSoft can import data from cloud, on-premise, database, and other source systems into Anaplan as a target system, and export or update Anaplan data to other target systems such as BI or financial system vendors, to name a few examples. MuleSoft was acquired by Salesforce in early 2018 and will become the Integration Cloud business unit.

What is the difference between Anaplan's MuleSoft connector, Hyperconnect, Connect, and using other Anaplan IPaaS partners like Dell Boomi and SnapLogic?
Anaplan Connect is a server-based Java scripting application that can import flat files into Anaplan. Connect is configured so business users can automate getting source data into Anaplan. The product is typically the simplest way to import data sets into Anaplan. Hyperconnect is an optional SKU Anaplan sells that includes the Informatica Cloud Service (ICS) and bundles of 2, 5, or more connectors. One of these connectors must be the Anaplan Informatica connector. The Anaplan Enterprise Edition includes ICS and 2 connectors. ICS is a market-leading cloud ESB product for integrating on-premise and cloud application data to Anaplan. Anaplan sells, implements, and supports Hyperconnect. MuleSoft is also a market-leading IPaaS or ESB vendor whose primary product is called Anypoint. MuleSoft sells and supports the Anypoint platform directly with their customers. Anaplan developed the v2 MuleSoft connector, which is available in the Mule Exchange where certified connectors and integrations can be downloaded. Anaplan has connectors for Dell Boomi and SnapLogic as additional IPaaS/ETL options for our customers who have picked one of these vendors as their ETL solution.

What are the new features in the MuleSoft v2 Anaplan connector compared to our v1 connector?
Support for CA certificates
Work with Anaplan actions like Import, Export, Delete, and Processes
Support for both file-based and streaming data transfer; support for configurable file chunks up to 50MB to increase performance
File previews for import and export actions
Automatic retries and configurable retry counts
Additional debug logs

How does MuleSoft position itself relative to other ESB, ETL, or IPaaS vendors?
MuleSoft reduces technology complexity and cost:
Anypoint Platform allows you to maintain and operate one platform to support the entire API and integration lifecycle instead of two
Developers do not have to hack orchestration into API gateways and can use built-in integration functionalities
Use a single runtime for both integration and APIs, reducing resource needs
MuleSoft helps your team deliver APIs faster:
Anypoint Platform enforces API design best practices and helps you avoid duplicated work with reusable templates
We provide a mock service so API owners can validate the design before coding
Exchange seamlessly imports templates into built-in API design tools, so you do not have to rely on open source tools (Swagger)
MuleSoft provides actionable visibility to help you prevent and reduce system downtime:
Anypoint Platform provides one place to monitor how data traverses APIs and integrations: end-to-end visibility
We have built-in test tools that work well with your existing CI/CD pipeline; with us you can use standard Java tools for debugging rather than having no visibility into a black box

Has MuleSoft certified this v2 connector?
Yes.

What versions of MuleSoft runtime and Studio is the v2 Anaplan connector compatible with?
The Anaplan v2 connector is compatible with Mule runtime 3.8/3.9 and Studio 6.4.

My customer is using the Mule 4.x runtime. Is our v2 connector compatible with that?
Our v2 connector is not compatible with the Mule 4.x runtime today, but compatibility is on our roadmap for forthcoming enhancements.

What are the compelling benefits to our customer of undertaking a v1 to v2 connector migration, or of starting with our v2 connector for Anaplan integration?
The basic authentication provided through the Anaplan APIs used with our v1 MuleSoft connector has been upgraded with the Anaplan v2.0 APIs to now support certificate-based authentication. There are significant security, performance, data orchestration, and monitoring features in our v2 connector that will enable greater scaling and more real-time data flows into Anaplan, and less development effort for IT. Any customer using the Mule 3.x runtime must use our v2 MuleSoft connector.

Is there a price to use the MuleSoft v2 Anaplan connector?
No. Neither Anaplan nor MuleSoft charges for the Anaplan MuleSoft connector.

How does a customer get access to this v2 Anaplan MuleSoft connector?
Do a Google search for the MuleSoft Exchange or go to https://www.mulesoft.com/exchange/. All of MuleSoft's certified connectors and integrations can be found and downloaded from this Exchange. Within the MuleSoft Exchange you can type "Anaplan" to find the connector, or go to https://www.mulesoft.com/exchange/org.mule.modules/mule-module-anaplan-connector/

What resources are available to help our customers plan to download, use, or migrate to this v2 Anaplan MuleSoft connector?
Within the Anypoint Exchange there is an FAQ section that explains how to download and use connectors with the Anypoint platform, and how to upgrade between connector versions. See https://www.mulesoft.com/exchange-faq

How is support provided for MuleSoft and the Anaplan MuleSoft connector?
MuleSoft sells their Anypoint platform directly to customers, and therefore provides tier 1 support to these customers, including the connectors. MuleSoft tier 1 support will work with Anaplan support, who will provide tier 2 and 3 support if there are issues within the Anaplan MuleSoft connector.

Will the v1 Anaplan connector continue to be available? Will the v1 connector be supported now that v2 is available?
The v1 connector will still be available through the Anypoint Exchange to download and use for development, but customers will be encouraged to start all new development with the v2 connector.   MuleSoft will no longer provide support for the v1 connector.  Resources Anaplan-MuleSoft Connector Data Sheet (attached) MuleSoft Connector v2.0 Presentation (attached) 
View full article
This article describes how to use the Anaplan DocuSign integration with single sign-on (SSO).
View full article
Anaplan Connect v1.3.3.5 is now available. 
View full article
Audience: Anaplan Internal and Customers/Partners Workiva Wdesk Integration Is Now Available We are excited to announce the general availability of Anaplan’s integration with Workiva’s product, known as the Wdesk. Wdesk easily imports planning, analysis and reporting data from Anaplan to deliver integrated narrative reporting, compliance, planning and performance management on the cloud. The platform is utilized by over 3,000 organizations for SEC reporting, financial reporting, SOX compliance, and regulatory reporting. The Workiva and Anaplan partnership delivers enterprise compliance and performance management on the cloud. Workiva Wdesk, the leading narrative reporting cloud platform, and Anaplan, the leading connected-planning cloud platform, offer reliable, secure integration to address high-value use cases in the last mile of finance, financial planning and analysis, and industry specific regulatory compliance. GA Launch: March 5th  How does the Workiva Wdesk integration work? Please contact Will Berger, Partnerships (william.berger@workiva.com) from Workiva to discuss how to enable integration. Anaplan reports will feed into the Wdesk platform. Wdesk will integrate with Anaplan via Wdesk Connected Sheets. This is a Workiva built and maintained connection. What use cases are supported by the Workiva Wdesk Integration? The Workiva Wdesk integration supports a number of use cases, including: Last mile of finance: Complete regulatory reporting and filing as part of the close, consolidate, report and file process. Workiva automates and structures the complete financial reporting cycle and pulls consolidated actuals from Anaplan. Financial planning and analysis: Complex multi-author, narrative reports that combine extensive commentary and data such as budget books, board books, briefing books and other FP&A management and internal reports. Workiva creates timely, reliable narrative reports pulling actuals, targets and forecast data from Anaplan. Industry specific regulatory compliance & extensive support of XBRL and iXBRL: Workiva is used to solve complex compliance and regulatory reporting requirements in a range of industries.  In banking, Workiva supports documentation process such as CCAR, DFAST and RRP, pulling banking stress test data from Anaplan. Also, Workiva is the leading provider of XBRL software and services accounting for more than 53% of XBRL facts filed with the SEC in the first quarter of 2017.
View full article
This guide assumes you have set up your runtime environment in Informatica Cloud (Anaplan Hyperconnect) and that the agent is up and running. This guide focuses solely on how to configure the ODBC connection and set up a simple synchronization task importing data from one table in PostgreSQL to Anaplan. Informatica Cloud has richer features that are not covered in this guide. The built-in help is contextual and helpful as you go along, should you need more information than I have included here. The intention of this guide is to help you set up a simple import from PostgreSQL to Anaplan; the guide is therefore kept short and does not cover all related areas. This guide also assumes you have run an import using a csv file, as this needs to be referenced when the target connection is set up, described under section 2.2 below. To prepare, I exported the data I wanted to use for the import from PostgreSQL to a csv file. I then mapped this csv file to Anaplan and ran an initial import to create the import action that is needed.

1. Set up the ODBC connection for PostgreSQL
In this example I am using the 64-bit version of the ODBC connection running on my local laptop. I have set it up as a User DSN rather than a System DSN, but the process is very similar should you need to set up a System DSN. You will need to download the relevant ODBC driver from PostgreSQL and install it to be able to add it to your ODBC Data Sources (click the Add… button and you should be able to select the downloaded driver).

Clicking the configuration button for the ODBC Data Source opens the configuration dialogue. The configurations needed are:
Database is the name of your PostgreSQL database.
Server is the address of your server. As I am setting this up on my laptop, it's localhost.
User Name is the username for the PostgreSQL database.
The password is the password for the PostgreSQL database.
Port is the port used by PostgreSQL. You will find this if you open PostgreSQL.
Testing the connection should not return any errors.

2. Configuring source and target connections
After setting up the ODBC connection as described above, you will need to set up two connections, one to PostgreSQL and one to Anaplan. Follow the steps below to do this.

2.1 Source connection – PostgreSQL ODBC
Select Configure > Connection in the menu bar to configure a connection.
Name your connection and add a description.
Select type – ODBC.
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Insert the username for the database (the same as you used to set up the ODBC connection).
Insert the password for the database (the same as you used to set up the ODBC connection).
Insert the data source name. This is the name of the ODBC connection you configured earlier.
The code page needs to correspond to the character set you are using.
Testing the connection should confirm it is working. If so, you can click Done.

2.2 Set up target connection – Anaplan
The second connection that needs to be set up is the connection from Informatica Cloud to Anaplan.
Name your connection and add a description if needed.
Select type – AnaplanV2.
Select the runtime environment that will be used to run this. In this instance I am using my local machine.
Auth type – I am using Basic Auth, which will require your Anaplan user credentials.
Insert the Anaplan username.
Insert the Anaplan password.
Certification Path location – leave blank if you use Basic Auth.
Insert the workspace ID (open your Anaplan model and select Help and About).
Insert the model ID (found in the same way as the workspace ID).
I have left the remaining fields at their default settings.
Testing the connection should not return any errors.

3. Task wizard – Data synchronization
The next step is to set up a data synchronization task to connect the PostgreSQL source to the Anaplan target. Select Task Wizards in the menu bar and navigate to Data Synchronization.

This will open the task wizard, starting with defining the Data Synchronization task. Name the task and select the relevant task operation. In this example I have selected Insert, but other task operations are available, like update and upsert.

Click Next for the next step in the workflow, which is to set up the connection to the source. Start by selecting the connection you defined above under section 2.1. In this example I am using a single table as the source and have therefore selected single source. With this connection you can select the source object with the Source Object drop-down. This will give you a data preview so you can validate that the source is defined correctly. The source object corresponds to the table you are importing from.

The next step is to define the target connection, and you will be using the connection that was set up under section 2.2 above. The target object is the import action that you created from the csv file in the preparation step described under section 1 above. The wizard will show a preview of the target module columns.

The next step in the process is the Data Filters step, which has both a Simple and an Advanced mode. I am not using any data filters in this example; please refer to the built-in help for further information on how to use them.

In the field mapping you will either need to map the fields manually or have them mapped automatically, depending on whether the names in the source and target correspond. If you map manually, you will need to drag and drop the fields from the source to the target. Once done, select Validate Mapping to check that no errors are generated from the mapping.

The last step is to define whether or not to use a schedule to run the connection. You will also have the option to insert pre-processing and post-processing commands and any parameters for your mapping. Please refer to the built-in help for guidance on this.

After running the task, the activity log will confirm whether the import ran without errors or warnings.

As I mentioned initially, this is a simple guide to help you set up a simple, single-source import. Informatica Cloud does have more advanced options as well, both for mappings and transformations.
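If the source connection test fails and you want to rule out the DSN itself, one quick way to check it outside Informatica is a few lines of Python using the pyodbc module. Note that pyodbc is not part of this guide or of Hyperconnect, and the DSN name, credentials, and table name below are placeholders to replace with your own values.

# Minimal DSN sanity check (assumes the pyodbc module is installed: pip install pyodbc)
import pyodbc

# "PostgreSQL35W" is a placeholder DSN name; use the name you gave your ODBC Data Source
conn = pyodbc.connect("DSN=PostgreSQL35W;UID=postgres;PWD=your_password")
cursor = conn.cursor()

# Pull a few rows from the source table (table name is a placeholder)
cursor.execute("SELECT * FROM your_source_table LIMIT 5")
for row in cursor.fetchall():
    print(row)

conn.close()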
View full article
Summary
Anaplan Connect is a command-line client to the Anaplan cloud-based planning environment and is a Java-based utility that is able to perform a variety of commands, such as uploading and downloading data files, executing relational SQL queries (for loading into Anaplan), and running Anaplan actions and processes. To enhance the deployment of Anaplan Connect, it is important to be able to integrate the trapping of error conditions, enable the ability to retry the Anaplan Connect operation, and integrate email notifications. This article provides best practices on how to incorporate these capabilities. This article leverages the standard Windows command line batch script and documents the various components and syntax of the script. In summary, the script has the following main components:
Set variable values such as exit codes, Anaplan Connect login parameters, and operations and email parameters
Run commands prior to running Anaplan Connect commands
Main loop block for multiple retries
Establish a log file based upon the current date and loop number
Run the native Anaplan Connect commands
Search for string criteria to trap error conditions
Branching logic based upon the discovery of any trapped error conditions
Send email success or failure notification of Anaplan Connect run status
Logic to determine if a retry is required
End main loop block
Run commands after running Anaplan Connect commands
Exit the script

Section #1: Setting Script Variables
The following section of the script establishes and sets variables that are used in the script. The first three lines perform the following actions:
Clears the screen
Sets the default to echo all commands
Indicates to the operating system that variable values are strictly local to the script
The variables used in the script are as follows:
ERRNO – Sets the exit code to 0 unless set to 1 after multiple failed retries
COUNT – Counter variable used for looping multiple retries
RETRY_COUNT – Counter variable to store the max retry count (note: the /a switch indicates a numeric value)
AnaplanUser – Anaplan login credentials in the format indicated in the example
WorkspaceId – Anaplan numerical or named Workspace ID
ModelId – Anaplan numerical or named Model ID
Operation – A combination of Anaplan Connect commands. It should be noted that a ^ can be used to enhance readability by indicating that the current command continues on the next line
Domain – Email base domain. Typically in the format of company.com
Smtp – Email SMTP server
User – Email SMTP server User ID
Pass – Email SMTP server password
To – Target email address(es). To increase the email distribution, simply add an additional -t and the email addresses as in the example
From – From email address
Subject – Email subject line. Note that this is dynamically set later in the script.
cls
echo on
setlocal enableextensions

REM **** SECTION #1 - SET VARIABLE VALUES ****
set /a ERRNO=0
set /a COUNT=0
set /a RETRY_COUNT=2
REM Set Anaplan Connect Variables
set AnaplanUser="<<Anaplan UserID>>:<<Anaplan UserPW>>"
set WorkspaceId="<<put your WS ID here>>"
set ModelId="<<put your Model ID here>>"
set Operation=-import "My File" -execute ^
-output ".\My Errors.txt"
REM Set Email variables
set Domain="spg-demo.com"
set Smtp="spg-demo"
set User="fpmadmin@spg-demo.com"
set Pass="1Rapidfpm"
set To=-t "fpmadmin@spg-demo.com" -t "gburns@spg-demo.com"
set From="fpmadmin@spg-demo.com"
set Subject="Anaplan Connect Status"
REM Set other types of variables such as file path names to be used in the Anaplan Connect "Operation" command

Section #2: Pre Custom Batch Commands
The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files, or running stored procedures via a relational database command line interface.
REM **** SECTION #2 - PRE ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations prior to running Anaplan Connect

Section #3: Start of Main Loop Block / Anaplan Connect Commands
The following section of the script is the start of the main loop block, as indicated by the :START label. The individual components break down as follows:
Dynamically set the name of the log file in the following date format, indicating the current loop number: 2016-16-06-ANAPLAN-LOG-RUN-0.TXT
Delete prior log and error files
Run the native out-of-the-box Anaplan Connect script, with the addition of outputting the Anaplan Connect run session to the dynamic log file, as highlighted here: cmd /C %Command% > .\%LogFile%

REM **** SECTION #3 - ANAPLAN CONNECT COMMANDS ***
:START
REM Dynamically set logfile name based upon current date and retry count.
set LogFile="%date:~-4%-%date:~7,2%-%date:~4,2%-ANAPLAN-LOG-RUN-%COUNT%.TXT"
REM Delete prior log and error files
del .\BAT_STAT.TXT
del .\AC_API.ERR
REM Out-of-the-box Anaplan Connect code with the exception of sending output to a log file
setlocal enableextensions enabledelayedexpansion || exit /b 1
REM Change the directory to the batch file's drive, then change to its folder
cd %~dp0
if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser%
set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /C %Command% > .\%LogFile%

Section #4: Set Search Criteria
The following section of the script enables trapping of error conditions that may occur when running the Anaplan Connect script. The methodology relies upon searching for certain strings in the log file after the AC commands execute. The batch command findstr can search for certain string patterns based upon literal or regular expressions and echo any matched records to the file AC_API.ERR. The existence of this file is then used to trap whether an error has been caught. In the example below, two different patterns are searched for in the log file. The output file AC_API.ERR is always produced, even if there is no matching string. When there is no matching string, the file will be an empty 0K file. Since the existence of the file determines if an error condition was trapped, it is imperative that any 0K files are removed, which is the function of the final line in the example below.
REM **** SECTION #4 - SET SEARCH CRITERIA - REPEAT @FINDSTR COMMAND AS MANY TIMES AS NEEDED ***
@findstr /c:"The file" .\%LogFile% > .\AC_API.ERR
@findstr /c:"Anaplan API" .\%LogFile% >> .\AC_API.ERR
REM Remove any 0K files produced by previous findstr commands
@for /r %%f in (*) do if %%~zf==0 del "%%f"

Section #5: Trap Error Conditions
In the next section, logic is incorporated into the script to trap errors that might have occurred when executing the Anaplan Connect commands. The branching logic relies upon the existence of the AC_API.ERR file. If it exists, then the contents of the AC_API.ERR file are redirected to a secondary file called BAT_STAT.TXT and the email subject line is updated to indicate that an error occurred. If the file AC_API.ERR does not exist, then the contents of the Anaplan Connect log file are redirected to BAT_STAT.TXT and the email subject line is updated to indicate a successful run. Later in the script, the file BAT_STAT.TXT becomes the body of the email alert.
REM **** SECTION #5 - TRAP ERROR CONDITIONS ***
REM If the file AC_API.ERR exists then echo errors to the primary BAT_STAT log file
REM Else echo the log file to the primary BAT_STAT log file
@if exist .\AC_API.ERR (
@echo . >> .\BAT_STAT.TXT
@echo *** ANAPLAN CONNECT ERROR OCCURRED *** >> .\BAT_STAT.TXT
@echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
type .\AC_API.ERR >> .\BAT_STAT.TXT
@echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
set Subject="ANAPLAN CONNECT ERROR OCCURRED"
) else (
@echo . >> .\BAT_STAT.TXT
@echo *** ALL OPERATIONS COMPLETED SUCCESSFULLY *** >> .\BAT_STAT.TXT
@echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
type .\%LogFile% >> .\BAT_STAT.TXT
@echo -------------------------------------------------------------- >> .\BAT_STAT.TXT
set Subject="ANAPLAN LOADED SUCCESSFULLY"
)

Section #6: Send Email
In this section of the script, a success or failure notification email will be sent. The parameters for sending are all set in the variable section of the script.
REM **** SECTION #6 - SEND EMAIL VIA MAILSEND ***
@mailsend -domain %Domain% ^
-smtp %Smtp% ^
-auth -user %User% ^
-pass %Pass% ^
%To% ^
-f %From% ^
-sub %Subject% ^
-msg-body .\BAT_STAT.TXT
Note: Sending email via SMTP requires the use of a free and simple Windows program known as MailSend. The latest release is available here: https://github.com/muquit/mailsend/releases/. Once downloaded, unpack the .zip file, rename the file to mailsend.exe, and place the executable in the same directory where the Anaplan Connect batch script is located.

Section #7: Determine if a Retry is Required
This is one of the final sections of the script and determines if the Anaplan Connect commands need to be retried. Nested IF statements are typically frowned upon but are required here given the limited capabilities of the Windows batch language. The first IF test determines if the file AC_API.ERR exists. If this file does exist, then the logic drops in and tests if the current value of COUNT is less than RETRY_COUNT. If the condition is true, then COUNT gets incremented and the batch returns to the :START location (Section #3) to repeat the Anaplan Connect commands. If the condition of the nested IF is false, then the batch goes to the end of the script and exits with an exit code of 1.
REM **** SECTION #7 - DETERMINE IF A RETRY IS REQUIRED ***
@if exist .\AC_API.ERR (
@if %COUNT% lss %RETRY_COUNT% (
@set /a COUNT+=1
@goto :START
) else (
set /a ERRNO=1
@goto :END
)
) else (
set /a ERRNO=0
)

Section #8: Post Custom Batch Commands
The following section allows custom batch commands to be added, such as running various batch operations like copying and renaming files, or running stored procedures via a relational database command line interface. Additionally, this would be the location to add functionality to bulk insert flat file data exported from Anaplan into a relational target via tools such as Oracle SQL Loader (SQLLDR) or Microsoft SQL Server Bulk Copy (BCP).
REM **** SECTION #8 - POST ANAPLAN CONNECT COMMANDS ***
REM Use this section to perform standard batch commands or operations after running Anaplan Connect commands
:END
exit /b %ERRNO%

Sample Email Notifications
The following are sample emails sent by the batch script, which are based upon the sample script in this document. Note how the needed content from the log files is piped directly into the body of the email.
Success Mail: (screenshot)
Error Mail: (screenshot)
View full article