Using Python 3 with the Anaplan API
- Getting Started
- Authentication
- Getting the Information Needed for Each Script
- Uploads
- Multiple Chunk Uploads
- Single Chunk Upload
- Imports
- Exports
- Actions
- Processes
- Downloading a File
- Deleting a File
- Standalone Requests Code and Their Required Headers
- Get Workspaces List
- Get Models List
- Get Model Info
- Get Files/Imports/Exports/Actions/Processes List
- Get Chunk Data
- Post Chunk Count
- Upload a Chunk of a File
- Mark an upload complete
- Upload a File in a Single Chunk
- Run an Import/Export/Process
- Run an Action
- Get Task list for an Import/Export/Action/Process
- Get Status for an Import/Export/Action/Process Task
- Download a File
- Delete a File
Note: While all of these scripts have been tested and found to be fully functional, due to the vast amount of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction.
Getting Started
Python 3 offers many options for interacting with an API. This article will explain how you can use Python 3 to automate many of the requests that are available in our apiary, which can be found at https://anaplan.docs.apiary.io/#.
This article assumes you have the requests module (version 2.18.4) installed, along with Python 3 (version 3.6.4); the base64 and json modules are part of the Python standard library. Please make sure you install requests for Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites:
- Python (If you are using a Python version other than 3.6.4, or a requests version other than 2.18.4, we cannot guarantee the validity of this article.)
- Requests
- Base Converter
- JSON (Note: The json module is part of the Python standard library, so no separate installation is needed.)
Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.
Authentication
To start, let's talk about Authentication. Every script run that connects to our API will be required to supply valid authentication. There are two ways to authenticate a Python script that I will be covering.
- Certificate Authentication
- Basic Encoded Authentication
Certificate authentication requires a valid Anaplan certificate, which you can read more about here. Once you have your certificate saved locally, you will need OpenSSL to convert it into a format usable with the API. Convert the certificate to PEM format by running the following command in your terminal:
openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem
If you are using certificate authentication, the scripts in this article assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following command in your terminal:
openssl x509 -text -in certtest.pem
To be used with the API, the PEM certificate string will need to be converted to base64, but the scripts we will be covering will take care of that for you, so I won't cover that in this section.
To use basic authentication, you will need to know the Anaplan account email that is being used, as well as the password. All scripts in this article will have the following code near the top:
# Insert the Anaplan account email being used
username = ''
-----------------
# If using cert auth, replace cert.pem with your pem converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()
# If using basic auth, insert your password. Otherwise, remove this line.
password = ''
# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
f'{username}:{cert}').encode('utf-8')).decode('utf-8'))
# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
# ).encode('utf-8')).decode('utf-8'))
Regardless of the authentication method, you will need to set the username variable to the Anaplan account email being used.
If you are using a certificate to authenticate, you will need to have your PEM converted certificate in the same folder or a child folder of the one you are running the scripts from. If your certificate is in a child folder, please remember to include the file path when replacing cert.pem (e.g. cert/cert.pem). You can remove the password line and its comments, and its respective user variable.
If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable.
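Whichever method you use, the resulting user string is just a base64-encoded username:secret pair with a method prefix. The sketch below is a hypothetical refactor of the snippet above (the function name is mine, not from the scripts): secret is the account password for basic auth, or the PEM certificate string for certificate auth.

```python
import base64

def auth_header(username, secret, method='Basic'):
    # secret is the account password for basic auth, or the PEM
    # certificate string when method='AnaplanCertificate'.
    token = base64.b64encode(f'{username}:{secret}'.encode('utf-8')).decode('utf-8')
    return f'{method} {token}'

# user = auth_header(username, password)                    # basic auth
# user = auth_header(username, cert, 'AnaplanCertificate')  # cert auth
```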
Getting the Information Needed for Each Script
Most of the scripts covered in this article require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for its respective field is titled get_____.py. For example, if you want your files' metadata, run getFiles.py, which writes the metadata for each file in the selected model and workspace, as an array, to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts.
TIP: If you open the raw data tab of the JSON file it makes it much easier to copy the whole set of metadata.
The following are the links to download each get____.py script. Each get script uses the requests.get method to send a get request to the proper API endpoint.
- getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to.
- getModels.py: Writes an array to models.json of either all the models a user has access to if wGuid is left blank or all of the models the user has access to in a selected workspace if a workspace ID was inserted.
- getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model.
- getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to the Apiary for more information on private vs default files. Generally, it is recommended that all scripts be run via the same user account.)
- getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace.
- getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace.
- getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace.
- getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace.
- getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace.
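All of the get____.py scripts follow the same pattern, so a minimal sketch may help illustrate it. The helper names below are my own (not taken from the scripts), and user is assumed to hold the Authorization header value built in the Authentication section:

```python
import json
import requests  # assumes the requests module is installed

BASE = 'https://api.anaplan.com/1/3'

def build_url(*parts):
    # Join path segments onto the v1.3 API base URL.
    return '/'.join([BASE, *parts])

def get_to_json(url, auth_value, out_filename):
    # GET the endpoint and write the response body to a local JSON
    # file, mirroring what each get____.py script does.
    response = requests.get(url, headers={'Authorization': auth_value})
    response.raise_for_status()
    with open(out_filename, 'w') as f:
        json.dump(response.json(), f, indent=2)

# get_to_json(build_url('workspaces'), user, 'workspaces.json')
# get_to_json(build_url('workspaces', wGuid, 'models', mGuid, 'files'),
#             user, 'files.json')
```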
Uploads
A file can be uploaded to the Anaplan API endpoint either in chunks or as a single chunk. Per our apiary:
We recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50MB. This creates a Private File.
Note: To upload a file using the API that file must exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API.
Multiple Chunk Uploads
The script we have for reference is built so that if it is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will resume the upload, beginning with the first chunk after the last successful one. For this to work, the file must first be split using a standard naming convention, via the terminal command below.
split -b [numberofBytes] [path and filename] [prefix for output files]
You can store the chunks in any location, as long as you use the proper file path when setting chunkFilePrefix (e.g. chunkFilePrefix = 'upload_chunks/chunk-' will look for chunk files named chunk-aa, chunk-ab, chunk-ac, etc., up to chunk-zz, in the folder script_origin/upload_chunks/; it is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py.
Note: The assumed naming conventions will only be standard if using Terminal, and they do not necessarily work if the file was split using another method in Windows. If you are using Windows you will need to either create a way to standardize the naming of the chunks alphabetically {chunkFilePrefix}(aa - zz) or run the script as detailed in the Apiary.
Note: The chunkUpload.py script keeps track of the last successful chunk by writing its name to a text file, chunkStop.txt. This file is deleted once the upload completes successfully. If the file is modified between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it only if you want to restart the upload from the first chunk.
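The resume behaviour described above can be sketched as follows. This is not the actual chunkUpload.py code, just an illustration of the idea, with hypothetical helper names:

```python
import string

def chunk_suffixes():
    # The terminal `split` command names its output files aa, ab, ...,
    # az, ba, ..., zz (two-letter suffixes, its default).
    for first in string.ascii_lowercase:
        for second in string.ascii_lowercase:
            yield first + second

def remaining_chunks(last_done=None):
    # Skip every suffix up to and including the last successful one
    # (as recorded in chunkStop.txt), and yield the rest in order.
    suffixes = chunk_suffixes()
    if last_done is not None:
        for suffix in suffixes:
            if suffix == last_done:
                break
    yield from suffixes

# If chunkStop.txt recorded 'ac', the upload resumes at chunk-ad:
# for suffix in remaining_chunks('ac'):
#     upload f'upload_chunks/chunk-{suffix}', then record the suffix
#     in chunkStop.txt
```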
Single Chunk Upload
The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame. If the upload fails, it will have to start again from the beginning. If your file has a different name than the one on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single put request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py
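The core of singleChunkUpload.py amounts to one put request with the raw file bytes as the body. The sketch below is a simplified illustration, not the script itself; the function names are mine, and user is assumed to hold the Authorization header value:

```python
import requests  # assumes the requests module is installed

def single_chunk_url(wGuid, mGuid, fileID):
    # v1.3 endpoint for uploading a whole file in one request.
    return (f'https://api.anaplan.com/1/3/workspaces/{wGuid}'
            f'/models/{mGuid}/files/{fileID}')

def upload_single_chunk(wGuid, mGuid, fileID, local_path, auth_value):
    # Read the local file and send its raw bytes as the request body.
    with open(local_path, 'rb') as f:
        data = f.read()
    response = requests.put(
        single_chunk_url(wGuid, mGuid, fileID),
        headers={'Authorization': auth_value,
                 'Content-Type': 'application/octet-stream'},
        data=data)
    response.raise_for_status()
    return response
```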
Imports
The import.py script sends a post request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information. You can download the script for running an import from this link: Import.py.
Once the import is finished, the script writes the metadata for the import task, as an array, to postImport.json, which you can use to verify which task you want to view the status of when running the importStatus.py script. The importStatus.py script returns a list of all tasks associated with the selected importID, along with their respective list indices. If you want to check the status of the last run import, make sure you check postImport.json to verify you have the correct taskID. Enter the index for the task, and the script will write the task status, as an array, to importStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, the script will write the failure dump, in comma-delimited format, to importDump.csv, which can be used to review the cause of the failure. If the task finished with no failures, you will get a message telling you the import completed with no failures. You can download the script for importStatus.py from this link: importStatus.py
Note: If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist and importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone.
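The status scripts all follow a poll-and-check loop. The sketch below illustrates only the loop shape: the taskState field name and 'IN_PROGRESS' value are assumptions, so check the task-status response in the Apiary for the exact schema, and fetch_status stands in for one GET to the task-status endpoint.

```python
import time

def poll_until_complete(fetch_status, interval=1.0, max_polls=60):
    # Call fetch_status() until the task reports it is no longer in
    # progress, then return the final status dictionary.
    # NOTE: 'taskState'/'IN_PROGRESS' are assumed field names; verify
    # them against the Apiary before relying on this.
    for _ in range(max_polls):
        status = fetch_status()
        if status.get('taskState') != 'IN_PROGRESS':
            return status
        time.sleep(interval)
    raise TimeoutError('task did not finish within the polling window')
```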
Exports
The export.py script sends a post request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information. You can download the script for running an export from this link: Export.py
Once the export is finished, the script writes the metadata for the export task, as an array, to postExport.json, which you can use to verify which task you want to view the status of when running the exportStatus.py script. The exportStatus.py script returns a list of all tasks associated with the selected exportID, along with their respective list indices. If you want to check the status of the last run export, make sure you check postExport.json to verify you have the correct taskID. Enter the index for the task, and the script will write the task status, as an array, to exportStatus.json. If the task is still in progress, it will print the task status and progress. It is important to note that no failure dump will be generated if the export fails. You can download the script for exportStatus.py from this link: exportStatus.py
Actions
The action.py script sends a post request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information. You can download the script for running an action from this link: actionStatus.py.
Processes
The process.py script sends a post request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. See Getting the Information Needed for Each Script for more information. You can download the script for running a process from this link: Process.py.
Once the process is finished, the script writes the metadata for the process task, as an array, to postProcess.json, which you can use to verify which task you want to view the status of when running the processStatus.py script. The processStatus.py script returns a list of all tasks associated with the selected processID, along with their respective list indices. If you want to check the status of the last run process, make sure you check postProcess.json to verify you have the correct taskID. Enter the index for the task, and the script will write the task status, as an array, to processStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, the script will write the failure dump, in comma-delimited format, to processDump.csv, which can be used to review the cause of the failure. It is important to note that no failure dump is generated for the process itself, only if one of the imports in the process failed. If the task finished with no failures, you will get a message telling you the process completed with no failures. You can download the script for processStatus.py from this link: processStatus.py.
Downloading a File
Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important to note that you should set the fileName variable to the name the file has in its metadata. First, the individual chunk metadata for the download is written, as an array, to downloadChunkData.json for reference. The script then downloads the file chunk by chunk and writes each chunk to a new local file with the same name as the 'name' listed in the file's metadata. You can download this script from this link: downloadFile.py.
Note: If a file already exists in the same folder as your script with the same name as the name value in the file's metadata, the local file will be overwritten with the file being downloaded from the server.
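Chunk-by-chunk assembly amounts to appending each chunk's bytes, in order, to one local file. A minimal sketch of the idea (the function name is mine, not from downloadFile.py):

```python
def write_chunks(chunk_iter, out_filename):
    # Opening with 'wb' truncates any existing file of the same name,
    # matching the overwrite behaviour noted above; each downloaded
    # chunk is then appended in order.
    with open(out_filename, 'wb') as f:
        for chunk in chunk_iter:
            f.write(chunk)

# In downloadFile.py terms, chunk_iter would yield response.content
# from one GET per chunk ID listed in downloadChunkData.json.
```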
Deleting a File
You can delete the contents of any file the user has access to on the Anaplan server. Note: This only removes private content. Default content and the import data source model object will remain. You can download this script from this link: deleteFile.py.
Standalone Requests Code and Their Required Headers
In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will leave the content to the right of the Authorization: headers blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see Certificate-Authorization-Using-the-Anaplan-API for more information on encoded certificates).
Note: requests.get only returns a response body from the server; no data is saved locally unless you write it to a local file.
Get Workspaces List
requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization':})
Get Models List
requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization':})
or
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization':})
Get Model Info
requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization':})
Get Files/Imports/Exports/Actions/Processes List
The get requests for files, imports, exports, actions, and processes are largely the same. Change files to imports, exports, actions, or processes to run each.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization':})
Get Chunk Data
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization':})
Post Chunk Count
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})
Upload a Chunk of a File
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file})
Mark an upload complete
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData})
Upload a File in a Single Chunk
requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file})
Run an Import/Export/Process
The post requests for imports, exports, and processes are largely the same. Change imports to exports or processes to run each.
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'}))
Run an Action
requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', data=json.dumps({'localeName': 'en_US'}), headers={'Authorization': , 'Content-Type': 'application/json'})
Get Task list for an Import/Export/Action/Process
The get requests for import, export, action, and process task lists are largely the same. Change imports to exports, actions, or processes to get each task list.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization':})
Get Status for an Import/Export/Action/Process Task
The get requests for import, export, action, and process task statuses are largely the same. Change imports to exports, actions, or processes to get each task status. Note: Only imports and processes will ever generate a failure dump.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization':})
Download a File
Note: You will need to get the chunk metadata for each chunk of a file you want to download.
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'})
Delete a File
Note: This only removes private content. Default content and the import data source model object will remain.
requests.delete(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/json'})
Note: SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information on SFDC user administration see the apiary entry for SFDC user administration.