-
Using the Anaplan Certificate with the Anaplan REST API: A Comprehensive Guide
If you're diving into integrating with the Anaplan REST API using an Anaplan Certificate, you'll need to know certain requirements. This post is a step-by-step guide to ensure a seamless connection using Python. If you'd prefer to avoid a programmatic approach to using your certificate, then please check out the CA Certificate Auth Generator (yet to be updated with new v2 format).
📢What's New: Enhanced Security with v2 Format
Recent Update: Certificate-Based Authentication for Anaplan APIs now supports a new v2 format that strengthens security. This guide covers both the original v1 format and the new v2 format.
Choose Your Format:
* v2 format (Recommended): Includes timestamp data for enhanced security
* v1 format: Uses purely random data, supported for backward compatibility
Important: With v2 format, each authentication request must generate a fresh payload due to the timestamp component. You cannot reuse payloads between API calls.
Migration Timeline and Format Support
Current Status: Both v1 and v2 formats are supported.
Future Direction: Anaplan plans to phase out v1 format support over time as part of ongoing security enhancements. While there is no immediate deprecation timeline, we recommend:
* New implementations: Start with v2 format to ensure long-term compatibility
* Existing v1 users: Begin planning migration to v2 format to avoid future disruption
* Production systems: Consider updating to v2 during your next maintenance cycle
Core Requirements (Both Formats)
Regardless of which format you choose, these fundamental requirements remain the same:
• Make a POST Call
Initiate a POST call to https://auth.anaplan.com/token/authenticate
• Add the Certificate string to the Authorization header
The Authorization header value should include your Anaplan public key:
Authorization: CACertificate MIIFeDCCBGCgAwIBAgIQCcnAr/+Z3f...
• Pass a Random String
Include a 100-byte message within the body of your REST API call. This data must be signed using your certificate's private key to create the authentication payload.
[V1 Format]
Random Data: a 100-byte string of purely random characters.
Example: xO#hXOHcj2tj2!s#&HLzK*NrOJOfbQaz)MvLQnz4Ift*0SuWK&r#1Ud^L@7wAb @7EST @!cHyR%n&0)72C#J!by@RMqY2bFc7uGQP
JSON Structure:
{
"encodedData": "ACTUAL_ENCODED_DATA_VALUE",
"encodedSignedData": "ACTUAL_SIGNED_ENCODED_DATA_VALUE"
}
[V2 Format] (Recommended)
Timestamp + Random Data: Combines an 8-byte timestamp with 92 bytes of random data (total: 100 bytes).
Structure:
* First 8 bytes: Current epoch timestamp (binary format)
* Remaining 92 bytes: Random data
* Total: 100 bytes exactly
JSON Structure:
{
"encodedDataFormat": "v2",
"encodedData": "ACTUAL_ENCODED_DATA_VALUE",
"encodedSignedData": "ACTUAL_SIGNED_ENCODED_DATA_VALUE"
}
Security Benefit: The timestamp makes each request unique and helps prevent replay attacks where someone might try to reuse an old authentication payload.
• Certificate Format Requirements
Both the Public Certificate and the Private Key must be in PEM format. PEM (Privacy Enhanced Mail) is recognizable by its delimiters:
* Public certificates: -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----
* Private keys: -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----
How the Signing Process Works
Both formats follow the same cryptographic process:
* Generate the 100-byte message (format depends on v1 vs v2)
* Base64 encode the message → This becomes your encodedData
* Sign the message with your private key using RSA-SHA512
* Base64 encode the signature → This becomes your encodedSignedData
* Create the JSON payload with the appropriate format structure
Python Implementation
Here's an enhanced Python script that handles both v1 and v2 formats with user-friendly prompts:
"""
gen_signed_data.py
This script generates a JSON payload containing a base64-encoded random message and its RSA signature,
intended for use with Anaplan token authentication API (e.g., for CBA authentication flows).
It supports two payload formats (v1 and v2), where v2 prefixes the message with the current epoch seconds.
The script reads the RSA private key (optionally encrypted) from a file, securely prompts for the passphrase,
and can use environment variables to automate input. The resulting JSON is printed to stdout for use in API requests.
Environment Variables:
- PRIVATE_KEY_FILE: Path to the RSA private key file.
- PAYLOAD_VERSION: Payload version ('v1' or 'v2').
Usage:
python3 gen_signed_data.py
# or set environment variables to skip prompts:
PRIVATE_KEY_FILE=/path/to/key PAYLOAD_VERSION=v2 python gen_signed_data.py
"""
from base64 import b64encode
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA512
import json
import time
import os
import getpass

# Generate the encodedData parameter
def create_encoded_data_string(message_bytes):
    # Step #1 - Convert the binary message into Base64 encoding:
    # When transmitting binary data, especially in text-based protocols like JSON,
    # it's common to encode the data into a format that's safe for transmission.
    # Base64 is a popular encoding scheme that transforms binary data into an ASCII string,
    # making it safe to embed in JSON, XML, or other text-based formats.
    message_bytes_b64e = b64encode(message_bytes)
    # Step #2 - Convert the Base64-encoded binary data to an ASCII string:
    # After Base64 encoding, the result is still in a binary format.
    # By decoding it to ASCII, we get a string representation of the Base64 data,
    # which is easily readable and can be transmitted or stored as regular text.
    message_str_b64e = message_bytes_b64e.decode('ascii')
    return message_str_b64e

# Generate the encodedSignedData parameter
def create_signed_encoded_data_string(message_bytes, key_file, passphrase):
    # Step #1 - Read the private key file:
    # Private keys are sensitive pieces of data that should be stored securely.
    # A context manager ensures the file handle is closed once the key is read.
    with open(key_file, 'r', encoding='utf-8') as f:
        key_file_content = f.read()
    # Step #2 - Import the RSA private key:
    # The RSA private key is imported from the previously read file content.
    # If the key is encrypted, a passphrase will be required to decrypt and access the key.
    my_key = RSA.import_key(key_file_content, passphrase=passphrase)
    # Step #3 - Prepare the RSA key for signing operations:
    # Before we can use the RSA key to sign data, we need to prepare it using
    # the PKCS#1 v1.5 standard, a common standard for RSA encryption and signatures.
    signer = pkcs1_15.new(my_key)
    # Step #4 - Create a SHA-512 hash of the message bytes:
    # It's common practice to create a cryptographic hash of the data you want to sign
    # instead of signing the data directly. This ensures the integrity of the data.
    # Here, we're using the SHA-512 algorithm, which produces a fixed-size 512-bit (64-byte) hash.
    message_hash = SHA512.new(message_bytes)
    # Step #5 - Sign the hashed message:
    # Once the data is hashed, the hash is then signed using the private key.
    # This produces a signature that can be verified by others using the associated public key.
    message_hash_signed = signer.sign(message_hash)
    # Step #6 - Encode the binary signature to Base64 and decode it to an ASCII string:
    # Similar to our earlier function, after signing, the signature is in a binary format.
    # We convert this to Base64 for safe transmission or storage, and then decode it to a string.
    message_str_signed_b64e = b64encode(message_hash_signed).decode('ascii')
    return message_str_signed_b64e

def create_json_body(encoded_data_value, signed_encoded_data_value, encoded_data_format="v2"):
    # Make encodedDataFormat the first attribute of the data
    data = {}
    if encoded_data_format == "v2":
        data["encodedDataFormat"] = "v2"
    data.update({
        "encodedData": encoded_data_value,
        "encodedSignedData": signed_encoded_data_value
    })
    return json.dumps(data, indent=4)

# Get user input for private key file and encodedDataFormat
# If environment variables are defined, use them; otherwise, prompt the user
# Usage: Define the following environment variables to skip user input:
# - PRIVATE_KEY_FILE: Path to the private key file (e.g., '/path/to/private.key')
# - PAYLOAD_VERSION: Payload version (e.g., 'v1' or 'v2')
# Print usage instructions if environment variables are not defined
if not os.getenv('PRIVATE_KEY_FILE') or not os.getenv('PAYLOAD_VERSION'):
    print("Usage Instructions:")
    print("Define the following environment variables to skip the user input:")
    print(" - PRIVATE_KEY_FILE: Path to the private key file (e.g., '/path/to/private.key')")
    print(" - PAYLOAD_VERSION: Payload version (e.g., 'v1' or 'v2')")

private_key_file = os.getenv('PRIVATE_KEY_FILE') or input(
    "Enter the path to the private key file (default: '/path/to/private/key'): "
) or '/path/to/private/key'

encoded_data_format = os.getenv('PAYLOAD_VERSION') or input(
    "Enter the encodedDataFormat (default: 'v2', options: 'v1', 'v2'): "
) or "v2"

# Provide the private key passphrase. If there is no passphrase, leave it blank for None
private_key_file_passphrase = getpass.getpass(
    "Enter the private key passphrase (leave blank for None): "
) or None

# Create the random 100-byte message
if encoded_data_format == "v2":
    # Prefix epoch_seconds to message_bytes.
    # int(time.time()) gives the current time in seconds since the Unix epoch
    # (January 1, 1970, 00:00:00 UTC). The value is always in UTC, regardless
    # of the system's local timezone.
    epoch_seconds = int(time.time())
    message_bytes = epoch_seconds.to_bytes(8, 'big') + get_random_bytes(100 - 8)
else:
    # Generate random bytes without prefix
    message_bytes = get_random_bytes(100)

# Create the encoded data string
message_str_b64e = create_encoded_data_string(message_bytes=message_bytes)

# Create the encoded signed data string
message_str_signed_b64e = create_signed_encoded_data_string(
    message_bytes=message_bytes, key_file=private_key_file,
    passphrase=private_key_file_passphrase)

# Build the formatted body for the Anaplan API token authentication endpoint
# (https://auth.anaplan.com/token/authenticate) to generate an access_token
cba_payload = create_json_body(
    encoded_data_value=message_str_b64e,
    signed_encoded_data_value=message_str_signed_b64e,
    encoded_data_format=encoded_data_format)

# Print the formatted body to stdout
print("Generated JSON body for API token authentication:")
print(cba_payload)
This script requires the PyCryptodome library:
pip install pycryptodome
Which Format Should You Use?
For new implementations: Use v2 format (the default) for enhanced security.
For existing systems: v1 format remains supported, but consider migrating to v2 when possible.
For automation: Remember that v2 requires generating a fresh payload for each authentication request due to the timestamp component.
Quick Demo
Here's how easy it is to use with Postman or any REST client:
* Run the script and copy the generated JSON payload
* Create a POST request to https://auth.anaplan.com/token/authenticate (check the URL, IP, and allowlist requirements | Anaplan Support page to ensure you have the correct Auth API URL corresponding to your environment)
* Add your certificate to the authorization header
* Paste the JSON as the request body and hit send
* Receive your access token for subsequent API calls
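If you prefer a scripted demo over Postman, the same flow can be sketched in Python with the requests library. This is a minimal sketch, not the only approach: the certificate string and payload below are placeholders, and in practice the body is the JSON printed by gen_signed_data.py.

```python
# Hypothetical sketch: build the POST request described in the demo steps.
# CERT_B64 is a placeholder for your public certificate body; the payload
# argument is the JSON string produced by gen_signed_data.py.
CERT_B64 = "MIIFeDCCBGCgAwIBAgIQ..."  # placeholder, not a real certificate

def build_token_request(cert_b64, cba_payload):
    return {
        "url": "https://auth.anaplan.com/token/authenticate",
        "headers": {
            "Authorization": f"CACertificate {cert_b64}",
            "Content-Type": "application/json",
        },
        "data": cba_payload,  # the payload is already a JSON string
    }

req = build_token_request(CERT_B64, '{"encodedData": "...", "encodedSignedData": "..."}')
# To actually send it (requires network access and valid credentials):
# import requests
# resp = requests.post(req["url"], headers=req["headers"], data=req["data"])
# access_token = resp.json()["tokenInfo"]["tokenValue"]
```

Remember that with v2 a fresh payload must be generated for every authentication call.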
Authors: Adam Trainer, @AdamT - Operational Excellence Group (OEG)
-
[Part 3] Anaplan Audit History Data in an Anaplan Reporting Model
With Anaplan Audit, Enterprise and Professional level Customers can track audit events that occur in their Anaplan tenant.
For the Anaplan Audit History project, we created a simple Anaplan reporting model and app to report on Anaplan Audit events in the Anaplan UX.
Customers may download and use the Anaplan Audit Model & App in their Customer tenant. The standalone Anaplan Audit model consists of a single Anaplan model with a small number of lists, modules, actions, processes, and user experience pages. Customers are able to modify and maintain their Audit Model as they see fit.
Once connected to the Python integration described in Part 2, Customers may schedule daily updates from the SQLite tables. The model update process consists of a single Anaplan process that imports new audit data as text into a flat ‘Audit’ module. SUM and COUNT functions sum over audit event frequencies by Audit activity category into an ‘Audit Report’ module. A ‘Refresh Log’ module is dimensionalized with a batch ID dimension that captures timestamps of every model update as well as record counts. These three modules are displayed on UX pages.
The Anaplan model is ~2MB at download, but will grow over time as the number of unique audit records increases based on the volume of activity in a Customer’s tenant. Customers may therefore need to periodically archive older audit data to maintain desired model size.
Audit Logs are available to members of the Tenant Auditor role via either the Administration UI or the Audit API. The Anaplan Tenant Admin assigns members of the Tenant Auditor role. Audit logs are retained for 30 days, whether accessed through the Admin console or the REST API.
Please reach out to your Anaplan Customer Success Business Partner or Representative to obtain your Model copy. The Anaplan Audit Model is not supported by Anaplan and is used at Customer’s own discretion.
See Part 1 and Part 2 of Anaplan Audit History Data in an Anaplan Reporting Model.
-
How do I show only those line items which have non zero values for list and time dimension?
I want to see only those line items that are not equal to zero, from a hierarchy list subset, in a module that is also dimensioned by time.
I am looking to create a line item which I will apply as a filter to the module.
I have 15+ line items, but for each list subset item the filter should only show those items that have values <> 0 anywhere across the time horizon (i.e., if any week is non-zero).
-
ALM syncing when multiple developments are happening
We have 2 developers working on the same model in different modules.
One has completed his changes but the other is not ready to sync.
Unfortunately you have to sync the whole model, which means partial changes would be synced too. If the first developer's changes are urgent, waiting for a major change to be finished is unproductive.
Would it be an idea to just sync the changes (by choice), and not the whole model?
-
Search option for filters when selecting filters
In as much detail as possible, describe the problem or experience related to your idea. Please provide the context of what you were trying to do and include specific examples or workarounds:
Right now there is no ability to search filter by name when applying filter for a view.
How often is this impacting your users?
We have to scroll and find the module where our filter is located, which can consume some of the model builder's time if there is a large number of modules.
Who is this impacting? (ex. model builders, solution architects, partners, admins, integration experts, business/end users, executive-level business users)
Model Builders & page Builders
What would your ideal solution be? How would it add value to your current experience?
Provide a search option to search filter by name at time of filter selection for a view.
Please include any images to help illustrate your experience.
-
How have you from-scratch built lot-level, geo-based Inventory Consumption?
We’re working on modeling inventory consumption at the lot level, accounting for shelf life considerations, since we need visibility into which lots may not consume—both for risk reporting and E&O analysis. Each item may have customers with different shelf life requirements. For example, item A may need 270 days for some customers, but only 180 days for others. We reflect this in our demand inputs, and we feed these into our consumption calculation, which is structured to consume lots with the lowest shelf life first, across shelf life requirements from lowest to highest. Here is an example of what that would look like:
inventory

item | Lot | Lot Shelf Life | Lot Rank | Lot Qty
A | 1234 | 190 | 1 | 500
A | 5678 | 211 | 2 | 600
A | 9876 | 345 | 3 | 7000

demand

item | shelf life requirement | demand
A | 180 | 450
A | 270 | 3000

consumption

item | Lot | Lot Shelf Life | Lot Rank | Lot Qty | Shelf life requirement considered | consumed qty | remaining qty | remaining 180 demand | remaining 270 demand
A | 1234 | 190 | 1 | 500 | 180 | 450 | 50 | 0 | 3000
A | 5678 | 211 | 2 | 600 | 180 | 0 | 600 | 0 | 3000
A | 9876 | 345 | 3 | 7000 | 270 | 3000 | 4000 | 0 | 0
In this logic, Lot 1234 is consumed first because it has the lowest shelf life that still satisfies the lowest shelf life requirement. We currently use rank cumulate chains that run sequentially, each handling one shelf life rule by consuming inventory and updating the available quantity of the lot before the next shelf life rule is applied. Questions:
* Can we reduce or eliminate the need for multiple rank cumulates?
Ideally, we’d use a module dimensioned by shelf life rule or something similar, so we only maintain one rank cumulate. Is it possible to track consumption of a lot’s inventory across multiple list members (e.g., shelf life rules) without rebuilding rank cumulates for each?
* If that’s not feasible, can the same logic be modeled using the Optimizer?
We’re open to using Optimizer if it allows a cleaner or more scalable solution. We have (like many) struggled to translate this into the right structure & objective function.
Thanks in advance for your thoughts—happy to provide more details or walk through the model if helpful.
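As a sanity check on the worked example, the greedy logic described above (requirements processed lowest to highest, eligible lots consumed lowest shelf life first) can be sketched outside Anaplan. The Python below is purely illustrative, with hypothetical field names, not Anaplan model syntax:

```python
# Illustrative sketch of the greedy lot-consumption logic (hypothetical names).
def consume(lots, demands):
    """lots: dicts with 'lot', 'shelf_life', 'qty'; demands: (requirement, qty) pairs."""
    lots = sorted(lots, key=lambda l: l['shelf_life'])  # lowest shelf life first
    consumed = {l['lot']: 0 for l in lots}
    remaining = {l['lot']: l['qty'] for l in lots}
    for requirement, demand in sorted(demands):  # lowest requirement first
        for l in lots:
            if demand <= 0:
                break
            # A lot is eligible only if it meets the shelf life requirement
            if l['shelf_life'] >= requirement and remaining[l['lot']] > 0:
                take = min(remaining[l['lot']], demand)
                consumed[l['lot']] += take
                remaining[l['lot']] -= take
                demand -= take
    return consumed, remaining

lots = [{'lot': '1234', 'shelf_life': 190, 'qty': 500},
        {'lot': '5678', 'shelf_life': 211, 'qty': 600},
        {'lot': '9876', 'shelf_life': 345, 'qty': 7000}]
consumed, remaining = consume(lots, [(180, 450), (270, 3000)])
# Reproduces the consumption table: lot 1234 -> 450, lot 5678 -> 0, lot 9876 -> 3000
```

In Anaplan terms, each pass of the inner loop corresponds to one rank cumulate chain per shelf life rule, which is exactly the repetition the question asks about reducing.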
-
How to change the order for LIS list items which is already used in a module??
I previously made a LIS from a module and used the same LIS in another module; the LIS is also used as a selector on the dashboard.
Now I want to reorder the LIS items. I have changed the order in the module from which the LIS was made, but the order in the list is still not changing.
How can I resolve this?
-
Display images in Excel directly exporting Anaplan NUX grid
I have published a grid in the NUX with images visible for each item by using a URL. Now there is an ask from users who would like to export the grid to Excel with the images visible.
Is there a way to export the grid with the images visible in Excel?
-
Using Network cards for dynamic org charts
Hi,
We came across org charts (Hierarchy Charts), which are great functionality. However, we want the client to be able to do scenario planning on the hierarchy, by month. E.g., John could report to Smith in Jan, but in Feb he reports to Molly.
Since Org Charts cannot refresh dynamically unless you incorporate the time dimension into the hierarchy itself, we were hoping this could be achieved with Network cards, as we could keep a parent-child mapping by month.
Can anyone please help me understand whether this is the best approach and/or whether there are any limitations? We are talking about a 10-level reporting hierarchy.
Regards,
Aakash Sachdeva
-
CloudWorks—This is how we use it, Part 3: The CloudWorks API
In Part 3 of our CloudWorks - This is how we use it series, we will introduce you to the CloudWorks API.
This is typically for an audience comfortable with REST API concepts and programming.
If you need a refresher on What CloudWorks is, see Part 1 or, for how to set it up, see Part 2.
The CloudWorks API allows you to create and execute connections and integrations, extending integration automation between Amazon AWS S3 and Anaplan.
We will use a typical CloudWorks workflow to build and run integrations using REST API and Postman. To follow along, download the set of Jupyter Notebooks and data set that will walk you through each step of this workflow using Python (CloudWorksAPIBasic.zip and AccountDetailsS3.zip).
The following are minimum requirements to be able to perform the steps described in this blog.
* Required software: Download and install Postman (https://www.postman.com/downloads), Python 3.8.2, and Jupyter Notebook
* Anaplan: Create a Model, LIST (Accounts), Module (AccountDetails), Import AccountDetailsS3.csv into LIST (Create Import Action), Import AccountDetailsS3.csv into Module (Create Import Action)
* AWS S3: AWS Account, Create Access Key, Secret Access Key, Create a bucket, upload Source File (AccountDetailsS3.csv) to S3 Bucket
* Working knowledge of Anaplan Model Building Concepts, REST API, Postman, Python, Jupyter
* Familiarity with Anaplan Authentication and Integration REST APIs
Step 1 - Anaplan setup
* Download AccountDetailsS3.csv from this blog and save it to your workstation
* Create a new Anaplan Model. Provide an appropriate model name.
* Create LIST named Accounts. Import AccountID into this LIST from AccountDetailsS3.csv
* Create a Module named AccountDetails with Accounts as a dimension
* Create the following line items: AccountName (Text), Industry (Text), AnnualRevenue (Number), EmployeeCount (Number)
* Import AccountDetailsS3.csv into AccountDetails Module.
* Import Action Name for LIST (Accounts): ______________________
* Import Action Name for Module (AccountDetails): __________________________
Step 2 - AWS setup
* Assumptions: AWS Access Key & Secret Access Key have been generated in AWS Console
* Create an S3 bucket named anaplandemo
* Create a folder named source
* Upload AccountDetailsS3.csv to anaplandemo/source in AWS S3
Step 3 - Postman setup
* Create a collection named CloudWorks
* Create the following folder structure in the CloudWorks collection
* We will use variables in API requests. Create the following variables in the CloudWorks collection.
Now that we have completed the required prep-work in Anaplan, AWS S3 & Postman, we will make CloudWorks API requests to build and execute integrations to load data from AWS S3 to Anaplan. We will follow steps outlined in a typical CloudWorks task, outlined below, to build this integration. CloudWorks REST API requests will be made using Postman.
Step 4 - CloudWorks API
Each Anaplan API request begins with generating an authentication token using the Anaplan Authentication Services REST API. The generated AuthToken is then passed in the header of the CloudWorks API request for Anaplan authentication. Keep in mind, the Anaplan AuthToken is valid for only 30 minutes. Once it expires, you will need to regenerate the token and update the CloudWorks API header.
We will perform the following tasks in Postman to bring AWS S3 data to Anaplan.
* Create a connection: Using AWS S3 connector, we will create a CloudWorks connection to AWS S3 Bucket.
* Create an integration: Using AWS connection and Anaplan information (workspaceid, modelid, fileid, import action id), we will create a CloudWorks integration.
* Run an integration: Run integration using CloudWorks API.
* Retrieve Integration Information: Get integration run information.
Generate Authentication Token (Basic Authentication)
The first step in any Anaplan API request, including CloudWorks, is generating an Anaplan authentication token. The token can be generated using the Anaplan Authentication Services API. The URL for authentication services is https://auth.anaplan.com/token/authenticate.
Authentication Services API to generate token has the following REST structure:
Method: POST
API end point: https://auth.anaplan.com/token/authenticate
Headers: Authorization: Basic username:password (the username:password string must be base64 encoded)
Copy the value of tokenValue to the collection variable token_value. You will use variables instead of hard-coding values in your API requests. Token values expire every 30 minutes. You will need to regenerate a token and update the token_value variable if you encounter authentication failures.
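As a sketch of the same step outside Postman, the Basic header can be built in Python as below. The credentials are placeholders, and the commented lines show where the actual network call would go:

```python
import base64

# Sketch: construct the Basic Authorization header for the token endpoint.
# Username and password here are placeholders, not real credentials.
def build_basic_auth_header(username, password):
    creds = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {creds}"}

headers = build_basic_auth_header("user@example.com", "secret")
# import requests
# resp = requests.post("https://auth.anaplan.com/token/authenticate", headers=headers)
# token_value = resp.json()["tokenInfo"]["tokenValue"]  # copy to token_value
```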
Create a connection using CloudWorks API
The first step in building a CloudWorks integration is to create a connection to AWS S3 bucket. Passing AuthToken in the header of API request, we will create a connection to AWS S3 bucket in this step. Create connection API has the following REST structure.
Method: POST
API end point: https://api.cloudworks.anaplan.com/1/0/integrations/connections
Authorization: No Auth
Headers:
Authorization: AnaplanAuthToken {{token_value}}
Content-Type: application/json
Body:
{
"type": "AmazonS3",
"body": {
"name":"AWS_CW_API",
"accessKeyId":"XXXXXXXXXXXXXXXX",
"secretAccessKey":"XXXXXXXXXXXXXXXXXXXXXXX",
"bucketName":"anaplandemo"
}
}
Do not change the value for “type”.
Provide values for the following elements: name, accessKeyId, secretAccessKey, & bucketName
* Select the POST method and API end point https://api.cloudworks.anaplan.com/1/0/integrations/connections.
* Select “No Auth” for the Authorization type. We will pass the authentication token in the header of the request.
* Create the following headers: Authorization and Content-Type.
* Copy and paste the following JSON structure into the Body of your request. Replace values for accessKeyId, secretAccessKey, and bucketName with your values.
{
"type": "AmazonS3",
"body": {
"name":"AWS_CW_API",
"accessKeyId":"XXXXXXXXXXXXXXXX",
"secretAccessKey":"XXXXXXXXXXXXXXXXXXXXXXX",
"bucketName":"anaplandemo"
}
}
* Once the API request is submitted, the service should return a successful response with HTTP code 200 and a value for connectionId. Save this connection id for the next API request, which creates a CloudWorks integration: copy the value for connectionId to a notepad and update the collection variable connection_id.
* Log into Anaplan CloudWorks. You should see the connection you created under Connections.
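The same create-connection call can be sketched in Python. This mirrors the Postman request above; the token, keys, and bucket are placeholders, and the exact shape of the response JSON should be checked against the CloudWorks API documentation:

```python
# Sketch: build the create-connection request (placeholder credentials).
def build_create_connection(token_value, access_key_id, secret_access_key, bucket_name):
    return {
        "url": "https://api.cloudworks.anaplan.com/1/0/integrations/connections",
        "headers": {
            "Authorization": f"AnaplanAuthToken {token_value}",
            "Content-Type": "application/json",
        },
        "body": {
            "type": "AmazonS3",  # do not change this value
            "body": {
                "name": "AWS_CW_API",
                "accessKeyId": access_key_id,
                "secretAccessKey": secret_access_key,
                "bucketName": bucket_name,
            },
        },
    }

req = build_create_connection("TOKEN", "AKIA_PLACEHOLDER", "SECRET_PLACEHOLDER", "anaplandemo")
# import requests
# resp = requests.post(req["url"], headers=req["headers"], json=req["body"])
# Copy the connectionId from resp.json() to the connection_id collection variable.
```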
Create an integration using CloudWorks API
In this second step, we will create a CloudWorks integration. In the integration, we will provide connectivity information for both AWS S3 and Anaplan. Two types of integrations can be built:
* AWS S3 (Source) Anaplan (Target): Data from a file on AWS S3 is selected as a source and Anaplan Import Action as a target.
* Anaplan (Source) AWS S3: Data from Anaplan is exported using Export Action (Source) and written to a file on AWS S3 Bucket (Target).
Create integration API has the following REST structure:
Method: POST
API end point: https://api.cloudworks.anaplan.com/1/0/integrations
Authorization: No Auth
Headers:
Authorization: AnaplanAuthToken {{token_value}}
Content-Type: application/json
Body:
{
"name": "CWAPI_AWSS3_Anaplan",
"jobs": [
{
"type": "AmazonS3ToAnaplan",
"sources": [
{
"type": "AmazonS3",
"connectionId": "a2fb43a9-3bc8-4bcb-9f37-dd59a75eb940",
"file": "source/AccountsS3.csv"
}
],
"targets": [
{
"type": "Anaplan",
"actionId": "112000000005",
"fileId": "113000000000",
"workspaceId": "8a81b09d5e8c6f27015ece3402487d33",
"modelId": "35A6EF893D7F47EEA5A554D5CC7DC330"
}
]
}
]
}
Do not change the values for “type”: “AmazonS3ToAnaplan”, “type”: “AmazonS3”, “type”: “Anaplan”.
Provide values for the following elements: connectionId, actionId, fileId, workspaceId, modelId.
* Select POST method and API end point https://api.cloudworks.anaplan.com/1/0/integrations. Also, select “No Auth” for Authorization.
* Create the following headers: Authorization and Content-Type
* Copy and paste the following JSON structure into the Body of your request. Replace the values for connectionId, file, actionId, fileId, workspaceId, and modelId. You may obtain the values for your import action (actionId) and data source file (fileId) using the Anaplan Integration API; details on the Integration APIs can be found on Anapedia. You can obtain the values for workspaceId & modelId from the Anaplan UX.
{
"name": "CWAPI_AWSS3_Anaplan",
"jobs": [
{
"type": "AmazonS3ToAnaplan",
"sources": [
{
"type": "AmazonS3",
"connectionId": "a2fb43a9-3bc8-4bcb-9f37-dd59a75eb940",
"file": "source/AccountsS3.csv"
}
],
"targets": [
{
"type": "Anaplan",
"actionId": "112000000005",
"fileId": "113000000000",
"workspaceId": "8a81b09d5e8c6f27015ece3402487d33",
"modelId": "35A6EF893D7F47EEA5A554D5CC7DC330"
}
]
}
]
}
* Once the API request is submitted, the service should return a successful response with HTTP code 200 and a value for integrationId. Copy & paste the value for integrationId to the collection variable s3_anaplan_int_id. We will use the integrationId in a later step when we gather run details for an integration.
* Log into Anaplan CloudWorks. You should see the integration you created under Integrations.
Run an integration using CloudWorks API
Now that we have created an integration, we will run the integration that loads data from a file on AWS S3 bucket to an Anaplan module. Once the integration is run, we will retrieve a history of integration runs and its details. We will do this in the next step. CloudWorks APIs can be used to schedule integrations. However, in this blog, we will focus on executing integrations via API.
You will need to provide the following details in your API request:
Method: POST
API end point: https://api.cloudworks.anaplan.com/1/0/integrations/<integration_id>/run
Authorization: No Auth
Headers:
Authorization: AnaplanAuthToken {{token_value}}
Content-Type: application/json
Body: None
* Select the POST method and API end point https://api.cloudworks.anaplan.com/1/0/integrations/<integration_id>/run. Also, select “No Auth” for Authorization.
* Create the following headers: Authorization and Content-Type
* Once the API request is submitted, the service should return a successful response back with HTTP code 200.
* View integration run status & details in Anaplan CloudWorks UX.
* If an integration run fails, you will also be notified via email.
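Because the run request carries no body, a scripted version reduces to the URL and headers. The sketch below uses a placeholder integration id:

```python
# Sketch: build the run-integration request; integration_id is the value
# saved when the integration was created (placeholder shown here).
def build_run_request(token_value, integration_id):
    return {
        "method": "POST",
        "url": f"https://api.cloudworks.anaplan.com/1/0/integrations/{integration_id}/run",
        "headers": {
            "Authorization": f"AnaplanAuthToken {token_value}",
            "Content-Type": "application/json",
        },
        # Body: None
    }

req = build_run_request("TOKEN", "PLACEHOLDER-INTEGRATION-ID")
# import requests
# resp = requests.post(req["url"], headers=req["headers"])  # expect HTTP 200
```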
Get history of integration runs
CloudWorks API provides end points to get details on integration run history. REST structure for this end point is:
Method: GET
API end point: https://api.cloudworks.anaplan.com/1/0/integrations/runs/<integrationId>
Parameters: offset = 0, limit = 2
Authorization: No Auth
Headers:
Authorization: AnaplanAuthToken {{token_value}}
Content-Type: application/json
Body: None
* Select the GET method and API end point https://api.cloudworks.anaplan.com/1/0/integrations/runs/<integrationId>. Also select “No Auth” for Authorization.
* Create two parameters, offset & limit, and set their values to 0 & 2 respectively.
* Create the following headers: Authorization and Content-Type
* The service request should return a successful response back with HTTP code 200 and details for each run for selected integration.
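The history call is the only GET in this workflow and takes offset/limit as query parameters. A hedged Python sketch, again with a placeholder integration id:

```python
# Sketch: build the run-history request with offset/limit query parameters.
def build_history_request(token_value, integration_id, offset=0, limit=2):
    return {
        "method": "GET",
        "url": f"https://api.cloudworks.anaplan.com/1/0/integrations/runs/{integration_id}",
        "params": {"offset": offset, "limit": limit},
        "headers": {
            "Authorization": f"AnaplanAuthToken {token_value}",
            "Content-Type": "application/json",
        },
    }

req = build_history_request("TOKEN", "PLACEHOLDER-INTEGRATION-ID")
# import requests
# resp = requests.get(req["url"], headers=req["headers"], params=req["params"])
```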
Contributing authors: Pavan Marpaka, Scott Smith, and Christophe Keomanivong.
-
Error in Uploading Screenshot for Anaplan ADO Maps.
Hi All,
I'm facing an issue while uploading a screenshot: it shows "error while uploading the file". Is it only me, or is anyone else facing the same? Can anyone help resolve this at the earliest? Attaching the screenshot of the error below.
Thanks,
-
Returning same value from same Line Item but different items
Hi all,
Is it possible to return a value from another item of the same line item? For example, picture this as my module:
I want to be able to calculate a delta between S1 and S2 or S3 and S1, like this:
Here I'm using SELECT, but this will be a LOOKUP since I want the user to be able to select which scenarios they want to compare. Of course, this formula won't work because it creates a circular reference. Is there a way to make it work?
The user wants to have a Delta on the same table as the scenarios, which is why I'm not creating a second module to calculate it.