-
Ability to Apply Filters before downloading Model History
Currently, if I want to check the history for activity on a specific module or list but don't know the dates when the change took place, it takes a very long time to download 'ALL' change history and locate that change. This is especially painful when there are many changes and the file is huge. It would be best if I could filter by module, list, and user before downloading.
-
CloudWorks Job Dependency Timing
If I schedule two CloudWorks jobs, one that loads data into Module A from a different model and a second that runs the actions to move that data from Module A into Module B, will the second job wait until the first job finishes before starting, or will it start at its scheduled time even if the first job is still running?
-
[Part 2] Enhancing Anaplan Audit Log Data Extraction with a Streamlined Python Solution
The Anaplan Audit History project, consisting of several Python modules and a configuration file, streamlines the process of fetching, formatting, and loading Anaplan audit log data into a preconfigured Anaplan Model. The project leverages the Anaplan REST API, the Python Pandas library for efficient conversion of web-service JSON data into tabular form, and SQLite for advanced data blending and transformation capabilities.
SQLite, a highly versatile and lightweight database engine, offers substantial benefits for transforming and blending related datasets. Its capacity to execute complex SQL queries and join operations enables users to merge datasets in diverse ways, establishing it as a powerful tool for data integration and enrichment. Utilizing SQLite's comprehensive set of built-in functions, the project seamlessly combines multiple Anaplan datasets and performs data transformation operations such as filtering, sorting, and reshaping within the SQLite environment. The simplicity of SQLite's file-based storage facilitates easy deployment and management directly within the Python code, making SQLite an efficient, scalable, and accessible solution for data blending, transformation, and summarization. In the context of this project, SQL code joins various Anaplan metadata, including Users, Workspaces, Models, Actions, Processes, and CloudWorks Integrations, with the raw audit data to synthesize the audit data in a reportable format.
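To make the blending step concrete, here is a minimal sketch of the Pandas-to-SQLite pattern. The table and column names are illustrative assumptions for the example, not the project's actual schema:

import sqlite3
import pandas as pd

# Hypothetical sample data standing in for DataFrames parsed from the API JSON.
audit_events_df = pd.DataFrame({
    "eventDate": ["2024-01-01T10:00:00Z", "2024-01-01T11:30:00Z"],
    "userId": ["u1", "u2"],
    "action": ["LOGIN", "MODEL_OPEN"],
})
users_df = pd.DataFrame({
    "id": ["u1", "u2"],
    "email": ["alice@example.com", "bob@example.com"],
})

# Load both DataFrames into SQLite (a file path works the same as ":memory:").
conn = sqlite3.connect(":memory:")
audit_events_df.to_sql("audit_events", conn, index=False)
users_df.to_sql("users", conn, index=False)

# Blend the raw audit events with user metadata via a SQL join.
report_df = pd.read_sql_query(
    """
    SELECT e.eventDate, e.action, u.email AS userEmail
    FROM audit_events AS e
    LEFT JOIN users AS u ON e.userId = u.id
    ORDER BY e.eventDate
    """,
    conn,
)
conn.close()
print(report_df)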
In order to supply all this Anaplan content, the following Anaplan REST APIs need to be leveraged:
* OAuth Service API - to authenticate and refresh the access_token based on the client_id and refresh_token.
* Authentication API - to authenticate and refresh the access_token based on a valid username and password combination or a valid certificate in PEM format.
* Audit API - to fetch the audit records (see the sketch after this list).
* Integration API - to fetch metadata about Anaplan objects such as data sources, Processes, and Actions. Additionally, to refresh content to the target Anaplan Audit Reporting Model, the bulk API is used to upload the report-ready audit data, and the transaction API is leveraged for updating the latest timestamp.
* SCIM API - to fetch Anaplan user metadata.
* CloudWorks API - to fetch CloudWorks integration metadata.
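As an illustration of the Audit API item above, here is a minimal sketch of fetching audit events with the requests library. It assumes an access_token already obtained through one of the authentication APIs; the endpoint URL and paging parameters follow Anaplan's public Audit API documentation and should be verified for your environment:

import requests

ACCESS_TOKEN = "your_access_token"  # obtained via the OAuth or Authentication API
headers = {"Authorization": f"AnaplanAuthToken {ACCESS_TOKEN}"}

# Fetch one page of audit events; the endpoint and response field names are
# assumptions based on the public Audit API documentation.
response = requests.get(
    "https://audit.anaplan.com/audit/api/1/events",
    headers=headers,
    params={"limit": 10000, "offset": 0},
)
response.raise_for_status()
payload = response.json()
print(f"Fetched {len(payload.get('response', []))} audit events")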
This project exemplifies how Python can effectively integrate and automate Anaplan operations, using modern OAuth services for Anaplan authorization in tandem with the rich capabilities of the Anaplan REST API. Additionally, the project highlights several Python best practices (combined in the sketch after this list), such as:
* Organizing code into packages and modules: Segmenting the code into multiple modules based on functionality improves maintainability and simplifies the process of locating and resolving future issues.
* Enhancing error handling: Effective error management is vital when working with external APIs. Implement try-except blocks to handle exceptions that may occur during API calls or file I/O operations.
* Utilizing Python's logging module: Opt for the built-in logging module instead of print statements for debugging, providing better control over log verbosity and streamlining log output management.
* Leveraging environment variables or configuration files: Avoid hardcoding sensitive information like API keys or credentials and store this information using environment variables or configuration files instead.
* Adding comments to the code: Include annotations in complex or non-obvious code sections to improve comprehensibility for both yourself and others.
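As a small illustration, here is a minimal sketch combining several of these practices: the logging module instead of print, credentials from an environment variable, and a try-except block around an API call. The URL argument and token variable are placeholders, not specific Anaplan endpoints:

import logging
import os
import requests

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Read the credential from an environment variable instead of hardcoding it.
API_TOKEN = os.environ.get("ANAPLAN_API_TOKEN")

def fetch(url: str) -> dict:
    # Wrap the external API call in try-except so failures are logged with context.
    try:
        response = requests.get(
            url,
            headers={"Authorization": f"AnaplanAuthToken {API_TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()
    except requests.RequestException:
        logger.exception("API call to %s failed", url)
        raise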
Anaplanners can use this project as a basis for building tailored integrations.
The source code and supplementary details, such as requirements, deployment instructions, and helpful videos, are available on GitHub.
Next, learn how to use this data in an Anaplan Model by reading the third installment of this series: [Part 3] Anaplan Audit History Data in an Anaplan Reporting Model.
Author: Quin Eddy, @QuinE - Director of Data Integration, Operational Excellence Group (OEG)
-
Using the Anaplan Certificate with the Anaplan REST API: A Comprehensive Guide
If you're diving into integrating with the Anaplan REST API using an Anaplan Certificate, you'll need to know a few requirements. This post is a step-by-step guide to ensuring a seamless connection using Python. If you'd prefer to avoid a programmatic approach to using your certificate, please check out the CA Certificate Auth Generator (not yet updated for the new v2 format).
📢 What's New: Enhanced Security with v2 Format
Recent Update: Certificate-based authentication for Anaplan APIs now supports a new v2 format for enhanced security. This guide covers both the original v1 format and the new v2 format.
Choose Your Format:
* v2 format (Recommended): Includes timestamp data for enhanced security
* v1 format: Uses purely random data, supported for backward compatibility
Important: With v2 format, each authentication request must generate a fresh payload due to the timestamp component. You cannot reuse payloads between API calls.
Migration Timeline and Format Support
Current Status: Both v1 and v2 formats are supported.
Future Direction: Anaplan plans to phase out v1 format support over time as part of ongoing security enhancements. While there is no immediate deprecation timeline, we recommend:
* New implementations: Start with v2 format to ensure long-term compatibility
* Existing v1 users: Begin planning migration to v2 format to avoid future disruption
* Production systems: Consider updating to v2 during your next maintenance cycle
Core Requirements (Both Formats)
Regardless of which format you choose, these fundamental requirements remain the same:
• Make a POST Call
Initiate a POST call to https://auth.anaplan.com/token/authenticate
• Add the Certificate string to the Authorization header
The Authorization header value should include your Anaplan public key:
Authorization: CACertificate MIIFeDCCBGCgAwIBAgIQCcnAr/+Z3f...
• Pass a Random String
Include a 100-byte message within the body of your REST API call. This data must be signed using your certificate's private key to create the authentication payload.
[V1 Format]
Random Data: a 100-byte string of purely random characters.
Example: xO#hXOHcj2tj2!s#&HLzK*NrOJOfbQaz)MvLQnz4Ift*0SuWK&r#1Ud^L@7wAb @7EST @!cHyR%n&0)72C#J!by@RMqY2bFc7uGQP
JSON Structure:
{
    "encodedData": "ACTUAL_ENCODED_DATA_VALUE",
    "encodedSignedData": "ACTUAL_SIGNED_ENCODED_DATA_VALUE"
}
[V2 Format] (Recommended)
Timestamp + Random Data: Combines an 8-byte timestamp with 92 bytes of random data (total: 100 bytes).
Structure:
* First 8 bytes: Current epoch timestamp (binary format)
* Remaining 92 bytes: Random data
* Total: 100 bytes exactly
JSON Structure:
{
    "encodedDataFormat": "v2",
    "encodedData": "ACTUAL_ENCODED_DATA_VALUE",
    "encodedSignedData": "ACTUAL_SIGNED_ENCODED_DATA_VALUE"
}
Security Benefit: The timestamp makes each request unique and helps prevent replay attacks where someone might try to reuse an old authentication payload.
• Certificate Format Requirements
Both the Public Certificate and the Private Key must be in PEM format. PEM (Privacy Enhanced Mail) is recognizable by its delimiters:
* Public certificates: -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----
* Private keys: -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----
How the Signing Process Works
Both formats follow the same cryptographic process:
* Generate the 100-byte message (format depends on v1 vs v2)
* Base64 encode the message → This becomes your encodedData
* Sign the message with your private key using RSA-SHA512
* Base64 encode the signature → This becomes your encodedSignedData
* Create the JSON payload with the appropriate format structure
Python Implementation
Here's an enhanced Python script that handles both v1 and v2 formats with user-friendly prompts:
"""
gen_signed_data.py
This script generates a JSON payload containing a base64-encoded random message and its RSA signature,
intended for use with Anaplan token authentication API (e.g., for CBA authentication flows).
It supports two payload formats (v1 and v2), where v2 prefixes the message with the current epoch seconds.
The script reads the RSA private key (optionally encrypted) from a file, securely prompts for the passphrase,
and can use environment variables to automate input. The resulting JSON is printed to stdout for use in API requests.
Environment Variables:
- PRIVATE_KEY_FILE: Path to the RSA private key file.
- PAYLOAD_VERSION: Payload version ('v1' or 'v2').
Usage:
python3 gen_signed_data.py
# or set environment variables to skip prompts:
PRIVATE_KEY_FILE=/path/to/key PAYLOAD_VERSION=v2 python gen_signed_data.py
"""
from base64 import b64encode
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA512
import json
import time
import os
import getpass
# Generate the encodedData parameter
def create_encoded_data_string(message_bytes):
    # Step #1 - Convert the binary message into Base64 encoding:
    # When transmitting binary data, especially in text-based protocols like JSON,
    # it's common to encode the data into a format that's safe for transmission.
    # Base64 is a popular encoding scheme that transforms binary data into an ASCII string,
    # making it safe to embed in JSON, XML, or other text-based formats.
    message_bytes_b64e = b64encode(message_bytes)

    # Step #2 - Convert the Base64-encoded binary data to an ASCII string:
    # After Base64 encoding, the result is still in a binary format.
    # By decoding it to ASCII, we get a string representation of the Base64 data,
    # which is easily readable and can be transmitted or stored as regular text.
    message_str_b64e = message_bytes_b64e.decode('ascii')

    return message_str_b64e
# Generate the encodedSignedData parameter
def create_signed_encoded_data_string(message_bytes, key_file, passphrase):
    # Step #1 - Read the private key file:
    # Private keys are sensitive pieces of data that should be stored securely.
    # A context manager ensures the file handle is closed after reading.
    with open(key_file, 'r', encoding='utf-8') as f:
        key_file_content = f.read()

    # Step #2 - Import the RSA private key:
    # The RSA private key is imported from the previously read file content.
    # If the key is encrypted, a passphrase will be required to decrypt and access the key.
    my_key = RSA.import_key(key_file_content, passphrase=passphrase)

    # Step #3 - Prepare the RSA key for signing operations:
    # Before we can use the RSA key to sign data, we need to prepare it using
    # the PKCS#1 v1.5 standard, a common standard for RSA encryption and signatures.
    signer = pkcs1_15.new(my_key)

    # Step #4 - Create a SHA-512 hash of the message bytes:
    # It's common practice to create a cryptographic hash of the data you want to sign
    # instead of signing the data directly. This ensures the integrity of the data.
    # Here, we're using the SHA-512 algorithm, which produces a fixed-size 512-bit (64-byte) hash.
    message_hash = SHA512.new(message_bytes)

    # Step #5 - Sign the hashed message:
    # Once the data is hashed, the hash is then signed using the private key.
    # This produces a signature that can be verified by others using the associated public key.
    message_hash_signed = signer.sign(message_hash)

    # Step #6 - Encode the binary signature to Base64 and decode it to an ASCII string:
    # Similar to our earlier function, after signing, the signature is in a binary format.
    # We convert this to Base64 for safe transmission or storage, and then decode it to a string.
    message_str_signed_b64e = b64encode(message_hash_signed).decode('utf-8')

    return message_str_signed_b64e
def create_json_body(encoded_data_value, signed_encoded_data_value, encoded_data_format="v2"):
    # Make encodedDataFormat the first attribute of the data
    data = {}
    if encoded_data_format == "v2":
        data["encodedDataFormat"] = "v2"
    data.update({
        "encodedData": encoded_data_value,
        "encodedSignedData": signed_encoded_data_value
    })
    return json.dumps(data, indent=4)
# Get user input for private key file and encodedDataFormat
# If environment variables are defined, use them; otherwise, prompt the user
# Usage: Define the following environment variables to skip user input:
# - PRIVATE_KEY_FILE: Path to the private key file (e.g., '/path/to/private.key')
# - PAYLOAD_VERSION: Payload version (e.g., 'v1' or 'v2')
# Print usage instructions if environment variables are not defined
if not os.getenv('PRIVATE_KEY_FILE') or not os.getenv('PAYLOAD_VERSION'):
    print("Usage Instructions:")
    print("Define the following environment variables to skip the user input:")
    print(" - PRIVATE_KEY_FILE: Path to the private key file (e.g., '/path/to/private.key')")
    print(" - PAYLOAD_VERSION: Payload version (e.g., 'v1' or 'v2')")

private_key_file = os.getenv('PRIVATE_KEY_FILE') or input(
    "Enter the path to the private key file (default: '/path/to/private/key'): "
) or '/path/to/private/key'

encoded_data_format = os.getenv('PAYLOAD_VERSION') or input(
    "Enter the encodedDataFormat (default: 'v2', options: 'v1', 'v2'): "
) or "v2"

# Provide the private key passphrase. If there is no passphrase, leave it blank (None is used).
private_key_file_passphrase = getpass.getpass(
    "Enter the private key passphrase (leave blank for None): "
) or None

# Create random 100-byte message
if encoded_data_format == "v2":
    # Prefix epoch_seconds to message_bytes.
    # int(time.time()) is the current time in seconds since the Unix epoch
    # (January 1, 1970, 00:00:00 UTC). The value is always in UTC, regardless
    # of the system's local timezone.
    epoch_seconds = int(time.time())
    message_bytes = epoch_seconds.to_bytes(8, 'big') + get_random_bytes(100 - 8)
else:
    # Generate random bytes without prefix
    message_bytes = get_random_bytes(100)

# Create the encoded data string
message_str_b64e = create_encoded_data_string(message_bytes=message_bytes)

# Create an encoded signed data string
message_str_signed_b64e = create_signed_encoded_data_string(
    message_bytes=message_bytes, key_file=private_key_file, passphrase=private_key_file_passphrase)

# Get the formatted body for the Anaplan API token authentication endpoint
# (https://auth.anaplan.com/token/authenticate) to generate an access_token
cba_payload = create_json_body(
    encoded_data_value=message_str_b64e, signed_encoded_data_value=message_str_signed_b64e, encoded_data_format=encoded_data_format)
# Print the formatted body to stdout
print("Generated JSON body for API token authentication:")
print(cba_payload)
This script requires the PyCryptodome library:
pip install pycryptodome
Which Format Should You Use?
For new implementations: Use v2 format (the default) for enhanced security.
For existing systems: v1 format remains supported, but consider migrating to v2 when possible.
For automation: Remember that v2 requires generating a fresh payload for each authentication request due to the timestamp component.
Quick Demo
Here's how easy it is to use with Postman or any REST client:
* Run the script and copy the generated JSON payload
* Create a POST request to https://auth.anaplan.com/token/authenticate (check the "URL, IP, and allowlist requirements" Anaplan Support page to ensure you have the correct Auth API URL for your environment)
* Add your certificate to the authorization header
* Paste the JSON as the request body and hit send
* Receive your access token for subsequent API calls
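For a fully scripted flow, here is a minimal requests-based sketch of the same call. It assumes cba_payload holds the JSON produced by gen_signed_data.py and public_cert_b64 holds your public certificate as a single Base64 line; the response field names are based on the Authentication API and should be verified for your environment:

import requests

public_cert_b64 = "MIIFeDCCBGCgAwIBAgIQ..."  # your Anaplan public certificate (Base64, one line)
cba_payload = '{"encodedDataFormat": "v2", ...}'  # paste the JSON generated by gen_signed_data.py

headers = {
    "Authorization": f"CACertificate {public_cert_b64}",
    "Content-Type": "application/json",
}
response = requests.post(
    "https://auth.anaplan.com/token/authenticate",
    headers=headers,
    data=cba_payload,
)
response.raise_for_status()

# The token location below is an assumption based on the Authentication API
# response shape; verify it against your actual response.
access_token = response.json()["tokenInfo"]["tokenValue"]
print(f"Access token: {access_token[:12]}...")

Remember that with the v2 format you must regenerate the payload for every authentication request, since the timestamp makes each payload single-use.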
Author: Adam Trainer, @AdamT - Operational Excellence Group (OEG)
-
[Part 3] Anaplan Audit History Data in an Anaplan Reporting Model
With Anaplan Audit, Enterprise and Professional level Customers can track audit events that occur in their Anaplan tenant.
For the Anaplan Audit History project, we created a simple Anaplan reporting model and app to report on Anaplan Audit events in the Anaplan UX.
Customers may download and use the Anaplan Audit Model & App in their tenant. The standalone Anaplan Audit model consists of a single Anaplan model with a small number of lists, modules, actions, processes, and user experience pages. Customers are able to modify and maintain their Audit Model as they see fit.
Once connected to the Python integration described in Part 2, Customers may schedule daily updates from the SQLite tables. The model update process consists of a single Anaplan process that imports new audit data as text into a flat 'Audit' module. SUM and COUNT functions aggregate audit event frequencies by audit activity category into an 'Audit Report' module. A 'Refresh Log' module, dimensionalized by a batch ID dimension, captures the timestamp and record count of every model update. These three modules are displayed on UX pages.
The Anaplan model is ~2MB at download, but will grow over time as the number of unique audit records increases based on the volume of activity in a Customer’s tenant. Customers may therefore need to periodically archive older audit data to maintain desired model size.
Audit logs are available to members of the Tenant Auditor role via either the Administration UI or the Audit API; the Anaplan Tenant Admin assigns members to the Tenant Auditor role. Audit logs are retained for 30 days.
Please reach out to your Anaplan Customer Success Business Partner or Representative to obtain your Model copy. The Anaplan Audit Model is not supported by Anaplan and is used at Customer’s own discretion.
See Part 1 and Part 2 of Anaplan Audit History Data in an Anaplan Reporting Model.
-
How do I show only those line items which have non-zero values across list and time dimensions?
I want to see only those line items that are non-zero, from a hierarchy list subset, in a module that is also dimensioned by time.
I am looking to create a line item which I will apply as a filter to the module.
I have 15+ line items, but for each list subset item the filter should only show the line items whose values are <> 0 anywhere across the time horizon (i.e., if any week is non-zero).
-
ALM syncing when multiple developments are happening
We have 2 developers working on the same model in different modules.
One has completed his changes but the other is not ready to sync.
Unfortunately you have to sync the whole model, which means partial changes would be synced. If the first developer's changes are urgent, waiting for a major change to be finished is unproductive.
Would it be an idea to just sync the changes (by choice), and not the whole model?
-
Search option for filters when selecting filters
In as much detail as possible, describe the problem or experience related to your idea. Please provide the context of what you were trying to do and include specific examples or workarounds:
Right now there is no ability to search filters by name when applying a filter to a view.
How often is this impacting your users?
We have to scroll to find the module where our filter is located, which can consume some of the model builder's time if there is a large number of modules.
Who is this impacting? (ex. model builders, solution architects, partners, admins, integration experts, business/end users, executive-level business users)
Model builders and page builders
What would your ideal solution be? How would it add value to your current experience?
Provide a search option to find filters by name at the time of filter selection for a view.
Please include any images to help illustrate your experience.
-
How have you built lot-level, geo-based Inventory Consumption from scratch?
We’re working on modeling inventory consumption at the lot level, accounting for shelf life considerations, since we need visibility into which lots may not consume—both for risk reporting and E&O analysis. Each item may have customers with different shelf life requirements. For example, item A may need 270 days for some customers, but only 180 days for others. We reflect this in our demand inputs, and we feed these into our consumption calculation, which is structured to consume lots with the lowest shelf life first, across shelf life requirements from lowest to highest. Here is an example of what that would look like:
inventory
item | Lot | Lot Shelf Life | Lot Rank | Lot Qty
A | 1234 | 190 | 1 | 500
A | 5678 | 211 | 2 | 600
A | 9876 | 345 | 3 | 7000
demand
item | shelf life requirement | demand
A | 180 | 450
A | 270 | 3000
consumption
item | Lot | Lot Shelf Life | Lot Rank | Lot Qty | Shelf life requirement considered | consumed qty | remaining qty | remaining 180 demand | remaining 270 demand
A | 1234 | 190 | 1 | 500 | 180 | 450 | 50 | 0 | 3000
A | 5678 | 211 | 2 | 600 | 180 | 0 | 600 | 0 | 3000
A | 9876 | 345 | 3 | 7000 | 270 | 3000 | 4000 | 0 | 0
In this logic, Lot 1234 is consumed first because it has the lowest shelf life that still satisfies the lowest shelf life requirement. We currently use rank-cumulate chains that run sequentially, each handling one shelf life rule by consuming inventory and updating the lot's available quantity before the next shelf life rule is applied.
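To pin down the logic, here is a plain-Python restatement of the worked example above (illustration only, not Anaplan formula syntax):

# Lots ordered by rank (lowest shelf life first); demand keyed by requirement.
inventory = [
    ("1234", 190, 500),
    ("5678", 211, 600),
    ("9876", 345, 7000),
]
demand = {180: 450, 270: 3000}  # shelf life requirement -> demand qty

for lot, shelf_life, qty in inventory:
    remaining = qty
    # Apply shelf life requirements from lowest to highest; a lot can only
    # serve a requirement that its own shelf life satisfies.
    for requirement in sorted(demand):
        if shelf_life >= requirement and demand[requirement] > 0:
            consumed = min(remaining, demand[requirement])
            demand[requirement] -= consumed
            remaining -= consumed
    print(f"Lot {lot}: consumed {qty - remaining}, remaining {remaining}")

This reproduces the consumption table above (consumed/remaining of 450/50, 0/600, and 3000/4000).
Questions: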
* Can we reduce or eliminate the need for multiple rank cumulates?
Ideally, we’d use a module dimensioned by shelf life rule or something similar, so we only maintain one rank cumulate. Is it possible to track consumption of a lot’s inventory across multiple list members (e.g., shelf life rules) without rebuilding rank cumulates for each?
* If that’s not feasible, can the same logic be modeled using the Optimizer?
We're open to using Optimizer if it allows a cleaner or more scalable solution. We have (like many) struggled to translate this into the right structure and objective function.
Thanks in advance for your thoughts—happy to provide more details or walk through the model if helpful.
-
How do I change the order of LIS list items when the LIS is already used in a module?
I previously created a LIS from a module and used the same LIS in another module; the LIS is also used as a selector on a dashboard.
Now I want to reorder the LIS items. I have changed the order in the module from which the LIS was made, but the order in the list is still not changing.
How can I resolve this?
-
Display images in Excel directly exporting Anaplan NUX grid
I have published a grid in the NUX with images visible for each item by using URLs. Now there is an ask from users who would like to export the grid to Excel with the images visible.
Is there a way to export the grid with the images visible in Excel?
-
Using Network cards for dynamic org charts
Hi,
We came across org charts (Hierarchy Charts), which are a great piece of functionality. However, we want the client to be able to do scenario planning on the hierarchy by month. E.g., John could report to Smith in Jan, but in Feb he could report to Molly.
Since org charts cannot refresh dynamically unless you incorporate the time dimension into the hierarchy itself, we were hoping this could be achieved with Network cards, since we could keep a parent-child mapping by month.
Can anyone please help us understand whether this is the best approach, and whether there are any limitations? We are talking about a 10-level reporting hierarchy.
Regards,
Aakash Sachdeva