ALM APIs: This is how we use them

edited December 2022 in Best Practices

Application Lifecycle Management APIs (ALM APIs) were released earlier this year. Their purpose is to make model change management more scalable, more automatable, and easier to integrate with other systems.


We have seen the ALM APIs being leveraged in three distinct ways:

  • to orchestrate and run multiple model deployments (model syncs) in parallel;
  • to build additional security controls by embedding the ALM steps in approval processes triggered from other systems or custom-built applications;
  • to retrieve information on different ALM steps (e.g., revision information and comparison reports) as auditable artifacts in SOX compliance and other regulatory reviews.

This article focuses on the first and most common use of the ALM APIs, change management automation, and shows an example of how you can automate the ALM process in your environment.


In our example, the environment we manage has one development model and three production models, each corresponding to a country (France, Spain, and Ireland).


In an ideal scenario, all of our production models would be at the same revision version, but in reality, each market requests changes to be deployed at different times depending on local workforce availability.


As the end of the year approaches, there is an urgent need to update the “Current Fiscal Year” setting in all production models. This setting is “Production Data” and therefore requires an ALM synchronization.



The ALM admins are facing various challenges:

  • They need an overview of the latest revision in each production model, so that they know which markets have been updated and which are lagging.
  • They are on a tight schedule and need to act as soon as a market is ready and, where possible, execute multiple synchronizations at a time.
  • Their team is overstretched and would benefit from minimizing manual processes and the number of clicks where possible.

Now that we have the context, let's look at the solution we built:



We leveraged a custom API management tool (similar to an Anaplan Connect script) and an Anaplan model to build an ALM execution console that enables the ALM admin to 1. view the latest revision in each production model compared to the latest revision in the development model, and 2. trigger ALM synchronizations across the environment from a single place.


Using a model here is just one option: a dedicated UI or other tools could also be a relevant choice.

Also, we will not focus on the custom script in this article. Instead, we will focus on how to interact with the ALM APIs and use them in an implementation example.

Let's analyze how the information on the ALM execution console is linked to the ALM APIs.

Revisions Information

First, we want to know the state of each production model compared to the development model, which allows us to keep track of change deployment progress across the different markets.


To achieve this, we use a card that compares the revision tags from the development model and the selected production model (in the example below, “Supply-Prod-France” has been synced to 3 revisions but not yet synced to “08/09/2021_1” and “10/09/2021_1”).




This chart leverages the endpoint "/revisions" (link to apiary here). 


Retrieve the revisions from the source model and from the target model.

Create a module to compare which revisions exist in both models (SYNCED) and which revisions exist only in the source model (NOT SYNCED).
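The comparison step above can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the revision IDs and names are made up, and the `id`/`name` field names reflect our reading of a typical `/revisions` response.

```python
# Sketch: classify development-model revisions as SYNCED / NOT SYNCED
# against one production model. All IDs and names below are illustrative.

def compare_revisions(dev_revisions, prod_revisions):
    """Return (revision_name, status) pairs for every dev-model revision."""
    prod_ids = {r["id"] for r in prod_revisions}
    return [
        (r["name"], "SYNCED" if r["id"] in prod_ids else "NOT SYNCED")
        for r in dev_revisions
    ]

# Example data mirroring the "Supply-Prod-France" scenario in the article.
dev = [
    {"id": "rev1", "name": "06/09/2021_1"},
    {"id": "rev2", "name": "08/09/2021_1"},
    {"id": "rev3", "name": "10/09/2021_1"},
]
prod_france = [{"id": "rev1", "name": "06/09/2021_1"}]

print(compare_revisions(dev, prod_france))
```

The same function can be reused per market, which is what makes the overview card cheap to refresh for each source-target pair.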


Note that not all of the revisions in the source model can be synced to the target model: only revisions that are newer than the latest revision in the target model can be synchronized.



Therefore, while we use "/revisions" to build the overview card, we leverage a different endpoint to build the card that gives us only the syncable revisions for a particular source-target pair.

Syncable Revisions

In addition to monitoring, we want to be able to execute revision synchronizations from the same page.

So, let’s look at the cards we built to trigger change deployment.


First, we built a card that allows us to choose which syncable revision we want to deploy to our selected production model.


We leveraged the endpoint “/syncableRevisions” (link to apiary here).




Note that this is the same information you can see in the UI when comparing revisions.
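A minimal sketch of that request, assuming the v2 API base URL and an `AnaplanAuthToken` authorization header; the model IDs and token are placeholders, so check the API reference for the exact endpoint shape before relying on it:

```python
# Sketch: fetch the revisions that can still be synced from a source
# (development) model to a target (production) model. Base URL, header
# format, and response field names are assumptions for illustration.
import json
import urllib.request

BASE = "https://api.anaplan.com/2/0"  # assumed v2 API base URL

def syncable_revisions_url(target_model_id, source_model_id):
    """Build the syncable-revisions URL for a source/target model pair."""
    return (f"{BASE}/models/{target_model_id}/alm/syncableRevisions"
            f"?sourceModelId={source_model_id}")

def get_syncable_revisions(target_model_id, source_model_id, token):
    """GET the list of revisions that can still be synced to the target."""
    req = urllib.request.Request(
        syncable_revisions_url(target_model_id, source_model_id),
        headers={"Authorization": f"AnaplanAuthToken {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("revisions", [])
```

Because the endpoint is scoped to a source/target pair, the console calls it once per selected production model rather than filtering the full `/revisions` list itself.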




Once we have selected the revision that we want to deploy to our production model, we go to the card that enables us to execute the ALM sync.



Synchronize models

We use this selection widget to pick the model that we want to update:


When we select the model that we want to update (e.g., Ireland), the program performs the API request to execute the synchronization (more info via this link).


The request looks like this:



The relevant information for the source and target model needs to be passed correctly so that the request is valid.
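That step can be sketched like this. The request body field names and the `/syncTasks` path are our assumptions about the sync endpoint, and every ID is a placeholder; verify them against the API documentation linked above.

```python
# Sketch: build the sync request body and POST it to start an ALM sync.
# Endpoint path, header format, and field names are assumptions.
import json
import urllib.request

def build_sync_payload(source_model_id, source_revision_id, target_revision_id):
    """Assemble the source/target information the sync request needs."""
    return {
        "sourceModelId": source_model_id,
        "sourceRevisionId": source_revision_id,
        "targetRevisionId": target_revision_id,
    }

def start_sync(target_model_id, payload, token):
    """POST the sync request; the response should identify the sync task."""
    req = urllib.request.Request(
        f"https://api.anaplan.com/2/0/models/{target_model_id}/alm/syncTasks",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"AnaplanAuthToken {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Passing a wrong revision ID (for example, one older than the target's latest revision) makes the request invalid, which is why the console only offers revisions returned by the syncable-revisions call.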


Tip: If you intend to implement a similar solution, don't forget to perform a syncable-revisions request once the sync task is done; this keeps the status shown for each model up to date.
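A hedged sketch of that follow-up: poll the sync task until it finishes, then refresh the syncable-revisions list. The `GET .../syncTasks/{taskId}` path and the `taskState`/`COMPLETE` field values are assumptions based on our reading of the task APIs, not confirmed names.

```python
# Sketch: wait for a sync task to finish before refreshing model status.
# The task-status endpoint and response field names are assumptions.
import json
import time
import urllib.request

def _get_json(url, token):
    """GET a URL with the (assumed) AnaplanAuthToken header, parse JSON."""
    req = urllib.request.Request(
        url, headers={"Authorization": f"AnaplanAuthToken {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_finished(task_response):
    """True once the task reports COMPLETE (assumed state name)."""
    return task_response.get("task", {}).get("taskState") == "COMPLETE"

def wait_for_sync(target_model_id, task_id, token, interval_seconds=5):
    """Poll the sync task until it completes, then return the final status."""
    url = (f"https://api.anaplan.com/2/0/models/{target_model_id}"
           f"/alm/syncTasks/{task_id}")
    while True:
        status = _get_json(url, token)
        if is_finished(status):
            return status
        time.sleep(interval_seconds)
```

Once `wait_for_sync` returns, re-running the syncable-revisions request gives the console an accurate picture of which revisions remain undeployed.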


Finally, we also added a “Log” card that returns the API responses and gives visibility into the processes happening in the background, so that we can debug issues more easily.




In this article we explored one of many ways to leverage the new ALM APIs. We built an ALM Execution Console that enabled us to monitor and trigger change deployments (ALM syncs) across our Anaplan environment. It brought visibility and efficiency to our change management operations.


Hopefully, this example has inspired you to build your own application to integrate Anaplan’s change management with your existing systems, to build approval workflows for an additional layer of segregation of duties, to retrieve information on different steps of the ALM process for auditability, or simply to streamline and run multiple deployments in parallel across your environment.


Now that you have seen our example, please share with us how you plan to use the ALM APIs in your ecosystem. Let us know in the comments below.

Contributing authors: Christophe Keomanivong and Joey Morisette.


  • That's ALM on steroids 🙂

    I can imagine how time-consuming the overall solution build and test must have been; thanks for sharing.


    I am curious to know more about your code. Is it running on a dedicated server? I assume you run your code on a schedule. If so, how often does your API interact with the model? How stable is it?




  • Hello @Hayk ,


    Thanks for your interest and sorry for the delay.
    Indeed, the overall solution was a subtle mix between a model that needed to be self-explanatory and code on which we can rely.
    The code is a Java jar file that can be deployed on any server (on-premises or cloud, like AWS or Azure).
    By doing so, we can eschew scheduling (we could obviously schedule it if we wanted to) and instead trigger a web service to call the APIs, so they interact with the models only when we want. (The little cat emoji will be changed to a rolling wheel, indicating the agent is running.)
    We are planning to release an article on the dev community to share the backbone solution on which anyone can build such solutions (it was also used for the SCIM use case).

    Christophe K.