Transactional or Bulk: When to use which API?

edited December 2022 in Best Practices

What are we talking about?

See our refresher here.

Anaplan provides a set of APIs that allow developers to interact programmatically with the Anaplan platform, rather than through the Anaplan user interface.

These APIs are grouped into the following categories: 

  1. Bulk 
  2. Transactional 
  3. Audit 
  4. SCIM 
  5. ALM 
  6. Cloudworks 

Bulk and Transactional are the two APIs that allow someone to programmatically move data into and out of an Anaplan model. This article focuses on those two, specifically on loading data into Anaplan: we will explain the difference between them and how to make an informed decision on which one to use.

What’s the difference between bulk and transactional? 

Historically, Anaplan's Bulk API (very literally named) has been used for sending large (i.e., bulk) data sets into Anaplan. These data sets are in .csv or .txt format and can range in size anywhere from a few KB to tens of GB.

The steps to set up a Bulk API load are as follows: 

  1. Manually load the file into the Anaplan model 
  2. Save the export action 
  3. Upload the file via API 
  4. Execute the import action via API 
  5. Poll the /tasks endpoint to check on the status of the job 

The Bulk API is a very efficient and effective way to load mass amounts of data into an Anaplan model at a scheduled frequency and customers have been using it successfully for years for their data needs. 
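Steps 3 through 5 above can be sketched in code. The sketch below is a simplified illustration, not a production client: the endpoint paths and payload shapes follow the general pattern of Anaplan's v2 REST API but should be verified against the official API reference, and `ws_id`, `model_id`, `file_id`, and `import_id` are placeholders for values you would look up in your own workspace.

```python
# Hedged sketch of a Bulk API load (upload file, run import, poll task)
# using the `requests` library. Endpoint paths and JSON shapes are
# assumptions for illustration; verify them against the API reference.
import time

import requests

BASE = "https://api.anaplan.com/2/0"


def chunk_bytes(data: bytes, size: int = 50 * 1024 * 1024):
    """Split file contents into fixed-size chunks for upload."""
    return [data[i:i + size] for i in range(0, len(data), size)]


def upload_file(session, ws_id, model_id, file_id, data: bytes):
    # Step 3: one PUT per chunk; the chunk index is part of the URL.
    for n, chunk in enumerate(chunk_bytes(data)):
        session.put(
            f"{BASE}/workspaces/{ws_id}/models/{model_id}/files/{file_id}/chunks/{n}",
            data=chunk,
            headers={"Content-Type": "application/octet-stream"},
        )


def run_and_poll(session, ws_id, model_id, import_id):
    base = f"{BASE}/workspaces/{ws_id}/models/{model_id}/imports/{import_id}"
    # Step 4: execute the saved import action; the response carries a task id.
    task_id = session.post(base + "/tasks", json={"localeName": "en_US"}).json()["task"]["taskId"]
    # Step 5: poll the /tasks endpoint until the job finishes (asynchronous).
    while True:
        state = session.get(f"{base}/tasks/{task_id}").json()["task"]["taskState"]
        if state == "COMPLETE":
            return state
        time.sleep(5)
```

Note the polling loop at the end: because the Bulk load is asynchronous, the caller only learns the outcome by checking back on the task, which is one of the criteria discussed below.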


The Transactional API, on the other hand, is a much more lightweight alternative to the Bulk API. It is meant for targeted, small loads of data into an Anaplan model, as opposed to updating an entire data set like the Bulk API. This API accepts .csv or .json formatted data sets. JSON is a much easier data type for developers to work with and is the general standard for RESTful APIs.

The steps to set up a Transactional API load are as follows: 

  1. Load the data into the Anaplan model via the API 


As you can tell, this is a much simpler process to load data into Anaplan.
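That single step might look something like the following sketch. The endpoint path and JSON shape are assumptions for illustration only (consult the Anaplan API reference for the exact cell-write contract), and the IDs are placeholders:

```python
# Minimal sketch of a Transactional API cell write. The endpoint path and
# payload shape are illustrative assumptions, not the documented contract.
import requests

BASE = "https://api.anaplan.com/2/0"


def build_cell_write(line_item_id, dimension_coords, value):
    """Describe one cell update: the target line item, the dimension
    members that pinpoint the cell, and the new value."""
    return {
        "lineItemId": line_item_id,
        "dimensions": dimension_coords,  # e.g. [{"dimensionId": ..., "itemId": ...}]
        "value": value,
    }


def write_cells(session, model_id, module_id, cells):
    # One request, synchronous response: the load status comes back in the
    # HTTP response itself -- no file upload, no import action, no polling.
    r = session.post(f"{BASE}/models/{model_id}/modules/{module_id}/data", json=cells)
    return r.json()
```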

Right now you are probably thinking: why wouldn't I always just use the Transactional API? There are several considerations to take into account, which we will now cover.

When to use Bulk API 

The Bulk API should be used for traditional scheduled, batch, data loads into Anaplan. Think of loading an entire month’s worth of general ledger entries from an ERP every week or a refresh of your employee roster from an HRIS nightly.

Some general criteria to guide your decision are as follows: 

  1. Large data set (>1000 cells of data) 
  2. Scheduled 
  3. .csv or .txt files to be loaded 
  4. OK to have asynchronous responses for the load status 

When to use Transactional API 

The Transactional API on the other hand should be used for event-based data feeds into Anaplan.

That sounds like some technical jargon, right? What this means is rather than scheduling loads in a batch process to run, say, nightly, perhaps an event takes place in another system (a record is updated) and you’d like that update to immediately be reflected in your Anaplan model.

If you have small "transactions" worth of data that should be loaded quickly and without blocking the model, the Transactional API could be a great choice.

  1. Event based 
  2. Small data set (<1000 cells of data) 
  3. .json or .csv to be loaded 
  4. Need synchronous load status responses 

Another consideration, as previously discussed, is that the Transactional API does not need an import data source file or import action in the model to run. This can be particularly useful when model changes are frequent (think renaming, adding, or deleting line items), or when you'd like to keep your Actions tab lean. Since the Transactional API loads straight into the model, less change management is needed to keep integrations working properly.

What else can Transactional API do? 

As you can imagine, the single step of loading data into the Anaplan model needs a few pieces of information to work properly: the ID of the module, the line item names, the list IDs, the list member IDs, etc. (essentially the information that tells Anaplan the exact cell within the model it needs to update; the full list is here). As such, the Transactional API also offers endpoints for developers to access model information. Some examples of data that can be queried out of the model via the Transactional API:

  1. Data about all lists  
  2. Data about all modules (dimensions, line items, etc.) 
  3. Data about all line items (everything from model blueprint view) 
  4. Data about model calendar, versions, and switchover 

While these are useful for constructing the API request to upload data into the model, they can also be used on their own to run some powerful model analytics and monitoring. A simple script could loop through every model and consolidate information about your models.

With this info you could perform various analyses, some examples: 

  1. Modeling best practice analysis (where are my biggest formulas, Planual violations, over-dimensioned line items, etc.) 
  2. Do all of my models have the same versions? The same switchover date? 
  3. Search all line items in a tenant (where is “salary” stored in my models?) 
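As a concrete illustration of the third example, here is a hedged sketch of a tenant-wide line item search. The metadata endpoint path is an assumption (verify it in the API reference); the fetch step is kept separate from the search logic so the latter works on any consolidated metadata you have collected:

```python
# Sketch: consolidate line-item metadata per model, then search the whole
# tenant for a term like "salary". Endpoint path is illustrative only.
import requests

BASE = "https://api.anaplan.com/2/0"


def get_line_items(session, model_id):
    # Transactional metadata read for one model (path is an assumption).
    r = session.get(f"{BASE}/models/{model_id}/lineItems")
    return r.json().get("items", [])


def search_line_items(models_to_items, term):
    """Given {model_id: [line-item dicts]}, return (model_id, item_name)
    pairs whose name contains the search term, case-insensitively."""
    term = term.lower()
    return [
        (model_id, item["name"])
        for model_id, items in models_to_items.items()
        for item in items
        if term in item["name"].lower()
    ]
```

The same consolidated metadata could feed the other analyses above, such as comparing versions and switchover dates across models or flagging over-dimensioned line items.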

Got feedback on this content? Let us know in the comments below.

Contributing authors: Christophe Keomanivong and Joey Morisette.


  • JulieL

Can you use 2 APIs for one model? I.e., I want to use the Bulk API for monthly updates and then the Transactional API for updates during the month to the same model.

  • @JulieL 


    Certainly, you can use any combination of APIs on a single model. This framework is something we have seen customers adopt (using Bulk + Transactional for different schedules). 

  • JulieL

    Thank you Joey!

  • BPifer

    How do the extensibility updates mentioned in the April release relating to Large Volume readRequests change the guidance on when to use Bulk vs Transactional APIs?


  • @BPifer 


Not much changes in the way of deciding. From an ingress perspective nothing changes from the article; writing to Anaplan involves the same considerations. For egress, the large volume readout could replace a Bulk export action request; it really depends on how the middleware reading out of Anaplan is configured:


    1. JSON + paginated for the large readout endpoint is usually more developer friendly than running an export and downloading a file. 

    2. The large volume endpoint splits pages based on records, while Bulk splits based on file size. This can be very useful for developers as well, because you don't have to stitch together partial records; you can just append full records from each page.

    3. Model behavior is the same. In either scenario Anaplan has to render the export definition or saved view, so from a model-locking perspective they are more or less at parity with each other.
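The record-aligned paging in point 2 can be sketched as follows. The page-fetch endpoint is a hypothetical placeholder; the point of the sketch is the stitching step, which shows why whole-record pages are convenient for the consumer:

```python
# Sketch of consuming a paginated large-volume read. The page URL pattern
# is an illustrative assumption; because each page holds whole records,
# the consumer simply appends pages instead of repairing split rows.
import requests


def fetch_pages(session, read_request_url, page_count):
    # Hypothetical: GET each page of an initiated read request.
    for n in range(page_count):
        yield session.get(f"{read_request_url}/pages/{n}").text


def stitch_pages(pages, skip_header_after_first=True):
    """Concatenate CSV pages into one document. No partial-row repair is
    needed; at most we drop a repeated header line on later pages."""
    out = []
    for i, page in enumerate(pages):
        lines = page.splitlines()
        if skip_header_after_first and i > 0:
            lines = lines[1:]
        out.extend(lines)
    return "\n".join(out)
```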