A primer on Anaplan integration using Azure Logic Apps
Introduction
With the release of Anaplan Cloudworks, this flow may be better addressed via a blob storage connection in Cloudworks, or with a combination of a Cloudworks flow and a POST request to the Cloudworks API. My colleague @christophe_keom published a guide covering this topic.
In my case, I wanted my flows to run end to end from a trigger in my source system, with more control over the whole flow and enhanced error reporting, so I went down this route. Cloudworks is likely a much simpler option if your requirements are less complex, but this approach gives a lot more flexibility and control.
To set the scene, for this example I am getting data from a variety of cloud applications and staging the data in SQL. When the SQL tables are updated, I run a data factory pipeline to copy the SQL data to storage blobs. The documentation from Microsoft explains how to set up the SQL to blob copy action. When the storage blob is updated, it triggers the run of my Anaplan imports. The flow below shows how I get the data from my blob containers and then load it into Anaplan via the API. My example is specific to me but the building blocks here are generic.
Trigger and flow summary
When a blob in my data folder is updated or modified, this flow will run:
After the trigger, the flow does the following:
- Sets common variables. In this example, I initialize a tokenValue variable that I will use for authentication. I also set workspace and model variables, which is useful if I want to point this flow at a different model or workspace.
- Checks that the blob file updated or modified is the one relevant to my flow
- If YES > get the blob content
- Authenticates with Anaplan and gets a tokenValue
- Posts the blob data to Anaplan
- Runs the Anaplan process to consume the data
- Checks the status of the Anaplan process and emails updates to me in a loop until the process completes
- If NO > does nothing
Triggers and variables
Here we can see the trigger and the variable setting steps. In these steps I am setting variables that I use later in the flows. The last step, at the bottom of the screenshot, is a condition.
The condition checks the updated file and, depending on the name of the file, I run the import. If the file name is not valid for this flow, I do nothing.
Here you can see I am getting the blob file name and checking if it is equal to the string "AvazaTimesheetsAP".
The conditional shows that if the file name does not meet my criteria, I do nothing. If it does meet the criteria, I do the import process.
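For context, the sketch below shows the same gate outside the designer: check the blob name and only fetch the content when it matches. It uses the azure-storage-blob package with a placeholder connection string and container name, which are assumptions for illustration; in the flow itself the built-in Blob Storage trigger, condition, and blob content actions do this work.

```python
from azure.storage.blob import BlobClient   # pip install azure-storage-blob

EXPECTED_BLOB_NAME = "AvazaTimesheetsAP"    # the only file this flow imports

def get_blob_if_relevant(connection_string: str, container: str, blob_name: str):
    """Return the blob content when the updated blob matches this flow's file, else None."""
    if blob_name != EXPECTED_BLOB_NAME:
        return None                          # the "do nothing" branch of the condition
    blob = BlobClient.from_connection_string(
        conn_str=connection_string,
        container_name=container,
        blob_name=blob_name,
    )
    return blob.download_blob().readall()    # the "get the blob content" step
```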
Generic steps for importing data using Logic Apps
In my example flow, the aim is to illustrate the steps needed to import data using Anaplan's Bulk API and Logic Apps. As such, I have not gone into detail in some areas and am not following some good practices, which I will call out now:
- I am using basic authentication. This means I need to reset the passwords in my flows every 90 days or so. It is much better to use certificate authentication, as it saves the task of updating passwords and reduces the risk of failures. Also, when using Logic Apps you can likely use a key store to hold the authentication credentials and have it apply across all your flows, rather than setting the variable flow by flow.
- The data I am sending to Anaplan is small (under 5 MB), so I am sending it as a single chunk. The correct way to send larger files is to chunk the file and send the chunks. This flow is described in the API documentation and may be a bit fiddly to do in Logic Apps; a function app in the middle of the flow may be the way to go here. A rough sketch of the chunked pattern follows this list.
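For reference, the chunked approach declares a chunk count for the import file and then uploads each chunk in turn. Below is a minimal Python sketch of that pattern, assuming the v2 Integration API endpoints, a token already obtained, and placeholder workspace, model, and file IDs; treat it as a starting point, not the exact flow.

```python
import math
import requests

API_BASE = "https://api.anaplan.com/2/0"   # assumed v2 Integration API base URL
CHUNK_SIZE = 1024 * 1024                   # 1 MB chunks, for illustration only

def upload_in_chunks(token: str, workspace_id: str, model_id: str,
                     file_id: str, data: bytes) -> None:
    """Declare the chunk count, then PUT each chunk of the file in turn."""
    headers = {"Authorization": f"AnaplanAuthToken {token}"}
    chunk_count = max(1, math.ceil(len(data) / CHUNK_SIZE))
    file_url = f"{API_BASE}/workspaces/{workspace_id}/models/{model_id}/files/{file_id}"

    # Tell Anaplan how many chunks to expect for this import file.
    resp = requests.post(file_url,
                         headers={**headers, "Content-Type": "application/json"},
                         json={"chunkCount": chunk_count})
    resp.raise_for_status()

    # Upload each chunk as application/octet-stream.
    for i in range(chunk_count):
        chunk = data[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
        resp = requests.put(f"{file_url}/chunks/{i}",
                            headers={**headers, "Content-Type": "application/octet-stream"},
                            data=chunk)
        resp.raise_for_status()
```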
With my two caveats above, let me describe the flow as it is pretty generic past this point.
- Get the blob content
- Authenticate. As above, I am using basic authentication here; it is better to use certificate authentication. Also, if you deploy with a Logic App, you can likely do something smarter using a key store.
- Parse the result above to get the JSON array into a sensible format. You could likely avoid this step as you can reference the output array directly, but it does give you a nice object with fields that you can see for the next step.
- Set my token value variable, using the result from above
The function for the ‘TokenValue’ is below:
body('Parse_JSON_authentication_token')?['tokenInfo']?['tokenValue']
As mentioned above, I could have skipped the Parse JSON step and referenced the output directly, saving a step:
body('Get_authentication_token')?['tokenInfo']?['tokenValue']
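For reference, here is a minimal Python sketch of the authenticate-and-extract steps, assuming Anaplan's standard authentication service endpoint and basic credentials; it pulls tokenValue out of the response using the same tokenInfo path as the expressions above.

```python
import base64
import requests

AUTH_URL = "https://auth.anaplan.com/token/authenticate"  # assumed auth service endpoint

def get_token(username: str, password: str) -> str:
    """Authenticate with basic credentials and return the Anaplan token value."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    response = requests.post(AUTH_URL, headers={"Authorization": f"Basic {creds}"})
    response.raise_for_status()
    # Same path as the Logic Apps expression: tokenInfo -> tokenValue
    return response.json()["tokenInfo"]["tokenValue"]
```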
- Post the data as a single chunk. You can see how I use my variables to build up the URL. I have hard-coded the FileID as it is unique to this step, so a variable didn’t make sense. In the headers, I set the authorization header with the token value we received when we authenticated, and in the body I post the file content from the blob. Note the content type needs to be application/octet-stream. Also note the method; we use a PUT. A common failure I see is where the method is not updated and is left as a GET.
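Outside the designer, the same single-chunk upload looks roughly like the Python sketch below. The base URL and IDs are my assumptions based on the v2 Integration API; the content type and PUT method are the points called out above.

```python
import requests

API_BASE = "https://api.anaplan.com/2/0"   # assumed v2 Integration API base URL

def upload_single_chunk(token: str, workspace_id: str, model_id: str,
                        file_id: str, data: bytes) -> None:
    """PUT the blob content to the Anaplan import file as a single chunk."""
    url = f"{API_BASE}/workspaces/{workspace_id}/models/{model_id}/files/{file_id}"
    headers = {
        "Authorization": f"AnaplanAuthToken {token}",  # token from the authentication step
        "Content-Type": "application/octet-stream",    # required for file content
    }
    response = requests.put(url, headers=headers, data=data)  # PUT, not GET
    response.raise_for_status()
```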
- Run the Anaplan process. Rule 1 with automated Anaplan imports is to wrap your imports into a process and call the process. Although you can call the imports directly, it is a bad idea. If you call the process then you have a generic container in Anaplan that you can adjust easily. You can tweak or create a new import to deal with new columns or format changes, or add additional model to model or model to list imports to create a multistep process. By calling a process, you remove dependencies and give yourself flexibility.
Off the soapbox now and let's carry on. As you can see below, I build up the URL with some of my global variables. Again, the processId is hard-coded here as this flow is unique to one processId. When we run the process, notice that I send a body parameter; this is a fixed value, as shown below. If you don’t include it, you will get a failure.
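A hedged Python sketch of the process call is below. It assumes the v2 endpoint for process tasks and that the fixed body value is the localeName setting described in the API documentation; the taskId path in the response may need adjusting to match what you actually get back.

```python
import requests

API_BASE = "https://api.anaplan.com/2/0"   # assumed v2 Integration API base URL

def run_process(token: str, workspace_id: str, model_id: str, process_id: str) -> str:
    """Start the Anaplan process and return the taskId used to monitor it."""
    url = (f"{API_BASE}/workspaces/{workspace_id}/models/{model_id}"
           f"/processes/{process_id}/tasks")
    headers = {
        "Authorization": f"AnaplanAuthToken {token}",
        "Content-Type": "application/json",
    }
    # The fixed body parameter; leaving it out causes the call to fail.
    response = requests.post(url, headers=headers, json={"localeName": "en_US"})
    response.raise_for_status()
    return response.json()["task"]["taskId"]   # adjust if your response shape differs
```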
- Check the status of the process. When you run the import process Anaplan will return a taskId. You can use this taskId to monitor the process run. Here I parse the result from running the process to get the taskId.
- Loop to monitor the run. This loop checks the taskState of the import process and repeats until the taskState is complete.
In this loop I am doing the following:
8.1. I pause for 10 seconds. This keeps my email notifications down.
8.2. Get the task status. Using the taskId returned when running the process, I query this endpoint and get the details of the task.
8.3. I parse the return from above and use the elements of the JSON to email myself the details.
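The same monitoring loop, sketched in Python under the same assumptions (v2 task endpoint, COMPLETE as the terminal state), with a print standing in for the email notification:

```python
import time
import requests

API_BASE = "https://api.anaplan.com/2/0"   # assumed v2 Integration API base URL

def wait_for_task(token: str, workspace_id: str, model_id: str,
                  process_id: str, task_id: str, poll_seconds: int = 10) -> dict:
    """Poll the process task until taskState reports COMPLETE, then return the task details."""
    url = (f"{API_BASE}/workspaces/{workspace_id}/models/{model_id}"
           f"/processes/{process_id}/tasks/{task_id}")
    headers = {"Authorization": f"AnaplanAuthToken {token}"}
    while True:
        time.sleep(poll_seconds)                      # the 10-second pause from step 8.1
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        task = response.json().get("task", {})
        state = task.get("taskState")
        print(f"Task {task_id} state: {state}")       # stand-in for the email update
        if state == "COMPLETE":
            return task
```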
Conclusion
Although there are improvements to be made, the building blocks show how it is relatively simple to build complex data integrations using Microsoft Logic Apps (or Power Automate). The logical flow of this process and the API calls are generic across other platforms and technologies, even if the syntax and deployment differ. As a result, consider reviewing this guide even if you are planning to use a different platform. It should give you a starting point for investigating custom integrations and hopefully allay some of the fears around the complexity of developing a custom REST API integration.
Certainly from my experience, I find the direct REST API integration to be less complex in many ways than Anaplan Connect, as I don't need to deal with the complexities and maintenance associated with provisioning a virtual machine and maintaining the infrastructure. I also like being in control of the integration behavior end to end.
Got feedback on this content? Let us know in the comments below.
Contributing author Anne-Julie Balsamo.
Comments
- Great article, thank you Sean for taking the time.
- Very interesting article Sean, well done!
- @AlejandroGomez thanks. The next thing I am working on is custom email notifications. I need to build this out for an internal model, so I will do a write-up once I have it figured out.
- I will stay tuned! Looking forward to reading it. Thanks Sean.