500 Internal Server Error on requests made with API V2.

To export data from the view, I make API calls to download all of its pages.

The view contains a lot of data, so the number of pages is roughly 100 to 3,000.
I am using an Azure pipeline to download one page at a time, but after downloading about 120 pages over roughly 50 minutes, I received an HTTP 500 Internal Server Error.
I asked Microsoft about this and they said it was a server-side issue on the Anaplan side.
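For reference, each loop iteration in the pipeline issues roughly the following request (sketched here in Python for clarity; in the pipeline it is actually a Copy activity, the IDs are placeholders, and the Accept header and auth scheme are assumptions based on the v2 view read-request URL shown in the error below):

import requests

BASE = "https://api.anaplan.com/2/0"
WORKSPACE_ID = "your-workspace-id"   # placeholder
MODEL_ID = "your-model-id"           # placeholder
VIEW_ID = "your-view-id"             # placeholder
REQUEST_ID = "your-read-request-id"  # returned when the readRequest was created
TOKEN = "your-access-token"          # refreshed every 35 minutes

def download_page(page_no: int) -> str:
    """Fetch one page of the view read request as CSV text."""
    url = (f"{BASE}/workspaces/{WORKSPACE_ID}/models/{MODEL_ID}"
           f"/views/{VIEW_ID}/readRequests/{REQUEST_ID}/pages/{page_no}")
    resp = requests.get(url, headers={"Authorization": f"AnaplanAuthToken {TOKEN}",
                                      "Accept": "text/csv"})
    resp.raise_for_status()  # this is where the HTTP 500 surfaces after ~120 pages
    return resp.text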

The access token is refreshed every 35 minutes, so this should not be an authorization issue.
Is there some kind of expiration time on export (read) requests?
Any help would be appreciated.
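For completeness, the token refresh behind the pipeline is roughly the following call (a sketch based on Anaplan's authentication service; the endpoint and response field follow its documented shape, so please verify against your tenant):

import requests

def refresh_token(current_token: str) -> str:
    """Exchange a still-valid token for a fresh one before the ~35-minute expiry."""
    resp = requests.post("https://auth.anaplan.com/token/refresh",
                         headers={"Authorization": f"AnaplanAuthToken {current_token}"})
    resp.raise_for_status()
    return resp.json()["tokenInfo"]["tokenValue"]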

 

[Screenshot attachment: CuteeeeRabbit_0-1666179608554.png]

The error message returned by the pipeline:

"errors": [
		{
			"Code": 22756,
			"Message": "ErrorCode=HttpRequestFailedWithServerError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Http request failed with server error, status code 500 InternalServerError, please check your activity settings and remote server issues.\nRequest URL: http://api.anaplan.com/2/0/workspaces/{workspaceID}/models/{modelID}/views/{viewID}/readRequests/{requestID}/pages/145.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The remote server returned an error: (500) Internal Server Error.,Source=System,'",

 

Answers

  • If you are running a high volume of export calls, it may be a mis-coded rate-limit issue. Rate limits are normally returned as a 429, but I have run into 500 codes when trying to do too much via the API, or it could just be a random error.

     

    Could you implement a simpler design using the CloudWorks export action? Step 1: use CloudWorks to export a file to a blob container; then, off a blob created/updated trigger, run your Data Factory pipeline job.

     

    Even if you don't use CloudWorks, given the number of pages in the read request (and the loops you are running), consider the bulk API route: run the export action, loop to monitor it for completion, and download the resulting file at the end. That may be worth testing to see whether it runs better than your current approach (a rough sketch is below).
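    A minimal Python sketch of that bulk API route, assuming an existing export action and a valid auth token (the IDs are placeholders, the endpoints follow the Anaplan Integration API v2, and in practice you would add retries and error handling around each call):

import time
import requests

BASE = "https://api.anaplan.com/2/0"
WORKSPACE_ID = "your-workspace-id"   # placeholder
MODEL_ID = "your-model-id"           # placeholder
EXPORT_ID = "your-export-action-id"  # placeholder; the exported server file shares this ID
TOKEN = "your-auth-token"            # obtained/refreshed separately

HEADERS = {"Authorization": f"AnaplanAuthToken {TOKEN}",
           "Content-Type": "application/json"}

def start_export() -> str:
    """Kick off the export action and return its task ID."""
    url = f"{BASE}/workspaces/{WORKSPACE_ID}/models/{MODEL_ID}/exports/{EXPORT_ID}/tasks"
    resp = requests.post(url, headers=HEADERS, json={"localeName": "en_US"})
    resp.raise_for_status()
    return resp.json()["task"]["taskId"]

def wait_for_completion(task_id: str, poll_seconds: int = 30) -> dict:
    """Poll the export task until it reports COMPLETE, then return the task details."""
    url = f"{BASE}/workspaces/{WORKSPACE_ID}/models/{MODEL_ID}/exports/{EXPORT_ID}/tasks/{task_id}"
    while True:
        task = requests.get(url, headers=HEADERS).json()["task"]
        if task["taskState"] == "COMPLETE":
            return task  # inspect task["result"] to confirm the export succeeded
        time.sleep(poll_seconds)

def download_file(out_path: str) -> None:
    """Download the exported server file chunk by chunk instead of page by page."""
    chunks_url = f"{BASE}/workspaces/{WORKSPACE_ID}/models/{MODEL_ID}/files/{EXPORT_ID}/chunks"
    chunks = requests.get(chunks_url, headers=HEADERS).json()["chunks"]
    with open(out_path, "wb") as f:
        for chunk in chunks:
            data = requests.get(f"{chunks_url}/{chunk['id']}",
                                headers={"Authorization": HEADERS["Authorization"],
                                         "Accept": "application/octet-stream"})
            data.raise_for_status()
            f.write(data.content)

task = wait_for_completion(start_export())
download_file("view_export.csv")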