Timeout issues with the Anaplan Connect v1.4.4 console while importing LARGE files
New to Anaplan, and loading large volumes of historical transactional data to support multiple models. We have a clean structure and the process "works", but virtually every import into the module causes our console session to time out. The file appears to have loaded successfully, but once the console times out I lose visibility into the process: I receive no dump files showing errors, and there do not appear to be any log files for processes run via AC v1.4.4. Other than third-party tools, is there a way to access log files that confirm there were no issues? Do log files exist at all?
Best Answer
-
The timeout you're seeing is a request timeout on a progress request. It is happening during the commit phase of a list import, which can be quite CPU-intensive for the server, but progress requests are normally still serviced, so I think your import has stressed the server a bit. You can adjust the number of retries (using the -mrc or -maxretrycount parameter) and the retry timeout (using the -rt or -retrytimeout parameter) to higher values (the maximums are 15 and 60 respectively) to ensure Anaplan Connect doesn't give up on the import task. Longer term, consider splitting the data load into two or more separate loads if it is very large.
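For illustration, on an Anaplan Connect 1.4.x operation line the two flags are simply appended; everything else below is a placeholder from a standard AC batch template rather than your actual script:
rem Hypothetical operation line with both retry settings raised to their maximums
set Operation=-service %ServiceUrl% -auth %AuthUrl% -workspace %WorkspaceId% -model %ModelId% -process "My Import Process" -file %FileName% -put %FilePath% -maxretrycount 15 -retrytimeout 60 -execute -output %DumpName%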
2
Answers
-
This might be a little challenging to do with Anaplan Connect scripts, but you can download error dump files using the following API call:
GET request to
https://api.anaplan.com/2/0/workspaces/{WS}/models/{model}/processes/{processID}/tasks/
The challenge here is getting the {taskID} and the {objectID}. The taskID is linked to the particular process run and is returned when the task is first executed. The objectID identifies the action within the process that failed.
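As a rough sketch only (curl from a batch file is used purely for illustration, %TOKEN% stands for an AnaplanAuthToken value and that token scheme is my assumption, the {...} placeholders must be replaced with real IDs, and the .../dumps/{objectID} suffix is my assumption of the pattern, so please verify it against the apiary documentation):
rem List the tasks run for the process; the response includes each taskID
curl -H "Authorization: AnaplanAuthToken %TOKEN%" "https://api.anaplan.com/2/0/workspaces/{WS}/models/{model}/processes/{processID}/tasks/"
rem Assumed endpoint pattern: pull the dump file for the action that failed
curl -H "Authorization: AnaplanAuthToken %TOKEN%" "https://api.anaplan.com/2/0/workspaces/{WS}/models/{model}/processes/{processID}/tasks/{taskID}/dumps/{objectID}" -o errordump.txt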
I don't know if this could be the reason for the timeout errors, but are you chunking the files before upload?
Regards,
Anirudh
0 -
Thanks @anirudh
We chunked the files to begin with, but because there is so much data we are trying to reduce the number of files and are using AC to chunk them into 50 MB parts as well. Some run with no issue, but many/most see the console time out, even though the file does appear to have loaded successfully.
Your answer is most helpful; at least on a go-forward basis I can grab the taskID from the console before I close the window and verify that no errors were thrown.
In the screen cap I provided, are those the taskID (214) and objectID (1712)? "c.a.client.Task :214] 1712| "
It also seems to occur most frequently during the "Applying Model Changes" step.
0 -
Hi @jjjcpa
> In the screen cap I provided, are those the taskID (214) and objectID (1712)? "c.a.client.Task :214] 1712| "
I don't think so. The taskID has a different format, and these values are not returned to the console.
The objectID appears hierarchically in a JSON dump when the process is run.
It's certainly strange that the "Applying Model Changes" step causes a timeout error...
Can you try this? After a file upload completes, don't run the import process; instead, use a GET request to download that file.
Then compare the downloaded file with your original file. If it matches, then at least there's no error being caused by the upload step.
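A rough sketch of that check, again with curl from a batch file purely for illustration; the chunk-listing and chunk-download endpoints are my assumption of the v2 API pattern (so double-check them in the docs), a single-chunk server-side file is assumed for brevity, and fc is the standard Windows binary compare:
rem List the chunks of the server-side copy, then download the first (and only) chunk
curl -H "Authorization: AnaplanAuthToken %TOKEN%" "https://api.anaplan.com/2/0/workspaces/{WS}/models/{model}/files/{fileID}/chunks"
curl -H "Authorization: AnaplanAuthToken %TOKEN%" "https://api.anaplan.com/2/0/workspaces/{WS}/models/{model}/files/{fileID}/chunks/0" -o downloaded.csv
rem Byte-for-byte comparison with the original; "FC: no differences encountered" means the upload is intact
fc /b original.csv downloaded.csv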
Regards,
Anirudh
0 -
A couple more questions: are you using basic authentication, and is the file upload taking longer than 30 minutes?
0 -
Yes on both.
Definitely using basic authentication, and the file upload itself is typically in the 15-20 minute range (5 GB file max), but the entire process is 45 minutes to an hour end to end.
Thanks for keeping at it.
0 -
In that case, I suspect the auth token is expiring due to the 30-minute limit. Could you edit your script to refresh the token before it expires? That should solve the timeout...
0 -
JJ,
Unfortunately I'm not familiar with batch scripting...
The refresh request is:
POST to 'https://auth.anaplan.com/token/refresh'
header = {'Authorization' : authToRefresh}, where authToRefresh is your token before it expires.
Ideally, you would programmatically refresh the token just before its expiry. The expiry is reported (in UNIX time) when the token is originally created.
Then pass a fresh token to each chunk upload, process run, etc., instead of creating the token once and passing the same one each time.
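A minimal sketch of that refresh, again with curl from a batch file; the AnaplanAuthToken header scheme and the idea that the JSON response carries the new token value and its expiry are my assumptions, so verify them against the apiary documentation:
rem %TOKEN% holds the current, not-yet-expired token value
curl -X POST -H "Authorization: AnaplanAuthToken %TOKEN%" "https://auth.anaplan.com/token/refresh" -o refresh.json
rem Extract the new token value from refresh.json (e.g. with a small script) and use it
rem in the Authorization header of every subsequent chunk upload and task run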
Regards,
Anirudh
1 -
Hi Ben,
Out of curiosity, what happens if the auth token expires before all chunks have finished uploading? Based on the apiary documentation, my understanding is that this causes a timeout as well...
Thanks,
Anirudh
0 -
I would expect a not authorised response rather than a request timeout if the token expired.
0 -
Adding the two parameters, set at the maximum value for both, resolved the issue. We set the values using set statements, where MRC=15 and RTO=60.
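For reference, those are just ordinary set statements placed above the operation line shown below:
set MRC=15
set RTO=60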
set Operation=-debug -service %ServiceUrl% -auth %AuthUrl% -workspace %WorkspaceId% -model %ModelId% -process "Update DH1" -chunksize %Chunksize% -file %FileName% -put %FilePath% -maxretrycount %MRC% -retrytimeout %RTO% -execute -output %DumpName%
Thanks @ben_speight and @anirudh for your help; this is going to make the rest of my massive data loads stress-free and hopefully failure-free!
1