I am currently setting up calls to the REST API, and I want to know if there are any best practices to improve the speed. If I try to run the same data that we are using with Anaplan Connect, it times out. This is a listing from the history for 14 days of data, which does not time out. We usually run 60 days with Anaplan Connect.
This is a really popular question. I've found three things that really help with import performance.
The first is making sure the model, or data hub, has the right architecture. There is no better best-practice article on that than the Data Hub Peak Performance article here by @rob_marshall . Honestly, that article is printed and wallpapered to my office wall. It's absolute genius. If you follow its rules, you can see performance improvements of 80% or more.
The second is making sure there are no concurrency issues. If you have a really busy server, meaning you are in contention with other imports/exports and other activity on the same workspace, you will see performance degradation. This is a good time to consider a dedicated workspace for the data hub.
Lastly, consider starting a Center of Excellence to help with governance. This will help you and your team ensure you are following best practices and using actions in the most efficient way.
Please correct me if I am wrong, but you are pulling this from the History log, correct? If so, here are a couple of best practices:
Don't pull a group of days at once; pull just one day, and do this every day. The main reasons are speed and isolation: since everything is captured in the log, an unusually large import day will not impact the other days.
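To make the one-day-at-a-time approach concrete, here is a minimal sketch in Python that turns a date range into single-day windows you can feed to each daily history pull. The `daily_windows` helper is hypothetical (not part of any Anaplan tooling); the actual request for each window is up to your client code.

```python
from datetime import date, timedelta

def daily_windows(start: date, end: date):
    """Yield one (day_start, day_end) pair per day in [start, end),
    so each history pull covers exactly one day."""
    day = start
    while day < end:
        yield day, day + timedelta(days=1)
        day += timedelta(days=1)

# Example: three single-day pulls instead of one 3-day pull.
windows = list(daily_windows(date(2023, 1, 1), date(2023, 1, 4)))
```

Run this once per day (e.g. from a scheduler) with yesterday's date as the window, and an unusually large day only slows down that day's pull.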
I see in your post there are several deletes, and then the data is reloaded. All this really does is fill up your change log unnecessarily, especially if you are loading the exact same information that was already there. Hint: use a code and only update what is needed. With the proper architecture (list, transactional module, and properties module), you reduce the need for mass delete and mass import.
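The "use a code and only update what is needed" idea can be sketched as a simple diff by code before sending anything to Anaplan. This is an illustrative Python sketch, not Anaplan functionality; `changed_records` and the flat value-per-code shape are assumptions for the example.

```python
def changed_records(existing: dict, incoming: dict) -> dict:
    """Return only the incoming records whose code is new or whose
    value differs from what is already loaded, so unchanged rows
    are never re-imported and never touch the change log."""
    return {code: row for code, row in incoming.items()
            if existing.get(code) != row}

# Records keyed by unique item code; only A2 (changed) and A3 (new) survive.
delta = changed_records({"A1": 10, "A2": 20}, {"A1": 10, "A2": 25, "A3": 5})
```

Only the `delta` set would then be staged for import, replacing the delete-and-reload pattern.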
Have you evaluated the best setting for the chunk parameter in your loads?
Each chunk is a part of a large file, unless the file is small enough to fit into a single chunk. The import or export operation is done chunk-by-chunk. If the connection is unreliable, you can retry from an intermediate point. For example, if the sequence of uploads failed at chunk "az", you can retry uploading chunk "az" instead of restarting from "aa". We recommend that you set an import chunk size of between 1 MB and 50 MB. An export chunk is 10 MB, except for the final chunk, which can be smaller.
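The chunk-by-chunk upload with per-chunk retry can be sketched generically in Python. This is a simplified illustration, not the Anaplan client itself: `send_chunk` stands in for the real HTTP PUT of one chunk (endpoint, auth, and headers are omitted), and the 1 MB default is just the low end of the recommended 1-50 MB import range.

```python
import io

CHUNK_SIZE = 1 * 1024 * 1024  # 1 MB: low end of the recommended import range

def upload_in_chunks(data: bytes, send_chunk,
                     chunk_size: int = CHUNK_SIZE, max_retries: int = 3) -> int:
    """Split `data` into fixed-size chunks and upload them in order.

    On a transient failure, only the failing chunk is retried (up to
    `max_retries` attempts), never the whole file. Returns the number
    of chunks sent.
    """
    stream = io.BytesIO(data)
    index = 0
    while True:
        payload = stream.read(chunk_size)
        if not payload:  # final chunk may be smaller; empty read means done
            break
        for attempt in range(max_retries):
            try:
                send_chunk(index, payload)  # placeholder for the real PUT
                break
            except IOError:
                if attempt == max_retries - 1:
                    raise  # give up on this chunk after max_retries attempts
        index += 1
    return index
```

The design point mirrors the paragraph above: because each chunk is an independent request, a mid-transfer failure costs you one chunk's worth of work rather than the whole upload.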
I have set the upload chunk to 0.5 MB. Currently, Anaplan Connect uploads 60 days' worth of data in 15-25 minutes, while the REST API solution takes 15 minutes for just 14 days. The tasks I am performing are upload, import, and action. The list provided was just for row count reference.