I'm exporting a file that ends up having over 100 chunks.
The first and last rows of each chunk file are split across different chunks.
Is there a way to prevent this?
If you are exporting the files via the REST API (Python), you can configure your script to divide the file into chunks and change the chunk size to avoid the issue.
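For reference, here is a minimal sketch of downloading and reassembling an export with Python's requests library. It assumes the Anaplan Integration API v2 chunk routes (/files/{fileId}/chunks to list chunks and /chunks/{chunkId} to fetch one), an AnaplanAuthToken authorization header, and a "chunks" list in the JSON response; all IDs and the token are placeholders. The key point is that chunks are split on bytes, not rows, so appending them in order restores any row that straddles a chunk boundary:

```python
import requests

# Placeholder values -- substitute your own IDs and auth token.
API = "https://api.anaplan.com/2/0"
WORKSPACE = "YOUR_WORKSPACE_ID"
MODEL = "YOUR_MODEL_ID"
FILE = "YOUR_EXPORT_FILE_ID"
HEADERS = {"Authorization": "AnaplanAuthToken YOUR_TOKEN"}


def download_export(out_path: str) -> None:
    """Fetch every chunk of an exported file in order and write the bytes
    sequentially, so rows split across chunk boundaries are reassembled."""
    base = f"{API}/workspaces/{WORKSPACE}/models/{MODEL}/files/{FILE}"

    # List the chunks the server produced for this export.
    resp = requests.get(f"{base}/chunks", headers=HEADERS)
    resp.raise_for_status()
    chunks = resp.json().get("chunks", [])

    with open(out_path, "wb") as out:
        for chunk in chunks:
            # Each chunk is an arbitrary byte range, not a row range, so it
            # must be appended verbatim with no per-chunk parsing.
            data = requests.get(f"{base}/chunks/{chunk['id']}",
                                headers=HEADERS)
            data.raise_for_status()
            out.write(data.content)


download_export("export.csv")
```

If you process each chunk as a standalone CSV instead of concatenating first, the partial first and last rows are exactly the symptom described above, so the reassembly step matters more than the chunk size itself.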
The API already returns 93 chunks; is there a way to set their size? If so, how do I determine an appropriate chunk size that won't split rows again?