Fellow PlanIQ users, we are constantly working on adding more best practices and documentation. Are there any issues you're struggling with? Is there anything for which we could provide better documentation?
Let us know!
Hi Evgy, thanks for starting the thread! We are using PlanIQ at one of our engagements. Overall, the team is very happy with the results; however, achieving those results requires a decent amount of manual work within PlanIQ and Anaplan:
What we are doing
We want to leverage the tool's ability to link SKUs based on common attributes, so we send everything in one data collection. However, the SKUs can vary in their sales patterns, so it makes sense for us to run multiple algorithms against that data collection. Once we have results from each of the several algorithms, we select the best algorithm for a SKU by seeing which algorithm produced the most accurate backtest when compared to actuals over the backtest period. That selection logic is all set up by us in the Anaplan model.
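As a rough illustration of the selection logic described above (not how it is built in the Anaplan model itself): for each SKU, score every algorithm's backtest against actuals and keep the one with the lowest error. The algorithm names, data values, and the choice of MAPE as the metric are all illustrative assumptions.

```python
# Hypothetical sketch of per-SKU algorithm selection from backtest accuracy.
# Data and algorithm names are examples only, not PlanIQ output.

def mape(actuals, forecast):
    """Mean absolute percentage error over paired monthly values."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecast)) / len(actuals)

def best_algorithm(actuals, backtests):
    """backtests maps algorithm name -> backtest values; return lowest-MAPE name."""
    return min(backtests, key=lambda algo: mape(actuals, backtests[algo]))

# One illustrative SKU with three candidate algorithms.
actuals = [100, 120, 110, 130]
backtests = {
    "DeepAR":  [98, 125, 108, 128],
    "Prophet": [90, 100, 140, 150],
    "ETS":     [110, 110, 110, 110],
}
print(best_algorithm(actuals, backtests))  # -> DeepAR
```

In practice this comparison would run once per SKU per month, over whatever backtest horizon the forecast model uses.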
When a new month is closed and a data collection is rerun, the first month of the forecast shifts, but the backtest does not. For example, if Dec 22 is our last month of actuals, a 12-month forecast will produce results for Jan 23 - Dec 23 and a 12-month backtest will produce results for Jan 22 - Dec 22. When Jan 23 actuals are in and sent to the data collection, the forecast will produce results for Feb 23 - Jan 24; however, the backtest will keep the Jan 22 - Dec 22 results. Thus, in order to use the backtest setup to select the correct algorithm, we need to create several new forecast models each month just for the backtest analysis.
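The window behavior described above can be sketched with simple month arithmetic, assuming monthly periods and a 12-month horizon: the forecast window rolls forward with the last closed actuals month, while the backtest window stays pinned where it was first defined. The helper names here are illustrative, not PlanIQ functions.

```python
# Minimal sketch of rolling forecast vs. fixed backtest windows (monthly periods).
from datetime import date

def add_months(d, n):
    """Shift a first-of-month date d by n months."""
    total = d.year * 12 + (d.month - 1) + n
    return date(total // 12, total % 12 + 1, 1)

def forecast_window(last_actuals, horizon=12):
    """Forecast covers the `horizon` months after the last actuals month."""
    return add_months(last_actuals, 1), add_months(last_actuals, horizon)

def backtest_window(last_actuals, horizon=12):
    """Backtest covers the `horizon` months ending at the last actuals month."""
    return add_months(last_actuals, 1 - horizon), last_actuals

# With Dec 22 actuals: forecast spans Jan 23 - Dec 23, backtest Jan 22 - Dec 22.
print(forecast_window(date(2022, 12, 1)))
print(backtest_window(date(2022, 12, 1)))
# After Jan 23 closes, the forecast rolls to Feb 23 - Jan 24, but the
# backtest in PlanIQ today keeps its original Jan 22 - Dec 22 window.
print(forecast_window(date(2023, 1, 1)))
```

The feature request below amounts to having `backtest_window` recompute from the new last-actuals month the same way `forecast_window` does.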
What would help us
If the backtest actions' time periods could be updated the same way the forecast actions' are, it would save us a good amount of time each month. Also, we have the setup in our Anaplan model working for selecting an algorithm based on the backtest analysis; however, if there were a more systematic approach to selecting an algorithm, we would most likely happily shift over to it. The team also likes having insight into which algorithm was selected, so using an algorithm that does the selection for you is fine as long as there is info on which one it picked (e.g., Anaplan ML selected DeepAR as the algorithm).
Hi Nick, thank you for the comment.
Question on the backtest - are you using the backtest data to calculate accuracy metrics in the Anaplan model? Which metric do you prefer to use?