Hi!
I was having a conversation with a Partner Success Manager and we deliberated a question for a while without reaching a satisfactory answer. I understand that Anaplan PlanIQ charges based on the number of time series being forecasted. This means that if I run forecasts with Prophet, DeepAR+, and CNN separately, I would pay for 3 different iterations of whatever time series I am forecasting.
However, if I run AutoML, it scans and compares different forecasting methods - including DeepAR+ and its variants - to give me the best forecast possible. On the surface (and I realize this is a reductive view), running AutoML already encompasses the other algorithms, so wouldn't it be more economical to run AutoML only (even though I understand it might take longer to process)?
A better way to frame this for people with less exposure to statistics is to answer two questions:
1. Are the AutoML options a superset of the other algorithms available?
2. I understand the metrics used to decide best fit differ - MASE in Anaplan AutoML versus quantiles in Amazon AutoML - but how much would the results actually differ if I run Prophet versus AutoML?
Hi @ankit_cheeni ,
Thanks for the question. One minor point of clarification around pricing before jumping into the crux of your question. PlanIQ pricing is based on estimated monthly usage patterns. Based on that estimate, the cost is then fixed, which is different from the traditional usage-based pricing that many cloud services use. You are correct that running 3 separate models (e.g. CNN, DeepAR+, Prophet) would count as 3 distinct forecast runs, whereas an AutoML run counts as 1. We try to account for this during initial pricing conversations with customers.
In terms of how the algorithms run, yes, AutoML is a superset of all the other algorithms. However, AutoML will only produce forecasts using the single model that is the "best" across the entire time series on average. This means that AutoML results are always derived from a single algorithm. For example, if Anaplan AutoML finds that the ETS model produces the best MASE, on average, across the entire dataset, AutoML will produce forecasts for each series using ETS (even if ARIMA or a different algorithm was the better model for a particular series). So with AutoML, you are not necessarily receiving a prediction for each individual time series that is based on the model that is the best for that series.
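To make the distinction concrete, here is a minimal sketch of "one model chosen on average" versus "best model per series". The model names and MASE scores are purely illustrative, not actual PlanIQ output:

```python
import numpy as np

# Hypothetical per-series MASE scores (lower is better) for two
# candidate models across three time series A, B, C.
mase = {
    "ETS":   np.array([0.8, 0.7, 1.2]),
    "ARIMA": np.array([1.1, 1.0, 0.7]),
}

# AutoML-style selection: the single model with the best *average*
# MASE across the whole dataset is used for every series.
global_best = min(mase, key=lambda m: mase[m].mean())

# Per-series selection: pick the best model for each series individually.
models = list(mase)
scores = np.stack([mase[m] for m in models])
per_series_best = [models[i] for i in scores.argmin(axis=0)]

print(global_best)      # ETS (mean 0.90 beats ARIMA's 0.933)
print(per_series_best)  # ['ETS', 'ETS', 'ARIMA']
```

Note that ETS wins on average, so an AutoML-style run would forecast every series with ETS, even though ARIMA is clearly better for series C.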
We recognize that in the majority of situations a user will want the best model for each individual series, and you'll likely see some valuable enhancements coming to AutoML in the future.
With that said, I actually prefer running each algorithm separately in PlanIQ and storing the results in Anaplan. I can then use the flexibility of Anaplan to assess each model based on whichever evaluation metric, forecast horizon, or lag period I prefer. This approach requires a bit more effort, but the outcome can be incredibly valuable.
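As a sketch of what that comparison step looks like, here is the standard MASE calculation applied to stored forecasts from two separate runs. The series values and forecasts are made-up numbers for illustration only:

```python
import numpy as np

def mase(actual, forecast, train, m=1):
    """Mean Absolute Scaled Error: forecast MAE scaled by the
    in-sample MAE of a naive forecast with seasonal period m."""
    naive_mae = np.mean(np.abs(train[m:] - train[:-m]))
    return np.mean(np.abs(actual - forecast)) / naive_mae

# Historical values the models were trained on (illustrative).
train = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
# Actuals for the holdout period, plus two hypothetical stored runs.
actual     = np.array([14.0, 13.0])
prophet_fc = np.array([13.5, 13.2])
deepar_fc  = np.array([12.0, 14.5])

scores = {
    "Prophet": mase(actual, prophet_fc, train),
    "DeepAR+": mase(actual, deepar_fc, train),
}
best = min(scores, key=scores.get)
print(best)  # Prophet
```

The same idea extends to any metric you care about: once each run's output lives in Anaplan, swapping the evaluation metric or horizon is just a formula change rather than a new forecast run.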
I hope this helps!