Earlier this year the International Institute of Forecasters launched a podcast called Forecasting Impact. The inaugural guest was Rob Hyndman, a bit of a celebrity to those of us in the forecasting community. Rob heads the Department of Econometrics and Business Statistics at Monash University and authored many of the open source forecasting algorithms widely used across industry and academia today (Amazon Forecast, which PlanIQ uses, includes two of Rob's algorithms: ETS and ARIMA). The conversation is full of great ideas about understanding forecasts and the business processes that support them.
In the interview the host asked Rob,
“What is forecastable? What should we be trying to forecast?”
I really liked his answer and wanted to share...
“Anything can be forecasted but not everything can be forecasted well.”
So what traits of data do we look for that may make it a good candidate for algorithmic forecasting? Rob offered a short list of qualifiers...
We understand the factors that contribute to its variation.
There is a lot of data available.
The forecast can’t affect the thing you are trying to forecast.
The future is relatively similar to the past.
There is relatively little natural or unexplainable variation in the data.
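To make "algorithmic forecasting" a little more concrete, here is a minimal sketch of simple exponential smoothing, the simplest member of the ETS family Rob's work popularized. The sales series and the smoothing parameter `alpha` are made up purely for illustration.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing.

    The level is updated as a weighted blend of each new observation
    and the previous level: level = alpha * y + (1 - alpha) * level.
    The final level serves as a flat forecast for future periods.
    """
    level = series[0]  # initialize the level at the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical weekly sales figures (illustrative only)
sales = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
print(round(simple_exponential_smoothing(sales, alpha=0.3), 2))
```

A higher `alpha` weights recent observations more heavily, which suits series where the future resembles only the recent past; a lower `alpha` averages over more history, which suits series with little unexplained variation. In practice you would reach for a mature implementation (e.g. the ETS models in Rob's R `forecast` package or Python's statsmodels) rather than rolling your own.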
I think that is a pretty good framework for thinking through whether or not an algorithm may be helpful for forecasting a particular piece of data. You can jump directly to the part of the interview where Rob discusses this framework and gives some examples here.
I highly recommend listening to the entire podcast if you're interested in leveling up your forecasting knowledge. Rob also gave a great talk on Forecast Reconciliation at this year's International Symposium on Forecasting if you're looking for more insights from the forecasting legend himself.
What do you think? Let me know in the comments below.