PlanIQ pricing is fixed and based on the number of predictions expected to run each month.
Determining the number of expected monthly predictions typically requires a good understanding of the desired outcomes of the forecasting process.
Let's consider the following situation...
If we need a monthly forecast for 1,000 SKUs across 300 different stores, we may need 300,000 forecasts per month (1,000 * 300). That said, estimating the number of predictions required per month usually isn't as simple as A*B. Here are a few things to consider...
Is every SKU sold in every store?
In the scenario above, we're assuming that every SKU is sold in every store. This is rarely the case.
Are we building forecast models at the right level?
Just because we ultimately need forecasts at the SKU*Store level doesn't necessarily mean we're building our forecast models and making predictions at that level. Sometimes the data is too sparse at a granular level, and it makes more sense to build forecast models at a higher hierarchical level (e.g. a model that predicts Product Category * Store) and then disaggregate to the SKU. Choosing the right level requires a strong understanding of the data. The level matters because PlanIQ pricing is based on the number of predictions the forecast models make, not the final level a forecast ends up at in the planning process.
Do we want to test multiple models for each prediction?
Most of the time we're also building multiple forecast models to make predictions, so we can determine which model is best at predicting any individual SKU (or category). In the extreme case, if we want to test all 5 models offered in PlanIQ (ARIMA, ETS, Prophet, DeepAR, CNN), we'd be looking at 1.5 million predictions per month (1,000 SKUs * 300 stores * 5 models).
Are we considering the number of predictions we'll use while testing?
We should also add a prediction buffer for testing models. Don't try to back into the number of monthly predictions by accounting only for scheduled production runs. A big part of the value of PlanIQ is our ability to make better predictions by testing different models on our own (it can be a fun exercise, too!). Sometimes we don't know which features will add value to our forecast predictions until we test them.
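The considerations above can be folded into a simple back-of-the-envelope calculation. Here's a minimal sketch in Python; the 70% SKU-store coverage, the 5-model count, and the 20% testing buffer are illustrative assumptions, not PlanIQ defaults.

```python
# Rough monthly-prediction estimator for PlanIQ sizing.
# All parameter values below are illustrative assumptions.

def estimate_monthly_predictions(
    active_pairs: int,    # SKU*Store combinations that actually sell
    models_tested: int,   # candidate models run per combination
    test_buffer: float,   # extra fraction reserved for ad hoc testing
) -> int:
    base = active_pairs * models_tested
    return round(base * (1 + test_buffer))

# 1,000 SKUs x 300 stores, but assume only ~70% of combinations sell;
# test all 5 models, with a 20% buffer for experimentation.
active_pairs = round(1_000 * 300 * 0.70)
print(estimate_monthly_predictions(active_pairs, models_tested=5, test_buffer=0.20))
# -> 1260000
```

Note how the coverage assumption and the model multiplier pull the estimate in opposite directions: the naive 300,000 drops to 210,000 active pairs, then testing 5 models with a buffer pushes it past a million.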
Determining the number of monthly predictions required for a specific use case can be a nuanced process. Hopefully the considerations above help make the estimation process a bit easier!