Until fairly recently, there was no standard. Forecasters could not compare forecast accuracy across products, regions, territories, and so on. Thanks to the innovative work of Rob Hyndman and his colleagues in Australia, we now have a forecasting error standard: the mean absolute scaled error (MASE). It is based on the naive forecast, the prediction that the past will repeat (e.g., next month will be the same as last month). Every forecast has a naive counterpart, so the forecaster's challenge is to beat the naive forecast for their particular situation. If you beat it, you are adding value.
Your level of value-added as a forecaster depends on the degree to which you beat the naive forecast. The natural yardstick is the ratio of your forecast error to the naive forecast's error. A forecaster who beats the naive forecast by 50 percent (error half the naive error) is 1.5 times as accurate as one who beats it by only 25 percent (error three-quarters of the naive error). Likewise, if you typically beat the naive forecast by 50 percent and then slip to 25 percent, your error has grown by half.
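The comparison above can be sketched in a few lines of Python. The data below is hypothetical, and mean absolute error (MAE) is used as the error measure; the ratio of forecast MAE to naive MAE is the MASE-style score a forecaster is trying to push below 1.

```python
# Compare a forecast's error against the naive forecast
# (next period = last observed period). Numbers are illustrative.

def mae(errors):
    """Mean absolute error of a list of (actual - predicted) differences."""
    return sum(abs(e) for e in errors) / len(errors)

actuals   = [100, 110, 105, 115, 120, 118]   # hypothetical monthly demand
forecasts = [102, 108, 107, 112, 119, 121]   # hypothetical forecasts

# Naive forecast: each month is predicted to equal the previous month,
# so it only exists from the second month onward.
naive = actuals[:-1]
naive_errors    = [a - p for a, p in zip(actuals[1:], naive)]
forecast_errors = [a - f for a, f in zip(actuals[1:], forecasts[1:])]

ratio = mae(forecast_errors) / mae(naive_errors)   # below 1.0 means value added
improvement = 1 - ratio                            # degree you beat the naive forecast
print(f"error relative to naive: {ratio:.2f}")
print(f"improvement over naive:  {improvement:.0%}")
```

With these made-up numbers the forecast's error is well under half the naive error, so the forecaster is adding value; a ratio at or above 1.0 would mean the forecast is no better than simply repeating last month.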
We offer an online training course on how to measure forecast accuracy using the naive forecast.