Improving Forecast Accuracy with Machine Learning: Choosing the Right Metrics

Rudrendu Paul
Apr 1, 2023


How to choose the right accuracy metrics for your machine learning forecasting models.


Forecasting is a vital component of business decision-making: it lets organizations plan around predictions of future trends. Those predictions are only useful, however, if they are reliable, which makes evaluating forecast accuracy essential. In this article, we will discuss the importance of selecting the right metrics for evaluating your data science forecasting model.

Why Choose the Right Metrics?

Selecting the right metrics is essential for measuring how accurate your forecasting model really is. Metrics give you a quantitative way to assess the model and to identify where it needs improvement. Choosing the wrong metrics can lead to inaccurate assessments and misinformed decision-making.

Choosing the Right Metrics

When selecting metrics for evaluating your forecasting model, there are several factors to consider:

Scale

The scale of your data should be taken into consideration when selecting a metric.

Below are some examples:

  1. If all your series are on the same scale, scale-dependent metrics such as Mean Absolute Error (MAE) or Root Mean Squared Error (RMSE) provide useful results.
  2. If your series are on different scales, scale-free metrics such as Mean Absolute Percentage Error (MAPE) or Mean Absolute Scaled Error (MASE) may be more appropriate, as the sketch after this list illustrates.
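
To make the scale point concrete, here is a minimal sketch (with toy numbers of my own, not from the article): the same 5% over-forecast yields MAEs that differ by a factor of 1,000 on a series expressed in units versus dollars, while MAPE is identical for both.

```python
import numpy as np

def mae(actual, forecast):
    """Mean Absolute Error: average size of the errors, in the units of the data."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Mean Absolute Percentage Error: scale-free, expressed in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

units_sold = np.array([100.0, 120.0, 110.0])   # demand in units
revenue = units_sold * 1_000                   # the same demand in dollars
fc_units = units_sold * 1.05                   # a 5% over-forecast
fc_revenue = revenue * 1.05

print(mae(units_sold, fc_units), mae(revenue, fc_revenue))    # ~5.5 vs ~5,500
print(mape(units_sold, fc_units), mape(revenue, fc_revenue))  # 5.0% for both
```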

Outliers

Outliers can significantly distort accuracy measurements. Metrics based on squared errors, such as RMSE, penalize large deviations heavily and are therefore especially sensitive to outliers; MAE is less affected, and scaled metrics such as MASE are generally more robust.
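
As an illustration (again with toy numbers of my own), a single badly wrong forecast inflates RMSE far more than MAE, because RMSE squares the errors before averaging.

```python
import numpy as np

actual = np.array([100.0, 102.0, 98.0, 101.0, 100.0])
forecast = np.array([101.0, 101.0, 99.0, 100.0, 100.0])

forecast_outlier = forecast.copy()
forecast_outlier[2] = 60.0  # one badly wrong forecast

for label, f in [("clean forecast", forecast), ("with one outlier", forecast_outlier)]:
    mae = np.mean(np.abs(actual - f))
    rmse = np.sqrt(np.mean((actual - f) ** 2))
    print(f"{label}: MAE={mae:.2f}, RMSE={rmse:.2f}")
# clean forecast:    MAE=0.80, RMSE=0.89
# with one outlier:  MAE=8.20, RMSE=17.01
```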

Forecast Horizon

The forecast horizon is the time period over which the model is making predictions.

  1. If your model is making one-step-ahead predictions, then metrics such as MAE, RMSE, and MAPE are appropriate.
  2. If your model is making multi-step-ahead predictions, then metrics such as Mean Squared Scaled Error (MSSE) and Symmetric Mean Absolute Percentage Error (SMAPE) may be more appropriate; a SMAPE sketch follows this list.
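
Below is a minimal SMAPE sketch (using the common 0-200% formulation and toy numbers of my own), evaluated both over a full multi-step horizon and step by step, since errors typically grow as the horizon lengthens.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE: absolute error divided by the mean of |actual| and |forecast|."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2
    return np.mean(np.abs(actual - forecast) / denom) * 100

# A 4-step-ahead forecast evaluated over the whole horizon and per step
actual = np.array([120.0, 125.0, 130.0, 128.0])
forecast = np.array([118.0, 127.0, 135.0, 140.0])

print(f"SMAPE over the full horizon: {smape(actual, forecast):.2f}%")
for h, (a, f) in enumerate(zip(actual, forecast), start=1):
    print(f"  step {h}: {smape([a], [f]):.2f}%")
```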

Common Metrics for Evaluating Forecasting Models

  1. Mean Absolute Error (MAE): The MAE is the average of the absolute errors between the actual and predicted values. This metric provides a measure of the average magnitude of the errors.
  2. Root Mean Squared Error (RMSE): The RMSE is the square root of the average of the squared errors between the actual and predicted values. This metric provides a measure of the magnitude of the errors and is more sensitive to large errors than MAE.
  3. Mean Absolute Percentage Error (MAPE): The MAPE is the average of the absolute percentage errors between the actual and predicted values. This metric is useful when the data is on different scales.
  4. Mean Absolute Scaled Error (MASE): The MASE is a relative error metric that compares forecasting accuracy across different time series. It scales the errors by the in-sample error of a naïve forecast, so it can be used to compare models across different scales and series. All four metrics are implemented in the sketch below.
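
Here is a minimal NumPy sketch of all four metrics. The function names, the training-series argument, and the seasonal period m are my own illustrative choices; MASE follows the standard definition of test-set MAE scaled by the in-sample MAE of a naïve forecast.

```python
import numpy as np

def mae(actual, forecast):
    """Mean Absolute Error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

def rmse(actual, forecast):
    """Root Mean Squared Error: penalizes large errors more heavily than MAE."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mape(actual, forecast):
    """Mean Absolute Percentage Error: scale-free, but undefined when actuals are zero."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mase(actual, forecast, train, m=1):
    """Mean Absolute Scaled Error: test MAE scaled by the in-sample MAE of a
    naïve forecast that repeats the value from m periods earlier. Values below
    1 mean the model beats the naïve benchmark."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    train = np.asarray(train, float)
    naive_mae = np.mean(np.abs(train[m:] - train[:-m]))
    return np.mean(np.abs(actual - forecast)) / naive_mae

# Toy example: the last three points are held out as the test set
train = [100, 102, 101, 105, 107, 110]
actual = [112, 115, 117]
forecast = [110, 116, 120]
print(mae(actual, forecast), rmse(actual, forecast),
      mape(actual, forecast), mase(actual, forecast, train))
```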

Conclusion

Selecting the right metrics is essential for evaluating the accuracy of your data science forecasting model. The choice of metric depends on the scale of your data, the presence of outliers, and the forecast horizon. Choosing the wrong metric can lead to inaccurate assessments of the model’s accuracy and misinformed decision-making. By choosing the right metrics, you can trust that your evaluation reflects how the model will actually perform, and make better-informed business decisions.

Connect with the Author

If you enjoyed this article and would like to stay connected, feel free to follow me on Medium and connect with me on LinkedIn. I’d love to continue the conversation and hear your thoughts on this topic.

