Mean absolute error

For broader coverage of this topic, see Mean absolute difference.

In statistics, the mean absolute error (MAE) is a quantity used to measure how close forecasts or predictions are to the eventual outcomes. The mean absolute error is given by

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^n \left| f_i-y_i\right| =\frac{1}{n}\sum_{i=1}^n \left| e_i \right|.

As the name suggests, the mean absolute error is an average of the absolute errors |e_i| = |f_i - y_i|, where f_i is the prediction and y_i the true value. Note that alternative formulations may include relative frequencies as weight factors.
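The definition translates directly into code. The following is a minimal sketch, assuming equally weighted errors (no relative-frequency weights); the function name and example data are illustrative, not taken from the article.

```python
def mean_absolute_error(forecasts, outcomes):
    """Return MAE = (1/n) * sum(|f_i - y_i|)."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must have the same length")
    n = len(forecasts)
    # Average of the absolute errors |e_i| = |f_i - y_i|
    return sum(abs(f - y) for f, y in zip(forecasts, outcomes)) / n

# Example: predictions f against observed values y
f = [2.5, 0.0, 2.0, 8.0]
y = [3.0, -0.5, 2.0, 7.0]
print(mean_absolute_error(f, y))  # (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
```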

The mean absolute error is a common measure of forecast error in time series analysis, where the term "mean absolute deviation" is sometimes used in confusion with the more standard definition of mean absolute deviation. The same confusion exists more generally.

Related measures

The mean absolute error is one of a number of ways of comparing forecasts with their eventual outcomes. Well-established alternatives are the mean absolute scaled error (MASE) and the mean squared error.[1] These all summarize performance in ways that disregard the direction of over- or under-prediction; a measure that does place emphasis on this is the mean signed difference, as illustrated in the sketch below.
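A small sketch contrasting measures that discard the direction of the error (MAE, MSE) with one that retains it (the mean signed difference). The data are illustrative assumptions; MASE is omitted because it additionally requires a benchmark (e.g. naive) forecast for scaling.

```python
f = [3.0, 5.0, 7.0, 9.0]   # forecasts
y = [2.0, 6.0, 6.0, 10.0]  # outcomes
errors = [fi - yi for fi, yi in zip(f, y)]  # e_i = f_i - y_i

n = len(errors)
mae = sum(abs(e) for e in errors) / n  # ignores direction: 1.0
mse = sum(e * e for e in errors) / n   # ignores direction: 1.0
msd = sum(errors) / n                  # keeps direction:   0.0 here

print(mae, mse, msd)
```

Here the over- and under-predictions cancel in the mean signed difference, while MAE and MSE both report a nonzero error of the same magnitude regardless of sign.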

Where a prediction model is to be fitted by optimizing a selected performance measure, the fitting approach corresponding to the mean absolute error is least absolute deviations, in the same sense that the least squares approach corresponds to the mean squared error.
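A minimal illustration of this correspondence, assuming the simplest possible model (a single constant prediction c for all outcomes): minimizing the mean squared error leads to the sample mean, while minimizing the mean absolute error (least absolute deviations) leads to the sample median. The crude grid search below is a sketch, not a fitting library.

```python
y = [1.0, 2.0, 2.0, 3.0, 10.0]  # outcomes, including one outlying value

def mse(c):
    return sum((c - yi) ** 2 for yi in y) / len(y)

def mae(c):
    return sum(abs(c - yi) for yi in y) / len(y)

# Grid of candidate constant predictions between 0 and 11 in steps of 0.01
candidates = [i / 100 for i in range(0, 1101)]
best_mse = min(candidates, key=mse)  # 3.6, the sample mean
best_mae = min(candidates, key=mae)  # 2.0, the sample median

print(best_mse, best_mae)
```

The outlier pulls the squared-error minimizer toward itself, while the absolute-error minimizer stays at the median, which is one reason the two criteria can lead to different fitted models.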

References

  1. Hyndman, R. and Koehler, A. (2005). "Another look at measures of forecast accuracy".