Bessel's correction

In statistics, Bessel's correction, named after Friedrich Bessel, is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This corrects the bias in the estimation of the population variance, and some (but not all) of the bias in the estimation of the population standard deviation, but often increases the mean squared error in these estimations.

That is, when estimating the population variance and standard deviation from a sample when the population mean is unknown, the sample variance estimated as the mean of the squared deviations of sample values from their mean (that is, using a multiplicative factor 1/n) is a biased estimator of the population variance: on average it underestimates it. Multiplying the sample variance as computed in that fashion by n/(n − 1) (equivalently, using 1/(n − 1) instead of 1/n in the estimator's formula) corrects for this, and gives an unbiased estimator of the population variance. In some terminology,[1][2] the factor n/(n − 1) is itself called Bessel's correction.
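The effect can be seen numerically by simulation. The following is a minimal sketch in Python (the population parameters, sample size, and trial count are arbitrary choices for the demonstration): averaging the two estimators over many samples shows the 1/n version falling short of the true variance by the factor (n − 1)/n, while the 1/(n − 1) version matches it.

import random

random.seed(42)
mu, sigma = 0.0, 3.0     # population parameters, assumed for the demo
n, trials = 5, 200_000   # small samples make the bias easy to see

biased_total = corrected_total = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    ss = sum((x - xbar) ** 2 for x in sample)  # sum of squared residuals
    biased_total += ss / n                     # factor 1/n (biased)
    corrected_total += ss / (n - 1)            # factor 1/(n - 1), Bessel's correction

print(sigma ** 2)                # true variance: 9.0
print(biased_total / trials)     # ~7.2, i.e. 9 * (n - 1)/n
print(corrected_total / trials)  # ~9.0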

One can understand Bessel's correction intuitively as the degrees of freedom in the residuals vector (residuals, not errors, because the population mean is unknown):

(x_1-\overline{x},\,\dots,\,x_n-\overline{x}),

where \overline{x} is the sample mean. While there are n independent samples, there are only n − 1 independent residuals, as they sum to 0.
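For instance, with the five observations used in the worked example below, the residuals are −1, 1, 3, −2, −1: any four of them determine the fifth. A quick check in Python:

sample = [2051, 2053, 2055, 2050, 2051]  # the sample from the example below
xbar = sum(sample) / len(sample)         # 2052.0
residuals = [x - xbar for x in sample]   # [-1.0, 1.0, 3.0, -2.0, -1.0]
print(sum(residuals))                    # 0.0 -- the last residual is always
                                         # determined by the other n - 1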

Caveats

Three caveats must be borne in mind regarding Bessel's correction:

  1. It does not yield an unbiased estimator of standard deviation.
  2. The corrected estimator often has worse (higher) mean squared error (MSE) than the uncorrected estimator, and never has the minimum MSE: a different scale factor can always be chosen to minimize MSE.
  3. It is only necessary when the population mean is unknown (and estimated as the sample mean).

Firstly, while the sample variance (using Bessel's correction) is an unbiased estimator of the population variance, its square root, the sample standard deviation, is a biased estimator of the population standard deviation; because the square root is a concave function, the bias is downward, by Jensen's inequality. There is no general formula for an unbiased estimator of the population standard deviation, though there are correction factors for particular distributions, such as the normal; see unbiased estimation of standard deviation for details. An approximation for the exact correction factor for the normal distribution is given by using n − 1.5 in the formula: with it the bias decays quadratically in n (rather than linearly, as with the uncorrected form and Bessel's corrected form).
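A small Monte Carlo sketch (standard normal population, assumed here purely to exhibit the effect) shows the residual bias of the standard deviation under the three divisors; with n = 5, the divisor n − 1.5 lands much closer to the true value of 1 than either n or n − 1:

import random

random.seed(1)
n, trials = 5, 200_000
totals = {"n": 0.0, "n - 1": 0.0, "n - 1.5": 0.0}

for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]  # true sigma = 1
    xbar = sum(sample) / n
    ss = sum((x - xbar) ** 2 for x in sample)
    totals["n"] += (ss / n) ** 0.5
    totals["n - 1"] += (ss / (n - 1)) ** 0.5
    totals["n - 1.5"] += (ss / (n - 1.5)) ** 0.5

for divisor, total in totals.items():
    print(divisor, total / trials)  # ~0.84, ~0.94, ~1.00 respectively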

Secondly, the unbiased estimator does not minimize mean squared error (MSE), and generally has worse MSE than the uncorrected estimator (how much worse varies with the excess kurtosis). MSE can be minimized by using a different scale factor altogether. The optimal value depends on the excess kurtosis, as discussed in mean squared error: variance; for the normal distribution, MSE is minimized by dividing by n + 1 (instead of n − 1 or n).
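A sketch of the same comparison for the variance itself (normal population assumed; divisors n − 1, n, and n + 1), estimating the mean squared error of each by simulation:

import random

random.seed(7)
n, trials, true_var = 5, 200_000, 1.0
sq_err = {"n - 1": 0.0, "n": 0.0, "n + 1": 0.0}

for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(sample) / n
    ss = sum((x - xbar) ** 2 for x in sample)
    for label, c in (("n - 1", n - 1), ("n", n), ("n + 1", n + 1)):
        sq_err[label] += (ss / c - true_var) ** 2

for label, total in sq_err.items():
    print(label, total / trials)  # ~0.50, ~0.36, ~0.33: n + 1 wins for the normal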

Thirdly, Bessel's correction is only necessary when the population mean is unknown, and one is estimating both population mean and population variance from a given sample set, using the sample mean to estimate the population mean. In that case there are n degrees of freedom in a sample of n points, and simultaneous estimation of mean and variance means one degree of freedom goes to the sample mean and the remaining n − 1 degrees of freedom (the residuals) go to the sample variance. However, if the population mean is known, then the deviations of the samples from the population mean have n degrees of freedom (because the mean is not being estimated – the deviations are not residuals but errors) and Bessel's correction is not applicable.
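In that known-mean case no correction is needed, as a quick simulation suggests (a sketch; normal population assumed):

import random

random.seed(3)
mu, sigma, n, trials = 0.0, 3.0, 5, 200_000

total = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    total += sum((x - mu) ** 2 for x in sample) / n  # errors, not residuals: divide by n
print(total / trials)  # ~9.0 = sigma**2, unbiased with no correction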

Source of bias

Suppose the mean of the whole population is 2050, but the statistician does not know that, and must estimate it based on this small sample chosen randomly from the population:

 2051,\quad 2053,\quad 2055,\quad 2050,\quad 2051 \,

One may compute the sample average:

 \frac{1}{5}\left(2051 + 2053 + 2055 + 2050 + 2051\right) = 2052

This may serve as an observable estimate of the unobservable population average, which is 2050. Now we face the problem of estimating the population variance. That is the average of the squares of the deviations from 2050. If we knew that the population average is 2050, we could proceed as follows:

\begin{align}
   {} & \frac{1}{5}\left[(2051 - 2050)^2 + (2053 - 2050)^2 + (2055 - 2050)^2 + (2050 - 2050)^2 + (2051 - 2050)^2\right] \\
  =\; & \frac{36}{5} = 7.2
\end{align}

But our estimate of the population average is the sample average, 2052, not 2050. Therefore we do what we can:

\begin{align}
   {} & \frac{1}{5}\left[(2051 - 2052)^2 + (2053 - 2052)^2 + (2055 - 2052)^2 + (2050 - 2052)^2 + (2051 - 2052)^2\right] \\
  =\; & \frac{16}{5} = 3.2
\end{align}

This is a substantially smaller estimate. Now a question arises: is the estimate of the population variance that arises in this way using the sample mean always smaller than what we would get if we used the population mean? The answer is yes except when the sample mean happens to be the same as the population mean.
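The whole example can be reproduced in a few lines (a sketch using the numbers above):

sample = [2051, 2053, 2055, 2050, 2051]
pop_mean = 2050                   # known to us, but not to the statistician
xbar = sum(sample) / len(sample)  # 2052.0

print(sum((x - pop_mean) ** 2 for x in sample) / len(sample))  # 7.2
print(sum((x - xbar) ** 2 for x in sample) / len(sample))      # 3.2, smaller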

We are seeking the sum of squared deviations from the population mean, but end up calculating the sum of squared deviations from the sample mean, which, as will be seen, is the point that minimizes that sum of squared deviations. So unless the sample happens to have the same mean as the population, this calculation will always underestimate the sum of squared deviations from the population mean.
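That minimizing property is easy to check directly; scanning candidate centers around the sample shows the sum of squares bottoming out exactly at the sample mean (a sketch):

sample = [2051, 2053, 2055, 2050, 2051]

def sum_sq(center):
    return sum((x - center) ** 2 for x in sample)

best = min(range(2045, 2060), key=sum_sq)
print(best, sum_sq(best), sum_sq(2050))  # 2052 16 36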

To see why this happens, we use a simple identity in algebra:

(a + b)^2 = a^2 + 2ab + b^2\,

Here a represents the deviation of an individual sample from the sample mean, and b represents the deviation of the sample mean from the population mean. Note that we have simply decomposed the actual deviation of each sample from the (unknown) population mean into two components: the deviation from the sample mean, which we can compute, and the additional deviation from the sample mean to the population mean, which we cannot. Now apply that identity to the squares of deviations from the population mean:

\begin{align}
  {[}\,\underbrace{2053 - 2050}_{\begin{smallmatrix} \text{Deviation from} \\  \text{the population} \\  \text{mean} \end{smallmatrix}}\,]^2 & = [\,\overbrace{(\,\underbrace{2053 - 2052}_{\begin{smallmatrix} \text{Deviation from} \\ \text{the sample mean} \end{smallmatrix}}\,)}^{\text{This is }a.} + \overbrace{(2052 - 2050)}^{\text{This is }b.}\,]^2 \\
  & = \overbrace{(2053 - 2052)^2}^{\text{This is }a^2.} + \overbrace{2(2053 - 2052)(2052 - 2050)}^{\text{This is }2ab.} + \overbrace{(2052 - 2050)^2}^{\text{This is }b^2.}
\end{align}

Now apply this to all five observations and observe certain patterns:

\begin{alignat}{2}
  \overbrace{(2051 - 2052)^2}^{\text{This is }a^2.}\  &+\  \overbrace{2(2051 - 2052)(2052 - 2050)}^{\text{This is }2ab.}\  &&+\  \overbrace{(2052 - 2050)^2}^{\text{This is }b^2.} \\
  (2053 - 2052)^2\  &+\  2(2053 - 2052)(2052 - 2050)\  &&+\  (2052 - 2050)^2 \\
  (2055 - 2052)^2\  &+\  2(2055 - 2052)(2052 - 2050)\  &&+\  (2052 - 2050)^2 \\
  (2050 - 2052)^2\  &+\  2(2050 - 2052)(2052 - 2050)\  &&+\  (2052 - 2050)^2 \\
  (2051 - 2052)^2\  &+\  \underbrace{2(2051 - 2052)(2052 - 2050)}_{\begin{smallmatrix} \text{The sum of the entries in this} \\  \text{middle column must be 0.} \end{smallmatrix}}\ &&+\  (2052 - 2050)^2
\end{alignat}

The sum of the entries in the middle column must be zero because the sum of the deviations from the sample average must be zero: the a terms are the five residuals, which sum to 0, and the common factor 2b can be pulled out. When the middle column has vanished, we then observe that:

  1. The sum of the entries in the first column (a²) is the sum of the squares of the deviations from the sample mean;
  2. The sum of all the entries in the first and third columns together (a² and b²) is the sum of the squares of the deviations from the population mean, because of the way we started;
  3. The sum of the entries in the first column can never exceed the total, since the entries in the third column are squares and hence non-negative.

Therefore:

The sum of squares of the deviations from the population mean is always at least as large as the sum of squares of the deviations from the sample mean, with equality only when the sample mean coincides with the population mean. In this example the five rows add up to

\sum_{i=1}^5 (x_i - 2050)^2 = \sum_{i=1}^5 (x_i - 2052)^2 + 5\,(2052 - 2050)^2,

that is, 36 = 16 + 20.

That is why the sum of squares of the deviations from the sample mean is too small to give an unbiased estimate of the population variance when the average of those squares is found.
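The decomposition can also be verified mechanically (a sketch, again with the sample above):

sample = [2051, 2053, 2055, 2050, 2051]
pop_mean = 2050
xbar = sum(sample) / len(sample)  # 2052.0

lhs = sum((x - pop_mean) ** 2 for x in sample)  # 36: deviations from population mean
rhs = (sum((x - xbar) ** 2 for x in sample)     # 16: deviations from sample mean
       + len(sample) * (xbar - pop_mean) ** 2)  # + 20: the n*b^2 term
print(lhs, rhs)  # 36 36.0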

Terminology

This correction is so common that the terms "sample variance" and "sample standard deviation" are frequently used to mean the corrected estimators (the unbiased sample variance and the less biased sample standard deviation), using n − 1. However, caution is needed: some calculators and software packages may provide both, or only the less usual (uncorrected) formulation. This article uses the following symbols and definitions:

μ is the population mean
\overline{x}\, is the sample mean
σ² is the population variance
s_n² is the biased sample variance (i.e. without Bessel's correction)
s² is the unbiased sample variance (i.e. with Bessel's correction)

The standard deviations will then be the square roots of the respective variances. Since the square root introduces bias, the terminology "uncorrected" and "corrected" is preferred for the standard deviation estimators:

s_n is the uncorrected sample standard deviation (i.e. without Bessel's correction)
s is the corrected sample standard deviation (i.e. with Bessel's correction), which is less biased, but still biased

Formula

The sample mean is given by

\overline{x}=\frac{1}{n}\sum_{i=1}^n x_i.

The biased sample variance is then written:

s_n^2 = \frac{1}{n} \sum_{i=1}^n \left(x_i - \overline{x}\right)^2 = \frac{\sum_{i=1}^n x_i^2}{n} - \frac{\left(\sum_{i=1}^n x_i\right)^2}{n^2}

and the unbiased sample variance is written:

s^2 = \frac{1}{n-1} \sum_{i=1}^n \left(x_i - \overline{x}\right)^2 = \frac{\sum_{i=1}^n x_i^2}{n-1} - \frac{\left(\sum_{i=1}^n x_i\right)^2}{(n-1)n} = \left(\frac{n}{n-1}\right) s_n^2.
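Both estimators are straightforward to implement; the following is a sketch (the Python standard library's statistics.pvariance and statistics.variance compute the same two quantities, and are used here as a cross-check):

import statistics

def biased_sample_variance(xs):
    # s_n^2: divide the sum of squared residuals by n
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / n

def unbiased_sample_variance(xs):
    # s^2 = n/(n - 1) * s_n^2, i.e. Bessel's correction
    n = len(xs)
    return biased_sample_variance(xs) * n / (n - 1)

xs = [2051, 2053, 2055, 2050, 2051]
print(biased_sample_variance(xs), statistics.pvariance(xs))    # both 3.2
print(unbiased_sample_variance(xs), statistics.variance(xs))   # both 4 (= 16/4)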

Proof of correctness

Write each deviation from the sample mean in terms of deviations from the population mean. Expanding the square and using \sum_{i=1}^n (x_i - \mu) = n(\overline{x} - \mu) gives the identity

\sum_{i=1}^n \left(x_i - \overline{x}\right)^2 = \sum_{i=1}^n \left(x_i - \mu\right)^2 - n\left(\overline{x} - \mu\right)^2.

Taking expectations, E\left[(x_i - \mu)^2\right] = \sigma^2 for each observation, and for independent observations E\left[(\overline{x} - \mu)^2\right] = \operatorname{Var}(\overline{x}) = \sigma^2/n, so

E\left[\sum_{i=1}^n \left(x_i - \overline{x}\right)^2\right] = n\sigma^2 - n \cdot \frac{\sigma^2}{n} = (n-1)\sigma^2.

Dividing the sum of squared residuals by n therefore yields an estimator with expected value \frac{n-1}{n}\sigma^2, which is biased low, while dividing by n − 1 yields expected value exactly \sigma^2, as claimed.

Notes

  1. Reichmann, W.J. (1961) Use and abuse of statistics, Methuen. Reprinted 1964–1970 by Pelican. Appendix 8.
  2. Upton, G.; Cook, I. (2008) Oxford Dictionary of Statistics, OUP. ISBN 978-0-19-954145-4 (entry for "Variance (data)")
