Standardized moment

In probability theory and statistics, the standardized moment of a probability distribution is a moment (normally a higher-degree central moment) that is normalized. The normalization is typically a division by a power of the standard deviation, which, together with the use of central moments, renders the moment invariant to location (level) and scale. This has the advantage that such normalized moments differ only in properties other than location and variability, facilitating, for example, comparison of the shape of different probability distributions.[1]


Standard normalization

Let X be a random variable with a probability distribution P and mean value \mu = \operatorname{E}[X] (i.e. the first raw moment, or moment about zero), the operator E denoting the expected value of X. Then the standardized moment of degree k is \frac{\mu_k}{\sigma^k},[2] that is, the ratio of the kth moment about the mean


\mu_k = \operatorname{E} \left[ ( X - \mu )^k \right] = \int_{-\infty}^{+\infty} (x - \mu)^k P(x)\,\mathrm{d} x ,

and the standard deviation to the power of k

\sigma^k = \Bigl(\sqrt{\operatorname{E}[(X - \mu)^2]}\Bigr)^k .

The power of k in the denominator is needed because moments scale as x^k, meaning that \mu_k(\lambda X) = \lambda^k \mu_k(X): central moments are homogeneous functions of degree k, so the ratio \mu_k / \sigma^k is scale invariant. This can also be understood dimensionally: in the above ratio defining standardized moments the dimensions cancel, so they are dimensionless numbers.
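As a quick numerical check of this homogeneity (a sketch added for illustration, not part of the source; the sample and variable names are arbitrary), in Python with NumPy:

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)     # a skewed sample (illustrative choice)
lam = 5.0
y = lam * x

m3_x = np.mean((x - x.mean()) ** 3)              # third central moment of X
m3_y = np.mean((y - y.mean()) ** 3)              # third central moment of lam*X
print(m3_y / m3_x)                               # lam**3 = 125: homogeneity of degree 3
print(m3_x / x.std() ** 3, m3_y / y.std() ** 3)  # third standardized moments: equal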

The first four standardized moments can be written as:

Degree 1:
\hat{\mu}_1 = \frac{\mu_1}{\sigma^1} = \frac{\operatorname{E} \left[ ( X - \mu )^1 \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{1/2}} = \frac{\mu - \mu}{\sqrt{ \operatorname{E} \left[ ( X - \mu )^2 \right]}} = 0
The first standardized moment is zero, because the first moment about the mean is always zero.

Degree 2:
\hat{\mu}_2 = \frac{\mu_2}{\sigma^2} = \frac{\operatorname{E} \left[ ( X - \mu )^2 \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{2/2}} = 1
The second standardized moment is one, because the second moment about the mean is equal to the variance \sigma^2.

Degree 3:
\hat{\mu}_3 = \frac{\mu_3}{\sigma^3} = \frac{\operatorname{E} \left[ ( X - \mu )^3 \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{3/2}}
The third standardized moment is a measure of skewness.

Degree 4:
\hat{\mu}_4 = \frac{\mu_4}{\sigma^4} = \frac{\operatorname{E} \left[ ( X - \mu )^4 \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{4/2}}
The fourth standardized moment is a measure of kurtosis.

Note that alternative definitions of skewness and kurtosis exist, based on the third and fourth cumulant respectively.
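For instance, the cumulant-based kurtosis is the excess kurtosis \mu_4 / \sigma^4 - 3, while the cumulant-based skewness coincides with the third standardized moment. A minimal Python sketch using scipy.stats (the sample is illustrative) shows the difference:

import numpy as np
from scipy.stats import kurtosis, skew

x = np.random.default_rng(1).normal(size=100_000)

print(kurtosis(x, fisher=False))  # fourth standardized moment (Pearson kurtosis), ~3 here
print(kurtosis(x, fisher=True))   # cumulant-based (excess) kurtosis: previous value minus 3, ~0
print(skew(x))                    # third standardized moment, ~0 for a normal sample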

The kth standardized moment may be generalized as:


\hat{\mu}_k = \frac{\mu_k}{\sigma^k} = \frac{\operatorname{E} \left[ ( X - \mu )^k \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{k/2}}
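A short Python sketch of this formula (an illustration added here; the function name is arbitrary and the plain sample moments are the usual biased estimators):

import numpy as np

def standardized_moment(x, k):
    """Sample k-th standardized moment: mu_k / sigma**k."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    sigma = np.sqrt(np.mean(d ** 2))
    return np.mean(d ** k) / sigma ** k

x = np.random.default_rng(2).gamma(shape=2.0, size=100_000)
print(standardized_moment(x, 1))  # zero up to rounding
print(standardized_moment(x, 2))  # exactly 1 by construction
print(standardized_moment(x, 3))  # sample skewness
print(standardized_moment(x, 4))  # sample kurtosis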

Other normalizations

For more details on this topic, see Normalization (statistics).

Another scale invariant, dimensionless measure for characteristics of a distribution is the coefficient of variation, \frac{\sigma}{\mu}. However, this is not a standardized moment, firstly because it is a reciprocal, and secondly because \mu is the first moment about zero (the mean), not the first moment about the mean (which is zero).
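To make the contrast concrete (an added illustration, not from the source), compare the two quantities in Python:

import numpy as np

x = np.random.default_rng(3).gamma(shape=4.0, size=100_000)
print(x.std() / x.mean())               # coefficient of variation: sigma over the raw mean
print(np.mean(x - x.mean()) / x.std())  # first standardized moment: zero up to rounding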

See Normalization (statistics) for further normalizing ratios.

References

  1. Ramsey, James Bernard; Newton, H. Joseph; Harvill, Jane L. (2002-01-01). "CHAPTER 4 MOMENTS AND THE SHAPE OF HISTOGRAMS". The Elements of Statistics: With Applications to Economics and the Social Sciences. Duxbury/Thomson Learning. p. 96. ISBN 9780534371111.
  2. Weisstein, Eric W. "Standardized Moment". mathworld.wolfram.com. Retrieved 2016-03-30.