Asymptotic theory (statistics)

In statistics, asymptotic theory, or large sample theory, is a framework for assessing the properties of estimators and statistical tests. Within this framework it is typically assumed that the sample size n grows indefinitely, and the properties of statistical procedures are evaluated in the limit as n → ∞.

In practical applications, asymptotic theory is applied by treating the asymptotic results as approximately valid for finite sample sizes as well. Such an approach is often criticized for lacking a rigorous mathematical justification, yet it is used ubiquitously anyway. The importance of asymptotic theory is that it often makes it possible to carry out the analysis and state many results that cannot be obtained within the standard “finite-sample theory”.

Overview

Most statistical problems begin with a dataset of size n. Asymptotic theory proceeds by assuming that it is possible to keep collecting additional data, so that the sample size grows without bound:


    n \to \infty

Under this assumption many results can be obtained that are unavailable for samples of finite size. As an example, consider the law of large numbers. This law states that for a sequence of iid random variables X1, X2, …, the sample averages \overline{X}_n converge in probability to the population mean E[Xi] as n → ∞. At the same time, for finite n it is impossible to claim anything about the distribution of \overline{X}_n if the distributions of the individual Xi’s are unknown.
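
As a concrete illustration, the following minimal sketch simulates this convergence with NumPy; the exponential distribution with mean 2 and the particular sample sizes are arbitrary choices made for the example, not part of the theory.

    import numpy as np

    rng = np.random.default_rng(0)
    true_mean = 2.0  # E[X_i] for the chosen exponential distribution

    for n in (10, 1_000, 100_000, 10_000_000):
        sample = rng.exponential(scale=true_mean, size=n)
        x_bar = sample.mean()  # the sample average, \overline{X}_n
        # The error shrinks (in probability) as n grows, per the law of large numbers.
        print(f"n = {n:>10,d}   average = {x_bar:.4f}   |error| = {abs(x_bar - true_mean):.4f}")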

For various models, slightly different modes of asymptotics may be used:

Besides these standard approaches, various other “alternative” asymptotic approaches exist:

Modes of convergence of random variables

Further information: Convergence of random variables

Asymptotic properties

Estimators


A sequence of estimators \hat\theta_n is said to be consistent if it converges in probability to the true value of the parameter being estimated:

    \hat\theta_n\ \xrightarrow{p}\ \theta_0

Generally an estimator is just some, more or less arbitrary, function of the data. The property of consistency requires that the estimator converges to the quantity we intended it to estimate. As such, it is the most important property in estimation theory: estimators that are known to be inconsistent are never used in practice.
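
For illustration only, the sketch below checks consistency numerically for one standard consistent estimator, the sample maximum of Uniform(0, θ0) observations; this model and the value θ0 = 3 are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    theta_0 = 3.0  # true parameter of the assumed Uniform(0, theta_0) model

    for n in (10, 1_000, 100_000):
        x = rng.uniform(0.0, theta_0, size=n)
        theta_hat = x.max()  # \hat\theta_n: the sample maximum
        # The gap to theta_0 shrinks toward zero as n grows, i.e. the estimator is consistent.
        print(f"n = {n:>7,d}   theta_hat = {theta_hat:.5f}   gap to theta_0 = {theta_0 - theta_hat:.5f}")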


If sequences of constants {an} and {bn}, and a non-degenerate distribution G, can be found such that

    b_n(\hat\theta_n - a_n)\ \xrightarrow{d}\ G ,

then the sequence of estimators \hat\theta_n is said to have the asymptotic distribution G.

Most often, the estimators encountered in practice are asymptotically normal, with an = θ0, bn = √n, and G = N(0, V):


    \sqrt{n}(\hat\theta_n - \theta_0)\ \xrightarrow{d}\ \mathcal{N}(0, V).
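
The following sketch illustrates this for the sample mean of exponential data, where θ0 = 2 and V = Var(Xi) = 4; the distribution, sample size, and number of replications are illustrative assumptions, not part of the general statement.

    import numpy as np

    rng = np.random.default_rng(2)
    theta_0 = 2.0            # population mean of the exponential data
    V = theta_0 ** 2         # Var(X_i) = 4 for this distribution
    n, reps = 2_000, 5_000   # sample size and number of simulated datasets

    # Each row is one dataset; \hat\theta_n is its sample mean.
    x_bars = rng.exponential(scale=theta_0, size=(reps, n)).mean(axis=1)
    z = np.sqrt(n) * (x_bars - theta_0)  # \sqrt{n}(\hat\theta_n - \theta_0)

    print("empirical mean of z      :", round(float(z.mean()), 3))  # close to 0
    print("empirical variance of z  :", round(float(z.var()), 3))   # close to V = 4
    print("P(|z| <= 1.96 * sqrt(V)) :", np.mean(np.abs(z) <= 1.96 * np.sqrt(V)))  # about 0.95 under N(0, V)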

Asymptotic theorems

Notes
