Hodges' estimator

In statistics, Hodges’ estimator[1] (or the Hodges–Le Cam estimator[2]), named for Joseph Hodges, is a famous[3] counterexample: an estimator that is "superefficient", i.e. attains a smaller asymptotic variance than regular efficient estimators. The existence of such a counterexample is the reason the notion of regular estimators was introduced.

Hodges’ estimator improves upon a regular estimator at a single point. In general, any superefficient estimator may surpass a regular estimator on at most a set of Lebesgue measure zero.[4]

Construction

Suppose \hat\theta_n is a "common" estimator for some parameter θ: it is consistent, and converges to some asymptotic distribution Lθ (usually a normal distribution with mean zero and a variance that may depend on θ) at the √n-rate:


    \sqrt{n}(\hat\theta_n - \theta)\ \xrightarrow{d}\ L_\theta\ .

Then Hodges’ estimator \hat\theta^H_n is defined as[5]


    \hat\theta_n^H = \begin{cases}\hat\theta_n, & \text{if } |\hat\theta_n| \geq n^{-1/4}, \text{ and} \\ 0, & \text{if } |\hat\theta_n| < n^{-1/4}.\end{cases}
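The piecewise definition above can be sketched directly in code. A minimal Python sketch, where the function name `hodges` and the convention of passing the base estimate explicitly are illustrative choices, not from the source:

```python
def hodges(theta_hat, n):
    """Hodges' truncation: keep the base estimate theta_hat (computed
    from n observations) when it is large; replace it by zero when it
    falls inside the shrinking window (-n^(-1/4), n^(-1/4))."""
    return theta_hat if abs(theta_hat) >= n ** (-0.25) else 0.0
```

For n = 10000 the window half-width is 10000^(−1/4) = 0.1, so a base estimate of 0.05 is truncated to zero while a base estimate of 0.5 passes through unchanged.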

This estimator is equal to \hat\theta_n everywhere except on the small interval [−n^{−1/4}, n^{−1/4}], where it is equal to zero. It is not difficult to see that this estimator is consistent for θ, and its asymptotic distribution is[6]

\begin{align}
    & n^\alpha(\hat\theta_n^H - \theta) \ \xrightarrow{d}\ 0, \qquad\text{when } \theta = 0, \\
    &\sqrt{n}(\hat\theta_n^H - \theta)\ \xrightarrow{d}\ L_\theta, \quad \text{when } \theta\neq 0,
  \end{align}

for any α ∈ R. Thus this estimator has the same asymptotic distribution as \hat\theta_n for all θ ≠ 0, whereas for θ = 0 the rate of convergence becomes arbitrarily fast. This estimator is superefficient: it surpasses the asymptotic behavior of the efficient estimator \hat\theta_n at least at one point, θ = 0. In general, superefficiency may only be attained on a measure-zero subset of the parameter space Θ.
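The θ = 0 case is easy to see by simulation: the base estimator concentrates at the n^{−1/2} rate while the truncation window shrinks only at the n^{−1/4} rate, so the estimator is exactly zero with probability tending to one. A hedged sketch, using the sample mean of N(0, 1) data as the base estimator; the function name, `reps`, and `seed` are illustrative choices:

```python
import random

def prob_exactly_zero(n, reps=20000, seed=1):
    """Estimate P(Hodges' estimator == 0) when theta = 0, with the sample
    mean of n standard-normal observations as the base estimator.  The
    mean is drawn directly as N(0, 1/n) rather than averaging n draws."""
    rng = random.Random(seed)
    cutoff = n ** (-0.25)          # truncation window half-width
    hits = sum(abs(rng.gauss(0.0, 1.0) / n ** 0.5) < cutoff
               for _ in range(reps))
    return hits / reps
```

Already at n = 100 the estimator comes out exactly zero in well over 99% of replications, and the fraction approaches 1 as n grows.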

Example

The mean square error (times n) of Hodges’ estimator: the blue curve corresponds to n = 5, the purple to n = 50, and the olive to n = 500.[7]

Suppose x1, …, xn is an iid sample from a normal distribution N(θ, 1) with unknown mean but known variance. Then the common estimator for the population mean θ is the arithmetic mean of all observations, \bar{x}. The corresponding Hodges’ estimator is \hat\theta^H_n \;=\; \bar{x}\cdot\mathbf{1}\{|\bar x|\,\geq\,n^{-1/4}\}, where 1{…} denotes the indicator function.

The mean square error (scaled by n) of the regular estimator \bar{x} is constant and equal to 1 for all θ. At the same time, the mean square error of Hodges’ estimator \hat\theta_n^H behaves erratically in the vicinity of zero, and even becomes unbounded as n → ∞. This demonstrates that Hodges’ estimator is not regular, and its asymptotic properties are not adequately described by limits of the form (θ fixed, n → ∞).
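This behavior can be reproduced with a small Monte Carlo experiment under the same N(θ, 1) model; the function name, `reps`, `seed`, and the evaluation points are illustrative choices, not from the source:

```python
import random

def scaled_mse(theta, n, reps=20000, seed=0):
    """Monte Carlo estimate of n * E[(estimator - theta)^2] for both the
    sample mean and Hodges' estimator.  The sample mean is drawn
    directly as N(theta, 1/n) rather than averaging n observations."""
    rng = random.Random(seed)
    cutoff = n ** (-0.25)
    se_mean = se_hodges = 0.0
    for _ in range(reps):
        xbar = theta + rng.gauss(0.0, 1.0) / n ** 0.5
        h = xbar if abs(xbar) >= cutoff else 0.0   # Hodges truncation
        se_mean += (xbar - theta) ** 2
        se_hodges += (h - theta) ** 2
    return n * se_mean / reps, n * se_hodges / reps
```

With n = 100, the scaled error of the sample mean stays near 1 for every θ, while for Hodges’ estimator it drops far below 1 at θ = 0 but spikes to several times 1 at points such as θ = 0.25, just inside the truncation window.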

References

  • Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya’acov; Wellner, Jon A. (1998). Efficient and Adaptive Estimation for Semiparametric Models. New York: Springer. ISBN 0-387-98473-9.
  • Kale, B. K. (1985). "A note on the super efficient estimator". Journal of Statistical Planning and Inference 12: 259–263. doi:10.1016/0378-3758(85)90074-6.
  • Stoica, P.; Ottersten, B. (1996). "The evil of superefficiency". Signal Processing 55: 133–136. doi:10.1016/S0165-1684(96)00159-4.
  • van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press. ISBN 978-0-521-78450-4.
This article is issued from Wikipedia – version of Monday, December 17, 2012. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.