Stein's unbiased risk estimate

In statistics, Stein's unbiased risk estimate (SURE) is an unbiased estimator of the mean-squared error of "a nearly arbitrary, nonlinear biased estimator."[1] In other words, it provides an indication of the accuracy of a given estimator. This is important since the true mean-squared error of an estimator is a function of the unknown parameter to be estimated, and thus cannot be determined exactly.

The technique is named after its discoverer, Charles Stein.[2]

Formal statement

Let \mu \in {\mathbb R}^d be an unknown parameter and let x \in {\mathbb R}^d be a measurement vector whose components x_i are independent and normally distributed with mean \mu_i and common variance \sigma^2. Suppose h(x) is an estimator of \mu based on x that can be written as h(x) = x + g(x), where g is weakly differentiable. Then Stein's unbiased risk estimate is given by[3]

\mathrm{SURE}(h) = d\sigma^2 + \|g(x)\|^2 + 2 \sigma^2 \sum_{i=1}^d \frac{\partial}{\partial x_i} g_i(x),

where g_i(x) is the ith component of the function g(x), and \|\cdot\| is the Euclidean norm.
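As a concrete illustration, consider the linear shrinkage estimator h(x) = a x, so that g(x) = (a - 1) x and each partial derivative \frac{\partial}{\partial x_i} g_i(x) equals a - 1. The following minimal Python sketch (not from the cited sources; numpy is assumed, and the function name is illustrative) evaluates SURE for this estimator:

import numpy as np

def sure_linear_shrinkage(x, a, sigma):
    # SURE for the linear shrinkage estimator h(x) = a * x.
    # Here g(x) = h(x) - x = (a - 1) * x, so the divergence term
    # sum_i dg_i/dx_i equals d * (a - 1).
    d = x.size
    g = (a - 1.0) * x
    divergence = d * (a - 1.0)
    return d * sigma**2 + np.sum(g**2) + 2.0 * sigma**2 * divergence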

The importance of SURE is that it is an unbiased estimate of the mean-squared error (or squared error risk) of h(x), i.e.

E_\mu \{ \mathrm{SURE}(h) \} = \mathrm{MSE}(h),

with

\mathrm{MSE}(h) = E_\mu \|h(x)-\mu\|^2.

Thus, minimizing SURE can act as a surrogate for minimizing the MSE. Note that the expression for SURE above does not depend on the unknown parameter \mu; it can therefore be evaluated and optimized (e.g., to choose estimation parameters) without knowledge of \mu.
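The unbiasedness can also be checked numerically. The following sketch (again assuming numpy, and reusing the illustrative sure_linear_shrinkage function above) averages SURE over many simulated measurements and compares it with the empirical mean-squared error:

import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5, 3.0])   # the "unknown" parameter, known here only to simulate data
sigma, a = 1.0, 0.7
sure_values, squared_errors = [], []
for _ in range(100000):
    x = mu + sigma * rng.standard_normal(mu.size)
    sure_values.append(sure_linear_shrinkage(x, a, sigma))
    squared_errors.append(np.sum((a * x - mu) ** 2))
# The two averages should agree up to Monte Carlo error.
print(np.mean(sure_values), np.mean(squared_errors))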

Proof

We wish to show that

E_\mu \|h(x)-\mu\|^2 = E_\mu \{ \mathrm{SURE}(h) \} .

We start by expanding the MSE as

\begin{align} E_\mu \| h(x) - \mu\|^2 & = E_\mu \|g(x) + x - \mu\|^2 \\
                                                                           & = E_\mu \|g(x)\|^2 + E_\mu \|x - \mu\|^2 + 2 E_\mu g(x)^T (x - \mu) \\
                                                                           & =  E_\mu \|g(x)\|^2 + d \sigma^2 + 2 E_\mu g(x)^T(x - \mu).
\end{align}

Now we use integration by parts to rewrite the last term. The key observation is that, for each coordinate,

(x_i - \mu_i) \exp\left(-\frac{\|x - \mu\|^2}{2 \sigma^2} \right) = -\sigma^2 \frac{\partial}{\partial x_i} \exp\left(-\frac{\|x - \mu\|^2}{2 \sigma^2} \right),

so the factor (x_i - \mu_i) can be traded for a derivative of g_i by integration by parts (the boundary terms vanish because the Gaussian density decays rapidly):


\begin{align}
E_\mu g(x)^T(x - \mu) & = \int_{{\mathbb R}^d} \frac{1}{(2 \pi \sigma^2)^{d/2}} \exp\left(-\frac{\|x - \mu\|^2}{2 \sigma^2} \right) \sum_{i=1}^d g_i(x) (x_i - \mu_i) \, d^d x \\
& = \sigma^2 \sum_{i=1}^d \int_{{\mathbb R}^d} \frac{1}{(2 \pi \sigma^2)^{d/2}} \exp\left(-\frac{\|x - \mu\|^2}{2 \sigma^2} \right) \frac{\partial g_i}{\partial x_i} \, d^d x \\
& = \sigma^2 \sum_{i=1}^d E_\mu \frac{\partial g_i}{\partial x_i}.
\end{align}

Substituting this into the expression for the MSE, we arrive at

E_\mu \|h(x) - \mu\|^2 = E_\mu \left( d\sigma^2 + \|g(x)\|^2 + 2\sigma^2 \sum_{i=1}^d \frac{\partial g_i}{\partial x_i}\right).

Applications

A standard application of SURE is to choose a parametric form for an estimator and then optimize the values of the parameters to minimize the risk estimate. This technique has been applied in several settings. For example, a variant of the James–Stein estimator can be derived by finding the optimal shrinkage estimator.[2] The technique has also been used by Donoho and Johnstone to select the optimal shrinkage threshold in a wavelet denoising setting.[1]
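As a sketch of the latter idea (assuming numpy; the function names are illustrative and the threshold search is simplified relative to the cited paper), SURE for soft thresholding has a closed form, and a threshold can be chosen by minimizing it over the observed magnitudes:

import numpy as np

def sure_soft_threshold(x, t, sigma=1.0):
    # SURE for soft thresholding, h(x)_i = sign(x_i) * max(|x_i| - t, 0).
    # Here g(x)_i = -sign(x_i) * min(|x_i|, t), so ||g(x)||^2 = sum_i min(x_i^2, t^2)
    # and the divergence term is minus the number of coordinates with |x_i| <= t.
    d = x.size
    return (d * sigma**2
            + np.sum(np.minimum(x**2, t**2))
            - 2.0 * sigma**2 * np.count_nonzero(np.abs(x) <= t))

def sure_threshold(x, sigma=1.0):
    # Choose the threshold by minimizing SURE over the candidate values |x_i|.
    candidates = np.abs(x)
    risks = np.array([sure_soft_threshold(x, t, sigma) for t in candidates])
    return candidates[np.argmin(risks)]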

References

  1. Donoho, David L.; Johnstone, Iain M. (December 1995). "Adapting to Unknown Smoothness via Wavelet Shrinkage". Journal of the American Statistical Association 90 (432): 1200–1244. doi:10.2307/2291512. JSTOR 2291512.
  2. Stein, Charles M. (November 1981). "Estimation of the Mean of a Multivariate Normal Distribution". The Annals of Statistics 9 (6): 1135–1151. doi:10.1214/aos/1176345632. JSTOR 2240405.
  3. Wasserman, Larry (2005). All of Nonparametric Statistics.