Estimating equations

In statistics, the method of estimating equations is a way of specifying how the parameters of a statistical model should be estimated. It can be thought of as a generalisation of many classical methods of estimation, including the method of moments, least squares, and maximum likelihood, as well as of some more recent methods such as M-estimation.

The basis of the method is to have, or to find, a set of simultaneous equations involving both the sample data and the unknown model parameters; solving these equations defines the estimates of the parameters.[1] Various components of the equations are defined in terms of the set of observed data on which the estimates are to be based.
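In the one-parameter case this amounts to finding a root of a single equation in the unknown parameter. As a minimal illustrative sketch (not part of the original article), the following Python code solves a generic estimating equation g(theta; data) = 0 numerically; the function name, bracketing interval, and data are hypothetical:

    import numpy as np
    from scipy.optimize import brentq

    def solve_estimating_equation(g, data, lo, hi):
        # Find theta in [lo, hi] with g(theta, data) = 0 by root bracketing.
        return brentq(lambda theta: g(theta, data), lo, hi)

    # Hypothetical example: equate the sample mean to the population mean
    # 1/lambda of an exponential distribution (the first equation used below).
    data = np.array([0.5, 1.2, 0.3, 2.1, 0.8])
    g = lambda lam, x: x.mean() - 1.0 / lam
    lam_hat = solve_estimating_equation(g, data, 1e-6, 100.0)  # approx. 1/mean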

Important examples of estimating equations are the likelihood equations.
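In general, for a model with density f(x;\theta), \theta = (\theta_1, \dots, \theta_p), and independent observations x_1, \dots, x_n, the likelihood equations set each partial derivative of the log-likelihood to zero:

\frac{\partial}{\partial \theta_j} \sum_{i=1}^{n} \ln f(x_i;\theta) = 0, \qquad j = 1, \dots, p.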

Examples

Consider the problem of estimating the rate parameter, \lambda, of the exponential distribution, which has the probability density function:


f(x;\lambda) = \begin{cases}
\lambda e^{-\lambda x}, & x \ge 0, \\
0, & x < 0.
\end{cases}

Suppose that a sample of data is available from which either the sample mean, \bar{x}, or the sample median, m, can be calculated. Then an estimating equation based on the mean is

\bar{x}=\lambda^{-1},

while the estimating equation based on the median is

m = \lambda^{-1} \ln 2.

Each of these equations is derived by equating a sample statistic to the corresponding theoretical (population) value. In each case the sample statistic is a consistent estimator of the population value, which provides an intuitive justification for this type of approach to estimation.
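Both estimating equations have closed-form solutions, \hat{\lambda} = 1/\bar{x} and \hat{\lambda} = (\ln 2)/m. As a brief illustrative sketch (not part of the original article), the following Python code compares the two estimators on simulated data; the sample size and true rate are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    true_lambda = 2.5
    x = rng.exponential(scale=1.0 / true_lambda, size=10_000)

    lam_from_mean = 1.0 / x.mean()               # solves  x_bar = 1/lambda
    lam_from_median = np.log(2) / np.median(x)   # solves  m = (ln 2)/lambda
    # By consistency, both estimates should lie close to true_lambda = 2.5
    # for large samples.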

References

  1. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9.