Well-behaved statistic

A well-behaved statistic is a term sometimes used in the theory of statistics to describe part of a procedure. This usage is broadly similar to the use of well-behaved in more general mathematics. It is essentially shorthand for an assumption about the formulation of an estimation procedure (which entails the specification of an estimator or statistic), used to avoid spelling out in detail exactly which conditions need to hold. In particular, it means that the statistic is not an unusual one in the context being studied. For this reason, the meaning attributed to a well-behaved statistic may vary from context to context.

The present article is mainly concerned with the context of data-mining procedures applied to statistical inference and, in particular, with the group of computationally intensive procedures that have been called algorithmic inference.

Algorithmic inference

Main article: Algorithmic inference

In algorithmic inference, the property of a statistic that is of most relevance is the pivoting step, which allows the transfer of probability considerations from the sample distribution to the distribution of the parameters representing the population distribution, in such a way that the conclusion of this statistical inference step is compatible with the sample actually observed.

By default, capital letters (such as U, X) denote random variables, small letters (u, x) their corresponding realizations, and gothic letters (such as \mathfrak U, \mathfrak X) the domains where the variables take values. Facing a sample \boldsymbol x=\{x_1,\ldots,x_m\}, given a sampling mechanism (g_\theta,Z) for the random variable X, with \theta scalar, we have

\boldsymbol x=\{g_\theta(z_1),\ldots,g_\theta(z_m)\}.

Given the sampling mechanism (g_\theta,Z), the statistic s, as a function \rho of \{x_1,\ldots,x_m\} with specifications in \mathfrak S, has an explaining function defined by the master equation:

s=\rho(x_1,\ldots,x_m)=\rho(g_\theta(z_1),\ldots,g_\theta(z_m))=h(\theta,z_1,\ldots,z_m),\qquad\qquad\qquad (1)

for suitable seeds \boldsymbol z=\{z_1,\ldots,z_m\} and parameter \theta.
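
As a minimal sketch of these definitions (assuming Python and the exponential explaining function g_\lambda(u)=-\log u/\lambda used in the Example section below; all variable names are illustrative), the following code draws a sample through a sampling mechanism and verifies that the two sides of the master equation coincide:

```python
import math
import random

def g(u, lam):
    """Explaining function of the negative exponential distribution:
    maps a uniform seed u in (0, 1] to a realization of X."""
    return -math.log(u) / lam

lam = 2.0                                        # the parameter theta (known here only for the demo)
z = [1.0 - random.random() for _ in range(10)]   # seeds z_1, ..., z_m ~ U(0, 1]
x = [g(u, lam) for u in z]                       # sample x = {g_theta(z_1), ..., g_theta(z_m)}

# Master equation (1): rho(x_1, ..., x_m) = h(theta, z_1, ..., z_m).
s_from_sample = sum(x)                                # rho(x_1, ..., x_m)
s_from_seeds = -sum(math.log(u) for u in z) / lam     # h(lambda, z_1, ..., z_m)
assert abs(s_from_sample - s_from_seeds) < 1e-9
```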

Well-behaved

In order to derive the distribution law of the parameter T, compatible with \boldsymbol x, the statistic must obey some technical properties. Namely, a statistic s is said to be well-behaved if it satisfies the following three statements:

  1. monotonicity. A uniformly monotone relation exists between s and \theta for any fixed seed \{z_1,\ldots,z_m\} – so as to have a unique solution of (1);
  2. well-defined. For each observed s, the statistic is well defined for every value of \theta, i.e. any sample specification \{x_1,\ldots,x_m\}\in\mathfrak X^m such that \rho(x_1,\ldots,x_m)=s has a probability density different from 0 – so as to avoid considering a non-surjective mapping from \mathfrak X^m to \mathfrak S, i.e. associating through s, to a sample \{x_1,\ldots,x_m\}, a \theta that could not have generated the sample itself;
  3. local sufficiency. \{\breve\theta_1,\ldots, \breve\theta_N\} constitutes a true T sample for the observed s, so that the same probability distribution can be attributed to each sampled value. Now, \breve\theta_j= h^{-1}(s,\breve z_1^j, \ldots,\breve z_m^j) is a solution of (1) with the seed \{\breve z_1^j,\ldots,\breve z_m^j\}. Since the seeds are equally distributed, the sole caveat comes from their independence or, conversely, from their dependence on \theta itself. This check can be restricted to the seeds involved in s, i.e. this drawback can be avoided by requiring that the distribution of \{Z_1,\ldots,Z_m|S=s\} is independent of \theta. An easy way to check this property is by mapping seed specifications into x_i specifications. The mapping of course depends on \theta, but the distribution of \{X_1, \ldots,X_m|S=s\} will not depend on \theta if the above seed independence holds – a condition that looks like a local sufficiency of the statistic S. A resampling scheme based on these properties is sketched after this list.
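
Under these three properties, the parameter distribution compatible with an observed s can be sampled by repeatedly drawing seed vectors and solving (1). The following is a hedged sketch instantiating this for the exponential statistic s=\sum_{i=1}^m x_i discussed in the Example section below; the observed value and all names are illustrative assumptions:

```python
import math
import random

def lambda_breve(s, seeds):
    """Solve master equation (1) for the exponential case:
    s = -sum(log z_i) / lambda  =>  lambda = -sum(log z_i) / s.
    The relation is monotone in lambda, so the solution is unique
    (property 1)."""
    return -sum(math.log(z) for z in seeds) / s

s_observed = 4.2      # an illustrative observed value of s = sum(x_i)
m, N = 10, 10_000     # sample size and number of compatible parameters to draw

# A "true T sample" {theta_1, ..., theta_N}: one solution of (1)
# per independently drawn seed vector.
lam_sample = [lambda_breve(s_observed,
                           [1.0 - random.random() for _ in range(m)])
              for _ in range(N)]
# lam_sample now approximates the distribution of the parameter
# compatible with the observed statistic.
```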

Example

For instance, for both the Bernoulli distribution with parameter p and the exponential distribution with parameter \lambda, the statistic \sum_{i=1}^m x_i is well-behaved. The satisfaction of the above three properties is straightforward when looking at both explaining functions: g_p(u)=1 if u\leq p, 0 otherwise, in the case of the Bernoulli random variable, and g_\lambda(u)=-\log u/\lambda for the exponential random variable, giving rise to the statistics

s_p=\sum_{i=1}^m I_{[0,p]}(u_i)

and

s_\lambda=-\frac{1}{\lambda}\sum_{i=1}^m \log u_i.

Vice versa, in the case of X following a continuous uniform distribution on [0,A], the same statistic \sum_{i=1}^m x_i does not meet the second requirement. For instance, the observed sample \{c,c/2,c/3\} gives s'_A=11c/6. But the explaining function of this X is g_a(u)=ua, so the master equation s_A=\sum_{i=1}^m u_i a would produce, with a U sample \{0.8, 0.8, 0.8\}, the solution \breve a=0.76 c. This conflicts with the observed sample, since the first observed value c would be greater than \breve a, the right extreme of the range of X. The statistic s_A=\max\{x_1,\ldots,x_m\}, on the contrary, is well-behaved in this case.
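
The conflict is easy to verify numerically; the following snippet reproduces the computation above, taking c = 1 as an illustrative choice (any positive c behaves the same):

```python
c = 1.0
x = [c, c / 2, c / 3]
s = sum(x)                 # s'_A = 11c/6

u = [0.8, 0.8, 0.8]        # the U sample considered above
a_breve = s / sum(u)       # solves sum(u_i) * a = s, giving about 0.76c

assert max(x) > a_breve    # the observation c exceeds the inferred range [0, a_breve]

# With s_A = max{x_i} the conflict cannot arise: solving max(u_i) * a = max(x_i)
# yields a >= max(x_i), because max(u_i) <= 1.
a_breve_max = max(x) / max(u)
assert a_breve_max >= max(x)
```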

Analogously, for a random variable X following the Pareto distribution with parameters K and A (see the Pareto example for more detail on this case),

s_1=\sum_{i=1}^m \log x_i

and

s_2=\min_{i=1,\ldots,m} \{x_i\}

can be used as joint statistics for these parameters.
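
The following is a hedged sketch of how these joint statistics can be inverted, assuming the explaining function g_{a,k}(u)=k\,u^{-1/a} (a valid choice, since 1-U is uniform on (0,1) whenever U is) and illustrative observed values; repeating the call over many independently drawn seed vectors yields a sample of compatible (a, k) pairs, as in the exponential sketch above:

```python
import math
import random

def pareto_breve(s1, s2, seeds):
    """Jointly solve the two master equations for (a, k), given one
    generic seed vector, assuming the explaining function x = k * u**(-1/a):
        s1 = m * log(k) - (1/a) * sum(log u_i)
        s2 = k * max(u_i)**(-1/a)      (the minimum x_i comes from the largest u_i)
    """
    m = len(seeds)
    log_u = [math.log(u) for u in seeds]
    # Eliminating k gives: s1 - m*log(s2) = (1/a) * (m*max(log u) - sum(log u)).
    inv_a = (s1 - m * math.log(s2)) / (m * max(log_u) - sum(log_u))
    a = 1.0 / inv_a
    k = s2 * max(seeds) ** inv_a
    return a, k

# Illustrative observed statistics and one compatible parameter pair.
s1_obs, s2_obs, m = 12.0, 1.5, 10
a_breve, k_breve = pareto_breve(s1_obs, s2_obs,
                                [1.0 - random.random() for _ in range(m)])
```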

As a general statement that holds under weak conditions, sufficient statistics are well-behaved with respect to the related parameters. The table below gives sufficient/well-behaved statistics for the parameters of some of the most commonly used probability distributions.

Common distribution laws together with related sufficient and well-behaved statistics.

Distribution | Density function | Sufficient/well-behaved statistic
Uniform discrete | f(x;n)=1/n\, I_{\{1,2,\ldots,n\}}(x) | s_N=\max_i x_i
Bernoulli | f(x;p)=p^x (1-p)^{1-x} I_{\{0,1\}}(x) | s_P=\sum_{i=1}^m x_i
Binomial | f(x;n,p)=\binom{n}{x}p^x (1-p)^{n-x} I_{\{0,1,\ldots,n\}}(x) | s_P=\sum_{i=1}^m x_i
Geometric | f(x;p)=p(1-p)^x I_{\{0,1,\ldots\}}(x) | s_P=\sum_{i=1}^m x_i
Poisson | f(x;\mu)=\mathrm e^{-\mu} \mu^x / x!\; I_{\{0,1,\ldots\}}(x) | s_M=\sum_{i=1}^m x_i
Uniform continuous | f(x;a,b)=1/(b-a)\, I_{[a,b]}(x) | s_A=\min_i x_i; s_B=\max_i x_i
Negative exponential | f(x;\lambda)=\lambda \mathrm e^{-\lambda x} I_{[0,\infty)}(x) | s_\Lambda=\sum_{i=1}^m x_i
Pareto | f(x;a,k)= \frac{a}{k}\left(\frac{x}{k}\right)^{-a-1} I_{[k,\infty)}(x) | s_A=\sum_{i=1}^m \log x_i; s_K=\min_i x_i
Gaussian | f(x;\mu,\sigma)= 1/(\sqrt{2\pi}\sigma)\, \mathrm e^{-(x-\mu)^2/(2\sigma^2)} | s_M=\sum_{i=1}^m x_i; s_\Sigma=\sqrt{\sum_{i=1}^m(x_i-\bar x)^2}
Gamma | f(x;r,\lambda)= \lambda/\Gamma(r)\, (\lambda x)^{r-1} \mathrm e^{-\lambda x} I_{[0,\infty)}(x) | s_\Lambda=\sum_{i=1}^m x_i; s_R=\prod_{i=1}^m x_i
