Uniform integrability

Uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales. The definition used in measure theory is closely related to, but not identical to, the definition typically used in probability.

Measure theoretic definition

Textbooks on real analysis and measure theory often use the following definition.[1][2]

Let  (X,\mathfrak{M}, \mu) be a positive measure space. A set \Phi\subset L^1(\mu) is called uniformly integrable if to each  \epsilon>0 there corresponds a  \delta>0 such that

 \left| \int_E f d\mu \right| < \epsilon

whenever f \in \Phi and \mu(E)<\delta.
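Note that for a single function f \in L^1(\mu) this condition holds automatically, by absolute continuity of the integral:

\lim_{\mu(E)\to 0} \int_E |f| \, d\mu = 0.

The force of the definition is that one \delta must work simultaneously for every f \in \Phi.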

Probability definition

In the theory of probability, the following definition applies.[3][4][5]

Definition 1: A class \mathcal{C} of random variables is called uniformly integrable (UI) if given \epsilon>0, there exists K\geq 0 such that

E\left(|X|I_{|X|\geq K}\right)\le\epsilon \text{ for all } X \in \mathcal{C},

where I_{|X|\geq K} is the indicator function of the event \{|X|\geq K\}.

Definition 2 (alternative definition): A class \mathcal{C} of random variables is called uniformly integrable if:

  1. There exists a finite M such that, for every X in \mathcal{C}, E(|X|)\leq M; and
  2. for every \epsilon > 0 there exists \delta > 0 such that, for every measurable A with P(A)\leq\delta and every X in \mathcal{C}, E(|X|I_A)\leq\epsilon.

The two probabilistic definitions are equivalent.[6]

Relationship between definitions

The two definitions are closely related. A probability space is a measure space with total measure 1. A random variable is a real-valued measurable function on this space, and the expectation of a random variable is defined as the integral of this function with respect to the probability measure.[7] Specifically,

Let  (\Omega, \mathcal{F}, P) be a probability space. Let the random variable X be a real-valued \mathcal{F}-measurable function. Then the expectation of X is defined by

E(X) = \int_\Omega X dP

provided that the integral exists.

Then the alternative probabilistic definition above can be rewritten in measure theoretic terms as: A set \mathcal{C} of real-valued functions is called uniformly integrable if:

  1. There exists a finite M such that, for every f in \mathcal{C}, \int_\Omega |f| dP \leq M; and
  2. for every \epsilon > 0 there exists \delta > 0 such that, for every measurable A with P(A)\leq\delta and every f in \mathcal{C}, \int_A |f| dP \leq \epsilon.

Comparison of this definition with the measure theoretic definition given above shows that the measure theoretic definition requires only that each function be in L^1(\mu). In other words, \int_X f d\mu is finite for each f, but there need not be a common upper bound on these integrals. In contrast, the probabilistic definition requires that the integrals have a common upper bound.
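To illustrate the gap between the two definitions, consider \mathbb{R} with Lebesgue measure and the family f_n = \tfrac{1}{n} 1_{[0, n^2]} (an illustrative example, not drawn from the cited texts). For every measurable E,

\left|\int_E f_n \, d\mu\right| \le \frac{\mu(E)}{n} \le \mu(E),

so taking \delta = \epsilon verifies the measure theoretic definition; yet \int_{\mathbb{R}} f_n \, d\mu = n, so the family is unbounded in L^1 and fails the first clause of the probabilistic definition.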

One consequence of this is that uniformly integrable random variables (under the probabilistic definition) are tight. That is, for each \epsilon > 0, there exists a > 0 such that

 \int_{\{|X| > a\}} dP = P(|X| > a) < \epsilon

for all X \in \mathcal{C}.[8]
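One way to see this combines boundedness in L^1 (a consequence of the probabilistic definition, noted below) with Markov's inequality:

P(|X| > a) \le \frac{E(|X|)}{a} \le \frac{1}{a}\sup_{X \in \mathcal{C}} E(|X|),

which is smaller than \epsilon for every X \in \mathcal{C} once a is chosen large enough.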

In contrast, uniformly integrable functions (under the measure theoretic definition) are not necessarily tight.[9]

In his book, Bass uses the term uniformly absolutely continuous to refer to sets of random variables (or functions) which satisfy the second clause of the alternative definition. However, this definition does not require each of the functions to have a finite integral.[10]

Related corollaries

The following results apply to the probabilistic definition.[11]

Definition 1 could be rewritten by taking limits as

\lim_{K \to \infty} \sup_{X \in \mathcal{C}} E(|X|I_{|X|\geq K})=0.
A non-UI sequence of RVs. Take \Omega = [0,1] with Lebesgue measure and define

X_n(\omega) = \begin{cases}
  n, & \omega\in (0,1/n), \\
  0, & \text{otherwise.} \end{cases}

Clearly X_n\in L^1, and indeed E(|X_n|)=1 for all n. However,

E(|X_n|,|X_n|\ge K)= 1 \text{ for all } n\ge K,

and comparing with definition 1, it is seen that the sequence is not uniformly integrable.

[Figure: the area under the strip is always equal to 1, but X_n \to 0 pointwise.]
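As a numerical illustration of this example, the following short Python sketch approximates the expectations by a Riemann sum over a uniform grid (the grid-based approximation is an implementation choice here, not part of the cited texts):

import numpy as np

# Non-UI sequence X_n = n on (0, 1/n), 0 otherwise, under Lebesgue
# measure on [0, 1]; expectations approximated by a Riemann sum.

def expectation(f, num_points=1_000_000):
    """Approximate E[f(omega)] for omega ~ Uniform(0, 1)."""
    omega = (np.arange(num_points) + 0.5) / num_points
    return f(omega).mean()

def X(n):
    """X_n(omega) = n if omega in (0, 1/n), else 0."""
    return lambda omega: np.where(omega < 1.0 / n, float(n), 0.0)

K = 5
for n in [1, 5, 50, 500]:
    Xn = X(n)
    mean = expectation(lambda w: np.abs(Xn(w)))
    tail = expectation(lambda w: np.abs(Xn(w)) * (np.abs(Xn(w)) >= K))
    print(f"n={n:4d}  E|X_n|={mean:.3f}  E(|X_n|; |X_n|>={K})={tail:.3f}")

# E|X_n| is ~1 for every n, and the tail term E(|X_n|; |X_n| >= K) is
# also ~1 once n >= K, so sup_n E(|X_n| I_{|X_n| >= K}) does not
# vanish as K grows: the sequence is not uniformly integrable.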
By splitting

E(|X|)=E(|X|,|X|\ge K)+E(|X|,|X|< K)

and bounding each of the two terms, it can be seen that a uniformly integrable random variable is always bounded in L^1.
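Explicitly: by definition 1 there exists K with \sup_{X\in\mathcal{C}} E(|X|,|X|\ge K) \le 1, while E(|X|,|X|< K) \le K always, so

\sup_{X\in\mathcal{C}} E(|X|) \le 1 + K < \infty.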
If any class of random variables \{X_n\} is dominated by an integrable, non-negative Y, that is, for all \omega and n,

|X_n(\omega)| \le Y(\omega),\quad Y(\omega)\ge 0,\quad E(Y)< \infty,

then the class \mathcal{C} of random variables \{X_n\} is uniformly integrable.
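To see this, note that |X_n| \ge K forces Y \ge K, so that

E(|X_n| I_{|X_n|\ge K}) \le E(Y I_{Y\ge K}) \to 0 \quad \text{as } K \to \infty,

where the convergence holds by the dominated convergence theorem: Y I_{Y\ge K} \to 0 pointwise (Y is finite almost surely) and is dominated by the integrable Y.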

Relevant theorems

Dunford–Pettis theorem.[12][13] A class of random variables X_n \subset L^1(\mu) is uniformly integrable if and only if it is relatively compact for the weak topology \sigma(L^1,L^\infty).

de la Vallée-Poussin theorem.[14] The family \{X_{\alpha}\}_{\alpha\in A} \subset L^1(\mu) is uniformly integrable if and only if there exists a non-negative increasing convex function G(t) such that

\lim_{t \to \infty} \frac{G(t)}{t} = \infty \quad\text{and}\quad \sup_{\alpha} E(G(|X_{\alpha}|)) < \infty.
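For example, taking G(t) = t^p with p > 1 shows that any family bounded in L^p(\mu) is uniformly integrable: G(t)/t = t^{p-1} \to \infty and \sup_\alpha E(G(|X_\alpha|)) = \sup_\alpha E(|X_\alpha|^p) < \infty.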

Relation to convergence of random variables

A sequence \{X_n\} converges to X in the L^1 norm if and only if it converges in measure to X and it is uniformly integrable. In probabilistic terms, a sequence of random variables converging in probability converges in the mean if and only if it is uniformly integrable; this generalizes Lebesgue's dominated convergence theorem. The non-UI sequence above shows that uniform integrability cannot be dropped: X_n \to 0 pointwise, yet E(|X_n|) = 1 for all n, so X_n does not converge to 0 in L^1.

Citations

  1. Rudin, Walter (1987). Real and Complex Analysis (3 ed.). Singapore: McGraw–Hill Book Co. p. 133. ISBN 0-07-054234-1.
  2. Royden, H.L. and Fitzpatrick, P.M. (2010). Real Analysis (4 ed.). Boston: Prentice Hall. p. 93. ISBN 0-13-143747-X.
  3. Williams, David (1997). Probability with Martingales (Repr. ed.). Cambridge: Cambridge Univ. Press. pp. 126–132. ISBN 978-0-521-40605-5.
  4. Gut, Allan (2005). Probability: A Graduate Course. Springer. pp. 214–218. ISBN 0-387-22833-0.
  5. Bass, Richard F. (2011). Stochastic Processes. Cambridge: Cambridge University Press. pp. 356–357. ISBN 978-1-107-00800-7.
  6. Gut 2005, p. 214.
  7. Bass 2011, p. 348.
  8. Gut 2005, p. 236.
  9. Royden and Fitzpatrick 2010, p. 98.
  10. Bass 2011, p. 356.
  11. Gut 2005, pp. 215–216.
  12. Dellacherie, C. and Meyer, P.A. (1978). Probabilities and Potential, North-Holland Pub. Co, N. Y. (Chapter II, Theorem T25).
  13. Meyer, P.A. (1966). Probability and Potentials, Blaisdell Publishing Co, N. Y. (p.19, Theorem T22).
  14. Bogachev, Vladimir I. (2007). Measure Theory Volume I. Berlin Heidelberg: Springer-Verlag. p. 268. doi:10.1007/978-3-540-34514-5_4. ISBN 3-540-34513-2.
