Borel–Cantelli lemma

In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who gave statements of the lemma in the first decades of the 20th century.[1][2] A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will have probability either zero or one. As such, it is the best-known of a class of similar theorems, known as zero–one laws. Other examples include the Kolmogorov zero–one law and the Hewitt–Savage zero–one law.

Statement of lemma for probability spaces

Let E1, E2, ... be a sequence of events in some probability space. The Borel–Cantelli lemma states:[3]

If the sum of the probabilities of the En is finite
\sum_{n=1}^\infty \Pr(E_n)<\infty,
then the probability that infinitely many of them occur is 0, that is,
\Pr\left(\limsup_{n\to\infty} E_n\right) = 0.\,

Here, "lim sup" denotes limit supremum of the sequence of events, and each event is a set of outcomes. That is, lim sup En is the set of outcomes that occur infinitely many times within the infinite sequence of events (En). Explicitly,

\limsup_{n\to\infty} E_n = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty E_k.

The theorem therefore asserts that if the sum of the probabilities of the events En is finite, then the set of outcomes that belong to infinitely many of the events must have probability zero. Note that no assumption of independence is required.

Example

Suppose (Xn) is a sequence of random variables with Pr(Xn = 0) = 1/n2 for each n. The event that Xn = 0 for infinitely many n is precisely lim sup [Xn = 0], the set of outcomes that belong to infinitely many of the events [Xn = 0]. Since the sum ∑Pr(Xn = 0) = ∑1/n2 converges to π2/6 ≈ 1.645 < ∞, the Borel–Cantelli lemma states that this set occurs with probability zero. Hence, the probability that Xn = 0 for infinitely many n is 0. Almost surely (i.e., with probability 1), Xn is nonzero for all but finitely many n.
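
As a numerical illustration (not part of the original example), the sketch below simulates this setup under the additional assumption that the Xn are realised by independent uniform draws compared against the thresholds 1/n2. In each simulated run the count of indices n with Xn = 0 essentially stops growing as the truncation point increases, which is what "only finitely many occur" looks like in practice.

# A minimal simulation sketch (hypothetical setup): Pr(X_n = 0) = 1/n^2 with the
# X_n independent.  Because random.Random(seed) replays the same draws, each row
# shows a single run truncated at increasing n_max; the count of zero events
# quickly stabilises, consistent with the first Borel-Cantelli lemma.
import random

def count_zero_events(n_max, seed):
    """Count the events {X_n = 0}, n = 1..n_max, in the run determined by `seed`."""
    rng = random.Random(seed)
    return sum(1 for n in range(1, n_max + 1) if rng.random() < 1.0 / n**2)

for seed in range(3):
    print([count_zero_events(n_max, seed) for n_max in (10, 1_000, 100_000)])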

Proof [4]

Let [E_n] denote the indicator function of the event E_n (using Iverson bracket notation). Then, by linearity of expectation (together with the monotone convergence theorem, which allows exchanging the expectation with the infinite sum of nonnegative terms),

\mathbb E\left[\sum_n [E_n]\right] = \sum_n \mathbb E\bigl[[E_n]\bigr] = \sum_n \Pr(E_n) < \infty

by hypothesis. By Markov's inequality, for any N,

\Pr\left(\sum_n [E_n] \ge N\right) \le \frac{1}{N}\,\mathbb E\left[\sum_n [E_n]\right] = \frac{1}{N}\sum_n \Pr(E_n).

Letting N\to\infty gives, by continuity from above of the probability measure (the events \left\{\sum_n [E_n] \ge N\right\} decrease to \left\{\sum_n [E_n] = \infty\right\}),

\Pr\left(\sum_n [E_n] = \infty \right) = 0

so that, almost surely, only finitely many of the events E_n occur.
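
As a sanity check (an illustration added here, not part of the cited proof), one can compare the Markov bound with simulated frequencies. The sketch assumes the concrete events E_n = [X_n = 0] of the earlier example, with Pr(E_n) = 1/n2 and the X_n independent; both this event family and the truncation at n = 1000 are illustrative choices.

# Numeric check of the Markov bound Pr(sum_n [E_n] >= N) <= (sum_n Pr(E_n)) / N,
# using the illustrative events E_n with Pr(E_n) = 1/n^2 (so the total mass is
# at most pi^2/6).  The empirical tail frequencies stay below the bound.
import math
import random

def total_events(n_max, rng):
    """Number of the events E_1, ..., E_{n_max} that occur in one simulated run."""
    return sum(1 for n in range(1, n_max + 1) if rng.random() < 1.0 / n**2)

rng = random.Random(0)
runs = [total_events(1000, rng) for _ in range(5000)]
total_mass = math.pi**2 / 6  # upper bound on the (truncated) sum of Pr(E_n)
for N in (2, 3, 4):
    empirical_tail = sum(t >= N for t in runs) / len(runs)
    print(f"N={N}: empirical tail {empirical_tail:.3f} <= Markov bound {total_mass / N:.3f}")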

Alternative proof [5]

Let (En) be a sequence of events in some probability space and suppose that the sum of the probabilities of the En is finite. That is, suppose:

\sum_{n=1}^\infty \Pr(E_n)<\infty.

Since the series \sum_{n=1}^\infty \Pr(E_n) converges, its tail sums must tend to zero. That is,

\sum_{n=N}^\infty \Pr(E_n) \rightarrow 0

as N goes to infinity. Therefore

 \inf_{N\geq 1} \sum_{n=N}^\infty \Pr(E_n) = 0. \,

It follows that


\begin{align}
& \Pr\left(\limsup_{n\to\infty} E_n\right) = \Pr(\text{infinitely many of the } E_n \text{ occur} ) \\[8pt]
= {} & \Pr\left(\bigcap_{N=1}^\infty \bigcup_{n=N}^\infty E_n\right)
\leq \inf_{N \geq 1} \Pr\left( \bigcup_{n=N}^\infty E_n\right) \leq \inf_{N\geq 1} \sum_{n=N}^\infty \Pr(E_n) = 0.
\end{align}

General measure spaces

For general measure spaces, the Borel–Cantelli lemma takes the following form:

Let μ be a (positive) measure on a set X, with σ-algebra F, and let (An) be a sequence in F. If
\sum_{n=1}^\infty\mu(A_n)<\infty,
then
\mu\left(\limsup_{n\to\infty} A_n\right) = 0.\,

Converse result

A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states: if the events En are independent and the sum of the probabilities of the En diverges to infinity, then the probability that infinitely many of them occur is 1. That is:

If \sum^{\infty}_{n = 1} \Pr(E_n) = \infty and the events (E_n)^{\infty}_{n = 1} are independent, then \Pr(\limsup_{n \rightarrow \infty} E_n) = 1.

The assumption of independence can be weakened to pairwise independence, but in that case the proof is more difficult.
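
For a concrete (hypothetical) illustration of the divergent case, the sketch below simulates independent events E_n with Pr(E_n) = 1/n, so that the probabilities sum to infinity. In a single long run the events keep occurring no matter how far out one looks, in line with the conclusion that infinitely many of them occur almost surely.

# A minimal sketch (illustrative choice: independent E_n with Pr(E_n) = 1/n).
# The number of occurrences up to n grows roughly like log(n) and never stops,
# and the index of the most recent occurrence keeps moving outward.
import random

rng = random.Random(1)
occurrences = [n for n in range(1, 1_000_001) if rng.random() < 1.0 / n]

# E_1 has probability 1, so the list is never empty.
print(f"{len(occurrences)} events occurred; most recent index: {occurrences[-1]}")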

Example

The infinite monkey theorem is a special case of this lemma.

The lemma can be applied to give a covering theorem in Rn. Specifically (Stein 1993, Lemma X.2.1), if Ej is a collection of Lebesgue measurable subsets of a compact set in Rn such that

\sum_j \mu(E_j) = \infty,

then there is a sequence Fj of translates

F_j = E_j + x_j \,

such that

\limsup_{j\to\infty} F_j = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty F_k = \mathbb{R}^n

apart from a set of measure zero.

Proof [5]

Suppose that \sum_{n = 1}^\infty \Pr(E_n) = \infty and that the events (E_n)^\infty_{n = 1} are independent. It is sufficient to show that the event that only finitely many of the E_n occur has probability 0. That is, it is sufficient to show that

 1-\Pr(\limsup_{n \rightarrow \infty} E_n) = 0. \,

Noting that:

\begin{align}
1 - \Pr(\limsup_{n \rightarrow \infty} E_n) &= 1 - \Pr\left(\{E_n\text{ i.o.}\}\right) = \Pr\left(\{E_n \text{ i.o.}\}^c \right) \\
& = \Pr\left(\left(\bigcap_{N=1}^\infty \bigcup_{n=N}^\infty E_n\right)^c \right) = \Pr\left(\bigcup_{N=1}^\infty \bigcap_{n=N}^\infty E_n^c \right)\\
&= \Pr\left(\liminf_{n \rightarrow \infty}E_n^{c}\right)= \lim_{N \rightarrow \infty}\Pr\left(\bigcap_{n=N}^\infty E_n^c \right)
\end{align}

it is enough to show that \Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right) = 0 for every N. Since the events (E_n)^{\infty}_{n = 1} are independent:

\begin{align}
\Pr\left(\bigcap_{n=N}^\infty E_n^c\right) 
&= \prod^\infty_{n=N} \Pr(E_n^c) \\
&= \prod^\infty_{n=N} (1-\Pr(E_n)) \\
&\leq\prod^\infty_{n=N} \exp(-\Pr(E_n))\\
&=\exp\left(-\sum^\infty_{n=N} \Pr(E_n)\right)\\
&= 0.
\end{align}

where the last equality holds because \sum^\infty_{n=N} \Pr(E_n) = \infty. This completes the proof. Alternatively, we can see that \Pr\left(\bigcap_{n=N}^\infty E_n^c \right) = 0 by taking the negative logarithm of both sides of the product to get:


\begin{align}
 -\log\left(\Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right)\right) &= -\log\left(\prod^{\infty}_{n=N} (1-\Pr(E_n))\right) \\
&= - \sum^\infty_{n=N}\log(1-\Pr(E_n)). 
\end{align}

Since -\log(1-x) \ge x for all 0 \le x \le 1 (with -\log(0) taken to be \infty), the result similarly follows from our assumption that \sum^\infty_{n = 1} \Pr(E_n) = \infty.
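
As a small numeric check of the key inequality 1 - x \le \exp(-x) used above (an illustration, with the hypothetical choice \Pr(E_n) = 1/n and N = 10, so that the probabilities sum to infinity), the partial products over n = N, ..., M are dominated by the exponential of minus the partial sums, and both tend to 0 as M grows:

# Partial products prod_{n=N}^{M} (1 - Pr(E_n)) versus the bound
# exp(-sum_{n=N}^{M} Pr(E_n)), for the illustrative choice Pr(E_n) = 1/n, N = 10.
import math

N = 10
for M in (100, 10_000, 1_000_000):
    probs = [1.0 / n for n in range(N, M + 1)]
    partial_product = math.prod(1.0 - p for p in probs)
    exp_bound = math.exp(-sum(probs))
    print(f"M={M}: product = {partial_product:.2e} <= bound = {exp_bound:.2e}")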

Counterpart

Another related result is the so-called counterpart of the Borel–Cantelli lemma. It is a counterpart of the lemma in the sense that it gives a necessary and sufficient condition for the probability of the lim sup to be 1, by replacing the independence assumption with the completely different assumption that (A_n) is monotone increasing for sufficiently large indices. This lemma says:

Let (A_n) be a sequence of events such that A_k \subseteq A_{k+1}, and let \bar A denote the complement of A. Then the probability that infinitely many of the A_k occur (which, since the sequence is increasing, is the same as the probability that at least one A_k occurs) is one if and only if there exists a strictly increasing sequence of positive integers (t_k) such that

 \sum_k \Pr( A_{t_{k+1}} \mid \bar A_{t_k}) = \infty.

This simple result can be useful in problems involving, for instance, hitting probabilities for stochastic processes, with the choice of the sequence (t_k) usually being the essence.

References

  1. E. Borel, "Les probabilités dénombrables et leurs applications arithmétiques", Rend. Circ. Mat. Palermo (2) 27 (1909), pp. 247–271.
  2. F. P. Cantelli, "Sulla probabilità come limite della frequenza", Atti Accad. Naz. Lincei 26:1 (1917), pp. 39–45.
  3. Klenke, Achim (2006). Probability Theory. Springer-Verlag. ISBN 978-1-84800-047-6.
  4. Tao, Terence. "The strong law of large numbers".
  5. Romik, Dan. Probability Theory Lecture Notes, Fall 2009, UC Davis (PDF).
