Kolmogorov's two-series theorem
In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.
Statement of the theorem
Let (X_n)_{n∈ℕ} be a sequence of independent random variables with expected values E[X_n] = a_n and variances Var(X_n) = σ_n², such that ∑_{n=1}^∞ a_n converges in ℝ and ∑_{n=1}^∞ σ_n² < ∞. Then ∑_{n=1}^∞ X_n converges in ℝ almost surely.
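As a quick illustration (a standard example, not part of the original statement), take X_n = ε_n/n, where the ε_n are independent signs equal to ±1 with probability 1/2 each. Then a_n = 0 and σ_n² = 1/n², both series converge, and the theorem implies that the randomly signed harmonic series ∑_{n=1}^∞ ε_n/n converges almost surely. The following Python sketch (an assumed simulation using NumPy, not code from any reference) samples a few paths of the partial sums to show them settling toward a random limit.

import numpy as np

# Illustrative simulation: X_n = eps_n / n with eps_n = +-1 equally likely.
# Here a_n = E[X_n] = 0 and sigma_n^2 = 1/n^2, so both series converge and
# the two-series theorem predicts almost sure convergence of sum X_n.

rng = np.random.default_rng(0)
N = 100_000                                  # number of terms per sample path
n = np.arange(1, N + 1)

for path in range(3):
    signs = rng.choice([-1.0, 1.0], size=N)  # the random signs eps_n
    partial_sums = np.cumsum(signs / n)      # S_N = sum_{n <= N} X_n
    # Later partial sums should differ only slightly, reflecting convergence.
    print(f"path {path}: S_10000 = {partial_sums[9_999]:+.6f}, "
          f"S_100000 = {partial_sums[-1]:+.6f}")

Each printed path has its own limit (the limit is random), but within a path the partial sums change very little between N = 10,000 and N = 100,000, which is the behavior the theorem guarantees almost surely.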