Law of total expectation

The proposition in probability theory known as the law of total expectation,[1] the law of iterated expectations, the tower rule, the smoothing theorem, and Adam's law, among other names, states that if X is an integrable random variable (i.e., a random variable satisfying E( | X | ) < ∞) and Y is any random variable, not necessarily integrable, on the same probability space, then

\operatorname{E} (X) = \operatorname{E} ( \operatorname{E} ( X \mid Y)),

i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X.

The conditional expected value E( X | Y ) is a random variable in its own right, whose value depends on the value of Y. Notice that the conditional expected value of X given the event Y = y is a function of y. If we write E( X | Y = y) = g(y) then the random variable E( X | Y ) is just g(Y).

One special case states that if A_1, A_2, \ldots, A_n is a partition of the whole outcome space, i.e. these events are mutually exclusive and exhaustive, then

\operatorname{E} (X) = \sum_{i=1}^{n}{\operatorname{E}(X \mid A_i) \operatorname{P}(A_i)}.
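
Both the function g and the partition formula above can be traced on a small numerical example. The following Python sketch (with a made-up joint distribution for X and Y, used purely for illustration) computes g(y) = E( X | Y = y ) for each value y and then checks that E( g(Y) ) equals E( X ):

    # Hypothetical joint distribution P(X = x, Y = y), chosen only for illustration.
    joint = {
        (1, 0): 0.10, (2, 0): 0.20, (3, 0): 0.10,
        (1, 1): 0.30, (2, 1): 0.10, (3, 1): 0.20,
    }

    # Marginal distribution of Y.
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p

    # g(y) = E(X | Y = y).
    g = {y: sum(x * p for (x, yy), p in joint.items() if yy == y) / p_y[y]
         for y in p_y}

    # Law of total expectation: E(g(Y)) should equal E(X).
    e_g_of_y = sum(g[y] * p_y[y] for y in p_y)        # E(E(X | Y))
    e_x = sum(x * p for (x, _), p in joint.items())   # E(X) computed directly

    print(e_g_of_y, e_x)   # both are 1.9, up to floating-point rounding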

Example

Suppose that two factories supply light bulbs to the market. Factory X's bulbs work for an average of 5000 hours, whereas factory Y's bulbs work for an average of 4000 hours. Factory X supplies 60% of the total bulbs available. What is the expected lifetime of a purchased bulb?

Applying the law of total expectation, we have:

\operatorname{E} (L) = \operatorname{E}(L \mid X) \operatorname{P}(X)+\operatorname{E}(L \mid Y) \operatorname{P}(Y) = 5000(0.6)+4000(0.4)=4600

where
  - L is the lifetime (in hours) of the purchased bulb;
  - \operatorname{P}(X) = 0.6 is the probability that the bulb was manufactured by factory X;
  - \operatorname{P}(Y) = 0.4 is the probability that the bulb was manufactured by factory Y;
  - \operatorname{E}(L \mid X) = 5000 is the expected lifetime of a bulb manufactured by factory X;
  - \operatorname{E}(L \mid Y) = 4000 is the expected lifetime of a bulb manufactured by factory Y.

Thus each purchased light bulb has an expected lifetime of 4600 hours.
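
The same number can also be checked by simulation. In the Monte Carlo sketch below the lifetimes are drawn from exponential distributions with the stated means; the exponential choice is an assumption made only for the illustration, since the law of total expectation uses nothing beyond the two conditional means and the market shares:

    import random

    random.seed(0)
    N = 1_000_000
    total = 0.0
    for _ in range(N):
        if random.random() < 0.6:                   # bulb comes from factory X (probability 0.6)
            total += random.expovariate(1 / 5000)   # assumed exponential lifetime, mean 5000 hours
        else:                                       # bulb comes from factory Y (probability 0.4)
            total += random.expovariate(1 / 4000)   # assumed exponential lifetime, mean 4000 hours

    print(total / N)   # close to 4600 = 5000(0.6) + 4000(0.4)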

Proof in the discrete case

Let X and Y be discrete random variables. Since X is integrable, all of the sums below converge absolutely, which justifies interchanging the order of summation:

\begin{align}
\operatorname{E}_Y \left( \operatorname{E}_{X\mid Y} (X \mid Y) \right) &{} = \operatorname{E}_Y \Bigg[ \sum_x x \cdot \operatorname{P}(X=x \mid Y) \Bigg] \\[6pt]
&{}=\sum_y \Bigg[ \sum_x x \cdot \operatorname{P}(X=x \mid Y=y) \Bigg] \cdot \operatorname{P}(Y=y) \\[6pt]
&{}=\sum_y \sum_x x \cdot \operatorname{P}(X=x \mid Y=y) \cdot \operatorname{P}(Y=y) \\[6pt]
&{}=\sum_x x \sum_y \operatorname{P}(X=x \mid Y=y) \cdot \operatorname{P}(Y=y) \\[6pt]
&{}=\sum_x x \sum_y \operatorname{P}(X=x, Y=y) \\[6pt]
&{}=\sum_x x \cdot \operatorname{P}(X=x) \\[6pt]
&{}=\operatorname{E}(X).
\end{align}
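
The chain of equalities can also be traced numerically. The sketch below uses an arbitrary joint probability table (introduced only as an illustration) and checks that summing first over x and then over y, as in the first lines of the proof, gives the same number as the direct computation of E(X) from the marginal distribution:

    import numpy as np

    # Rows index the values of X, columns index the two values of Y;
    # entries are P(X = x, Y = y) and sum to 1.
    x_vals = np.array([0.0, 1.0, 4.0])
    joint = np.array([[0.15, 0.05],
                      [0.25, 0.20],
                      [0.10, 0.25]])

    p_y = joint.sum(axis=0)                       # P(Y = y)
    p_x = joint.sum(axis=1)                       # P(X = x)
    cond = joint / p_y                            # P(X = x | Y = y)

    # Sum over x inside, then over y outside (the first three equalities).
    inner = (x_vals[:, None] * cond).sum(axis=0)  # E(X | Y = y) for each y
    outer = (inner * p_y).sum()                   # E(E(X | Y))

    # Interchanging the order of summation collapses to the marginal of X.
    direct = (x_vals * p_x).sum()                 # E(X)

    print(outer, direct)   # both equal 1.85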

Proof in the general case

The general statement of the result refers to a probability space  (\Omega,\mathcal{F},P) on which two nested sub-\sigma-algebras  \mathcal{G}_1 \subseteq \mathcal{G}_2 \subseteq \mathcal{F} are defined. For an integrable random variable  X on such a space, the smoothing law states that

 \operatorname{E}[ \operatorname{E}[X \mid \mathcal{G}_2] \mid \mathcal{G}_1] = \operatorname{E}[X \mid \mathcal{G}_1].

Since a conditional expectation is a Radon–Nikodym derivative, verifying the following two properties establishes the smoothing law:

  - \operatorname{E}[ \operatorname{E}[X \mid \mathcal{G}_2] \mid \mathcal{G}_1] is \mathcal{G}_1-measurable, and
  - \int_{G_1} \operatorname{E}[ \operatorname{E}[X \mid \mathcal{G}_2] \mid \mathcal{G}_1] \, dP = \int_{G_1} X \, dP for all G_1 \in \mathcal{G}_1.

The first of these properties holds by the definition of the conditional expectation, and the second holds since G_1 \in \mathcal{G}_1 \subseteq \mathcal{G}_2 implies


   \int_{G_1} \operatorname{E}[ \operatorname{E}[X \mid \mathcal{G}_2] \mid \mathcal{G}_1] dP
= \int_{G_1} \operatorname{E}[X \mid \mathcal{G}_2] dP
= \int_{G_1} X dP.

In the special case that \mathcal{G}_1 = \{\emptyset, \Omega \} and \mathcal{G}_2 = \sigma(Y), the smoothing law reduces to the statement


  \operatorname{E}[ \operatorname{E}[X \mid Y]] = \operatorname{E}[X].
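
On a finite probability space, the smoothing law can be illustrated with conditioning variables rather than abstract \sigma-algebras: conditioning on the pair (Y, Z) generates a finer \sigma-algebra than conditioning on Y alone, so \sigma(Y) \subseteq \sigma(Y, Z) play the roles of \mathcal{G}_1 \subseteq \mathcal{G}_2. The following sketch (with an arbitrary finite sample space and an arbitrary random variable, both chosen only for illustration) checks that E( E( X | Y, Z ) | Y ) and E( X | Y ) agree at every sample point:

    from itertools import product

    # A hypothetical finite sample space: outcomes are triples (x, y, z), all equally likely.
    outcomes = list(product([1, 2, 3], [0, 1], [0, 1]))
    prob = {w: 1 / len(outcomes) for w in outcomes}

    def cond_exp(f, label):
        """Conditional expectation of f given the partition induced by the map 'label'."""
        result = {}
        for value in set(label(w) for w in outcomes):
            block = [w for w in outcomes if label(w) == value]
            p_block = sum(prob[w] for w in block)
            mean = sum(f(w) * prob[w] for w in block) / p_block
            for w in block:
                result[w] = mean
        return result

    def X(w):
        return w[0] ** 2 + 10 * w[1] * w[2]   # an arbitrary random variable X(omega)

    inner = cond_exp(X, lambda w: (w[1], w[2]))              # E(X | G_2), with G_2 = sigma(Y, Z)
    smoothed = cond_exp(lambda w: inner[w], lambda w: w[1])  # E(E(X | G_2) | G_1)
    direct = cond_exp(X, lambda w: w[1])                     # E(X | G_1), with G_1 = sigma(Y)

    print(all(abs(smoothed[w] - direct[w]) < 1e-12 for w in outcomes))   # True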

Notation without indices

When using the expectation operator \operatorname{E}, adding indices to the operator can lead to cumbersome notation, so the indices are often omitted. In the case of iterated expectations, \operatorname{E} \left( \operatorname{E} (X \mid Y) \right) stands for \operatorname{E}_Y \left( \operatorname{E}_{X\mid Y} (X \mid Y) \right). The innermost expectation is the conditional expectation of X given Y, and the outermost expectation is taken with respect to the conditioning variable Y. This convention is used in the rest of this article.

Iterated expectations with nested conditioning sets

The following formulation of the law of iterated expectations plays an important role in many economic and finance models:

\operatorname{E} (X \mid I_1) = \operatorname{E} ( \operatorname{E} ( X \mid I_2) \mid I_1),

where I_2 is a finer information set than I_1, in the sense that the value of I_1 is determined by that of I_2. To build intuition, imagine an investor who forecasts a random stock price X based on the limited information set I_1. The law of iterated expectations says that the investor can never gain a more precise forecast of X by conditioning on more specific information (I_2), if the more specific forecast must itself be forecast using only the original information (I_1).

This formulation is often applied in a time series context, where \operatorname{E}_t denotes the expectation conditional on only the information observed up to and including time period t. In typical models the information set at time t + 1 contains all information available at time t, plus additional information revealed at time t + 1. One can then write:[2]

\operatorname{E}_t(X) = \operatorname{E}_t ( \operatorname{E}_{t+1} ( X )).
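
As a concrete check of this identity, the sketch below enumerates a hypothetical three-period binomial price model; the up/down factors and probabilities are assumptions made only for the illustration. The time-1 forecast of the final price coincides with the time-1 forecast of the time-2 forecast, for either value of the first move:

    from itertools import product

    # Hypothetical model: the price moves up (factor 1.1) or down (factor 0.9)
    # in each of three periods, each with probability 1/2.
    moves = [1.1, 0.9]
    S0 = 100.0

    def price(path):
        s = S0
        for m in path:
            s *= m
        return s

    def E_1(m1):
        """E_1(X): forecast of the final price given only the first move."""
        return sum(0.25 * price((m1, m2, m3)) for m2, m3 in product(moves, repeat=2))

    def E_2(m1, m2):
        """E_2(X): forecast of the final price given the first two moves."""
        return sum(0.5 * price((m1, m2, m3)) for m3 in moves)

    def E_1_of_E_2(m1):
        """E_1(E_2(X)): time-1 forecast of the time-2 forecast."""
        return sum(0.5 * E_2(m1, m2) for m2 in moves)

    for m1 in moves:
        print(E_1(m1), E_1_of_E_2(m1))   # the two values agree for each first move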

References

  1. Weiss, Neil A. (2005). A Course in Probability. Boston: Addison–Wesley. pp. 380–383. ISBN 0-321-18954-X.
  2. Ljungqvist, Lars; Sargent, Thomas J. (2004). Recursive Macroeconomic Theory. Cambridge: MIT Press. pp. 401–402. ISBN 0-262-12274-X.