Convolution of probability distributions

The convolution of probability distributions arises in probability theory and statistics as the operation on probability distributions that corresponds to the addition of independent random variables and, by extension, to forming linear combinations of random variables. The operation here is a special case of the general notion of convolution, applied to probability distributions.

Introduction

The probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of independent random variables is the convolution of their corresponding probability mass functions or probability density functions, respectively. Many well-known distributions have simple convolutions; see List of convolutions of probability distributions.

The general formula for the distribution of the sum Z=X+Y of two independent, integer-valued (and hence discrete) random variables is [1]

P(Z=z) = \sum_{k=-\infty}^\infty P(X=k)P(Y=z-k).
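
As a quick illustration (not part of the cited source), the discrete formula can be checked numerically. The sketch below is a minimal example assuming NumPy is available; it convolves the probability mass functions of two independent fair six-sided dice and compares the result with direct enumeration of all outcomes.

import numpy as np

# PMF of a fair six-sided die on the support {1, ..., 6}
die = np.full(6, 1/6)

# P(Z = z) for Z = X + Y is the convolution of the two PMFs;
# np.convolve computes exactly the sum over k of P(X = k) P(Y = z - k)
pmf_sum = np.convolve(die, die)          # support runs from 2 to 12

# Direct enumeration of the 36 equally likely outcomes, for comparison
check = np.zeros(11)
for x in range(1, 7):
    for y in range(1, 7):
        check[x + y - 2] += 1 / 36

assert np.allclose(pmf_sum, check)
print(dict(zip(range(2, 13), np.round(pmf_sum, 4))))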

The counterpart for two independent, continuously distributed random variables with density functions f and g is

h(z)=(f*g)(z)=\int_{-\infty}^\infty f(z-t)g(t) dt = \int_{-\infty}^\infty f(t)g(z-t) dt.
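
A similar numerical check works in the continuous case. The following sketch (illustrative only, assuming NumPy) approximates the convolution integral on a grid for two independent Exp(1) densities and compares it with the known density z e^{-z} of their sum, a Gamma(2, 1) distribution.

import numpy as np

t = np.linspace(0, 20, 4001)             # grid for the integration variable
dt = t[1] - t[0]
f = np.exp(-t)                           # Exp(1) density on t >= 0 (zero for t < 0)

# h(z) = int f(z - s) f(s) ds, approximated by a Riemann sum;
# np.convolve of the sampled density with itself, times dt, does exactly this
h = np.convolve(f, f)[: t.size] * dt

gamma2 = t * np.exp(-t)                  # exact density of the sum, Gamma(2, 1)
print(np.max(np.abs(h - gamma2)))        # small discretization error (about dt)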

Example derivation

There are several ways of deriving formulae for the convolution of probability distributions. Often the manipulation of integrals can be avoided by use of some type of generating function. Such methods can also be useful in deriving properties of the resulting distribution, such as moments, even if an explicit formula for the distribution itself cannot be derived.

One straightforward technique is to use characteristic functions, which always exist and uniquely determine a distribution.
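
As a sketch of this idea (using SymPy, and anticipating the Bernoulli example below), the moments of a sum can be read off by differentiating the product of the individual characteristic functions at zero, without writing down the distribution of the sum explicitly.

import sympy as sp

t = sp.symbols('t', real=True)
p = sp.symbols('p', positive=True)

# Characteristic function of one Bernoulli(p) variable
phi_x = 1 - p + p * sp.exp(sp.I * t)

# Independence: the characteristic function of the sum Y = X_1 + X_2 is the product
phi_y = phi_x ** 2

# Moments from derivatives at t = 0: E[Y^k] = phi_y^(k)(0) / i^k
mean = sp.simplify(sp.diff(phi_y, t).subs(t, 0) / sp.I)
second_moment = sp.simplify(sp.diff(phi_y, t, 2).subs(t, 0) / sp.I**2)
print(mean)                                      # 2*p
print(sp.simplify(second_moment - mean**2))      # variance, equal to 2*p*(1 - p)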

Convolution of Bernoulli distributions

The convolution of two independent identically distributed Bernoulli random variables is a Binomial random variable. That is, in a shorthand notation,

 \sum_{i=1}^2 \mathrm{Bernoulli}(p) \sim \mathrm{Binomial}(2,p).

To show this let

X_i \sim \mathrm{Bernoulli}(p), \quad 0<p<1, \quad 1 \le i \le 2

and define

Y=\sum_{i=1}^2 X_i.

Also, let Z denote a generic binomial random variable:

Z \sim \mathrm{Binomial}(2,p) \,\! .

Using probability mass functions

As X_1 and X_2 are independent,

\begin{align}\mathbb{P}[Y=n]&=\mathbb{P}\left[\sum_{i=1}^2 X_i=n\right] \\ 
&=\sum_{m\in\mathbb{Z}} \mathbb{P}[X_1=m]\times\mathbb{P}[X_2=n-m] \\
&=\sum_{m\in\mathbb{Z}}\left[\binom{1}{m}p^m\left(1-p\right)^{1-m}\right]\left[\binom{1}{n-m}p^{n-m}\left(1-p\right)^{1-n+m}\right]\\
&=p^n\left(1-p\right)^{2-n}\sum_{m\in\mathbb{Z}}\binom{1}{m}\binom{1}{n-m} \\
&=p^n\left(1-p\right)^{2-n}\left[\binom{1}{0}\binom{1}{n}+\binom{1}{1}\binom{1}{n-1}\right]\\
&=\binom{2}{n}p^n\left(1-p\right)^{2-n}=\mathbb{P}[Z=n] .
\end{align}

Here, use was made of the fact that \tbinom{n}{k}=0 for k>n or k<0 (so that only the terms with m=0 and m=1 survive) in the third-to-last equality, and of Pascal's rule in the second-to-last equality.
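
The same conclusion can be checked numerically (an illustrative sketch, not from the source, assuming NumPy and SciPy are available): convolving the Bernoulli(p) mass function with itself reproduces the Binomial(2, p) mass function.

import numpy as np
from scipy.stats import binom

p = 0.3
bernoulli_pmf = np.array([1 - p, p])            # P(X = 0), P(X = 1)

# PMF of Y = X_1 + X_2 as the convolution of the two Bernoulli PMFs
y_pmf = np.convolve(bernoulli_pmf, bernoulli_pmf)

# PMF of Z ~ Binomial(2, p) on {0, 1, 2}
z_pmf = binom.pmf([0, 1, 2], n=2, p=p)

assert np.allclose(y_pmf, z_pmf)
print(y_pmf)                                    # [0.49 0.42 0.09]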

Using characteristic functions

The characteristic function of each X_k and of Z is

\varphi_{X_k}(t)=1-p+pe^{it} \qquad \varphi_Z(t)=\left(1-p+pe^{it}\right)^2

where t is any real number.

\begin{align}\varphi_Y(t)&=\operatorname{E}\left(e^{it\sum_{k=1}^2 X_k}\right)=\operatorname{E}\left(\prod_{k=1}^2 e^{itX_k}\right)\\
&=\prod_{k=1}^2 \operatorname{E}\left(e^{itX_k}\right)=\prod_{k=1}^2 \left(1-p+pe^{it}\right)\\
&=\left(1-p+pe^{it}\right)^2=\varphi_Z(t)\end{align}

The expectation of the product is the product of the expectations because X_1 and X_2 are independent. Since Y and Z have the same characteristic function, they must have the same distribution.
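
The identification can also be checked empirically. The sketch below (illustrative only, assuming NumPy) compares the empirical characteristic function of simulated sums Y with \varphi_Z on a grid of t values; the discrepancy is of the order of the Monte Carlo error.

import numpy as np

rng = np.random.default_rng(0)
p, n_samples = 0.3, 200_000

# Simulate Y = X_1 + X_2 with X_1, X_2 independent Bernoulli(p)
x = rng.binomial(1, p, size=(n_samples, 2))
y = x.sum(axis=1)

ts = np.linspace(-3, 3, 7)
phi_y_hat = np.array([np.mean(np.exp(1j * t * y)) for t in ts])   # empirical E[e^{itY}]
phi_z = (1 - p + p * np.exp(1j * ts)) ** 2                        # (1 - p + p e^{it})^2

print(np.max(np.abs(phi_y_hat - phi_z)))        # small (Monte Carlo error)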

See also

  List of convolutions of probability distributions

References

  1. Susan Holmes (1998). Sums of Random Variables: Statistics 116. Stanford. http://statweb.stanford.edu/~susan/courses/s116/node114.html