Skellam distribution

Skellam
Probability mass function: examples of the probability mass function for the Skellam distribution. The horizontal axis is the index k. (The function is defined only at integer values of k; the connecting lines do not indicate continuity.)

Parameters     \mu_1\ge 0,~~\mu_2\ge 0
Support        k \in \{\ldots, -2,-1,0,1,2,\ldots\}
pmf            e^{-(\mu_1+\mu_2)}\left(\frac{\mu_1}{\mu_2}\right)^{k/2} I_{k}(2\sqrt{\mu_1\mu_2})
Mean           \mu_1-\mu_2
Median         N/A
Variance       \mu_1+\mu_2
Skewness       \frac{\mu_1-\mu_2}{(\mu_1+\mu_2)^{3/2}}
Ex. kurtosis   1/(\mu_1+\mu_2)
MGF            e^{-(\mu_1+\mu_2)+\mu_1 e^t+\mu_2 e^{-t}}
CF             e^{-(\mu_1+\mu_2)+\mu_1 e^{it}+\mu_2 e^{-it}}

The Skellam distribution is the discrete probability distribution of the difference N_1-N_2 of two statistically independent random variables N_1 and N_2, each Poisson-distributed with respective expected values \mu_1 and \mu_2. It is useful in describing the statistics of the difference of two images with simple photon noise, as well as describing the point spread distribution in sports where all scored points are equal, such as baseball, hockey, and soccer.

The distribution is also applicable to a special case of the difference of dependent Poisson random variables, but only in the obvious case where the two variables share a common additive random contribution that is cancelled by the differencing; see Karlis & Ntzoufras (2003) for details and an application.

The probability mass function for the Skellam distribution for a difference K=N_1-N_2 between two independent Poisson-distributed random variables with means \mu_1 and \mu_2 is given by:


  p(k;\mu_1,\mu_2) = \Pr\{K=k\} = e^{-(\mu_1+\mu_2)}
  \left({\mu_1\over\mu_2}\right)^{k/2}I_{k}(2\sqrt{\mu_1\mu_2})

where I_k(z) is the modified Bessel function of the first kind. Note that, since k is an integer, we have I_k(z)=I_{|k|}(z).
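As a concrete check, the formula can be evaluated numerically. The following sketch (Python, assuming NumPy and SciPy are available; skellam_pmf is a hypothetical helper name and the parameter values are arbitrary illustrations) computes the pmf directly from the Bessel-function expression above and compares it with SciPy's built-in scipy.stats.skellam.

  import numpy as np
  from scipy.special import iv          # modified Bessel function of the first kind, I_v(z)
  from scipy.stats import skellam

  def skellam_pmf(k, mu1, mu2):
      # p(k; mu1, mu2) = exp(-(mu1+mu2)) * (mu1/mu2)^(k/2) * I_|k|(2*sqrt(mu1*mu2))
      k = np.asarray(k)
      return (np.exp(-(mu1 + mu2))
              * (mu1 / mu2) ** (k / 2.0)
              * iv(np.abs(k), 2.0 * np.sqrt(mu1 * mu2)))

  mu1, mu2 = 3.0, 1.5                   # arbitrary example means
  ks = np.arange(-10, 11)
  print(np.allclose(skellam_pmf(ks, mu1, mu2), skellam.pmf(ks, mu1, mu2)))  # True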

Derivation

Note that the probability mass function of a Poisson-distributed random variable with mean μ is given by


 p(k;\mu)={\mu^k\over k!}e^{-\mu}\,

for k \ge 0 (and zero otherwise). The Skellam probability mass function for the difference of two independent counts K=N_1-N_2 is the cross-correlation of two Poisson distributions (Skellam, 1946):


  p(k;\mu_1,\mu_2)
  =\sum_{n=-\infty}^\infty
  \!p(k\!+\!n;\mu_1)p(n;\mu_2)

  =e^{-(\mu_1+\mu_2)}\sum_{n=\max(0,-k)}^\infty
  {{\mu_1^{k+n}\mu_2^n}\over{n!(k+n)!}}

Since the Poisson distribution is zero for negative values of the count (p(N<0;\mu)=0), the second sum is taken only over those terms where n \ge 0 and n+k \ge 0. It can be shown that the above sum implies that

\frac{p(k;\mu_1,\mu_2)}{p(-k;\mu_1,\mu_2)}=\left(\frac{\mu_1}{\mu_2}\right)^k

so that:


  p(k;\mu_1,\mu_2)= e^{-(\mu_1+\mu_2)}
  \left({\mu_1\over\mu_2}\right)^{k/2}I_{|k|}(2\sqrt{\mu_1\mu_2})

where I_{|k|}(z) is the modified Bessel function of the first kind. The special case for \mu_1=\mu_2(=\mu) is given by Irwin (1937):


  p\left(k;\mu,\mu\right) = e^{-2\mu}I_{|k|}(2\mu).

Note also that, using the limiting form of the modified Bessel function for small arguments, I_{|k|}(z)\sim (z/2)^{|k|}/|k|! as z\to 0, we can recover the Poisson distribution as a special case of the Skellam distribution for \mu_2=0.
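The derivation above can also be checked numerically: the truncated convolution sum converges quickly and agrees with the closed-form Bessel expression. The sketch below (Python with SciPy assumed; skellam_pmf_by_convolution is a hypothetical helper and the parameter values are arbitrary) makes that comparison.

  import math
  import numpy as np
  from scipy.stats import skellam

  def skellam_pmf_by_convolution(k, mu1, mu2, terms=50):
      # Truncated version of sum_{n >= max(0,-k)} mu1^(k+n) mu2^n / (n! (k+n)!);
      # the terms fall off factorially, so a modest truncation suffices.
      total = 0.0
      for n in range(max(0, -k), max(0, -k) + terms):
          total += mu1 ** (k + n) * mu2 ** n / (math.factorial(n) * math.factorial(k + n))
      return math.exp(-(mu1 + mu2)) * total

  mu1, mu2 = 3.0, 1.5                   # arbitrary example means
  for k in (-4, -1, 0, 2, 5):
      print(k, np.isclose(skellam_pmf_by_convolution(k, mu1, mu2),
                          skellam.pmf(k, mu1, mu2)))   # True for each k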

Properties

As it is a discrete probability function, the Skellam probability mass function is normalized:


  \sum_{k=-\infty}^\infty p(k;\mu_1,\mu_2)=1
  .

We know that the probability generating function (pgf) for a Poisson distribution is:


  G\left(t;\mu\right)= e^{\mu(t-1)}
  .

It follows that the pgf, G(t;\mu_1,\mu_2), for a Skellam probability mass function will be:

G(t;\mu_1,\mu_2) = \sum_{k=-\infty}^\infty p(k;\mu_1,\mu_2)t^k
= G\left(t;\mu_1\right)G\left(1/t;\mu_2\right)\,
= e^{-(\mu_1+\mu_2)+\mu_1 t+\mu_2/t}.

Notice that the form of the probability generating function implies that the distribution of the sum or the difference of any number of independent Skellam-distributed variables is again Skellam-distributed. It is sometimes claimed that any linear combination of two Skellam-distributed variables is again Skellam-distributed, but this is clearly not true, since any multiplier other than \pm 1 would change the support of the distribution and alter the pattern of moments in a way that no Skellam distribution can satisfy.
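A quick Monte Carlo sanity check of this closure property is sketched below (Python with SciPy assumed; the sample size and parameter values are arbitrary): the sum of two independent Skellam variables with parameters (a_1,a_2) and (b_1,b_2) should have an empirical pmf close to that of a Skellam variable with parameters (a_1+b_1, a_2+b_2).

  import numpy as np
  from scipy.stats import skellam

  rng = np.random.default_rng(0)
  a1, a2, b1, b2 = 2.0, 1.0, 0.5, 3.0   # arbitrary example parameters
  n = 200_000
  s = (skellam.rvs(a1, a2, size=n, random_state=rng)
       + skellam.rvs(b1, b2, size=n, random_state=rng))

  ks = np.arange(-10, 11)
  empirical = np.array([(s == k).mean() for k in ks])
  exact = skellam.pmf(ks, a1 + b1, a2 + b2)
  print(np.max(np.abs(empirical - exact)))   # small; limited only by Monte Carlo noise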

The moment-generating function is given by:

M\left(t;\mu_1,\mu_2\right) = G(e^t;\mu_1,\mu_2)
 = \sum_{k=0}^\infty { t^k \over k!}\,m_k

which yields the raw moments m_k. Define:

\Delta\ \stackrel{\mathrm{def}}{=}\  \mu_1-\mu_2\,
\mu\ \stackrel{\mathrm{def}}{=}\   (\mu_1+\mu_2)/2.\,

Then the raw moments m_k are

m_1=\left.\Delta\right.\,
m_2=\left.2\mu+\Delta^2\right.\,
m_3=\left.\Delta(1+6\mu+\Delta^2)\right.\,

The central moments M_k are

M_2=\left.2\mu\right.,\,
M_3=\left.\Delta\right.,\,
M_4=\left.2\mu+12\mu^2\right..\,

The mean, variance, skewness, and kurtosis excess are respectively:

\left.\right.E(K)=\Delta\,
\sigma^2=\left.2\mu\right.\,
\gamma_1=\left.\Delta/(2\mu)^{3/2}\right.\,
\gamma_2=\left.1/(2\mu)\right..\,
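These formulas can be cross-checked numerically, for example against SciPy's implementation (a sketch with arbitrary parameter values; scipy.stats.skellam.stats with moments='mvsk' returns the mean, variance, skewness, and excess kurtosis).

  import numpy as np
  from scipy.stats import skellam

  mu1, mu2 = 4.0, 1.0                          # arbitrary example means
  delta, mu = mu1 - mu2, (mu1 + mu2) / 2.0

  mean, var, skew, exkurt = skellam.stats(mu1, mu2, moments='mvsk')
  print(np.allclose([mean, var, skew, exkurt],
                    [delta, 2 * mu, delta / (2 * mu) ** 1.5, 1 / (2 * mu)]))  # True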

The cumulant-generating function is given by:


  K(t;\mu_1,\mu_2)\ \stackrel{\mathrm{def}}{=}\   \ln(M(t;\mu_1,\mu_2))
  = \sum_{k=0}^\infty { t^k \over k!}\,\kappa_k

which yields the cumulants:

\kappa_{2k}=\left.2\mu\right. \quad (k\ge 1)
\kappa_{2k+1}=\left.\Delta\right. \quad (k\ge 0) .

For the special case when μ1 = μ2, an asymptotic expansion of the modified Bessel function of the first kind yields for large μ:


  p(k;\mu,\mu)\sim
  {1\over\sqrt{4\pi\mu}}\left[1+\sum_{n=1}^\infty
  (-1)^n{\{4k^2-1^2\}\{4k^2-3^2\}\cdots\{4k^2-(2n-1)^2\}
  \over n!\,2^{3n}\,(2\mu)^n}\right].

(Abramowitz & Stegun 1972, p. 377). Also, for this special case, when k is large and of the order of the square root of 2μ, the distribution tends to a normal distribution:


  p(k;\mu,\mu)\sim
  {e^{-k^2/4\mu}\over\sqrt{4\pi\mu}}.

These special results can easily be extended to the more general case of different means.
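The quality of the Gaussian limit for the equal-means case can be illustrated numerically (a sketch with an arbitrary, reasonably large mu; SciPy assumed).

  import numpy as np
  from scipy.stats import skellam

  mu = 100.0                                    # arbitrary large mean
  ks = np.arange(-30, 31)
  exact = skellam.pmf(ks, mu, mu)
  approx = np.exp(-ks**2 / (4 * mu)) / np.sqrt(4 * np.pi * mu)
  print(np.max(np.abs(exact - approx)))         # small for large mu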

The following recurrence relation holds. Let P(k) = p(k; \mu_1, \mu_2) be the probability mass function for a Skellam-distributed random variable with parameters \mu_1 and \mu_2. Then

\left\{\begin{array}{l}
-\mu _1 P(k)+\mu _2 P(k+2)+(k+1) P(k+1)=0, \\[10pt]
P(0)=e^{-\mu _1-\mu _2}
   \, _0\tilde{F}_1\left(;1;\mu _1 \mu _2\right), \\[10pt]
P(1)=e^{-\mu _1-\mu _2}
   \mu _1 \, _0\tilde{F}_1\left(;2;\mu _1 \mu _2\right)
\end{array}\right\}
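The recurrence can be verified directly against a numerical pmf; the following sketch (arbitrary parameter values, SciPy assumed) checks that the residual of the three-term relation vanishes to floating-point accuracy.

  import numpy as np
  from scipy.stats import skellam

  mu1, mu2 = 2.5, 1.2                           # arbitrary example means
  P = lambda k: skellam.pmf(k, mu1, mu2)
  ks = np.arange(-6, 7)
  residual = -mu1 * P(ks) + mu2 * P(ks + 2) + (ks + 1) * P(ks + 1)
  print(np.max(np.abs(residual)))               # essentially zero (rounding error)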

Bounds on weight above zero

If X \sim \operatorname{Skellam}(\mu_1, \mu_2), with \mu_1 < \mu_2, then


\frac{\exp(-(\sqrt{\mu_1} -\sqrt{\mu_2})^2  )}{(\mu_1 + \mu_2)^2} - \frac{e^{-(\mu_1 + \mu_2)}}{2\sqrt{\mu_1 \mu_2}} - \frac{e^{-(\mu_1 + \mu_2)}}{4\mu_1 \mu_2} \leq \Pr\{X  \geq 0\} \leq \exp (- (\sqrt{\mu_1} -\sqrt{\mu_2})^2)

Details can be found in the Poisson distribution article, in the section on Poisson races.
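A numerical spot check of these bounds is sketched below (arbitrary \mu_1 < \mu_2, SciPy assumed), using the fact that for an integer-valued variable \Pr\{X \ge 0\} equals the survival function evaluated at -1.

  import numpy as np
  from scipy.stats import skellam

  mu1, mu2 = 1.0, 4.0                           # arbitrary example with mu1 < mu2
  p_nonneg = skellam.sf(-1, mu1, mu2)           # P(X >= 0)

  upper = np.exp(-(np.sqrt(mu1) - np.sqrt(mu2)) ** 2)
  lower = (upper / (mu1 + mu2) ** 2
           - np.exp(-(mu1 + mu2)) / (2 * np.sqrt(mu1 * mu2))
           - np.exp(-(mu1 + mu2)) / (4 * mu1 * mu2))
  print(lower <= p_nonneg <= upper)             # True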
