Bernoulli distribution
| Parameters | $0 \le p \le 1$, $q = 1 - p$ | 
|---|---|
| Support | $k \in \{0, 1\}$ | 
| pmf | $\begin{cases} q = 1-p & \text{if } k = 0 \\ p & \text{if } k = 1 \end{cases}$ | 
| CDF | $\begin{cases} 0 & \text{if } k < 0 \\ 1-p & \text{if } 0 \le k < 1 \\ 1 & \text{if } k \ge 1 \end{cases}$ | 
| Mean | $p$ | 
| Median | $\begin{cases} 0 & \text{if } p < 1/2 \\ [0, 1] & \text{if } p = 1/2 \\ 1 & \text{if } p > 1/2 \end{cases}$ | 
| Mode | $\begin{cases} 0 & \text{if } p < 1/2 \\ 0, 1 & \text{if } p = 1/2 \\ 1 & \text{if } p > 1/2 \end{cases}$ | 
| Variance | $p(1-p) = pq$ | 
| Skewness | $\frac{1-2p}{\sqrt{pq}}$ | 
| Ex. kurtosis | $\frac{1-6pq}{pq}$ | 
| Entropy | $-q \ln q - p \ln p$ | 
| MGF | $q + p e^t$ | 
| CF | $q + p e^{it}$ | 
| PGF | $q + pz$ | 
| Fisher information | $\frac{1}{p(1-p)}$ | 
In probability theory and statistics, the Bernoulli distribution, named after Swiss scientist Jacob Bernoulli,[1] is the probability distribution of a random variable which takes the value 1 with success probability $p$ and the value 0 with failure probability $q = 1 - p$. It can be used to represent a coin toss where 1 and 0 would represent "head" and "tail" (or vice versa), respectively. In particular, unfair coins would have $p \neq 1/2$.
The Bernoulli distribution is a special case of the two-point distribution, for which the two possible outcomes need not be 0 and 1. It is also a special case of the binomial distribution; the Bernoulli distribution is a binomial distribution with $n = 1$.
Properties
If $X$ is a random variable with this distribution, we have:

$$\Pr(X=1) = p = 1 - \Pr(X=0) = 1 - q.$$

The probability mass function $f$ of this distribution, over possible outcomes $k$, is

$$f(k;p) = \begin{cases} p & \text{if } k = 1, \\ 1-p & \text{if } k = 0. \end{cases}$$

This can also be expressed as

$$f(k;p) = p^k (1-p)^{1-k} \quad \text{for } k \in \{0, 1\}.$$
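As an illustration of the closed form above (not part of the standard presentation), here is a minimal Python sketch; the function name `bernoulli_pmf` is just a placeholder:

```python
def bernoulli_pmf(k, p):
    """Bernoulli(p) probability mass at k, using f(k;p) = p**k * (1-p)**(1-k)."""
    if k not in (0, 1):
        return 0.0
    return p**k * (1 - p)**(1 - k)

print(bernoulli_pmf(1, 0.3))  # 0.3  (the case k = 1)
print(bernoulli_pmf(0, 0.3))  # 0.7  (the case k = 0)
```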
The Bernoulli distribution is a special case of the binomial distribution with $n = 1$.[2]
The kurtosis goes to infinity for high and low values of $p$, but for $p = 1/2$ the two-point distributions including the Bernoulli distribution have a lower excess kurtosis than any other probability distribution, namely −2.
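For a concrete check, the excess kurtosis of a Bernoulli variable is $\frac{1-6pq}{pq}$ (as listed in the summary table above); a short sketch evaluating it at a few values of $p$:

```python
def bernoulli_excess_kurtosis(p):
    """Excess kurtosis (1 - 6*p*q) / (p*q) of a Bernoulli(p) variable."""
    q = 1 - p
    return (1 - 6 * p * q) / (p * q)

for p in (0.5, 0.1, 0.01):
    print(p, bernoulli_excess_kurtosis(p))
# p = 0.5 gives exactly -2; the value grows without bound as p approaches 0 or 1.
```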
The Bernoulli distributions for $0 \le p \le 1$ form an exponential family.
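As a brief sketch of why this holds (for $0 < p < 1$), the pmf can be rewritten in canonical exponential-family form with natural parameter $\ln\frac{p}{1-p}$:

$$f(k;p) = p^k (1-p)^{1-k} = \exp\!\left( k \ln\frac{p}{1-p} + \ln(1-p) \right), \qquad k \in \{0, 1\},$$

which has the form $h(k)\exp(\eta(p)\,T(k) - A(p))$ with sufficient statistic $T(k) = k$.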
The maximum likelihood estimator of $p$ based on a random sample is the sample mean.
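A minimal simulation sketch of this (the helper name `estimate_p` and the seed are arbitrary illustrative choices): the estimate is simply the fraction of ones observed.

```python
import random

def estimate_p(sample):
    """Maximum likelihood estimate of p: the sample mean of 0/1 observations."""
    return sum(sample) / len(sample)

random.seed(0)
true_p = 0.3
sample = [1 if random.random() < true_p else 0 for _ in range(10_000)]
print(estimate_p(sample))  # close to 0.3
```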
Mean
The expected value of a Bernoulli random variable $X$ is

$$\operatorname{E}[X] = p.$$

This is due to the fact that for a Bernoulli distributed random variable $X$ with $\Pr(X=1) = p$ and $\Pr(X=0) = q$ we find

$$\operatorname{E}[X] = \Pr(X=1)\cdot 1 + \Pr(X=0)\cdot 0 = p \cdot 1 + q \cdot 0 = p.$$
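The same computation, written out numerically for an illustrative $p = 0.3$:

```python
p = 0.3
q = 1 - p
# E[X] directly from the definition: sum over outcomes of value * probability.
mean = 1 * p + 0 * q
print(mean)  # 0.3, i.e. p
```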
Variance
The variance of a Bernoulli distributed $X$ is

$$\operatorname{Var}[X] = pq = p(1-p).$$

We first find

$$\operatorname{E}[X^2] = \Pr(X=1)\cdot 1^2 + \Pr(X=0)\cdot 0^2 = p \cdot 1^2 + q \cdot 0^2 = p.$$

From this follows

$$\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = p - p^2 = p(1-p) = pq.$$
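Numerically, with the same illustrative $p = 0.3$:

```python
p = 0.3
q = 1 - p
# Second moment from the definition, then Var[X] = E[X^2] - E[X]^2.
second_moment = 1**2 * p + 0**2 * q   # = p
first_moment = 1 * p + 0 * q          # = p
print(second_moment - first_moment**2, p * q)  # both p*q = 0.21 (up to rounding)
```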
Skewness
The skewness is $\frac{q-p}{\sqrt{pq}} = \frac{1-2p}{\sqrt{p(1-p)}}$. When we take the standardized Bernoulli distributed random variable $\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}$ we find that this random variable attains $\frac{q}{\sqrt{pq}}$ with probability $p$ and attains $-\frac{p}{\sqrt{pq}}$ with probability $q$. Thus we get

$$\begin{aligned}
\gamma_1 &= \operatorname{E} \left[\left(\frac{X-\operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^3\right] \\
&= p \cdot \left(\frac{q}{\sqrt{pq}}\right)^3 + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^3 \\
&= \frac{1}{\sqrt{pq}^3} \left(pq^3 - qp^3\right) \\
&= \frac{pq}{\sqrt{pq}^3} (q-p) \\
&= \frac{q-p}{\sqrt{pq}}.
\end{aligned}$$
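The two attained values and their probabilities make this easy to verify numerically; a short sketch with an illustrative $p = 0.3$:

```python
import math

p = 0.3
q = 1 - p
sd = math.sqrt(p * q)
# Third standardized moment from the two attainable values of (X - E[X]) / sd.
skewness = p * (q / sd) ** 3 + q * (-p / sd) ** 3
print(skewness, (q - p) / sd)  # both approximately 0.873
```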
Related distributions
- If $X_1, \dots, X_n$ are independent, identically distributed (i.i.d.) random variables, all Bernoulli distributed with success probability $p$, then $Y = \sum_{k=1}^n X_k \sim \mathrm{B}(n, p)$ (binomial distribution). The Bernoulli distribution is simply $\mathrm{B}(1, p)$; a numerical sketch of this relationship follows the list.
- The categorical distribution is the generalization of the Bernoulli distribution for variables with any constant number of discrete values.
- The Beta distribution is the conjugate prior of the Bernoulli distribution.
- The geometric distribution models the number of independent and identical Bernoulli trials needed to get one success.
- If Y ~ Bernoulli(0.5), then (2Y-1) has a Rademacher distribution.
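The sketch below compares the empirical distribution of a sum of i.i.d. Bernoulli draws with the binomial pmf; the sample size, seed and parameter values are arbitrary choices for illustration.

```python
import random
from collections import Counter
from math import comb

random.seed(1)
n, p, trials = 5, 0.4, 100_000

# Empirical distribution of the sum of n i.i.d. Bernoulli(p) draws.
counts = Counter(
    sum(1 if random.random() < p else 0 for _ in range(n))
    for _ in range(trials)
)

for k in range(n + 1):
    empirical = counts[k] / trials
    binomial = comb(n, k) * p**k * (1 - p)**(n - k)
    print(k, round(empirical, 4), round(binomial, 4))
```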
Notes
- [1] James Victor Uspensky: Introduction to Mathematical Probability, McGraw-Hill, New York 1937, page 45
- [2] McCullagh and Nelder (1989), Section 4.2.2.
References
- McCullagh, Peter; Nelder, John (1989). Generalized Linear Models, Second Edition. Boca Raton: Chapman and Hall/CRC. ISBN 0-412-31760-5.
- Johnson, N.L., Kotz, S., Kemp A. (1993) Univariate Discrete Distributions (2nd Edition). Wiley. ISBN 0-471-54897-9
External links
- Wikimedia Commons has media related to Bernoulli distribution.
- Hazewinkel, Michiel, ed. (2001), "Binomial distribution", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Weisstein, Eric W., "Bernoulli Distribution", MathWorld.
- Interactive graphic: Univariate Distribution Relationships