Bernoulli trial

Graphs of the probability P of not observing independent events, each of probability p, after n Bernoulli trials, plotted against np for various p. Three examples are shown:
Blue arrow: Throwing a 6-sided die 6 times gives a 33.5% chance that a 6 (or any other given number) never turns up; it can be observed that as n increases, the probability of a 1/n-chance event never appearing after n tries rapidly converges to 1/e.
Grey arrow: To get a 50-50 chance of throwing a Yahtzee (5 cubic dice all showing the same number) requires 0.693 × 1296 ≈ 898 throws.
Green arrow: Drawing a card from a deck of playing cards without jokers 100 (1.92 × 52) times with replacement gives an 85.7% chance of drawing the ace of spades at least once.

In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted.[1] It is named after Jacob Bernoulli, a Swiss mathematician of the 17th century.[2]

The mathematical formalization of the Bernoulli trial is known as the Bernoulli process. This article offers an elementary introduction to the concept, whereas the article on the Bernoulli process offers a more advanced treatment.

Since a Bernoulli trial has only two possible outcomes, it can be framed as a "yes or no" question. For example:

Did the coin toss land heads?
Did the die roll show a six?

Therefore, success and failure are merely labels for the two outcomes, and should not be construed literally. The term "success" in this sense consists in the result meeting specified conditions, not in any moral judgement. More generally, given any probability space, for any event (set of outcomes), one can define a Bernoulli trial corresponding to whether the event occurred or not (event or complementary event). Examples of Bernoulli trials include:

Flipping a coin, where "heads" conventionally denotes success and "tails" denotes failure.
Rolling a die, where, for example, a six is "success" and everything else is "failure".
Asking a randomly chosen voter whether they will vote "yes" in an upcoming referendum.

Definition

Independent repeated trials of an experiment with exactly two possible outcomes are called Bernoulli trials. Call one of the outcomes "success" and the other outcome "failure". Let p be the probability of success in a Bernoulli trial, and q be the probability of failure. Then the probability of success and the probability of failure sum to unity (one), since these are complementary events: "success" and "failure" are mutually exclusive and exhaustive. More comprehensively, one has the following relations:


\begin{align}
p &= 1 - q\\
q &= 1 - p\\
p + q &= 1
\end{align}
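
As a concrete illustration, a single Bernoulli trial can be simulated by comparing a uniform random number against p. The following Python sketch (the function name bernoulli_trial is illustrative, not from any particular library) also checks empirically that the frequencies of success and failure sum to one:

import random

def bernoulli_trial(p):
    """Return 1 ("success") with probability p, else 0 ("failure")."""
    return 1 if random.random() < p else 0

p = 0.3
results = [bernoulli_trial(p) for _ in range(100_000)]
successes = sum(results)
failures = len(results) - successes
# The empirical frequencies approximate p and q = 1 - p, and sum to 1.
print(successes / len(results), failures / len(results))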

Alternatively, these can be stated in terms of odds: given probability p of success and q of failure, the odds for are p:q and the odds against are q:p. These can also be expressed as numbers, by dividing, yielding the odds for o_f and the odds against o_a:


\begin{align}
o_f &= p/q = p/(1-p) = (1-q)/q\\
o_a &= q/p = (1-p)/p = q/(1-q)
\end{align}

These are multiplicative inverses, so they multiply to 1, with the following relations:


\begin{align}
o_f &= 1/o_a\\
o_a &= 1/o_f\\
o_f \cdot o_a &= 1
\end{align}
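
A minimal sketch of these conversions in Python, assuming 0 < p < 1 so that both ratios are defined (the helper names are illustrative):

def odds_for(p):
    """o_f = p / q = p / (1 - p)."""
    return p / (1 - p)

def odds_against(p):
    """o_a = q / p = (1 - p) / p."""
    return (1 - p) / p

p = 0.2
o_f, o_a = odds_for(p), odds_against(p)
print(o_f, o_a, o_f * o_a)  # 0.25 4.0 1.0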

In the case that a Bernoulli trial represents an event from finitely many equally likely outcomes, where S of the outcomes are successes and F of the outcomes are failures, the odds for are S:F and the odds against are F:S. This yields the following formulas for probability and odds:


\begin{align}
p &= S/(S+F)\\
q &= F/(S+F)\\
o_f &= S/F\\
o_a &= F/S
\end{align}

Note that here the odds are computed by dividing counts of outcomes rather than probabilities; the values are the same, since the two ratios differ only by multiplying both terms by the same constant factor.
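
For example, rolling a fair six-sided die with "six" counted as success gives S = 1 and F = 5. A short sketch of this example using exact rational arithmetic:

from fractions import Fraction

S, F = 1, 5             # one "success" face, five "failure" faces
p = Fraction(S, S + F)  # 1/6
q = Fraction(F, S + F)  # 5/6
o_f = Fraction(S, F)    # 1/5: odds for are 1:5
o_a = Fraction(F, S)    # 5:   odds against are 5:1
# Dividing counts gives the same ratios as dividing probabilities.
assert o_f == p / q and o_a == q / p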

Random variables describing Bernoulli trials are often encoded using the convention that 1 = "success", 0 = "failure".

Closely related to a Bernoulli trial is a binomial experiment, which consists of a fixed number n of statistically independent Bernoulli trials, each with probability of success p, and counts the number of successes. A random variable corresponding to a binomial experiment is denoted by B(n, p), and is said to have a binomial distribution. The probability of exactly k successes in the experiment B(n, p) is given by:

P(k) = {n \choose k} p^k q^{n-k},

where {n \choose k} is a binomial coefficient.
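
This probability mass function is straightforward to compute directly; here is a brief Python sketch using the standard-library math.comb (available since Python 3.8):

from math import comb

def binomial_pmf(k, n, p):
    """P(k) = C(n, k) * p**k * q**(n - k), with q = 1 - p."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

# Probability that a given face never appears in six rolls of a fair die,
# matching the roughly 33.5% figure quoted in the caption above.
print(binomial_pmf(0, 6, 1/6))  # about 0.3349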

Bernoulli trials may also lead to negative binomial distributions (which count the number of successes in a series of repeated Bernoulli trials until a specified number of failures is seen, as sketched below), as well as to various other distributions.
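
A sketch of the counting process just described (the sampler below is illustrative, not a library routine):

import random

def negative_binomial_sample(r, p):
    """Count successes in repeated Bernoulli(p) trials,
    stopping once r failures have occurred."""
    successes = failures = 0
    while failures < r:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return successes

print(negative_binomial_sample(3, 0.5))  # heads seen before the 3rd tail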

When multiple Bernoulli trials are performed, each with its own probability of success, these are sometimes referred to as Poisson trials.[3]

Example: tossing coins

Consider the simple experiment where a fair coin is tossed four times. Find the probability that exactly two of the tosses result in heads.

Solution

For this experiment, let heads be defined as a success and tails as a failure. Because the coin is assumed to be fair, the probability of success is p = \tfrac{1}{2}. Thus the probability of failure, q, is given by

q = 1 - p = 1 - \tfrac{1}{2} = \tfrac{1}{2}.

Using the equation above, the probability of exactly two of the four tosses resulting in heads is given by:

\begin{align}
P(2)
  &= {4 \choose 2} p^2 q^2 \\
  &= 6 \times \left(\tfrac{1}{2}\right)^2 \times \left(\tfrac{1}{2}\right)^2 \\
  &= \dfrac{6}{16} = \dfrac{3}{8}.
\end{align}
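
The result can be checked by brute force, since four tosses of a fair coin have only 2^4 = 16 equally likely outcomes. A small verification sketch in Python:

from itertools import product

# Enumerate all 16 equally likely sequences of four fair-coin tosses
# and count those with exactly two heads.
outcomes = list(product("HT", repeat=4))
favourable = sum(1 for seq in outcomes if seq.count("H") == 2)
print(favourable, len(outcomes), favourable / len(outcomes))  # 6 16 0.375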

References

  1. Papoulis, A. (1984). "Bernoulli Trials". Probability, Random Variables, and Stochastic Processes (2nd ed.). New York: McGraw-Hill. pp. 57–63.
  2. Uspensky, James Victor (1937). Introduction to Mathematical Probability. New York: McGraw-Hill. p. 45.
  3. Motwani, Rajeev; Raghavan, P. (1995). Randomized Algorithms. New York: Cambridge University Press. pp. 67–68.
