Likelihood function
In statistics, a likelihood function (often simply the likelihood) is a function of the parameters of a statistical model given data. Likelihood functions play a key role in statistical inference, especially methods of estimating a parameter from a set of statistics. In informal contexts, "likelihood" is often used as a synonym for "probability." In statistics, a distinction is made depending on the roles of outcomes vs. parameters. Probability is used before data are available to describe possible future outcomes given a fixed value for the parameter (or parameter vector). Likelihood is used after data are available to describe a function of a parameter (or parameter vector) for a given outcome.
Definition
The likelihood of a set of parameter values, θ, given outcomes x, is equal to the probability of those observed outcomes given those parameter values, that is
- $\mathcal{L}(\theta \mid x) = P(x \mid \theta).$
The likelihood function is defined differently for discrete and continuous probability distributions.
Discrete probability distribution
Let X be a random variable with a discrete probability distribution p depending on a parameter θ. Then the function
- $\mathcal{L}(\theta \mid x) = p_\theta(x) = P_\theta(X = x),$
considered as a function of θ, is called the likelihood function (of θ, given the outcome x of the random variable X). Sometimes the probability of the value x of X for the parameter value θ is written as $P(X = x \mid \theta)$, or as $P(X = x; \theta)$ to emphasize that it differs from $\mathcal{L}(\theta \mid x)$, which is not a conditional probability, because θ is a parameter and not a random variable.
Continuous probability distribution
Let X be a random variable following an absolutely continuous probability distribution with density function f depending on a parameter θ. Then the function
- $\mathcal{L}(\theta \mid x) = f_\theta(x),$
considered as a function of θ, is called the likelihood function (of θ, given the outcome x of X). Sometimes the density function for the value x of X for the parameter value θ is written as $f(x \mid \theta)$; this should not be confused with $\mathcal{L}(\theta \mid x)$, which should not be interpreted as a conditional probability density.
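To make the two cases concrete, the following minimal Python sketch (with hypothetical observations and parameter grids) evaluates a discrete likelihood (Bernoulli) and a continuous likelihood (normal with unit variance) as functions of the parameter θ for a fixed observed outcome x.

```python
import math

# Discrete case: X ~ Bernoulli(theta); the observed outcome is x = 1 (a "success").
# The likelihood L(theta | x = 1) is the probability mass P(X = 1; theta) = theta.
def bernoulli_likelihood(theta, x):
    return theta if x == 1 else 1.0 - theta

# Continuous case: X ~ Normal(theta, 1); the observed outcome is x = 2.0.
# The likelihood L(theta | x) is the density f(x; theta) evaluated at the observed x.
def normal_likelihood(theta, x, sigma=1.0):
    return math.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

p_grid = [i / 10 for i in range(1, 10)]     # hypothetical candidate values 0.1 .. 0.9
mu_grid = [i / 2 for i in range(0, 9)]      # hypothetical candidate means 0.0 .. 4.0
print([round(bernoulli_likelihood(p, 1), 2) for p in p_grid])    # largest at theta = 0.9
print([round(normal_likelihood(mu, 2.0), 3) for mu in mu_grid])  # largest at theta = 2.0
```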
In general
In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a dominating measure. This provides a likelihood function for any probability model with all distributions, whether discrete, absolutely continuous, a mixture or something else. (Likelihoods will be comparable, e.g., for parameter estimation, only if they are Radon–Nikodym derivatives with respect to the same dominating measure.)
Inference
For discussion about making inferences via likelihood functions, see the method of maximum likelihood and likelihood-ratio testing.
Log-likelihood
For many applications, the natural logarithm of the likelihood function, called the log-likelihood, is more convenient to work with. Because the logarithm is a monotonically increasing function, the logarithm of a function achieves its maximum value at the same points as the function itself, and hence the log-likelihood can be used in place of the likelihood in maximum likelihood estimation and related techniques. Finding the maximum of a function often involves taking the derivative of a function and solving for the parameter being maximized, and this is often easier when the function being maximized is a log-likelihood rather than the original likelihood function.
For example, some likelihood functions are for the parameters that explain a collection of statistically independent observations. In such a situation, the likelihood function factors into a product of individual likelihood functions. The logarithm of this product is a sum of individual logarithms, and the derivative of a sum of terms is often easier to compute than the derivative of a product. In addition, several common distributions have likelihood functions that contain products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function.
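A minimal numerical sketch of this point, using a hypothetical exponential sample (the data and the candidate rate are made up for illustration): the log of the product of individual likelihoods equals the sum of the individual log-likelihoods, and the summed form remains usable when the raw product would underflow.

```python
import math

data = [0.8, 1.3, 2.1, 0.4, 1.7]     # hypothetical i.i.d. observations
lam = 0.9                            # hypothetical candidate value of the rate parameter

# Product of individual likelihoods (exponential density lambda * exp(-lambda * x)).
likelihood = math.prod(lam * math.exp(-lam * x) for x in data)

# Sum of individual log-likelihoods: n * log(lambda) - lambda * sum(x).
log_likelihood = len(data) * math.log(lam) - lam * sum(data)

print(likelihood, math.exp(log_likelihood))   # the two agree
# For large samples, `likelihood` underflows to 0.0 while `log_likelihood` stays finite.
```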
Edwards [1] established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another. The support function is then the natural logarithm of the likelihood function. Both terms are used in phylogenetics but were not adopted in a general treatment of the topic of statistical evidence.[2]
Example: the gamma distribution
The gamma distribution has two parameters, α and β. The likelihood function is
- $\mathcal{L}(\alpha, \beta \mid x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha - 1} e^{-\beta x}.$
Finding the maximum likelihood estimate of β for a single observed value x looks rather daunting. Its logarithm is much simpler to work with:
- $\log \mathcal{L}(\alpha, \beta \mid x) = \alpha \log \beta - \log \Gamma(\alpha) + (\alpha - 1) \log x - \beta x.$
Maximizing the log-likelihood first requires taking the partial derivative with respect to β:
- $\frac{\partial}{\partial \beta} \log \mathcal{L}(\alpha, \beta \mid x) = \frac{\alpha}{\beta} - x.$
If there are a number of independent observations x1, ..., xn, then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be a sum of derivatives of each individual log-likelihood:
- $\frac{\partial}{\partial \beta} \log \mathcal{L}(\alpha, \beta \mid x_1, \ldots, x_n) = \sum_{i=1}^{n} \left( \frac{\alpha}{\beta} - x_i \right) = \frac{n\alpha}{\beta} - \sum_{i=1}^{n} x_i.$
To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved for β:
- $\hat\beta = \frac{n\alpha}{\sum_{i=1}^{n} x_i} = \frac{\alpha}{\bar{x}}.$
Here $\hat\beta$ denotes the maximum-likelihood estimate and $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the sample mean of the observations.
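As a numerical check of this derivation (hypothetical data, with the shape parameter α treated as known), the closed-form estimate α/x̄ agrees with a brute-force maximization of the joint log-likelihood:

```python
import math

alpha = 2.0                              # shape parameter, assumed known here
data = [1.2, 0.7, 2.4, 1.9, 0.8, 1.5]    # hypothetical observations
n, xbar = len(data), sum(data) / len(data)

def log_likelihood(beta):
    # Joint gamma log-likelihood: n*alpha*log(beta) - n*log(Gamma(alpha))
    #                             + (alpha - 1)*sum(log x_i) - beta*sum(x_i)
    return (n * alpha * math.log(beta) - n * math.lgamma(alpha)
            + (alpha - 1) * sum(math.log(x) for x in data) - beta * sum(data))

beta_hat = alpha / xbar                   # closed-form maximizer derived above
grid = [i / 1000 for i in range(1, 5000)]
beta_grid = max(grid, key=log_likelihood)
print(beta_hat, beta_grid)                # the grid search agrees with alpha / xbar to ~1e-3
```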
Likelihood function of a parameterized model
Among many applications, we consider here one of broad theoretical and practical importance. Given a parameterized family of probability density functions (or probability mass functions in the case of discrete distributions)
- $x \mapsto f(x \mid \theta),$
where θ is the parameter, the likelihood function is
- $\theta \mapsto f(x \mid \theta),$
written
- $\mathcal{L}(\theta \mid x) = f(x \mid \theta),$
where x is the observed outcome of an experiment. In other words, when f(x | θ) is viewed as a function of x with θ fixed, it is a probability density function, and when viewed as a function of θ with x fixed, it is a likelihood function.
This is not the same as the probability that those parameters are the right ones, given the observed sample. Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous consequences in medicine, engineering or jurisprudence. See prosecutor's fallacy for an example of this.
From a geometric standpoint, if we consider f(x, θ) as a function of two variables, then the family of probability distributions can be viewed as a family of curves parallel to the x-axis, while the family of likelihood functions consists of the orthogonal curves parallel to the θ-axis.
Likelihoods for continuous distributions
The use of the probability density instead of a probability in specifying the likelihood function above is justified as follows. The likelihood that an observation lies in the interval $[x_j, x_j + h]$, where $x_j$ is a specific observed value and $h > 0$ a constant, is given by $\mathcal{L}(\theta \mid x \in [x_j, x_j + h])$. Observe that
- $\arg\max_{\theta} \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \arg\max_{\theta} \tfrac{1}{h} \mathcal{L}(\theta \mid x \in [x_j, x_j + h]),$
since $h$ is positive and constant. Because
- $\tfrac{1}{h} \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \tfrac{1}{h} \Pr(x_j \le x \le x_j + h \mid \theta) = \tfrac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta)\, dx,$
where $f(x \mid \theta)$ is the probability density function of the variable $x$, it follows that
- $\arg\max_{\theta} \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \arg\max_{\theta} \tfrac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta)\, dx.$
The first fundamental theorem of calculus and l'Hôpital's rule together provide that
- $\lim_{h \to 0^{+}} \tfrac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta)\, dx = f(x_j \mid \theta).$
Then
- $\arg\max_{\theta} \mathcal{L}(\theta \mid x_j) = \arg\max_{\theta} \lim_{h \to 0^{+}} \tfrac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta)\, dx = \arg\max_{\theta} f(x_j \mid \theta).$
Therefore,
- $\arg\max_{\theta} \mathcal{L}(\theta \mid x_j) = \arg\max_{\theta} f(x_j \mid \theta),$
and so maximizing the probability density at $x_j$ amounts to maximizing the likelihood of the specific observation $x_j$.
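The limiting step can be verified numerically. The sketch below (standard normal density, hypothetical observed value) shows (1/h)·Pr(x_j ≤ X ≤ x_j + h) approaching f(x_j) as h shrinks:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

x_j = 1.3                                    # hypothetical observed value
for h in (1.0, 0.1, 0.01, 0.001):
    interval_prob = normal_cdf(x_j + h) - normal_cdf(x_j)
    print(h, interval_prob / h)              # converges to the density at x_j
print(normal_pdf(x_j))                       # approximately 0.1714
```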
Likelihoods for mixed continuous–discrete distributions
The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses $p_k(\theta)$ and a density $f(x \mid \theta)$, where the sum of all the p's added to the integral of f is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above. For an observation from the discrete component, the likelihood function is simply
- $\mathcal{L}(\theta \mid x) = p_k(\theta),$
where k is the index of the discrete probability mass corresponding to observation x, because maximizing the probability mass (or probability) at x amounts to maximizing the likelihood of the specific observation.
The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation x, but not with the parameter θ.
Example 1
Let $p_H$ be the probability that a certain coin lands heads up (H) when tossed. So, the probability of getting two heads in two tosses (HH) is $p_H^2$. If $p_H = 0.5$, then the probability of seeing two heads is 0.25:
- $P(\text{HH} \mid p_H = 0.5) = 0.25.$
With this, we can say that the likelihood that $p_H = 0.5$, given the observation HH, is 0.25, that is
- $\mathcal{L}(p_H = 0.5 \mid \text{HH}) = P(\text{HH} \mid p_H = 0.5) = 0.25.$
But this is not the same as saying that the probability that $p_H = 0.5$, given the observation HH, is 0.25. For that, we need concepts from Bayesian inference. In particular, Bayes' theorem says that the posterior probability (density) is proportional to the likelihood times the prior probability. When tossing a physical coin, the probability is zero that $p_H$ is exactly 0.5, because any physical device has imperfections. The edges of any coin will be slightly beveled, and the mass distribution will never be perfectly uniform. This will generate a distribution for $p_H$. Moreover, the features on the coin generate slight imbalances, suggesting that even the average of this distribution will likely not be exactly 0.5. However, it might be hard to find a coin that is demonstrably not fair, i.e., for which $p_H$ is clearly greater than (or less than) 0.5.
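A short sketch of this example: evaluating the likelihood function L(p_H | HH) = p_H² across a few candidate values shows that it equals 0.25 at p_H = 0.5 and is maximized at p_H = 1, which underlines that a likelihood value is not the probability of the parameter value.

```python
def likelihood_HH(p_H):
    # Probability of two heads in two independent tosses, viewed as a function of p_H.
    return p_H ** 2

for p in (0.3, 0.5, 0.8, 1.0):   # hypothetical candidate values of p_H
    print(p, likelihood_HH(p))
# 0.5 -> 0.25, as in the text; the maximum over p_H is at p_H = 1.0 (likelihood 1.0),
# which is the maximum likelihood estimate after observing HH.
```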
Example 2
Consider a jar containing N lottery tickets numbered from 1 through N. If you pick a ticket randomly then you get positive integer n, with probability 1/N if n ≤ N and with probability zero if n > N. This can be written
- $P(n \mid N) = \frac{[n \le N]}{N},$
where the Iverson bracket [n ≤ N] is 1 when n ≤ N and 0 otherwise. When considered a function of n for fixed N this is the probability distribution, but when considered a function of N for fixed n this is a likelihood function. The maximum likelihood estimate for N is N0 = n (by contrast, the unbiased estimate is 2n − 1).
This likelihood function is not a probability distribution, because the total
- $\sum_{N=1}^{\infty} \frac{[n \le N]}{N} = \sum_{N=n}^{\infty} \frac{1}{N}$
is a divergent series (a tail of the harmonic series).
Suppose, however, that you pick two tickets rather than one.
The probability of the outcome {n1, n2}, where n1 < n2, is
- $P(\{n_1, n_2\} \mid N) = \frac{[n_2 \le N]}{\binom{N}{2}} = \frac{2\,[n_2 \le N]}{N(N-1)}.$
When considered a function of N for fixed n2, this is a likelihood function. The maximum likelihood estimate for N is N0 = n2.
This time the total
- $\sum_{N=1}^{\infty} \frac{2\,[n_2 \le N]}{N(N-1)} = \sum_{N=n_2}^{\infty} \frac{2}{N(N-1)} = \frac{2}{n_2 - 1}$
is a convergent series, and so this likelihood function can be normalized into a probability distribution.
If you pick 3 or more tickets, the likelihood function has a well defined mean value, which is larger than the maximum likelihood estimate. If you pick 4 or more tickets, the likelihood function has a well defined standard deviation too.
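A numerical sketch of the two-ticket case (with hypothetical ticket numbers): the likelihood is largest at N = n₂, and its infinite total equals 2/(n₂ − 1), so it can be normalized into a probability distribution as stated.

```python
from math import comb

n1, n2 = 7, 12                       # hypothetical observed ticket numbers, n1 < n2

def likelihood(N):
    # Probability of drawing the unordered pair {n1, n2} from N tickets: 1 / C(N, 2),
    # provided both tickets exist (n2 <= N); zero otherwise.
    return 1 / comb(N, 2) if N >= n2 else 0.0

print(max(range(1, 1000), key=likelihood))        # maximum likelihood estimate: n2 = 12
partial_total = sum(likelihood(N) for N in range(n2, 100000))
print(partial_total, 2 / (n2 - 1))                # both approximately 0.1818: the series converges
```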
With 2 or more tickets, the probability distributions just derived match the results from a Bayesian analysis assuming an improper, uniform prior for N over all positive integers. The use of improper priors is often justified by saying that the information from the data dominates the information from the prior. If only a very few tickets are available, and a precise answer is important, this can justify the work of collecting relevant information from other sources to use as an informative prior.
Relative likelihood
Relative likelihood function
Suppose that the maximum likelihood estimate for θ is $\hat\theta$. Relative plausibilities of other θ values may be found by comparing the likelihood of those other values with the likelihood of $\hat\theta$. The relative likelihood of θ is defined[3][4] as
- $R(\theta) = \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat\theta \mid x)}.$
A 10% likelihood region for θ is
- $\{\theta : R(\theta) \ge 0.10\},$
and more generally, a p% likelihood region for θ is defined[3][4] to be
- $\{\theta : R(\theta) \ge \tfrac{p}{100}\}.$
If θ is a single real parameter, a p% likelihood region will typically comprise an interval of real values. In that case, the region is called a likelihood interval.[3][4][5]
Likelihood intervals can be compared to confidence intervals, especially via likelihood-ratio tests based on Wilks's theorem, which says that, under certain commonly met conditions, twice the log of the likelihood ratio of nested hypotheses is approximately chi-square distributed. Most importantly, the values at which the parameters of the larger model are fixed to produce the smaller model must lie in the interior of the parameter space; none can be on a boundary. In such cases, the approximating chi-square distribution has degrees of freedom equal to the number of parameters of the larger model that are fixed to produce the smaller model.[5] The indexing percentage for likelihood intervals is quite different from that of confidence intervals: if θ is a single real parameter, then under certain conditions, a 14.7% likelihood interval for θ will be the same as a 95% confidence interval.[3]
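As a sketch of the 14.7% correspondence (hypothetical binomial data: 7 successes in 20 trials), cutting the relative likelihood at exp(−3.841/2) ≈ 0.147, where 3.841 is the 95th percentile of the chi-square distribution with one degree of freedom, yields an interval that approximates a 95% confidence interval from Wilks's theorem:

```python
import math

k, n = 7, 20                                # hypothetical data: 7 successes in 20 trials
theta_hat = k / n

def rel_likelihood(theta):
    # Binomial relative likelihood R(theta) = L(theta | data) / L(theta_hat | data).
    log_R = (k * math.log(theta / theta_hat)
             + (n - k) * math.log((1 - theta) / (1 - theta_hat)))
    return math.exp(log_R)

cutoff = math.exp(-3.841 / 2)               # approx 0.1465, i.e. the "14.7%" level
grid = [i / 1000 for i in range(1, 1000)]
interval = [t for t in grid if rel_likelihood(t) >= cutoff]
print(min(interval), max(interval))         # endpoints of an approximately 95% interval for theta
```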
The idea of basing an interval estimate on the relative likelihood goes back to Fisher in 1956 and has been used by many authors since then.[5] A likelihood interval can be used without claiming any particular coverage probability; as such, it differs from confidence intervals.
Relative likelihood of models
The definition of relative likelihood can be generalized to compare different statistical models. This generalization is based on AIC (Akaike information criterion), or sometimes AICc (Akaike Information Criterion with correction).
Suppose that, for some dataset, we have two statistical models, M1 and M2. Also suppose that AIC(M1) ≤ AIC(M2). Then the relative likelihood of M2 with respect to M1 is defined[6] to be
- $\exp\!\left(\frac{\mathrm{AIC}(M_1) - \mathrm{AIC}(M_2)}{2}\right).$
To see that this is a generalization of the earlier definition, suppose that we have some model M with a (possibly multivariate) parameter θ. Then for any θ, set M2 = M(θ), and also set M1 = M($\hat\theta$). The general definition now gives the same result as the earlier definition.
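A minimal sketch with made-up AIC values (the interpretation in the comment follows Burnham & Anderson[6]):

```python
import math

def relative_likelihood(aic_best, aic_other):
    # exp((AIC(M1) - AIC(M2)) / 2), with M1 the model having the smaller AIC.
    return math.exp((aic_best - aic_other) / 2)

aic_m1, aic_m2 = 102.3, 106.1                # hypothetical AIC values, AIC(M1) <= AIC(M2)
print(relative_likelihood(aic_m1, aic_m2))   # approx 0.15: M2 is about 0.15 times as probable
                                             # as M1 to minimize the estimated information loss
```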
Likelihoods that eliminate nuisance parameters
In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters so that a likelihood can be written as a function of only the parameter (or parameters) of interest; the main approaches being marginal, conditional and profile likelihoods.[7][8]
These approaches are useful because standard likelihood methods can become unreliable or fail entirely when there are many nuisance parameters or when the nuisance parameters are high-dimensional. This is particularly true when the nuisance parameters can be considered to be "missing data"; they represent a non-negligible fraction of the number of observations and this fraction does not decrease when the sample size increases. Often these approaches can be used to derive closed-form formulae for statistical tests when direct use of maximum likelihood requires iterative numerical methods. These approaches find application in some specialized topics such as sequential analysis.
Conditional likelihood
Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.
One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test.
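A sketch of this construction with hypothetical counts (8 of 20 exposed and 3 of 20 unexposed subjects experiencing the event): conditioning on all four margins gives Fisher's noncentral hypergeometric distribution as the conditional likelihood of the odds ratio, which can then be maximized directly.

```python
from math import comb

# Hypothetical 2x2 table: exposed group has 8 events out of 20; unexposed has 3 out of 20.
a, m1, m2, k = 8, 20, 20, 11        # a = events among exposed, k = total events

def conditional_likelihood(psi):
    # Fisher's noncentral hypergeometric probability of the observed cell `a`,
    # given all four margins, viewed as a function of the odds ratio psi.
    lo, hi = max(0, k - m2), min(k, m1)
    denom = sum(comb(m1, u) * comb(m2, k - u) * psi ** u for u in range(lo, hi + 1))
    return comb(m1, a) * comb(m2, k - a) * psi ** a / denom

grid = [i / 100 for i in range(1, 2001)]
print(max(grid, key=conditional_likelihood))   # conditional MLE of the odds ratio
# (close to, but not identical with, the sample odds ratio)
```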
Marginal likelihood
Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components.
Profile likelihood
When the likelihood function depends on many parameters, depending on the application, we might be interested in only a subset of them. It is often possible to reduce the number of uninteresting (nuisance) parameters by writing them as functions of the parameters of interest. For example, a nuisance parameter can be replaced by the value that maximizes the likelihood for each given value of the parameters of interest.
This procedure is called concentration of the parameters and results in the concentrated likelihood function, also occasionally known as the maximized likelihood function, but most often called the profile likelihood function. It is then possible (and simpler) to find the values of the parameters of interest that maximize the profile likelihood function, as in maximum likelihood estimation.
For example, consider a regression analysis model with normally distributed errors. The most likely value of the error variance is the variance of the residuals. The residuals depend on all other parameters. Hence the variance parameter can be written as a function of the other parameters.
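A sketch of the concentration step in the simplest setting (normal observations with unknown mean μ and variance σ², made-up data): for each fixed μ, the maximizing variance is the mean squared residual, and substituting it back gives a profile log-likelihood in μ alone.

```python
import math

data = [4.1, 5.3, 3.8, 4.9, 5.6, 4.4]        # hypothetical observations
n = len(data)

def profile_loglik(mu):
    # For fixed mu, the normal log-likelihood is maximized at
    # sigma^2(mu) = mean((x - mu)^2); substitute it back in ("concentration").
    sigma2 = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)

grid = [i / 100 for i in range(300, 700)]     # candidate values of mu in [3.00, 6.99]
mu_hat = max(grid, key=profile_loglik)
print(mu_hat, sum(data) / n)                  # profile maximizer matches the sample mean
                                              # (up to the 0.01 grid resolution)
```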
Unlike conditional and marginal likelihoods, profile likelihood methods can always be used, even when the profile likelihood cannot be written down explicitly. However, the profile likelihood is not a true likelihood, as it is not based directly on a probability distribution, and this leads to some less satisfactory properties. Attempts have been made to improve this, resulting in modified profile likelihood.
The idea of profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood. In the case of parameter estimation in partially observed systems, the profile likelihood can be also used for identifiability analysis.[9] Results from profile likelihood analysis can be incorporated in uncertainty analysis of model predictions.[10]
Partial likelihood
A partial likelihood is a factor component of the likelihood function that isolates the parameters of interest.[11] It is a key component of the proportional hazards model.
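As an illustration (hypothetical event times and covariate values, with no ties or censoring), the Cox partial likelihood multiplies, over the observed events, the subject's risk score divided by the sum of risk scores of all subjects still at risk:

```python
import math

# Hypothetical data: (event time, covariate value); all subjects experience the event.
subjects = [(2.0, 0.5), (3.5, -1.0), (5.1, 1.2), (6.4, 0.0), (8.0, -0.3)]

def partial_loglik(beta):
    total = 0.0
    for t_i, x_i in subjects:
        # Risk set at t_i: subjects whose event time is >= t_i.
        risk_set = [x_j for t_j, x_j in subjects if t_j >= t_i]
        total += beta * x_i - math.log(sum(math.exp(beta * x_j) for x_j in risk_set))
    return total

grid = [i / 100 for i in range(-300, 301)]
print(max(grid, key=partial_loglik))     # maximum partial likelihood estimate of beta
```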
Historical remarks
Likelihood (Greek eikos, Latin verisimilis) captures the idea that something is likely to happen or to have happened. As a formal concept, it appeared in jurisprudence, commerce and scholasticism long before it was given a rigorous mathematical foundation.[12] In English, "likelihood" has been distinguished as being related to, but weaker than, "probability" since its earliest uses. The comparison of hypotheses by evaluating likelihoods has been used for centuries, for example by John Milton in Areopagitica (1644): "when greatest likelihoods are brought that such things are truly and really in those persons to whom they are ascribed".
In the Netherlands Christiaan Huygens used the concept of likelihood in his book "Van rekeningh in spelen van geluck" ("On Reasoning in Games of Chance") in 1657.
In Danish, "likelihood" was used by Thorvald N. Thiele in 1889.[13][14][15]
In English, "likelihood" appears in many writings by Charles Sanders Peirce, where model-based inference (usually abduction but sometimes including induction) is distinguished from statistical procedures based on objective randomization. Peirce's preference for randomization-based inference is discussed in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883)".
"probabilities that are strictly objective and at the same time very great, although they can never be absolutely conclusive, ought nevertheless to influence our preference for one hypothesis over another; but slight probabilities, even if objective, are not worth consideration; and merely subjective likelihoods should be disregarded altogether. For they are merely expressions of our preconceived notions" (7.227 in his Collected Papers).
"But experience must be our chart in economical navigation; and experience shows that likelihoods are treacherous guides. Nothing has caused so much waste of time and means, in all sorts of researchers, as inquirers' becoming so wedded to certain likelihoods as to forget all the other factors of the economy of research; so that, unless it be very solidly grounded, likelihood is far better disregarded, or nearly so; and even when it seems solidly grounded, it should be proceeded upon with a cautious tread, with an eye to other considerations, and recollection of the disasters caused." (Essential Peirce, volume 2, pages 108–109)
Like Thiele, Peirce considers the likelihood for a binomial distribution. Peirce uses the logarithm of the odds-ratio throughout his career. Peirce's propensity for using the log odds is discussed by Stephen Stigler.[16]
In Great Britain, "likelihood" was popularized in mathematical statistics by Ronald Fisher in 1922:[17] "On the mathematical foundations of theoretical statistics". In that paper, Fisher also uses the term "method of maximum likelihood". Fisher argues against inverse probability as a basis for statistical inferences, and instead proposes inferences based on likelihood functions. Fisher's use of "likelihood" fixed the terminology that is used by statisticians throughout the world.
In 2010, the Intergovernmental Panel on Climate Change published the following "likelihood scale" for use in its Fifth Assessment Report:[18]
Term | Likelihood of the outcome
---|---
Virtually certain | 99–100% probability
Very likely | 90–100% probability
Likely | 66–100% probability
About as likely as not | 33–66% probability
Unlikely | 0–33% probability
Very unlikely | 0–10% probability
Exceptionally unlikely | 0–1% probability
Notes
1. Edwards, A. W. F. (1972). Likelihood. Cambridge: Cambridge University Press (expanded edition, 1992, Baltimore: Johns Hopkins University Press). ISBN 0-8018-4443-6.
2. Royall, R. (1997). Statistical Evidence. Boca Raton: Chapman and Hall / CRC.
3. Kalbfleisch, J. G. (1985). Probability and Statistical Inference. Springer (§9.3).
4. Sprott, D. A. (2000). Statistical Inference in Science. Springer (chap. 2).
5. Hudson, D. J. (1971). "Interval Estimation from the Likelihood Function". Journal of the Royal Statistical Society, Series B 33 (2): 256–262.
6. Burnham, K. P. & Anderson, D. R. (2002). Model Selection and Multimodel Inference. Springer (§2.8).
7. Pawitan, Yudi (2001). In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford University Press. ISBN 0-19-850765-8.
8. Wen Hsiang Wei. "Generalized linear model course notes", Chapter 5. Tung Hai University, Taichung, Taiwan. Retrieved 2007-01-23.
9. Raue, A.; Kreutz, C.; Maiwald, T.; Bachmann, J.; Schilling, M.; Klingmüller, U.; Timmer, J. (2009). "Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood". Bioinformatics 25 (15): 1923–1929. doi:10.1093/bioinformatics/btp358. PMID 19505944.
10. Vanlier, J.; Tiemann, C.; Hilbers, P.; van Riel, N. (2012). "An integrated strategy for prediction uncertainty analysis". Bioinformatics 28 (8): 1130–1135. doi:10.1093/bioinformatics/bts088. PMID 22355081.
11. Cox, D. R. (1975). "Partial likelihood". Biometrika 62 (2): 269–276. doi:10.1093/biomet/62.2.269. MR 0400509.
12. Franklin, James (2001). The Science of Conjecture: Evidence and Probability before Pascal. The Johns Hopkins University Press. ISBN 0-8018-7109-3.
13. Hald, Anders (1998). A History of Mathematical Statistics from 1750 to 1930. New York: Wiley. ISBN 0-471-17912-4.
14. Lauritzen, Steffen L. (1999). "Aspects of T. N. Thiele's Contributions to Statistics". Bulletin of the International Statistical Institute 58: 27–30.
15. Lauritzen, Steffen L. (2002). Thiele: Pioneer in Statistics. Oxford University Press. p. 288. ISBN 978-0-19-850972-1.
16. Stigler, Stephen M. (2002). Statistics on the Table: The History of Statistical Concepts and Methods. Harvard University Press. p. 195. ISBN 9780674009790. "[Peirce] found that [his subjects'] estimates varied directly with the log odds that they actually were correct, a remarkable early appearance of the log odds as an experimentally determined measure of certainty."
17. Fisher, R. A. (1922). "On the mathematical foundations of theoretical statistics". Philosophical Transactions of the Royal Society A 222 (594–604): 309–368. doi:10.1098/rsta.1922.0009. JFM 48.1280.02. JSTOR 91208.
18. Mastrandrea, M. D.; Field, C. B.; Stocker, T. F.; Edenhofer, O.; Ebi, K. L.; Frame, D. J.; Held, H.; Kriegler, E.; Mach, K. J.; Matschoss, P. R.; Plattner, G.-K.; Yohe, G. W.; Zwiers, F. W. (2010). Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. Intergovernmental Panel on Climate Change.
References
- Hald, A. (1998), A History of Mathematical Statistics from 1750 to 1930, John Wiley & Sons, ISBN 0-471-17912-4.
- Hald, A. (1999), "On the History of Maximum Likelihood in Relation to Inverse Probability and Least Squares", Statistical Science 14 (2): 214–222, doi:10.1214/ss/1009212248, JSTOR 2676741.
- Pratt, J. W. (May 1976), "F. Y. Edgeworth and R. A. Fisher on the Efficiency of Maximum Likelihood Estimation", The Annals of Statistics 4 (3): 501–514, doi:10.1214/aos/1176343457, JSTOR 2958222.
- Stigler, S. M. (1978), "Francis Ysidro Edgeworth, Statistician", Journal of the Royal Statistical Society, Series A 141 (3): 287–322, doi:10.2307/2344804, JSTOR 2344804.
- Stigler, S. M. (1986), The History of Statistics: The Measurement of Uncertainty before 1900, Harvard University Press, ISBN 0-674-40340-1.
- Stigler, S. M. (1999), Statistics on the Table: The History of Statistical Concepts and Methods, Harvard University Press, ISBN 0-674-83601-4.
External links
- Look up likelihood in Wiktionary, the free dictionary.