Imprecise probability
Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify. The theory thereby aims to represent the available knowledge more accurately. Imprecision is useful in expert elicitation, because:
- People have a limited ability to determine their own subjective probabilities and might find that they can only provide an interval.
- As an interval is compatible with a range of opinions, the analysis ought to be more convincing to a range of different people.
Introduction
Uncertainty is traditionally modelled by a probability distribution, as argued by Kolmogorov,[1] Laplace, de Finetti,[2] Ramsey, Cox, Lindley, and many others. However, this has not been unanimously accepted by scientists, statisticians, and probabilists: it has been argued that some modification or broadening of probability theory is required, because one may not always be able to provide a probability for every event, particularly when only little information or data is available (an early example of such criticism is Boole's critique[3] of Laplace's work), or when we wish to model the probabilities that a group agrees upon rather than those of a single individual.
Perhaps the most straightforward generalization is to replace a single probability specification with an interval specification. Lower and upper probabilities, denoted by $\underline{P}(A)$ and $\overline{P}(A)$, or more generally, lower and upper expectations (previsions),[4][5][6][7] aim to fill this gap:
- the special case with $\underline{P}(A) = \overline{P}(A)$ for all events $A$ provides precise probability, whilst
- $\underline{P}(A) = 0$ and $\overline{P}(A) = 1$ represents no constraint at all on the specification of $P(A)$,
with a flexible continuum in between.
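As a minimal illustration of this continuum, the following Python sketch (with an invented three-outcome example rather than one from the cited literature) computes the lower and upper probabilities of an event as the envelope of a finite set of candidate distributions:

```python
# Lower and upper probabilities of an event, obtained as the envelope of a
# small "credal set" of candidate distributions over the outcomes {a, b, c}.
# The distributions and the event below are illustrative assumptions only.
credal_set = [
    {"a": 0.2, "b": 0.3, "c": 0.5},
    {"a": 0.4, "b": 0.3, "c": 0.3},
    {"a": 0.3, "b": 0.5, "c": 0.2},
]

def lower_prob(event, dists):
    return min(sum(p[x] for x in event) for p in dists)

def upper_prob(event, dists):
    return max(sum(p[x] for x in event) for p in dists)

event = {"a", "b"}
print(lower_prob(event, credal_set), upper_prob(event, credal_set))
# Roughly 0.5 and 0.8 here.  A single candidate distribution makes the two
# values coincide (the precise case); the set of all distributions on
# {a, b, c} gives the vacuous bounds 0 and 1.
```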
Some approaches, summarized under the name nonadditive probabilities,[8] directly use one of these set functions, assuming the other one to be naturally defined such that $\overline{P}(A) = 1 - \underline{P}(A^c)$, with $A^c$ the complement of $A$. Other related concepts understand the corresponding intervals $[\underline{P}(A), \overline{P}(A)]$ for all events as the basic entity.[9][10]
History
The idea of using imprecise probability has a long history. The first formal treatment dates back at least to the middle of the nineteenth century, by George Boole,[3] who aimed to reconcile the theories of logic (which can express complete ignorance) and probability. In the 1920s, in A Treatise on Probability, Keynes[11] formulated and applied an explicit interval-estimate approach to probability.
Since the 1990s, the theory has gathered strong momentum, initiated by comprehensive foundations put forward by Walley,[7] who coined the term imprecise probability, by Kuznetsov,[12] and by Weichselberger,[9][10] who uses the term interval probability. Walley's theory extends the traditional subjective probability theory via buying and selling prices for gambles, whereas Weichselberger's approach generalizes Kolmogorov's axioms without imposing an interpretation.
The consistency conditions that are usually assumed relate imprecise probability assignments to non-empty closed convex sets of probability distributions. Therefore, as a welcome by-product, the theory also provides a formal framework for models used in robust statistics[13] and non-parametric statistics.[14] It also covers concepts based on Choquet integration,[15] and so-called two-monotone and totally monotone capacities,[16] which have become very popular in artificial intelligence under the name (Dempster-Shafer) belief functions.[17][18] Moreover, there is a strong connection[19] to Shafer and Vovk's notion of game-theoretic probability.[20]
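For the special case of belief functions, a brief sketch (again with made-up numbers, not taken from the cited sources) shows how a Dempster-Shafer mass assignment induces a lower probability (belief) and an upper probability (plausibility) for an event:

```python
# Belief (lower probability) and plausibility (upper probability) of an event,
# induced by a Dempster-Shafer mass assignment over subsets of {a, b, c}.
# The masses below are an illustrative assumption.
masses = {
    frozenset({"a"}): 0.3,
    frozenset({"b", "c"}): 0.5,
    frozenset({"a", "b", "c"}): 0.2,  # mass on the whole frame expresses ignorance
}

def belief(event, m):
    # total mass of focal sets entirely contained in the event
    return sum(w for s, w in m.items() if s <= event)

def plausibility(event, m):
    # total mass of focal sets that intersect the event
    return sum(w for s, w in m.items() if s & event)

event = frozenset({"a", "b"})
print(belief(event, masses), plausibility(event, masses))  # 0.3 and 1.0
```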
Mathematical models
The term imprecise probability, although something of a misnomer in that the theory allows a more accurate quantification of uncertainty than precise probability, appears to have become established in the 1990s. It covers a wide range of extensions of the theory of probability, including:
- previsions[2]
- lower and upper probabilities, or interval probabilities[3][9][11]
- belief functions[17][18]
- possibility and necessity measures[21][22][23]
- lower and upper previsions[5][6][7]
- comparative probability orderings[11][24][25][26]
- partial preference orderings
- sets of desirable gambles[5][6][7]
- p-boxes[27] (see the sketch after this list)
- robust Bayes methods[28]
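As a concrete illustration of one entry in this list, a p-box can be stored as a pair of bounding cumulative distribution functions; the sketch below (with arbitrarily chosen bounds, purely for illustration) reads off the interval that such a p-box assigns to P(X ≤ x):

```python
# A p-box stored as a pair of bounding cumulative distribution functions:
# for every x the unknown true CDF F satisfies lower_cdf(x) <= F(x) <= upper_cdf(x).
# The two bounds below are arbitrary illustrative choices.
def lower_cdf(x):
    # pessimistic bound: probability mass arrives "late"
    return min(max((x - 2.0) / 4.0, 0.0), 1.0)

def upper_cdf(x):
    # optimistic bound: probability mass arrives "early"
    return min(max(x / 4.0, 0.0), 1.0)

x = 3.0
# P(X <= 3) is only known to lie in the interval [0.25, 0.75]
print(lower_cdf(x), upper_cdf(x))
```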
Interpretation of imprecise probabilities according to Walley
A unification of many of the above-mentioned imprecise probability theories was proposed by Walley,[7] although it was by no means the first attempt to formalize imprecise probabilities. In terms of probability interpretations, Walley's formulation is based on the subjective variant of the Bayesian interpretation of probability. He defines upper and lower probabilities as special cases of upper and lower previsions within the gambling framework advanced by Bruno de Finetti. In simple terms, a decision maker's lower prevision for a gamble is the highest price at which they are sure they would buy it, and the upper prevision is the lowest price at which they are sure they would sell it (equivalently, buy the opposite of the gamble). If the upper and lower previsions coincide, the common value is the decision maker's fair price for the gamble, the price at which they are willing to take either side of it. The existence of a fair price leads to precise probabilities.
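A minimal numerical sketch of this reading (the gamble and the set of candidate distributions are invented for illustration) computes lower and upper previsions as the lower and upper envelopes of the expectations of a gamble; when the two envelopes coincide, that common value is the fair price:

```python
# Lower and upper previsions of a gamble as the envelope of its expectations
# over a set of candidate distributions.  All numbers are invented for illustration.
gamble = {"win": 10.0, "lose": -5.0}

credal_set = [
    {"win": 0.5, "lose": 0.5},
    {"win": 0.7, "lose": 0.3},
]

def expectation(p, x):
    return sum(p[w] * x[w] for w in x)

lower_prevision = min(expectation(p, gamble) for p in credal_set)  # supremum acceptable buying price
upper_prevision = max(expectation(p, gamble) for p in credal_set)  # infimum acceptable selling price
print(lower_prevision, upper_prevision)  # 2.5 and 5.5; equal values would be a fair price
```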
The allowance for imprecision, that is, a gap between a decision maker's upper and lower previsions, is the primary difference between precise and imprecise probability theories. Such gaps arise naturally in betting markets that happen to be financially illiquid due to asymmetric information.
Bibliography
1. Kolmogorov, A. N. (1950). Foundations of the Theory of Probability. New York: Chelsea Publishing Company.
2. de Finetti, Bruno (1974–1975). Theory of Probability. New York: Wiley.
3. Boole, George (1854). An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities. London: Walton and Maberly.
4. Smith, Cedric A. B. (1961). "Consistency in statistical inference and decision". Journal of the Royal Statistical Society, Series B 23: 1–37.
5. Williams, Peter M. (1975). Notes on Conditional Previsions. School of Math. and Phys. Sci., Univ. of Sussex.
6. Williams, Peter M. (2007). "Notes on conditional previsions". International Journal of Approximate Reasoning 44 (3): 366–383. doi:10.1016/j.ijar.2006.07.019.
7. Walley, Peter (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall. ISBN 0-412-28660-2.
8. Denneberg, Dieter (1994). Non-additive Measure and Integral. Dordrecht: Kluwer.
9. Weichselberger, Kurt (2000). "The theory of interval probability as a unifying concept for uncertainty". International Journal of Approximate Reasoning 24 (2–3): 149–170. doi:10.1016/S0888-613X(00)00032-3.
10. Weichselberger, K. (2001). Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitsrechnung I - Intervallwahrscheinlichkeit als umfassendes Konzept. Heidelberg: Physica.
11. Keynes, John Maynard (1921). A Treatise on Probability. London: Macmillan and Co.
12. Kuznetsov, Vladimir P. (1991). Interval Statistical Models. Moscow: Radio i Svyaz Publ.
13. Ruggeri, Fabrizio; Ríos Insua, D. (eds.) (2000). Robust Bayesian Analysis. New York: Springer.
14. Augustin, T.; Coolen, F. P. A. (2004). "Nonparametric predictive inference and interval probability". Journal of Statistical Planning and Inference 124 (2): 251–272. doi:10.1016/j.jspi.2003.07.003.
15. de Cooman, G.; Troffaes, M. C. M.; Miranda, E. (2008). "n-Monotone exact functionals". Journal of Mathematical Analysis and Applications 347: 143–156. arXiv:0801.1962. doi:10.1016/j.jmaa.2008.05.071.
16. Huber, P. J.; Strassen, V. (1973). "Minimax tests and the Neyman-Pearson lemma for capacities". The Annals of Statistics 1 (2): 251–263. doi:10.1214/aos/1176342363.
17. Dempster, A. P. (1967). "Upper and lower probabilities induced by a multivalued mapping". The Annals of Mathematical Statistics 38 (2): 325–339. doi:10.1214/aoms/1177698950. JSTOR 2239146.
18. Shafer, Glenn (1976). A Mathematical Theory of Evidence. Princeton: Princeton University Press.
19. de Cooman, G.; Hermans, F. (2008). "Imprecise probability trees: Bridging two theories of imprecise probability". Artificial Intelligence 172 (11): 1400–1427. doi:10.1016/j.artint.2008.03.001.
20. Shafer, Glenn; Vovk, Vladimir (2001). Probability and Finance: It's Only a Game!. Wiley.
21. Zadeh, L. A. (1978). "Fuzzy sets as a basis for a theory of possibility". Fuzzy Sets and Systems 1: 3–28. doi:10.1016/0165-0114(78)90029-5.
22. Dubois, Didier; Prade, Henri (1985). Théorie des possibilités. Paris: Masson.
23. Dubois, Didier; Prade, Henri (1988). Possibility Theory: An Approach to Computerized Processing of Uncertainty. New York: Plenum Press.
24. de Finetti, Bruno (1931). "Sul significato soggettivo della probabilità". Fundamenta Mathematicae 17: 298–329.
25. Fine, Terrence L. (1973). Theories of Probability. New York: Academic Press.
26. Fishburn, P. C. (1986). "The axioms of subjective probability". Statistical Science 1 (3): 335–358. doi:10.1214/ss/1177013611.
27. Ferson, Scott; Kreinovich, Vladik; Ginzburg, Lev; Myers, David S.; Sentz, Kari (2003). Constructing Probability Boxes and Dempster-Shafer Structures. SAND2002-4015. Sandia National Laboratories.
28. Berger, James O. (1984). "The robust Bayesian viewpoint". In Kadane, J. B. (ed.), Robustness of Bayesian Analyses. Elsevier Science. pp. 63–144.
External links
- The Society for Imprecise Probability: Theories and Applications
- What is imprecision? Journal of Statistical Theory and Practice (call for papers)
- Open source implementation of a classifier based on Imprecise Probabilities
- The imprecise probability group at IDSIA