Newcomb's paradox

In philosophy and mathematics, Newcomb's paradox, also referred to as Newcomb's problem, is a thought experiment involving a game between two players, one of whom purports to be able to predict the future. Whether the problem actually is a paradox is disputed.

Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. However, it was first analyzed in a philosophy paper, and spread to the philosophical community, by Robert Nozick in 1969,[1] and appeared in Martin Gardner's Scientific American column in 1974.[2] Today it is a much-debated problem in the philosophical branch of decision theory,[3] but has received little attention from the mathematical side.

The problem

A person is playing a game operated by the Predictor, an entity presented as somehow being exceptionally skilled at predicting people's actions. The player of the game is presented with two boxes, one transparent (labeled A) and the other opaque (labeled B). The player is permitted to take the contents of both boxes, or just the opaque box B. Box A contains a visible $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether the player of the game will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.

Nozick also stipulates that if the Predictor predicts that the player will choose randomly, then box B will contain nothing.

By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined. That is, box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes. Before the game begins, the player is aware of all the rules of the game, including the two possible contents of box B, the fact that its contents are based on the Predictor's prediction, and knowledge of the Predictor's infallibility. The only information withheld from the player is what prediction the Predictor made, and thus what the contents of box B are.

Predicted choice | Actual choice | Payout
A and B          | A and B       | $1,000
A and B          | B only        | $0
B only           | A and B       | $1,001,000
B only           | B only        | $1,000,000

The problem is called a paradox because two analyses that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout. The first analysis argues that, regardless of what prediction the Predictor has made, taking both boxes yields more money. That is, if the prediction is for both A and B to be taken, then the player's decision becomes a matter of choosing between $1,000 (by taking A and B) and $0 (by taking just B), in which case taking both boxes is obviously preferable. But, even if the prediction is for the player to take only B, then taking both boxes yields $1,001,000, and taking only B yields only $1,000,000—taking both boxes is still better, regardless of which prediction has been made.

The second analysis suggests that taking only B is the correct option. This analysis argues that we can ignore the possibilities that return $0 and $1,001,000, as they both require that the Predictor has made an incorrect prediction, and the problem states that the Predictor is never wrong. Thus, the choice becomes whether to receive $1,000 (both boxes) or to receive $1,000,000 (only box B)—so taking only box B is better.
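The conflict between the two analyses can be made concrete with a simple expected-value calculation. The sketch below is only illustrative: it assumes the Predictor is correct with some fixed probability p, a parameter the original problem does not specify beyond saying the predictions are "almost certainly" correct.

```python
# Illustrative sketch: expected payouts under an assumed predictor accuracy p.
# The single accuracy parameter p is an assumption; Nozick only says the
# predictions are "almost certainly" correct.

def expected_payouts(p):
    """Return (one_box, two_box) expected payouts for predictor accuracy p."""
    # One-boxing: with probability p the Predictor foresaw it, so box B holds $1,000,000.
    one_box = p * 1_000_000
    # Two-boxing: box A always yields $1,000; with probability (1 - p) the
    # Predictor wrongly expected one-boxing, so box B also holds $1,000,000.
    two_box = 1_000 + (1 - p) * 1_000_000
    return one_box, two_box

for p in (0.5, 0.6, 0.9, 0.99):
    one_box, two_box = expected_payouts(p)
    print(f"p = {p}: one-box ${one_box:,.0f}, two-box ${two_box:,.0f}")

# One-boxing has the higher expectation whenever p > 0.5005, yet the dominance
# argument favours two-boxing for every fixed content of box B.
```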

In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."

A resolution of the paradox must point out an error in one of the two arguments: either the dominance intuition is wrong, or something is wrong with the suggestion that the player's present choice can influence the already-determined contents of box B.

The exact nature of the Predictor varies between retellings of the paradox. Some assume that the character always has a reputation for being completely infallible and incapable of error; others assume that the predictor has a very low error rate. The Predictor can be presented as a psychic, a superintelligent alien, a deity, a brain-scanning computer, etc. However, the original discussion by Nozick says only that the Predictor's predictions are "almost certainly" correct, and also specifies that "what you actually decide to do is not part of the explanation of why he made the prediction he made". With this original version of the problem, some of the discussion below is inapplicable.

Attempted resolutions

Simon Burgess has argued that we need to recognize two stages to the problem.[4] The first stage is the period before the predictor has gained all the information on which the prediction will be based. If, for example, we suppose that the prediction is at least partially based on a brain scan of the player, then the first stage will not be over at least until that brain scan has been taken. An important point to appreciate is that while the player is still in that first stage, he or she will presumably be able to influence the predictor's prediction (e.g., by committing to taking only one box). The second stage commences after the completion of the brain scan (and/or after the gathering of any other information on which the prediction is based). As Burgess points out, the first stage is the one in which all of us currently find ourselves. Moreover, there is a clear sense in which the first stage is more significant than the second, because it is then that the player can determine whether the $1,000,000 is in box B. Once he or she gets to the second stage, the best that can be done is to determine whether to get the $1,000 in box A.

Those persuaded by Burgess's approach do not say, tout court, either that it is rational to one-box or that it is rational to two-box. Rather, they argue that a player should make his or her decision while in the first stage and that that decision should be to commit to one-boxing. Once in the second stage, the rational decision would be to two-box, although by that stage the player should already have made up his or her mind to one-box. Burgess has repeatedly emphasized that he is not arguing that the player should change his or her mind on getting to the second stage. The safe and rational strategy to adopt is to simply make a commitment to one-boxing while in the first stage and to have no intention of wavering from that commitment, i.e., make an 'unqualified resolution'. Burgess points out that those who make no such commitment and therefore miss out on the $1,000,000 have simply failed to be prepared. In a more recent paper Burgess has explained that, given his analysis, Newcomb's problem should be seen as being akin to the toxin puzzle.[5] This is because both problems highlight the fact that one can have a reason to intend to do something without having a reason to actually do it.

With regard to causal structure, Burgess has consistently followed Ellery Eells and others in treating Newcomb's problem as a common cause problem. Contrary to David Lewis, he argues against the idea that Newcomb's problem is another version of the prisoner's dilemma. Burgess's argument on this point emphasizes the contrasting causal structures of the two problems.

William Lane Craig has suggested that, in a world with perfect predictors (or time machines, because a time machine could be used as a mechanism for making a prediction), retrocausality can occur.[6] If a person truly knows the future, and that knowledge affects his or her actions, then events in the future will be causing effects in the past. The chooser's choice will have already caused the predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and choosers will do whatever they are fated to do. On this view, the paradox is a restatement of the old contention that free will and determinism are incompatible, since determinism enables the existence of perfect predictors. Some philosophers argue this paradox is equivalent to the grandfather paradox. Put another way, they claim the paradox presupposes a perfect predictor, implying the "chooser" is not free to choose, yet simultaneously presumes a choice can be debated and decided. This suggests to some that the paradox is an artifact of these contradictory assumptions. However, Nozick's exposition specifically excludes backward causation (such as time travel) and requires only that the predictions be of high accuracy, not that they are absolutely certain to be correct.

David Wolpert and Gregory Benford have reformulated the problem as a noncooperative game in which players set the conditional distributions in a Bayes net. It is straightforward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions about the underlying Bayes net. Depending on which Bayes net one assumes, one can derive either strategy as optimal. In this there is no paradox, only unclear language that hides the fact that one is making two inconsistent assumptions.[7] However, that paper also gives a "time reversed" version of Newcomb's problem, in which the so-called "prediction" is made after the strategy has been chosen, which the authors claim is equivalent because the probability arguments make no mention of time. In that time-reversed version, at least, the assumption that one is always completely free to choose a strategy without affecting the predictor's "prediction" in any way is incompatible with the original statement of the problem, in which the predictor is highly accurate.
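The point can be illustrated with a small numerical sketch (not Wolpert and Benford's own formalism) that computes expected payouts under the two incompatible probabilistic assumptions: one in which the prediction tracks the choice, and one in which the prediction is fixed independently of the choice. The 0.99 accuracy figure and the uniform prior over predictions are assumptions introduced for illustration.

```python
# Minimal sketch, not Wolpert and Benford's own formalism: expected payout of
# each strategy under two mutually inconsistent assumptions about how the
# prediction and the choice are jointly distributed. The 0.99 accuracy and the
# uniform prior over predictions are illustrative assumptions.

PAYOUT = {  # (prediction, choice) -> payout in dollars
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

def ev_dependent(choice, accuracy=0.99):
    """Assumption behind one-boxing: the prediction tracks the choice."""
    return sum(
        PAYOUT[(pred, choice)] * (accuracy if pred == choice else 1 - accuracy)
        for pred in ("one-box", "two-box")
    )

def ev_independent(choice, p_pred_one_box=0.5):
    """Assumption behind two-boxing: the prediction is fixed independently of the choice."""
    return (PAYOUT[("one-box", choice)] * p_pred_one_box
            + PAYOUT[("two-box", choice)] * (1 - p_pred_one_box))

for choice in ("one-box", "two-box"):
    print(f"{choice}: dependent ${ev_dependent(choice):,.0f}, "
          f"independent ${ev_independent(choice):,.0f}")

# Under the dependent model one-boxing has the higher expectation; under the
# independent model two-boxing is better by exactly $1,000, whatever prior over
# predictions is assumed. The two strategies are optimal in two different games.
```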

Gary Drescher argues in his book Good and Real that the correct decision is to one-box, by appealing to a situation he argues is analogous: a rational agent in a deterministic universe deciding whether or not to cross a potentially busy street.

Eliezer Yudkowsky argues that the correct decision is to one-box, from a conception of rationality as "systematized winning"[8][9] and a principle he calls "reflective consistency".[10]

Andrew Irvine argues that the problem is structurally isomorphic to Braess' paradox, a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds.[11]

Newcomb's paradox can also be related to the question of machine consciousness, specifically if a perfect simulation of a person's brain will generate the consciousness of that person.[12] Suppose we take the predictor to be a machine that arrives at its prediction by simulating the brain of the chooser when confronted with the problem of which box to choose. If that simulation generates the consciousness of the chooser, then the chooser cannot tell whether they are standing in front of the boxes in the real world or in the virtual world generated by the simulation in the past. The "virtual" chooser would thus tell the predictor which choice the "real" chooser is going to make.

In practice, one could make the decision by assuming a Bayesian prior over the Predictor's true-positive and true-negative rates, and simply calculating the expected value of each choice under this prior.
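A minimal sketch of such a calculation is given below. The uniform Beta(1, 1) prior and the hypothetical track record (99 correct predictions out of 100 of each kind) are assumptions introduced purely for illustration.

```python
# Minimal sketch of the suggestion above, assuming a Beta(1, 1) prior over the
# Predictor's true-positive rate (probability of predicting "one-box" given the
# player one-boxes) and true-negative rate (probability of predicting "two-box"
# given the player two-boxes). The observed track record is hypothetical.

def beta_posterior_mean(correct, incorrect, alpha=1.0, beta=1.0):
    """Posterior mean of a Beta(alpha, beta) prior updated on a binomial record."""
    return (alpha + correct) / (alpha + beta + correct + incorrect)

# Hypothetical record: 99 correct predictions out of 100 for each kind of player.
tp_rate = beta_posterior_mean(correct=99, incorrect=1)
tn_rate = beta_posterior_mean(correct=99, incorrect=1)

# Box B contains $1,000,000 exactly when the Predictor predicted one-boxing.
ev_one_box = tp_rate * 1_000_000
ev_two_box = 1_000 + (1 - tn_rate) * 1_000_000

print(f"E[payout | one-box] = ${ev_one_box:,.0f}")
print(f"E[payout | two-box] = ${ev_two_box:,.0f}")
```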

Applicability to the real world

In versions of the Newcomb problem that do not include Nozick's stipulation that a predicted random choice will be "punished" with an empty box, the problem is not realisable in the real world (if reality is continuous). This is because, according to chaos theory, it is not possible even in principle to always predict a complex entity's future behavior with high accuracy. The entity (person or computer program) could simply choose to use an inherently unpredictable process, such as a quantum event source, to make a totally random decision.

Nozick's additional stipulation, in a footnote in the original article, attempts to preclude this problem by stipulating that any predicted use of a random choice or random event will be treated as equivalent, by the predictor, to a prediction of choosing both boxes. However, this assumes that inherently unpredictable quantum events (e.g. in people's brains) would not come into play anyway during the process of thinking about which choice to make,[13] which is an unproven assumption. Indeed, some have speculated that quantum effects in the brain might be essential for a full explanation of consciousness (see Orchestrated objective reduction), or, perhaps even more relevantly for Newcomb's problem, for an explanation of free will.[14]

Extensions to Newcomb's problem

Many thought experiments similar to or based on Newcomb's problem have been discussed in the literature.[1][10] For example, a quantum-theoretical version of Newcomb's problem in which box B is entangled with box A has been proposed.[15]

The Meta-Newcomb Problem

Another related problem is the Meta-Newcomb Problem.[16] The setup of this problem is similar to the original Newcomb problem. However, the twist here is that the Predictor may elect to decide whether to fill box B after the player has made a choice, and the player does not know whether box B has already been filled. There is also another predictor: a Meta-Predictor, who has also predicted correctly every single time in the past, and who predicts the following: "Either you will choose both boxes, and the Predictor will make its decision after you, or you will choose only box B, and the Predictor will already have made its decision."

In this situation, a proponent of taking both boxes is faced with a dilemma. If the player takes both boxes, the Predictor will not yet have made its decision, and therefore it will have been more rational for the player to take box B only. But if the player takes box B only, the Predictor will already have made its decision, so the player's decision cannot cause the Predictor's decision.[10]

Fatalism

Newcomb's paradox is related to logical fatalism in that they both suppose absolute certainty of the future. In logical fatalism, this assumption of certainty creates circular reasoning ("a future event is certain to happen, therefore it is certain to happen"), while Newcomb's paradox considers whether the participants of its game are able to affect a predestined outcome.[17]

Notes

  1. Robert Nozick (1969). "Newcomb's Problem and Two Principles of Choice". In Rescher, Nicholas (ed.). Essays in Honor of Carl G. Hempel. Springer.
  2. Gardner, Martin (March 1974). "Mathematical Games". Scientific American. p. 102. Reprinted with an addendum and annotated bibliography in his book The Colossal Book of Mathematics (ISBN 0-393-02023-1)
  3. "Causal Decision Theory". Stanford Encyclopedia of Philosophy. The Metaphysics Research Lab, Stanford University. Retrieved 3 February 2016.
  4. Burgess, Simon (January 2004). "Newcomb's problem: an unqualified resolution". Synthese 138 (2): 261–287. doi:10.1023/b:synt.0000013243.57433.e7. JSTOR 20118389.
  5. Burgess, Simon (February 2012). "Newcomb's problem and its conditional evidence: a common cause of confusion". Synthese 184 (3): 319–339. doi:10.1007/s11229-010-9816-1. JSTOR 41411196.
  6. Craig (1987). "Divine Foreknowledge and Newcomb's Paradox". Philosophia 17 (3): 331–350. doi:10.1007/BF02455055.
  7. Wolpert, D. H.; Benford, G. (June 2013). "The lesson of Newcomb's paradox". Synthese 190 (9): 1637–1646. doi:10.1007/s11229-011-9899-3. JSTOR 41931515.
  8. Newcomb's Problem and Regret of Rationality
  9. Rationality is Systematized Winning
  10. Timeless Decision Theory
  11. Irvine, Andrew (1993). "How Braess’ paradox solves Newcomb's problem". International Studies in the Philosophy of Science 7 (2): 141–60. doi:10.1080/02698599308573460.
  12. Neal, R. M. (2006). "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning". arXiv:math.ST/0608592.
  13. Christopher Langan. "The Resolution of Newcomb's Paradox". Noesis (44).
  14. Chetan S. Mandayam Nayakar, R. Srikanth (22 Nov 2010). "Quantum randomness and free will". Retrieved 24 Feb 2013.
  15. Piotrowski, Edward; Jan Sladowski (2003). "Quantum solution to the Newcomb's paradox". International Journal of Quantum Information 1 (3): 395–402. doi:10.1142/S0219749903000279.
  16. Bostrom, Nick (2001). "The Meta-Newcomb Problem". Analysis 61 (4): 309–310. doi:10.1093/analys/61.4.309.
  17. Dummett, Michael (1996). The Seas of Language. Oxford: Clarendon Press. pp. 352–358.
