Cognitive bias mitigation

For broader coverage of this topic, see Debiasing.

Cognitive bias mitigation is the prevention and reduction of the negative effects of cognitive biases – unconscious, automatic influences on human judgment and decision making that reliably produce reasoning errors.

Coherent, comprehensive theories of cognitive bias mitigation are lacking. This article describes debiasing tools, methods, proposals and other initiatives associated with the concept of cognitive bias mitigation, drawn from academic and professional disciplines concerned with the efficacy of human reasoning; most address mitigation tacitly rather than explicitly.

A long-standing debate regarding human decision making bears on the development of a theory and practice of bias mitigation. This debate contrasts the rational economic agent standard for decision making versus one grounded in human social needs and motivations. The debate also contrasts the methods used to analyze and predict human decision making, i.e. formal analysis emphasizing intellectual capacities versus heuristics emphasizing emotional states. This article identifies elements relevant to this debate.

Context

A large body of evidence[1][2][3][4][5][6][7][8][9][10][11] has established that a defining characteristic of cognitive biases is that they manifest automatically and unconsciously over a wide range of human reasoning, so even those aware of the existence of the phenomenon are unable to detect, let alone mitigate, their manifestation via awareness only.

Real-world effects of cognitive bias

There are few studies explicitly linking cognitive biases to real-world incidents with highly negative outcomes. There are, however, numerous investigations of incidents in which human error was central to highly negative potential or actual outcomes, and in which the manifestation of cognitive biases is a plausible contributing component.

Each of the approximately 100 cognitive biases known to date can also produce negative outcomes in our everyday lives, though rarely as serious as in the incidents above; illustrative selections are recounted in multiple studies.[1][2][3][4][5][6][7][8][9][10]

Cognitive bias mitigation to date

Few academic and professional disciplines explicitly address Cognitive Bias Mitigation. Notable exceptions are the field of debiasing and a model by the NeuroLeadership Institute that categorizes over 150 known cognitive biases into a decision-making framework.[22]

What follows is a characterization of the assumptions, theories, methods and results, in disciplines concerned with the efficacy of human reasoning, that plausibly bear on a theory and/or practice of Cognitive Bias Mitigation. In most cases this is based on explicit reference to cognitive biases or their mitigation, in others on unstated but self-evident applicability. This characterization is organized along lines reflecting historical segmentation of disciplines, though in practice there is a significant amount of overlap.

Decision theory

Decision theory, a discipline with its roots grounded in neo-classical economics, is explicitly focused on human reasoning, judgment, choice and decision making, primarily in 'one-shot games' between two agents with or without perfect information. The theoretical underpinning of decision theory assumes that all decision makers are rational agents trying to maximize the economic expected value/utility of their choices, and that to accomplish this they utilize formal analytical methods such as mathematics, probability, statistics, and logic under cognitive resource constraints.[23][24][25]

Normative, or prescriptive, decision theory concerns itself with what people should do, given the goal of maximizing expected value/utility; in this approach there is no explicit representation in practitioners' models of unconscious factors such as cognitive biases, i.e. all factors are considered conscious choice parameters for all agents. Practitioners tend to treat deviations from what a rational agent would do as 'errors of irrationality', with the implication that Cognitive Bias Mitigation can only be achieved by decision makers becoming more like rational agents, though no explicit measures for achieving this are proffered.
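The rational-agent standard that normative decision theory takes as its benchmark can be sketched in a few lines: enumerate the options, compute each option's expected utility, and choose the maximum. The option names and payoffs below are invented for illustration.

```python
# A minimal sketch of the rational-agent standard: enumerate options,
# compute each option's expected utility, choose the maximum.
# Option names and payoffs are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def rational_choice(options):
    """options: dict of option name -> list of (probability, utility)."""
    return max(options, key=lambda name: expected_utility(options[name]))

options = {
    "safe":  [(1.0, 50)],             # a certain payoff of 50
    "risky": [(0.5, 120), (0.5, 0)],  # expected utility 60
}
print(rational_choice(options))  # -> risky
```

On this standard, a decision maker who prefers "safe" despite its lower expected utility is exhibiting an 'error of irrationality'; the model has no slot for the unconscious factors that might explain the preference.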

Positive, or descriptive, decision theory concerns itself with what people actually do; practitioners tend to acknowledge the persistent existence of 'irrational' behavior, and while some mention human motivation and biases as possible contributors to such behavior, these factors are not made explicit in their models. Practitioners tend to treat deviations from what a rational agent would do as evidence of important, but as yet not understood, decision-making variables, and have as yet no explicit or implicit contributions to make to a theory and practice of Cognitive Bias Mitigation.

Game theory

Game theory, a discipline with roots in economics and system dynamics, is a method of studying strategic decision making in situations involving multi-step interactions with multiple agents with or without perfect information. As with decision theory, the theoretical underpinning of game theory assumes that all decision makers are rational agents trying to maximize the economic expected value/utility of their choices, and that to accomplish this they utilize formal analytical methods such as mathematics, probability, statistics, and logic under cognitive resource constraints.[26][27][28][29]

One major difference between decision theory and game theory is the notion of 'equilibrium', a situation in which all agents agree on a strategy because any deviation from this strategy punishes the deviating agent. Despite analytical proofs of the existence of at least one equilibrium in a wide range of scenarios, game theory predictions, like those in decision theory, often do not match actual human choices.[30] As with decision theory, practitioners tend to view such deviations as 'irrational', and rather than attempt to model such behavior, by implication hold that Cognitive Bias Mitigation can only be achieved by decision makers becoming more like rational agents.
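The equilibrium notion can be made concrete with a small sketch: in a two-player game, a pair of pure strategies is a Nash equilibrium when neither player can gain by deviating unilaterally. The payoff matrix below is the classic Prisoner's Dilemma, used purely as an illustration.

```python
# A minimal sketch of the 'equilibrium' notion: a pair of pure strategies
# is a Nash equilibrium if no player gains by deviating unilaterally.
# Payoffs are the classic Prisoner's Dilemma (illustrative).

# payoffs[row_strategy][col_strategy] = (row_payoff, col_payoff)
payoffs = {
    "cooperate": {"cooperate": (3, 3), "defect": (0, 5)},
    "defect":    {"cooperate": (5, 0), "defect": (1, 1)},
}

def is_nash(row, col):
    row_payoff, col_payoff = payoffs[row][col]
    # No alternative row strategy may beat the current row payoff,
    # and no alternative column strategy may beat the column payoff.
    row_ok = all(payoffs[r][col][0] <= row_payoff for r in payoffs)
    col_ok = all(payoffs[row][c][1] <= col_payoff for c in payoffs[row])
    return row_ok and col_ok

print(is_nash("defect", "defect"))        # True: the unique equilibrium
print(is_nash("cooperate", "cooperate"))  # False: each player gains by defecting
```

The example also illustrates the gap the text describes: mutual cooperation yields both players more, yet it is not an equilibrium, and real players often cooperate anyway.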

In the full range of game theory models there are many that do not guarantee the existence of equilibria, i.e. there are conflict situations where no set of agents' strategies is agreed by all agents to be in their best interests. However, even when theoretical equilibria exist, i.e. when optimal decision strategies are available for all agents, real-life decision-makers often do not find them; indeed they sometimes apparently do not even try to find them, suggesting that some agents are not consistently 'rational'. Game theory does not appear to accommodate any kind of agent other than the rational agent.

Behavioral economics

Unlike neo-classical economics and decision theory, behavioral economics and the related field, behavioral finance, explicitly consider the effects of social, cognitive and emotional factors on individuals' economic decisions. These disciplines combine insights from psychology and neo-classical economics to achieve this.[24][31][32]

Prospect theory[33] was an early inspiration for this discipline, and has been further developed by its practitioners. It is one of the earliest economic theories to explicitly acknowledge the notion of cognitive bias, though the model itself accounts for only a few, including loss aversion, anchoring and adjustment bias, the endowment effect, and perhaps others. No mention is made in formal prospect theory of Cognitive Bias Mitigation, and there is no evidence of peer-reviewed work on Cognitive Bias Mitigation in other areas of this discipline.
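Loss aversion, the best-known of these effects, is captured by prospect theory's value function, which is concave for gains, convex for losses, and steeper for losses than for gains. A minimal sketch, using commonly cited parameter estimates (the exact values are illustrative):

```python
# A minimal sketch of the prospect-theory value function. Losses are
# weighted more heavily than equivalent gains (loss aversion). The
# parameters alpha and lam are commonly cited estimates, used here
# purely for illustration.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha             # concave over gains
    return -lam * ((-x) ** alpha)     # convex and steeper over losses

gain, loss = value(100), value(-100)
print(round(gain, 1), round(loss, 1))
# |value(-100)| is 2.25x value(100): losses loom larger than gains
```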

However, Kahneman and others have authored recent articles in business and trade magazines addressing the notion of Cognitive Bias Mitigation in a limited form.[34] These contributions assert that Cognitive Bias Mitigation is necessary and offer general suggestions for how to achieve it, though the guidance is limited to only a few Cognitive Biases and is not self-evidently generalizable to others.

Neuroeconomics

Neuroeconomics is a discipline made possible by advances in brain activity imaging technologies. This discipline merges some of the ideas in experimental economics, behavioral economics, cognitive science and social science in an attempt to better understand the neural basis for human decision making.

fMRI experiments suggest that the limbic system is consistently involved in resolving economic decision situations that have emotional valence, the inference being that this part of the human brain is implicated in creating the deviations from rational agent choices noted in emotionally valent economic decision making. Practitioners in this discipline have demonstrated correlations between brain activity in this part of the brain and prospection activity, and neuronal activation has been shown to have measurable, consistent effects on decision making.[35][36][37][38][39] These results must be considered speculative and preliminary, but are nonetheless suggestive of the possibility of real-time identification of brain states associated with Cognitive Bias manifestation, and the possibility of purposeful interventions at the neuronal level to achieve Cognitive Bias Mitigation.

Cognitive psychology

Several streams of investigation in this discipline are noteworthy for their possible relevance to a theory of cognitive bias mitigation.

One approach to mitigation originally suggested by Daniel Kahneman and Amos Tversky, expanded upon by others, and applied in real-life situations, is Reference class forecasting. This approach involves three steps: with a specific project in mind, identify a number of past projects that share a large number of elements with the project under scrutiny; for this group of projects, establish a probability distribution of the parameter that is being forecast; and compare the specific project with the group of similar projects, in order to establish the most likely value of the selected parameter for the specific project. This simply stated method masks potential complexity regarding application to real-life projects: few projects are characterizable by a single parameter; multiple parameters exponentially complicate the process; gathering sufficient data on which to build robust probability distributions is problematic; and project outcomes are rarely unambiguous and their reportage is often skewed by stakeholders' interests. Nonetheless, this approach has merit as part of a Cognitive Bias Mitigation protocol when the process is applied with a maximum of diligence, in situations where good data is available and all stakeholders can be expected to cooperate.
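The three steps above can be sketched directly; the historical overrun ratios and the naive estimate below are invented for illustration.

```python
# A minimal sketch of the three-step reference class forecasting procedure.
# The historical overrun ratios and naive estimate are invented.

import statistics

# Step 1: identify past projects similar to the one under scrutiny,
# each recorded as a cost overrun ratio (actual cost / estimated cost).
reference_class = [1.10, 1.25, 0.95, 1.40, 1.30, 1.05, 1.60, 1.20]

# Step 2: establish the probability distribution of the forecast
# parameter; here we summarize it by its median.
median_overrun = statistics.median(reference_class)

# Step 3: position the specific project within the distribution,
# e.g. uplift its naive estimate by the median historical overrun.
naive_estimate = 1_000_000
reference_forecast = naive_estimate * median_overrun
print(round(reference_forecast))
```

Summarizing the distribution by a single statistic is the simplest variant; a fuller treatment would report a forecast for each percentile of interest.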

Bounded rationality, a concept rooted in considerations of the actual machinery of human reasoning, may inform significant advances in cognitive bias mitigation. Originally conceived by Herbert A. Simon[40] in the 1960s and leading to the concept of satisficing as opposed to optimizing, this idea found experimental expression in the work of Gerd Gigerenzer and others. One line of Gigerenzer's work led to the "Fast and Frugal" framing of the human reasoning mechanism,[41] which focused on the primacy of 'recognition' in decision making, backed up by tie-resolving heuristics operating in a low cognitive resource environment. In a series of objective tests, models based on this approach outperformed models based on rational agents maximizing their utility using formal analytical methods. One contribution of this approach to a theory and practice of cognitive bias mitigation is that it addresses mitigation without explicitly targeting individual cognitive biases, focusing instead on the reasoning mechanism itself so as to avoid the manifestation of cognitive biases.
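The 'fast and frugal' approach can be illustrated with a sketch in the spirit of Gigerenzer's recognition heuristic backed by a 'take the best' cue search: recognition decides when it discriminates; otherwise cues are searched in order of validity and the first discriminating cue decides. The cities and cue values below are invented for illustration.

```python
# A sketch in the spirit of the recognition heuristic and 'take the best':
# recognition decides if it discriminates; otherwise cues are searched in
# validity order and the first discriminating cue decides.
# Cities and cue values are invented for illustration.

def choose(a, b, recognized, cues):
    """Infer which of a, b scores higher (e.g. which city is larger)."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b         # recognition heuristic
    for cue in cues:                               # cues ordered by validity
        if cue.get(a, False) != cue.get(b, False):
            return a if cue.get(a, False) else b   # stop at first discriminating cue
    return a                                       # no cue discriminates: guess

recognized = {"Berlin", "Munich"}
cues = [
    {"Berlin": True, "Munich": False},  # e.g. 'is a national capital'
    {"Berlin": True, "Munich": True},   # e.g. 'has a major airport' (never consulted)
]
print(choose("Berlin", "Hanover", recognized, cues))  # Berlin: only one recognized
print(choose("Munich", "Berlin", recognized, cues))   # Berlin: first cue decides
```

Note how little computation is done: no probabilities are estimated and most cues are never consulted, which is what makes the strategy viable in a low cognitive resource environment.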

Intensive situational training is capable of providing individuals with what appears to be cognitive bias mitigation in decision making, but amounts to a fixed strategy of selecting the single best response to recognized situations regardless of the 'noise' in the environment. Studies and anecdotes reported in popular-audience media[14][21][42][43] of firefighter captains, military platoon leaders and others making correct, snap judgments under extreme duress suggest that these responses are likely not generalizable, and may contribute only the general idea of domain-specific intensive training to a theory and practice of cognitive bias mitigation.

Similarly, expert-level training in such foundational disciplines as mathematics, statistics, probability, logic, etc. can be useful for cognitive bias mitigation when the expected standard of performance reflects such formal analytical methods. However, a study of software engineering professionals[44] suggests that for the task of estimating software projects, despite the strong analytical aspect of this task, standards of performance focusing on workplace social context were much more dominant than formal analytical methods. This finding, if generalizable to other tasks and disciplines, would discount the potential of expert-level training as a cognitive bias mitigation approach, and could contribute a narrow but important idea to a theory and practice of cognitive bias mitigation.

Laboratory experiments in which cognitive bias mitigation is an explicit goal are rare. One 1980 study[45] explored the notion of reducing the optimism bias by showing subjects other subjects' outputs from a reasoning task, with the result that their subsequent decision-making was somewhat debiased.

A recent research effort by Morewedge and colleagues (2015) found evidence for domain-general forms of debiasing. In two longitudinal experiments, debiasing training techniques featuring interactive games that elicited six cognitive biases (anchoring, bias blind spot, confirmation bias, fundamental attribution error, projection bias, and representativeness) and provided participants with individualized feedback, mitigating strategies, and practice resulted in an immediate reduction of more than 30% in commission of the biases and a long-term (2- to 3-month delay) reduction of more than 20%. Instructional videos were also effective, but less effective than the games.[46]

Evolutionary psychology

This discipline explicitly challenges the prevalent view that humans are rational agents maximizing expected value/utility, using formal analytical methods to do so. Practitioners such as Cosmides, Tooby, Haselton, Confer and others posit that cognitive biases are more properly referred to as Cognitive Heuristics, and should be viewed as a toolkit of cognitive shortcuts[47][48][49][50] selected for by evolutionary pressure and thus are features rather than flaws, as assumed in the prevalent view. Theoretical models and analyses supporting this view are plentiful.[51] This view suggests that negative reasoning outcomes arise primarily because the reasoning challenges faced by modern humans, and the social and political context within which these are presented, make demands on our ancient 'heuristic toolkit' that at best create confusion as to which heuristics to apply in a given situation, and at worst generate what adherents of the prevalent view call 'reasoning errors'.

In a similar vein, Mercier and Sperber describe a theory[52] of confirmation bias, and possibly other cognitive biases, that departs radically from the prevalent view that human reasoning is intended to assist individual economic decisions. Their view suggests that reasoning evolved as a social phenomenon whose goal was argumentation, i.e. to convince others and to be wary when others try to convince us. It is too early to tell whether this idea applies more generally to other cognitive biases, but the point of view supporting the theory may be useful in the construction of a theory and practice of Cognitive Bias Mitigation.

There is an emerging convergence between Evolutionary Psychology and the concept of our reasoning mechanism being segregated (approximately) into 'System 1' and 'System 2'.[14][47] In this view, System 1 is the 'first line' of cognitive processing of all perceptions, including internally generated 'pseudo-perceptions', which automatically, subconsciously and near-instantaneously produces emotionally valenced judgments of their probable effect on the individual's well-being. By contrast, System 2 is responsible for 'executive control', taking System 1's judgments as advisories, making future predictions, via prospection, of their actualization and then choosing which advisories, if any, to act on. In this view, System 2 is slow, simple-minded and lazy, usually defaulting to System 1 advisories and overriding them only when intensively trained to do so or when cognitive dissonance would result. In this view, our 'heuristic toolkit' resides largely in System 1, conforming to the view of Cognitive Biases being unconscious, automatic and very difficult to detect and override. Evolutionary Psychology practitioners emphasize that our heuristic toolkit, despite the apparent abundance of 'reasoning errors' attributed to it, actually performs exceptionally well, given the rate at which it must operate, the range of judgments it produces, and the stakes involved. The System 1/2 view of the human reasoning mechanism appears to have empirical plausibility (see Neuroscience, next) and thus may contribute to a theory and practice of Cognitive Bias Mitigation.

Neuroscience

Neuroscience offers empirical support for the concept of segregating the human reasoning mechanism into System 1 and System 2, as described above, based on brain activity imaging experiments using fMRI technology. While this notion must remain speculative until further work is done, it appears to be a productive basis for conceiving options for constructing a theory and practice of Cognitive Bias Mitigation.[53][54]

Anthropology

Anthropologists have provided generally accepted scenarios[55][56][57][58][59] of how our progenitors lived and what was important in their lives. These scenarios of social, political, and economic organization are not uniform throughout history or geography, but there is a degree of stability throughout the Paleolithic era, and the Holocene in particular. This, along with the findings in Evolutionary Psychology and Neuroscience above, suggests that our Cognitive Heuristics are at their best when operating in a social, political and economic environment most like that of the Paleolithic/Holocene. If this is true, then one possible means to achieve at least some Cognitive Bias Mitigation is to mimic, as much as possible, Paleolithic/Holocene social, political and economic scenarios when one is performing a reasoning task that could attract negative cognitive bias effects.

Human reliability engineering

A number of paradigms, methods and tools for improving human performance reliability[21][60][61][62][63][64] have been developed within the discipline of human reliability engineering. Though there is some attention paid to the human reasoning mechanism itself, the dominant approach is to anticipate problematic situations, constrain human operations through process mandates, and guide human decisions through fixed response protocols specific to the domain involved. While this approach can produce effective responses to critical situations under stress, the protocols involved must be viewed as having limited generalizability beyond the domain for which they were developed, with the implication that solutions in this discipline may provide only generic frameworks to a theory and practice of Cognitive Bias Mitigation.

Machine learning

Machine learning, a branch of artificial intelligence, has been used to investigate human learning and decision making.[65]

One technique particularly applicable to Cognitive Bias Mitigation is neural network learning and choice selection, an approach inspired by the imagined structure and function of actual neural networks in the human brain. The multilayer, cross-connected signal collection and propagation structure typical of neural network models, where weights govern the contribution of signals to each connection, allows very small models to perform rather complex decision-making tasks at high fidelity.
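How very small networks can perform non-trivially complex decisions can be seen in a minimal sketch: a two-input, two-hidden-unit network with hand-set (not learned) weights computing XOR, a decision no single linear unit can make.

```python
# A minimal sketch of a tiny feedforward network: two inputs, two hidden
# units, one output. Weights are hand-set rather than learned, purely to
# show that a very small cross-connected model computes XOR, a decision
# no single linear unit can make.

def step(x):
    """Threshold activation."""
    return 1 if x > 0 else 0

def forward(x1, x2):
    # Hidden layer: each unit weighs both inputs (cross-connected).
    h_or  = step(1.0 * x1 + 1.0 * x2 - 0.5)  # fires if either input is on
    h_and = step(1.0 * x1 + 1.0 * x2 - 1.5)  # fires only if both are on
    # Output layer: OR minus AND yields exclusive-or.
    return step(1.0 * h_or - 1.0 * h_and - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", forward(a, b))  # XOR truth table: 0, 1, 1, 0
```

In practice the weights would be learned from data rather than hand-set, which is where the training-set challenges discussed below arise.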

In principle, such models are capable of modeling decision making that takes account of human needs and motivations within social contexts, and suggest their consideration in a theory and practice of Cognitive Bias Mitigation. Challenges to realizing this potential include accumulating the considerable amount of appropriate real-world 'training sets' for the neural network portion of such models; characterizing real-life decision-making situations and outcomes so as to drive models effectively; and the lack of a direct mapping from a neural network's internal structure to components of the human reasoning mechanism.

Software engineering

This discipline, though not focused on improving human reasoning outcomes as an end goal, is one in which the need for such improvement has been explicitly recognized,[19][20] though the term "Cognitive Bias Mitigation" is not universally used.

One study[66] identifies specific steps to counter the effects of confirmation bias in certain phases of the software engineering lifecycle.

Another study[44] takes a step back from focusing on cognitive biases and describes a framework for identifying "Performance Norms", criteria by which reasoning outcomes are judged correct or incorrect, so as to determine when Cognitive Bias Mitigation is required, to guide identification of the Biases that may be 'in play' in a real-world situation, and subsequently to prescribe their mitigations. This study refers to a broad research program with the goal of moving toward a theory and practice of Cognitive Bias Mitigation.

Other

A commercial initiative offers what it refers to as a 'Cognitive Bias Modification' service (see link below). While suggestive, it is mentioned here only for completeness, as there is no evidence that this service is backed by peer-reviewed research results.

Other initiatives aimed directly at a theory and practice of Cognitive Bias Mitigation may exist within other disciplines under different labels than employed here.

See also

References

  1. Ariely, D. (2008). Predictably Irrational: The Hidden Forces That Shape Our Decisions. Harper Collins.
  2. Epley, N.; Gilovich, T. (2006). "The Anchoring-and-Adjustment Heuristic: Why the Adjustments are Insufficient". Psychological Science 17 (4): 311–318. doi:10.1111/j.1467-9280.2006.01704.x.
  3. Gigerenzer, G. (2006). "Bounded and Rational." In R. J. Stainton (Ed.), Contemporary Debates in Cognitive Science. Blackwell Publishing: 115–133.
  4. Gilovich, T. (1991). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. New York, NY: The Free Press.
  5. Hammond, J. S.; Keeney, R. L.; et al. (2006). "The Hidden Traps in Decision Making". Harvard Business Review 84 (1): 118–126.
  6. Haselton, M. G.; Nettle, D.; et al. (2005). "The Evolution of Cognitive Bias." In D. M. Buss (Ed.), Handbook of Evolutionary Psychology. Hoboken: Wiley: 724–746.
  7. Henrich; et al. (2010). "Markets, Religion, Community Size, and the Evolution of Fairness and Punishment". Science 327: 1480–1484. doi:10.1126/science.1182238. PMID 20299588.
  8. Lehrer, J. (2009). How We Decide. New York, NY: Houghton Mifflin Harcourt.
  9. Nozick, R. (1993). The Nature of Rationality. Ewing, NJ: Princeton University Press.
  10. Schacter, D. L. (1999). "The Seven Sins of Memory: Insights From Psychology and Cognitive Neuroscience". American Psychologist 54 (3): 182–203. doi:10.1037/0003-066x.54.3.182.
  11. Morewedge, Carey K.; Kahneman, Daniel (October 2010). "Associative processes in intuitive judgment". Trends in Cognitive Sciences 14 (10): 435–440. doi:10.1016/j.tics.2010.07.004. PMID 20696611.
  12. Roberto, M. A. (2002). "Lessons from Everest: The Interaction of Cognitive Bias, Psychological Safety and System Complexity". California Management Review 45 (1): 136–158.
  13. Knauff, M.; Budeck, C.; Wolf, A. G.; Hamburger, K. (2010). "The Illogicality of Stock-Brokers: Psychological Experiments on the Effects of Prior Knowledge and Belief Biases on Logical Reasoning in Stock Trading". PLoS ONE 5 (10): e13483. doi:10.1371/journal.pone.0013483.
  14. Kahneman, D. (2011). Thinking, Fast and Slow. Doubleday Canada.
  15. Aviation Safety Network (1983). "Gimli Glider Accident Report" (accident ID 19830723-0). http://aviation-safety.net
  16. Stephenson, Arthur G.; LaPiana, Lia S.; Mulville, Daniel R.; Rutledge, Peter J.; Bauer, Frank H.; Folta, David; Dukeman, Greg A.; Sackheim, Robert; et al. (1999-11-10). "Mars Climate Orbiter Mishap Investigation Board Phase I Report". National Aeronautics and Space Administration.
  17. British Columbia Ministry of Energy, Mines and Petroleum Resources: Sullivan Mine Accident Report, May 17, 2006.
  18. Beynon-Davies, P. (1995). "Information systems 'failure': the case of the LASCAD project". European Journal of Information Systems.
  19. Mann, C. C. (2002). "Why Software is So Bad". Technology Review, MIT, July 2002.
  20. Stacy, W.; MacMillan, J. (1995). "Cognitive Bias in Software Engineering". Communications of the ACM 38 (6): 57–63. doi:10.1145/203241.203256.
  21. Gawande, A. (2010). The Checklist Manifesto: How to Get Things Right. New York, NY: Metropolitan Books.
  22. Lieberman, Matthew; Rock, David; Grant Halvorson, Heidi; Cox, Christine (November 2015). "Breaking Bias Updated: The SEEDS Model(tm)". NeuroLeadership Journal 6.
  23. Kahneman, D.; Thaler, R. (2006). "Utility Maximization and Experienced Utility". Journal of Economic Perspectives 20 (1): 221–234. doi:10.1257/089533006776526076.
  24. Frey, B.; Stutzer, A. (2002). "What Can Economists Learn from Happiness Research?". Journal of Economic Literature 40 (2): 402–435. doi:10.1257/002205102320161320.
  25. Kahneman, D. (2000). "Experienced Utility and Objective Happiness: A Moment-Based Approach." Chapter 37 in: D. Kahneman and A. Tversky (Eds.) "Choices, Values and Frames." New York: Cambridge University Press and the Russell Sage Foundation, 1999.
  26. Binmore, K. (2007). "A Very Short Introduction to Game Theory." Oxford University Press.
  27. Camerer, C. F.; Ho, T.-H.; Chong, J.-K. (2002). "A Cognitive Hierarchy Theory of One-Shot Games and Experimental Analysis." Forthcoming, Quarterly Journal of Economics.
  28. Broseta, B., Costa-Gomes, M., Crawford, V. (2000). "Cognition and Behavior in Normal-Form Games: An Experimental Study." Department of Economics, University of California at San Diego, Permalink: http://www.escholarship.org/uc/item/0fp8278k.
  29. Myerson, R. B. (1991). "Game Theory: Analysis of Conflict." Harvard University Press.
  30. Wright J. R., Leyton-Brown, K., Behavioral Game-Theoretic Models: A Bayesian Framework For Parameter Analysis, to appear in Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012), (8 pages), 2012.
  31. Kahneman, D. "Maps of Bounded Rationality: Psychology for Behavioral Economics." American Economic Review (December 2003): 1449–1475.
  32. Mullainathan, Sendhil, and Richard Thaler. "Behavioral Economics." MIT Department of Economics Working Paper 00-27. (September 2000).
  33. Kahneman, D.; Tversky, A. (1979). "Prospect Theory: An Analysis of Decision Under Risk". Econometrica 47 (2): 263–291. doi:10.2307/1914185.
  34. Kahneman, D., Lovallo, D., Sibony, O. (2011). "Before You Make That Big Decision." Harvard Business Review, June, 2011.
  35. Loewenstein, G., Rick, S., Cohen, J. (2008). Neuroeconomics Annual Reviews 59: 647–672.
  36. Rustichini, A (2009). "Neuroeconomics: What have we found, and what should we search for?". Current Opinion in Neurobiology 19: 672–677. doi:10.1016/j.conb.2009.09.012.
  37. Padoa-Schioppa, C.; Assad, J.A. (2007). "The Representation of Economic Value in the Orbitofrontal Cortex is Invariant for Changes of Menu". Nature Reviews Neuroscience 11: 95–102. doi:10.1038/nn2020.
  38. Spreng, R. N., Mar, R. A., Kim, A. S. N. (2008). The Common Neural Basis of Autobiographical Memory, Prospection, Navigation, Theory of Mind and the Default Mode: A Quantitative Meta-Analysis. Journal of Cognitive Neuroscience, (Epub ahead of print)(2010).
  39. Jamison, J.; Wegener, J. (2010). "Multiple Selves in Intertemporal Choice". Journal of Economic Psychology 31: 832–839. doi:10.1016/j.joep.2010.03.004.
  40. Simon, H. A. (1991). "Bounded Rationality and Organizational Learning". Organization Science 2 (1): 125–134. doi:10.1287/orsc.2.1.125.
  41. Gigerenzer, G.; Goldstein, D. G. (1996). "Reasoning the Fast and Frugal Way: Models of Bounded Rationality". Psychological Review 103 (4): 650–669. doi:10.1037/0033-295x.103.4.650.
  42. Gladwell, M. (2006). Blink: The Power of Thinking Without Thinking. New York, NY, Little, Brown and Company.
  43. Shermer, M. (2010). A review of Paul Thagard's "The Brain and the Meaning of Life". Skeptic Magazine. Altadena, CA, Skeptics Society. 16: 60–61.
  44. Conroy, P.; Kruchten, P. (2012). "Performance Norms: An Approach to Reducing Rework in Software Development", to appear in IEEE Xplore re 2012 Canadian Conference on Electrical and Computer Engineering.
  45. Weinstein, N. D. (1980). "Unrealistic Optimism About Future Life Events". Department of Human Ecology and Social Sciences, Cook College, Rutgers, The State University". Journal of Personality and Social Psychology 39 (5): 806–820. doi:10.1037/0022-3514.39.5.806.
  46. Morewedge, C. K.; Yoon, H.; Scopelliti, I.; Symborski, C. W.; Korris, J. H.; Kassam, K. S. (13 August 2015). "Debiasing Decisions: Improved Decision Making With a Single Training Intervention". Policy Insights from the Behavioral and Brain Sciences 2 (1): 129–140. doi:10.1177/2372732215600886.
  47. Cosmides, L.; Tooby, J. "Evolutionary Psychology: A Primer." http://www.psych.ucsb.edu/research/cep/primer.html
  48. Haselton, M. G.; Bryant, G. A.; Wilke, A.; Frederick, D. A.; Galperin, A.; Frankenhuis, W. E.; Moore, T. (2009). "Adaptive Rationality: An Evolutionary Perspective on Cognitive Bias". Social Cognition 27 (5): 733–763. doi:10.1521/soco.2009.27.5.733.
  49. Haselton, M. G.; Nettle, D.; et al. (2005). "The Evolution of Cognitive Bias." In D. M. Buss (Ed.), Handbook of Evolutionary Psychology. Hoboken: Wiley: 724–746.
  50. Confer; et al. (2010). "Evolutionary Psychology: Controversies, Questions, Prospects, and Limitations". American Psychologist 65 (2): 110–126. doi:10.1037/a0018413. PMID 20141266.
  51. Chudek, M., Henrich, J., (2011). "Culture–Gene Coevolution, Norm-Psychology and the Emergence of Human Prosociality", (in press) Trends in Cognitive Sciences, Elsevier, doi:10.1016/j.tics.2011.03.003
  52. Mercier, H.; Sperber, D. (2011). "Argumentative Theory". Behavioral and Brain Sciences 34 (2): 57–74. doi:10.1017/s0140525x10000968.
  53. Damasio, A. (2010). Self Comes to Mind: Constructing the Conscious Brain. New York, NY, Pantheon.
  54. Changeux, J.-P. P., A. Damasio, et al., Eds. (2007). Neurobiology of Human Values (Research and Perspectives in Neurosciences). Heidelberg, Germany, Springer.
  55. Ember, C. R. (1978). Myths About Hunter-Gatherers, University of Pittsburgh Of the Commonwealth System of Higher Education, 17(4), pp 439–448.
  56. Gabow, S. L. (1977). "Population Structure and the Rate of Hominid Brain Evolution". Journal of Human Evolution 6 (7): 643–665. doi:10.1016/s0047-2484(77)80136-x.
  57. Hamilton, M. J.; Milne, B. T.; Walker, R.S.; Burger, O.; Brown, J.H. (2007). "The complex Structure of Hunter–Gatherer Social Networks". Proceedings of the Royal Society, B 2007 (274): 2195–2203.
  58. Kuhn, S. L.; Stiner, M. C. (2006). "What's a Mother To Do? The Division of Labor among Neanderthals and Modern Humans in Eurasia". Current Anthropology 47 (6): 953–981. doi:10.1086/507197.
  59. Marlowe, F. W. (2005). "Hunter-Gatherers and Human Evolution". Evolutionary Anthropology: Issues, News, and Reviews 14 (2): 54–67. doi:10.1002/evan.20046.
  60. Gertman, D., Blackman, H., Marble, J., Byers, J. and Smith, C. (2005). The SPAR-H human reliability analysis method.
  61. Hollnagel, E. (1998). Cognitive reliability and error analysis method: CREAM. Elsevier.
  62. Roth, E. et al. (1994). An empirical investigation of operator performance in cognitive demanding simulated emergencies. NUREG/CR-6208, Westinghouse Science and Technology Center. Report prepared for Nuclear Regulatory Commission.
  63. Wiegmann, D. & Shappell, S. (2003). A human error approach to aviation accident analysis: The human factors analysis and classification system.. Ashgate.
  64. Wilson, J.R. (1993). SHEAN (Simplified Human Error Analysis code) and automated THERP. United States Department of Energy Technical Report Number WINCO–11908.
  65. Sutton, R. S., Barto, A. G. (1998). MIT CogNet Ebook Collection; MITCogNet 1998, Adaptive Computation and Machine Learning, ISBN 978-0-262-19398-6.
  66. Calikli, G.; Bener, A.; Arslan, B. (2010). "An Analysis of the Effects of Company Culture, Education and Experience on Confirmation Bias Levels of Software Developers and Testers." ACM/IEEE 32nd International Conference on Software Engineering – ICSE 2010, Volume 2: pp. 187–190.

External links

This article is issued from Wikipedia (version of Sunday, May 01, 2016). The text is available under the Creative Commons Attribution/Share-Alike license; additional terms may apply for the media files.