Automation bias

Automation bias – sometimes referred to by other terms such as automation-induced complacency or over-reliance on automation[1] – is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information obtained without automation, even when that information is correct. The bias takes two forms: an error of exclusion occurs when humans rely on an automated system that fails to inform them of a problem, while an error of inclusion occurs when humans make choices based on incorrect suggestions relayed by automated systems.[2] Automation bias has been examined across many research fields.[1]

Factors leading to over-reliance on automation include inexperience with a task (although inexperienced users tend to benefit most from automated decision support systems), lack of confidence in one’s own abilities, reflexive trust in the automated system, a lack of readily available alternative information, and the desire to save time and effort on complex tasks or under high workloads.[1][3][4]

Automation bias can be mitigated through the design of automated systems, for example by reducing the prominence of the display, decreasing the detail or complexity of the information displayed, or couching automated assistance as supportive information rather than as directives or commands.[1] Training on an automated system that includes introducing deliberate errors has been shown to be significantly more effective at reducing automation bias than merely informing users that errors can occur.[5] However, excessive checking and questioning of automated assistance can increase time pressure and task complexity, reducing the benefits of automation, so the design of an automated decision support system may aim to balance positive and negative effects rather than attempt to eliminate negative effects.[3]
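
As a purely illustrative sketch (not drawn from the cited studies), the following Python fragment contrasts a directive framing of an automated suggestion with a supportive framing that surfaces the system's confidence and evidence. All names in it, such as AidOutput and render_as_support, are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class AidOutput:                 # hypothetical container for an aid's output
        suggestion: str              # what the automation recommends
        confidence: float            # the system's confidence, in [0, 1]
        evidence: list[str]          # observations supporting the suggestion

    def render_as_directive(out: AidOutput) -> str:
        # Command-style framing; this presentation tends to invite
        # uncritical acceptance of the automated suggestion.
        return f"ACTION REQUIRED: {out.suggestion}"

    def render_as_support(out: AidOutput) -> str:
        # Supportive framing; surfaces uncertainty and evidence so the
        # user can weigh the suggestion against other information.
        lines = [
            f"Suggestion (confidence {out.confidence:.0%}): {out.suggestion}",
            "Based on:",
            *(f"  - {item}" for item in out.evidence),
            "Verify against your own assessment before acting.",
        ]
        return "\n".join(lines)

    example = AidOutput(
        suggestion="Flag region for further review",
        confidence=0.72,
        evidence=["local contrast anomaly", "matches prior case pattern"],
    )
    print(render_as_directive(example))
    print(render_as_support(example))

The supportive rendering reduces the directive force of the output without hiding it, consistent with the mitigation strategies described above.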

References

  1. Goddard, K.; Roudsari, A.; Wyatt, J. C. (2012). "Automation bias: a systematic review of frequency, effect mediators, and mitigators". Journal of the American Medical Informatics Association 19 (1): 121–127. doi:10.1136/amiajnl-2011-000089.
  2. Cummings, M. L. (2004). "Automation bias in intelligent time critical decision support systems". In AIAA 1st Intelligent Systems Technical Conference, AIAA 2004.
  3. Alberdi, E.; Strigini, L.; Povyakalo, A.; Ayton, P. (2009). "Why are people’s decisions sometimes worse with computer support?". In B. Buth, G. Rabe & T. Seyfarth (Eds.), Computer Safety, Reliability, and Security (Vol. 5775, pp. 18–31). Springer Berlin Heidelberg.
  4. Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C. (2014). "Automation bias: Empirical results assessing influencing factors". International Journal of Medical Informatics 83 (5): 368–375. doi:10.1016/j.ijmedinf.2014.01.001.
  5. Bahner, J. Elin; Hüper, Anke-Dorothea; Manzey, Dietrich (2008). "Misuse of automated decision aids: Complacency, automation bias and the impact of training experience". International Journal of Human-Computer Studies 66 (9): 688–699. doi:10.1016/j.ijhcs.2008.06.001.
