Unexpected events

Conventional systems engineering methodologies do not provide guidance on how to reduce the risks of unexpected events. The term "unexpected event" has recently been coined to stress the need to design systems to handle unexpected situations.[1] This article summarizes the contribution of several scientists and methodologists who specialize in human error (notably Hollnagel,[2] Casey,[3] Reason,[4] Dekker[5] and Leveson[6]), whose shared view is that unexpected events should not be attributed to the person who happened to be there at the time of the mishap, and should not be regarded as force majeure. A better approach, which may help mitigate the risks of unexpected events, is to consider humans' limited capability to coordinate with the system during the interaction.[1]

Definition

An unexpected event is a normal event that occurs in an exceptional situation, with costly results: productivity loss,[7] reduced customer satisfaction[8] and accidents.[9]

In practice, systems cannot always meet all the expectations of their stakeholders.

Easy-to-read examples of unfortunate unexpected events, including several well-known accidents, are described in Casey's book "Set Phasers on Stun".[10]

Characteristics of unexpected events

Donald Norman has noted two characteristics of unexpected events: first, they always occur, and second, when they do occur, they are always unexpected.[11]

The accident investigator Sidney Dekker observed that unexpected events, such as user slips and mode errors, are commonly regarded as human errors. If the results are dramatic, they are typically attributed to the user's carelessness; if there is nobody to accuse, they may be regarded as force majeure. Dekker called this "the Old View" of accident investigation.[12]

Recent studies demonstrate that systems engineers can reduce the operational risks due to unexpected events by adopting "the New View": considering human limitations, and taking steps to ensure that the system and its users work in a coordinated manner.[12]

Classification

Main sources of abnormal system behavior include:[13]

- Component failure
- User slips and mistakes
- Wrong user orientation
- Unexpected operational context

The Old View

Recent accident studies demonstrate that people are tempted to regard the persons who triggered the event, or who were present when it occurred, as "bad apples":

When faced with a human error problem, you may be tempted to ask 'Why didn't they watch out better? How could they not have noticed?'. You think you can solve your human error problem by telling people to be more careful, by reprimanding the miscreants, by issuing a new rule or procedure. These are all expressions of 'The Bad Apple Theory', where you believe your system is basically safe if it were not for those few unreliable people in it. This old view of human error is increasingly outdated and will lead you nowhere.[12]

Mitigating the risks of unexpected events

A new definition of human errors implies that they are the result of the mishap, not its source.[14] The "new view" of mishaps is that they may be due to an organizational failure to prevent them (see James Reason's organizational models of accidents, 1997). Prevention can be achieved by promoting a safety culture and by employing safety engineering.

Safety culture: James Reason argued that it is the duty of the organization to define the lines between acceptable and unacceptable behavior.[15] Sidney Dekker proposed that organizations may do more for safety by promoting a "just culture":

Responses to incidents and accidents that are seen as unjust can impede safety investigations, promote fear rather than mindfulness in people who do safety-critical work, make organizations more bureaucratic rather than more careful, and cultivate professional secrecy, evasion, and self-protection. A just culture is critical for the creation of a safety culture. Without reporting of failures and problems, without openness and information sharing, a safety culture cannot flourish.[16]

Safety engineering: Harel and Weiss proposed that the system may be designed so that mishaps are avoided. This is a multi-disciplinary task:[1] common engineering methodologies and practices do not mitigate the risks of unexpected events, such as use errors or mode errors, which are typically attributed to force majeure. Traditional safety engineering addresses the first source of abnormal system behavior: component failure. The second source, slips and mistakes, is typically addressed by human error analysis.[17] User-centered design addresses the third source: wrong user orientation.[18] The fourth source, unexpected operational context, may be managed using the STAMP (Systems-Theoretic Accident Model and Processes) approach to safety. A key feature of this framework is the association of constraints with normal system behavior. According to this model, accidents are due to the improper setting of constraints, or to insufficient means to enforce them on the system; the STAMP methods are therefore about setting and enforcing such constraints.[19]
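
As an illustrative sketch only, and not a method taken from the cited sources, the STAMP idea of associating constraints with normal system behavior can be pictured as a runtime monitor that rejects any command whose resulting state would violate a declared constraint. The SafetyMonitor class, the constraint names and the numeric limits below are all hypothetical, written here in Python.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A safety constraint is a named predicate over a proposed system state.
Constraint = Callable[[Dict[str, float]], bool]

@dataclass
class SafetyMonitor:
    constraints: Dict[str, Constraint]

    def violated(self, proposed_state: Dict[str, float]) -> List[str]:
        """Return the names of constraints the proposed state would violate."""
        return [name for name, predicate in self.constraints.items()
                if not predicate(proposed_state)]

# Hypothetical heater controller: the temperature and pressure limits are illustrative.
monitor = SafetyMonitor(constraints={
    "temperature_below_limit": lambda s: s["temperature_c"] < 120.0,
    "pressure_below_limit": lambda s: s["pressure_bar"] < 8.0,
})

proposed = {"temperature_c": 130.0, "pressure_bar": 5.0}
violations = monitor.violated(proposed)
if violations:
    print("Command rejected; violated constraints:", violations)
else:
    print("Command accepted")
```

In this reading, "improper setting of constraints" corresponds to declaring the wrong predicates or limits, and "insufficient means to enforce them" corresponds to executing commands without consulting the monitor.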

The paradigm of Extended System Engineering[20] is that systems engineers can mitigate such operational risks by considering human limitations and by assuring that the system and its operators are coordinated.
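
As a minimal sketch of what system-operator coordination can mean in a design, and not an implementation drawn from the Extended System Engineering papers, the example below blocks commands until the operator has explicitly acknowledged the current system mode, which is one way to reduce mode errors. All class and mode names are hypothetical.

```python
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"
    AUTOPILOT = "autopilot"

class CoordinatedController:
    """Refuses to act until the operator has acknowledged the current system mode."""

    def __init__(self) -> None:
        self.system_mode = Mode.MANUAL
        self.operator_acknowledged_mode = Mode.MANUAL

    def switch_mode(self, new_mode: Mode) -> None:
        # A mode change invalidates the previous acknowledgement,
        # so the operator must re-confirm before further commands.
        self.system_mode = new_mode

    def acknowledge(self, mode: Mode) -> None:
        self.operator_acknowledged_mode = mode

    def execute(self, command: str) -> str:
        if self.operator_acknowledged_mode is not self.system_mode:
            return f"Blocked '{command}': operator has not acknowledged mode {self.system_mode.value}"
        return f"Executed '{command}' in mode {self.system_mode.value}"

controller = CoordinatedController()
controller.switch_mode(Mode.AUTOPILOT)
print(controller.execute("descend"))   # blocked: mode not yet acknowledged
controller.acknowledge(Mode.AUTOPILOT)
print(controller.execute("descend"))   # executed
```

The design choice illustrated here is that a mode change invalidates the previous acknowledgement, so the system never acts on an unverified assumption about the operator's awareness.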

A methodology for dealing with the unexpected operational context is task-oriented system engineering.[21]

References

  1. Mitigating the Risks of Unexpected Events by Systems Engineering.
  2. Hollnagel, E.: Why "Human Error" is a Meaningless Concept.
  3. Casey, S.: Set Phasers on Stun.
  4. Reason, J.: Managing the Risks of Organizational Accidents.
  5. Dekker, S.: The Field Guide to Understanding Human Error.
  6. Nancy Leveson home page.
  7. Landauer, T. K. (1996). The Trouble with Computers: Usefulness, Usability, and Productivity.
  8. Harel, A., Kennett, R. and Ruggeri, F. (2008). Modeling Web Usability Diagnostics on the Basis of Usage Statistics. In: W. Jank and G. Shmueli (Eds.), Statistical Methods in eCommerce Research. Wiley.
  9. Sheridan and Nadler (2006). Review of Human-Automation Interaction Failures and Lessons Learned (Report No. DOT-VNTSC-NASA-06-01).
  10. Casey, S. (1993). Set Phasers on Stun, and Other True Tales of Design, Technology and Human Error. Aegean Publishing.
  11. Norman, D. A. The Design of Future Things.
  12. Dekker, S. (2006). The Field Guide to Understanding Human Error.
  13. Harel, A. (2008). Standards for Defending Systems against Interaction Faults. INCOSE International Symposium, Utrecht, The Netherlands.
  14. Hollnagel, E. (1991). The phenotype of erroneous actions: Implications for HCI design. In G. W. R. Weir and J. L. Alty (Eds.), Human-Computer Interaction and Complex Systems. Academic Press.
  15. Reason, J. (1998). Achieving a safe culture: theory and practice.
  16. Dekker, S. (2007). Just Culture: Balancing Safety and Accountability.
  17. Norman, D. A. (1980). Why people make mistakes. Reader's Digest, 117, 103-106.
  18. Norman, D. A. (1990). The "problem" with automation: Inappropriate feedback and interaction, not "over-automation". In D. E. Broadbent, J. Reason, and A. Baddeley (Eds.), Human Factors in Hazardous Situations. Clarendon Press, New York, NY, 137-145.
  19. Leveson, N. G. (2004). A New Accident Model for Engineering Safer Systems. Safety Science, Vol. 42, No. 4, pp. 237-270.
  20. Zonnenshain, A. and Harel, A. (2008). Extended System Engineering - ESE: Integrating Usability Engineering in System Engineering. The 17th International Conference of the Israel Society for Quality, Jerusalem, Israel.
  21. Zonnenshain, A. and Harel, A. (2009). Task-Oriented System Engineering. INCOSE International Symposium, Singapore.
