Archive for the ‘Normal Accident Theory’ Category

Chocolates and patient safety

In High Reliability Orgs, Normal Accident Theory, Patient Safety, Resiliency on September 22, 2013 at 3:17 pm

From Rosemary Gibson:

How Overtreatment and High Volume Health Care is Making Patient Safety a More Distant Reality by Rosemary Gibson

“This past week at the National Health Care Quality Colloquium I showed the classic “I Love Lucy” chocolate factory video during a presentation. The laughter was audible and the point was made: when the pace of work speeds up, work-arounds and cutting corners are inevitable. Employees tell the boss everything is fine – when it’s not.

In a health care system riddled with defects, where doctors and nurses are required by health care executives to work at a faster pace, the number of adverse events — and patients harmed — will increase. High volume health care and productivity targets are a toxic mix.” … Read on: it’s funny, but it’s not…
The chocolate scene on YouTube

When productivity trumps safety we all lose.

In human error, Normal Accident Theory, Resiliency, Safety climate on September 30, 2012 at 9:32 am

“You are judged by numbers in the lab,” McShane said. “There is a culture of pressure to get it done with no new resources. But there is no excuse for [cheating] at the end of the day.” (2012)

So goes the story of Annie Dookhan, a chemist in a Massachusetts crime lab who is suspected of compromising evidence in many of the 34,000 samples she tested over her nine-year career. Her motivation seems to have been no more nefarious than trying to look like a stellar employee.

What does this have to do with patient safety? It is common in hospitals today to push the boundaries of productivity: add more patients, add new procedures, add no more staff. In safety studies this can result in what is known as drift. You get through one shift with suboptimal staffing and nothing bad happens, so you chance it again, then again. Little by little, in order to cope, staff develop workarounds and shortcuts that come to be seen as normal (culture) and less risky, because staff have not gotten feedback on any bad results. If staff continue to be judged on output (census, patient turnover, lower expenditures), they will make these their priority rather than follow safe procedures.

According to Cook (2000), work processes do not choose failure but drift toward it as production pressures and change erode the defenses that normally keep failure at a distance. “This drift is the result of systematic, predictable organizational factors at work, not simply erratic individuals. To understand how failure sometimes happens, one must first understand how success is obtained: people learn and adapt to create safety in a world fraught with gaps, hazards, trade-offs, and multiple goals.”

In safety-critical environments that deal with people’s lives, leaders should be preoccupied with failure, not productivity. A leader is responsible for identifying drift by being present in daily processes. Drift can be identified by observing staff behaviors, reviewing peer reports, and asking people what kinds of things they are worried about. Asking staff to “do their best” without a supporting environment will not produce a high-performing system. Productivity goals should be based on an analysis of the work, not on how much money is in the budget. I think it’s time that as a nation we say, in all instances, “if there isn’t enough money to do things right, don’t do them at all.”

Annie Dookhan made some bad choices, but she worked in an environment where bad choices were acceptable, and when peers did speak up, nothing was done. Who is responsible for this?

And who is responsible for the incarceration or punishment of people who may be innocent, imprisoned all because a culture of productivity outranked safe procedures? In these circumstances, just as in healthcare, humans always suffer.

Safety first. Productivity second. These cannot be just words and slogans. They have to be guiding principles that are evident in everything we do, in healthcare and in crime labs. It scares me that this lab was run by… the Department of Public Health 😦

Beyond Reason…to Resiliency

In High Reliability Orgs, human error, Human Factors, Normal Accident Theory on November 13, 2010 at 5:06 pm

An earlier post presented James Reason’s Swiss cheese model of failure in complex organizations. This model and the concept of latent failures are linear models of failure: the failure is the result of one breakdown, then another, then another, which combine to produce a failure by someone or something at the sharp end of a process.
More recent theories expand on this linear model and describe complex systems as interactive: processes and relationships interact in a non-linear fashion to produce failure. Examples are Normal Accident Theory (NAT) and the theory of High Reliability Organizations (HRO). NAT holds that once a system becomes complex enough, accidents are inevitable; there will come a point when humans lose control of a situation and failure results, as in the case of Three Mile Island. In High Reliability Theory, organizations attempt to prevent the inevitable accident by monitoring the environment (St. Pierre et al., 2008). HROs examine their near misses to find holes in their systems; they look for complex causes of error, reduce variability, and increase redundancy in the hope of preventing failures (Woods et al., 2010). While these efforts are worthwhile, they still have not reduced failures in organizations to an acceptable level. Sometimes double checks fail, and standardization and policies increase complexity.

One of the new ways of thinking about safety is known as Resilience Engineering. Read the rest of this entry »

Man versus System

In human error, Normal Accident Theory, Patient Safety, Safety climate on November 5, 2010 at 10:20 am

The person approach to safety issues assumes failures are the result of the individual(s) involved in direct patient contact. In this model, when something goes wrong it is the provider’s fault: a knowledge deficit, inattention (or other cognitive lapses), or not being at their best (St. Pierre et al., 2008). Other labels commonly applied to individuals under the person approach include: forgetful, unmotivated, negligent, lazy, stupid, reckless… Read the rest of this entry »

Scratch Tickets & Independent Double Checks

In Human Factors, Interuptions, Multitasking, Normal Accident Theory, Patient Safety, Teamwork on September 19, 2010 at 2:34 pm

I played tennis this morning with a friend. On the way home I thought I would stop at the supermarket to pick up some snacks for the Patriots game today. I realized I forgot my debit card (ah, the limitations of the human memory). Looking for alternate forms of payment, I found winning lottery scratch tickets in my glove compartment.

I quickly added them up (3 of them) and confirmed that … Read the rest of this entry »

The BP Oil spill and Normal Accident Theory

In Normal Accident Theory on September 5, 2010 at 2:37 pm

If we believe in Normal Accident Theory (NAT), should we focus on prevention but also expect failure?

NAT and the BP oil spill excerpt from Culturing Science blog:
“We are all human. We all know what it’s like to procrastinate, to forget to leave a message, to have our minds wander. In his book, Chiles argues, citing over 50 examples in immense detail, that most disasters are caused by “ordinary mistakes” – and that to live in this modern world, we have to “acknowledge the extraordinary damage that ordinary mistakes can now cause.” Most of the time, things run smoothly. But when they don’t, our culture requires us to find someone to blame instead of recognizing that our own lifestyles cause these disasters. Instead of reconsidering the way we live our lives, we simply dump our frustration off so that we can continue living our lives in comfort (Waters, 2010).”

Where is the best place to direct resources to truly improve safety? (continued…)
Read the rest of this entry »

Normal Accident Theory

In Normal Accident Theory on September 2, 2010 at 8:45 pm

The Three Mile Island nuclear power plant accident prompted the idea of Normal Accident Theory. This theory holds that as systems become complex, accidents become inevitable, or normal (St. Pierre et al., 2008). The susceptibility to accidents in these complex organizations is determined by the dimensions of interactive complexity and coupling (St. Pierre et al., 2008). An accident is defined as “an incident in which non-trivial loss occurs” (Cooke & Rohleder, 2006, p. 214). An incident is an unexpected or unwanted change in a process that has the potential to cause a loss (Cooke & Rohleder, 2006). An accident is classified as a disaster when loss of life, extensive property damage, or financial loss occurs (Cooke & Rohleder, 2006)… Read the rest of this entry »
