St. Luke’s and the Science of High Reliability

In today’s post below, blog editor Roya Camp interviews our very own Dr. Marshall Priest, executive medical director of St. Luke’s Heart, about St. Luke’s commitment to patient safety and our desire to become a high-reliability organization. 

Dr. Priest discusses the fact that health care is not safe enough, harms far too many, and is not employing the same culture and strategies that high-risk industries outside of health care are using to make their operations safe. 

In large part, this is because we do not use systems thinking to approach patient safety and have not made elimination of patient harm a priority, given all the turbulence and change in the industry. 

St. Luke’s is working toward that. We are striving for zero harm and a culture of safety. We are committing resources and people to this effort, educating our boards, adopting best practices from outside of health care, and focusing on patient safety at our upcoming annual St. Luke’s Health System Summit in February, when leaders, physicians, and board members will assemble from across the 300-mile span of our system to share information and the latest thinking.

I hope you find this piece as inspiring and as much a call to action as I did.

There are certain things doctors know about health care that other people don’t.

They know, for example, that health care in the face of terminal illness is often not helpful and that while it may prolong life, it does not necessarily make for a good life. And so they often don’t prolong their own lives when such a decision must be made.

Many of them also know that hospital medical errors are the third leading cause of death in the United States, trailing only heart disease and cancer. That would make health care the leading cause of death brought on by other human beings.

And so, physicians take extraordinary precautions to try to guard against harm when they or loved ones enter the hospital, going to extreme lengths to make sure their profession upholds that time-honored pledge: first, do no harm.

Call it health care’s dirty little secret.

About 14 years ago, the Institute of Medicine published a landmark report titled "To Err Is Human." The report said, among other things, that health care in the United States was not as safe as it should be, that at least 44,000 people, and perhaps as many as 98,000 people, die in hospitals each year as a result of medical errors that could have been prevented, and that the knowledge already was available to prevent many of the errors. The report urged a 50 percent reduction in errors over the next five years.

It did not happen. Millions of dollars have been spent on various efforts to improve the quality of health care nationwide since the Institute of Medicine's 1999 call to arms, yet very little has changed to make care better and safer overall. And in September, a study published in the Journal of Patient Safety suggested the numbers may be much higher, possibly four times the institute's high-end estimate from 14 years ago.

Consider these sobering observations, circa 2011 and 2013, from the work and writing of Mark R. Chassin and Jerod M. Loeb of the Joint Commission:

“No hospitals or health systems have achieved consistent excellence throughout their institutions. … Isolated examples of improvement now can be found, and some of the results are impressive. Measured against the magnitude of the problems, however, the overall impact has been underwhelming. … Operations on the wrong patient or the wrong body part continue to take place, perhaps as often as fifty times per week. Fires break out in our operating rooms during surgery, perhaps as frequently as six hundred times a year … The frequency and severity of these failings stand in particularly sharp contrast to the extraordinary successes that industries outside health care have had in achieving and sustaining remarkable levels of safety.”

“We are experiencing an epidemic of serious and preventable adverse events,” Chassin and Loeb wrote, going on to say, “The available evidence suggests that the risk of harmful error in health care may be increasing.”

For Dr. Marshall Priest, executive medical director of St. Luke's Heart, the need for healthcare delivery systems to join the ranks of what are known as "high-reliability organizations," the industries outside health care that Chassin and Loeb cite for their exceptional safety track records, has never been greater.

Investigations by the National Transportation Safety Board and agencies that regulate other extremely high-risk endeavors have helped to reveal patterns of behavior that lead to disaster. St. Luke’s leaders are studying the changes that have led to high reliability in these other fields for application in healthcare delivery.

High-reliability organizations operate in industries such as aviation, chemical and petroleum production, and nuclear power, where the risk and consequences of mishaps are grave, and they "operate under hazardous conditions while maintaining safety levels that are far better than those of health care," according to Chassin and Loeb, in an article published last year. Chassin served on the Institute of Medicine committee that produced "To Err Is Human."

Both commercial aviation and the nuclear industry have learned, over a period of years and through tragic, high-profile disasters, how to maintain a state of constant vigilance in ensuring against mass tragedies. Such scenarios haven’t characterized health care, where preventable harm happens to individual patients rather than entire planeloads of passengers or communities of residents, and this may be part of the reason healthcare institutions haven’t been compelled to join the ranks of the highly reliable.

Dr. Priest and others are hoping to change that by bringing the high-reliability culture and mentality learned by these other high-risk businesses to St. Luke’s Health System.

Quality and safety are critical components of a high-reliability culture, which in health care would mean that harm is always prevented, in every setting. Aviation and nuclear power are now very safe because of the processes they have put in place: checklists, robust process improvement, and treating near misses as opportunities to make changes before tragedy ensues.

High-reliability organizations are characterized by what researchers have called a “collective mindfulness,” in which all participants are “acutely aware that even small failures in safety protocols or processes can lead to catastrophic adverse outcomes.” They “are preoccupied with failure, never satisfied … and they are always alert to the smallest signal that a new threat to safety may be developing,” according to Chassin and Loeb.

Hospitals and healthcare organizations, in contrast, “behave as if they accept failure as an inevitable feature of their daily work,” the authors continued, leading to “misplaced confidence that their safety systems are adequate.”

Dr. Priest, who has specialized in an unusually high-risk area of a high-risk enterprise for more than three decades, understands how well-intentioned healthcare practitioners can fail to see just how serious the issue is.

“I think all of our caregivers want quality and safety,” he said. “I don’t think there’s anybody that doesn’t want that. However, safety is a science; there’s a lot of learning of high-reliability principles to be done to achieve high levels of safety.”

High reliability has come hard for aviation and some other fields. St. Luke’s leaders believe there’s plenty of reason to learn from such industries, and plenty of statistical evidence that a high-reliability approach is the way to go in health care.

Dr. Priest has immersed himself in the literature and thinking of experts across industries and management practices, and knows that high-reliability thinking has come over time to other high-risk industries as well.

“The science of safety and the work required to achieve a high-level safety culture has been an eye-opener to me,” Dr. Priest said. “We are on a journey to zero harm.”

Dr. Priest and a handful of other physician leaders and others throughout St. Luke’s have set about shifting the mindset. Early work has to do with building trust; high-reliability organizations are characterized by an environment that fosters “a continuous flow of information about possible hazards,” in the words of Chassin and Loeb. “All front-line workers must trust each other in order to feel safe when they identify a problem,” and staff “must trust that management will fix the problem.”

“We have to engage our front-line providers to feel safe,” Dr. Priest said. “If you see something, say something.”

In their work, Chassin and Loeb note that they do not know of high-reliability organizations in health care and that in fact, healthcare systems “often exhibit the very opposite of high reliability.” High-reliability organizations “do not have safety processes that fail 40 to 60 percent of the time, which is the case in health care,” the authors observed.

Dr. Priest believes high reliability is integral to ensuring health systems’ viability over time.

“I think that’s part of us getting through the yellow box and delivering value,” he said. “I think we’re getting there. I think we’re farther along that journey than we were a year ago.”


2 Responses to “St. Luke’s and the Science of High Reliability”

  • This was a most interesting article highlighting high-reliability organizations and St. Luke’s commitment to integrating this culture, as Dr. Priest suggests.

    I wonder how we will reconcile the current “Just Culture” model with the HRO “preoccupation with failure” model. Any thoughts?

    Karen Stevens, RN

  • Hi, Karen.

    Excellent question! I think the two coexist nicely. We tend to find that serious safety events are a consequence of the holes in the Swiss cheese being perfectly aligned. In other words, several failures have to all occur at the same time.

    While we certainly have to continue to insist on personal accountability (e.g., washing hands is a responsibility of the individual), these events we are trying to prevent are most often system failures, so the focus has to continue to be “how did our systems/processes allow this to happen?”

    Humans are always going to make mistakes and errors. To get to high reliability, it will take using technology to help humans to make the best decisions and provide the best care (e.g., medication administration systems).

    Great question! Thanks for participating in the discussion and for following the blog!
