In a recent article, researchers at Babson College make a case for the use of artificial intelligence (AI) to augment humans in health care decision-making.

“Cognitive technologies are being introduced in health care in part to reduce human decision-making and the potential for human error in providing care. Medical errors are the third leading cause of death in the United States, but they are not generally due to inherently bad clinicians. Instead, they are often attributed to cognitive errors (such as failures in perception, failed heuristics, and biases), an absence or underuse of safety nets and other protocols, and unwarranted variation in physician practice patterns.

The use of AI technologies promises to reduce the cognitive workload for physicians, thus improving care, diagnostic accuracy, clinical and operational efficiency, and the overall patient experience…But such systems are not yet in broad use, and, when they are used, they serve as a ‘second set of eyes.’”

While some fear that AI might replace humans, “we know of no radiologists who have lost their jobs from” the use of AI to assist in interpreting radiological images. “AI technologies such as IBM Watson have excited observers with their potential to treat cancer, but they don’t seem to have replaced any oncologists and, for that matter, there have been no rigorous examinations of their impact on patients. Sedasys, a semi-automated system for administering the anesthesia drug Propofol, met with poor sales and resistance from anesthesiologists and was withdrawn from the market. AI technologies may automate some medical tasks in the future, but few if any jobs have been fully computerized thus far.

Instead of large-scale job loss resulting from automation of human work, we propose that AI provides an opportunity for the more human-centric approach of augmentation. In contrast to automation, augmentation presumes that smart humans and smart machines can coexist and create better outcomes than either could alone. AI systems may perform some health care tasks with limited human intervention, thereby freeing clinicians to perform higher-level tasks.”

AI and Drug Interactions

Scientists at Stanford University are using AI to predict possible side effects from drug interactions. “With 125 billion possible side effects between all possible pairs of drugs, accurately predicting how a patient may react to a new drug can be a dangerous guessing game.” While still under development, the system, called Decagon, “could aid doctors when prescribing drugs to patients already on a laundry list of medications, while also assisting researchers in finding better combinations of drugs to treat complex diseases…The deep learning system infers patterns about drug interaction side effects and then predicts what the consequences from taking multiple drugs together would be.”
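The scale of that combinatorial space is easy to sanity-check with back-of-the-envelope arithmetic. The figures below (roughly 5,000 approved drugs and 10,000 catalogued side-effect types) are illustrative assumptions of mine, not numbers taken from the Stanford study, but they show how quickly pairs-times-outcomes reaches the hundreds of billions:

```python
from math import comb

# Illustrative assumptions (not figures from the Decagon paper):
n_drugs = 5_000          # rough count of approved drugs
n_side_effects = 10_000  # rough count of catalogued side-effect types

drug_pairs = comb(n_drugs, 2)                  # unordered pairs of drugs
possible_outcomes = drug_pairs * n_side_effects

print(f"{drug_pairs:,} drug pairs")                 # prints 12,497,500 drug pairs
print(f"{possible_outcomes:,} pair/side-effect combinations")
# prints 124,975,000,000 pair/side-effect combinations, i.e. roughly 125 billion
```

Under these assumed counts, the product lands almost exactly on the 125 billion figure quoted above, which is why exhaustive testing is infeasible and a predictive model is attractive.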

Process Improvement—Learning from Physicians

When teams tasked with improving a process approach me, the dialogue often goes something like this. Team (T)–“We need to improve this process.” Me (M)–“Why?” T–“Because it doesn’t work as well as it could.” M–“How do you know that?” T–“Customers are complaining” or “We don’t like how it works.” As the dialogue continues, I probe for the team’s understanding of how the process works now and how they would like it to work in the future. I push for metrics, sometimes referred to as Key Process Indicators (KPIs), that document the current state of the process. Then I press the team to define the desired future state of the process.

Often the team gets impatient with me as I suggest they define the current process with flow charts and collect data to assess the current state of the process using the KPIs. In my experience, these activities alone often reveal areas where significant improvement can be obtained. The pushback I receive is often “We don’t have time to do all of that. We just need to improve the process.”

Those of us involved in quality, and health care quality in particular, would be well advised to benchmark our approach to process improvement against that commonly used by physicians. Physicians reading this article may well suggest that my observations are naïve given that I am not a physician. They would have a point. However, I can still learn from what my naïve observation reveals to me about how they go about improving processes to achieve improved patient health.

When I approach my physician about a problem, I really am just describing symptoms of the problem. The physician’s job is to identify (diagnose) the root cause of those symptoms and prescribe improvements (treatments) to address it. The visit often begins with questions such as “Where does it hurt?” “How does it hurt–a sharp pain or a dull ache?” “When did it start?” “What lifestyle changes have you experienced?” The physician often orders tests (of KPIs) to assess indicators of potential causes and perhaps compares the results to baseline results from tests conducted in the past, when no symptoms were present. Based on those tests, he or she is in a better position to define a desired future state: e.g., lower blood sugar levels, a cleared-up lung infection, or a mended tibia. What the physician does not do is prescribe treatments to improve the system (me) before understanding the current state of that system and the causal factors affecting its performance.

Why do we expect that we can optimize the performance of processes that are much simpler than the human body (which can be viewed as a large collection of processes) when we have not defined the KPIs for the process, do not understand its current state, have not defined its desired future state, have not identified the gaps, and have not determined the root causes of those gaps before we start initiating improvement programs? This sounds much like the cliché “Ready; Shoot; Aim,” doesn’t it? Physicians don’t do that. Why should quality professionals?
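The sequence argued for above (define the KPIs, measure the current state, define the desired future state, then quantify the gaps) can be sketched as a simple comparison. This is a minimal illustration; the KPI names, current values, and targets are hypothetical, not from any real project:

```python
# Minimal sketch of a KPI gap analysis; all names and values are hypothetical.

current_state = {
    "cycle_time_days": 12.0,
    "defect_rate_pct": 4.5,
    "customer_complaints_per_month": 9,
}

future_state = {
    "cycle_time_days": 5.0,
    "defect_rate_pct": 1.0,
    "customer_complaints_per_month": 2,
}

def gap_analysis(current, target):
    """Return the gap (current minus target) per KPI, largest relative gap first."""
    gaps = {
        kpi: {
            "current": current[kpi],
            "target": target[kpi],
            "gap": current[kpi] - target[kpi],
        }
        for kpi in current
    }
    # Sort by gap relative to the target so the worst shortfalls surface first.
    return dict(sorted(gaps.items(),
                       key=lambda kv: kv[1]["gap"] / kv[1]["target"],
                       reverse=True))

for kpi, g in gap_analysis(current_state, future_state).items():
    print(f"{kpi}: current={g['current']}, target={g['target']}, gap={g['gap']}")
```

Only after the gaps are quantified does root-cause analysis begin; ranking by relative gap size keeps the team aimed at the largest shortfalls rather than at whichever symptom is loudest.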

The up-front time necessary to define the process and identify root causes for gaps between the current and future states should be viewed as an investment. In our personal lives, to prepare for the future, we must invest some of our scarce monetary resources in order to be able to pay for long-term benefits such as our children’s college and retirement. Sometimes that means finding ways to get by with less money in the short run in order to have more later. It is the same with your most scarce resource—time. In order to do what I suggest, you must find time. That might mean postponing some less critical activities, delegating some of your current duties, or even spending an extra hour or two at work occasionally. But if you invest that time wisely, the improved processes will often provide a positive return on investment. A more efficient process with fewer problems will occupy less of your time over the long run than you invested to improve it.

I will describe just one example of the effect of taking the “physician’s approach.” The team leader of an improvement project was already swamped with work. She strongly resisted my advice to follow the “physician’s process,” but in the end identified ways to find the time. She delegated some of her routine tasks to her direct reports, invested that time in the recommended approach, and the team she led found ways to make the process more efficient and more responsive to customers. In the after-action analysis, she told me that her direct reports were doing a better job with the delegated tasks than she had, so she left those tasks in their hands. The improved process efficiency freed up time for her employees that could be allocated to other important tasks. And the increased responsiveness to customers (these were internal customers) improved relations with other departments and reduced the time those departments spent on the process. Sounds like a WIN, WIN, WIN to me!