Will AI Eventually Replace Doctors?
Healthcare Feb 1, 2023

Maybe not entirely. But the doctor–patient relationship is likely to change dramatically.


Based on the research of

David Dranove

Craig Garthwaite

The year is 2070. You walk into an urgent care clinic feeling unwell, and an Alexa-esque device asks you to describe your symptoms. The computer takes down your information, retrieves details from your past electronic health records, and suggests diagnostic tests for a human technician to perform. After getting the test results, the software program prescribes a medication to treat your condition.

This futuristic scenario is one example of how AI might become part of healthcare. In fact, AI systems are already being developed to read medical scans and tissue samples to determine if a patient has a disease. Future software could analyze patterns across thousands of health records to pinpoint the most effective treatment for a particular patient—for instance, which cancer therapy might work best given their genetic makeup.

In a recent paper, David Dranove and Craig Garthwaite, professors of strategy at Kellogg, explored the implications of incorporating AI into healthcare—in particular, how such software would affect the central role of the physician.

For now, the need for human interaction in healthcare is likely to keep AI on the sidelines as a complement, rather than a substitute, for doctors, Dranove says. But perhaps in a few decades, patients will be comfortable interacting with computers and even trust them as their main source of medical guidance. “Maybe in the long run, that will change,” he says.

Mixed evidence

Proponents of this new technology believe that AI could help in two main ways.

The first area where AI could make inroads is treatment plans informed by data mining. The software could extract patterns from electronic records of previous patients’ characteristics, genetic variations, symptoms, treatment, and health outcomes. Based on a new patient’s similarities to past cases, the AI program then might be able to predict the most effective drugs to prescribe or surgery to perform.
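The pattern-matching idea described above can be illustrated with a toy nearest-neighbor lookup: find the past patient most similar to the new one and suggest the treatment that worked in that case. This is only a minimal sketch, not the researchers' method; the patient features, values, and treatment labels below are entirely hypothetical.

```python
# Illustrative sketch: suggest a treatment for a new patient by finding
# the most similar past case (a simple nearest-neighbor approach).
# All data and field names here are hypothetical.
import math

past_cases = [
    # (features: age, biomarker level), treatment that worked best
    ((45, 2.1), "drug_A"),
    ((67, 8.4), "drug_B"),
    ((52, 7.9), "drug_B"),
]

def suggest_treatment(new_patient):
    """Return the treatment from the most similar past case."""
    closest = min(past_cases, key=lambda case: math.dist(case[0], new_patient))
    return closest[1]

print(suggest_treatment((64, 8.0)))  # closest past case is (67, 8.4) -> drug_B
```

Real systems would use far richer features (genetic variants, symptoms, outcomes over time) and more sophisticated models, but the core idea is the same: predict for a new patient from the records of similar past patients.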

The second area is in diagnosis, particularly in the fields of radiology and pathology. A computer could be given a large set of images from previous patients with known diagnoses. The software program could then be trained on those images to recognize features that indicate a positive or negative result.

Some studies suggest that AI can perform such tasks fairly well—and sometimes pick up on signs of disease that doctors miss. For instance, one team reported that an AI program detected breast cancer in mammograms—particularly early-stage invasive cancers—more accurately than radiologists did.

Other studies have explored whether it’s better for AI to complement or replace physicians’ expertise when it comes to making diagnoses. But this research has come to conflicting conclusions, Dranove says. In some cases, such as the breast-cancer study, doctors who were given guidance from AI made less-accurate decisions than AI alone.

But in other cases, the combination of physician expertise and AI was the best option. For example, one team tested AI software trained to detect hip fractures in radiographs. Two experienced radiologists who incorporated the AI program’s output into their evaluations performed better than the software by itself.

“The evidence is mixed,” Dranove says.

A need for compassion

But even if the evidence ends up showing that AI can do as well as or better than doctors in some situations, will AI actually replace physicians? The answer depends partly on how critical human interaction is, Dranove says. For instance, physicians elicit information from patients, explain why a procedure is necessary, and provide instructions for follow-up care. Dranove believes that most older adults today, and perhaps younger adults as well, still want to hear from a human being about their health.

“There’s a need for compassion in communication that AI is unable to contribute,” he says.

Healthcare organizations might decide that a lower-paid medical professional, such as a nurse or physician assistant, can play that role, with their decisions guided by AI. But that too will depend on whether doctors’ tasks can be boiled down to standardized questions and responses, or whether greater nuance and expertise are required, Dranove says. For example, a physician might be more adept at helping a patient feel comfortable discussing their health condition, or at determining how much a disease is truly affecting their quality of life.


Even in radiology, one of the specialties that seems the most threatened by AI, the job still involves substantial human interaction. Dranove and Garthwaite examined a list of tasks for which radiologists bill; these included services such as X-ray scans, CT scans, ultrasound examinations, mammography, and so on. At first glance, radiologists appeared to be largely spending their days using technology.

But a more comprehensive list of tasks, from the Occupational Information Network, showed that the job also involved many interpersonal exchanges. For instance, radiologists need to discuss results with other medical staff and explain risks, benefits, and treatment options to patients.

“It’s not just reading a film and writing a report,” Dranove says.

Who gets the profits?

The researchers also considered what would happen to the value chain in healthcare if AI were to become a complement to physicians, rather than a substitute. The value chain includes all the parties who contribute to and benefit from it: the patient, doctor, nurse, healthcare system, drug company, insurance company, and so on. As with the production of any good or service, healthcare can create value—including better health for patients, wages for providers, and profits for companies—and incur costs.

Because physicians play such a central role, they often capture a large portion of the value in the form of very high salaries. If AI took over diagnosis and treatment decisions, one might expect doctors to become less valued and for their wages to sink accordingly. On the other hand, might doctors end up receiving even higher salaries, if they can issue faster or more-accurate medical decisions with AI’s help?

Although doctors may become more productive, they won’t necessarily reap financial benefits, Dranove says. Instead, the healthcare system is more likely to capture the additional value through higher profits. For example, an organization might improve its healthcare quality metrics and thus argue to an insurance company that it should be paid more.

“Doctors will not be replaced by AI, but they may not directly profit from it either,” Dranove says.

And it’s not clear if even the healthcare organization will get monetary rewards. Medical care in the United States is often based on a fee-for-service model. If AI reduces overtreatment and leads to fewer procedures, “you’re losing money,” he says.

Organizations therefore might not have a strong financial motivation to develop and use AI, even if it improves patient outcomes. The exception would be self-contained systems such as the Veterans Health Administration; if they save money, they reap all the benefits.

A patchwork quilt

Incorporating AI into healthcare faces many other hurdles. One of the biggest is lack of access to data. “You can’t outperform a physician based on reams and reams of data if you don’t have lots and lots of patients on which to train the computer,” Dranove says.

In the United States, medical records are scattered across healthcare systems, and HIPAA restricts the ability to share information. As a result, most AI development so far has occurred within medical organizations that are using only their own patients’ records. This means that large healthcare systems have an advantage over smaller ones, which might not have enough data to train the software effectively.

“I think we’re going to see a patchwork quilt where AI gets implemented,” he says.

While it’s possible that these large organizations could share their trained software with others, they might hesitate to do so. “From a societal standpoint, I should share that information” because it could improve health outcomes for patients elsewhere, Dranove says. But the organization’s perspective might be, “why would I give away for free something that makes my system that much more valuable?” Without a federal law requiring data or software sharing, he says, “I think this is going to be a highly fragmented process for a long time.”

That doesn’t mean that large healthcare organizations should be held back from developing AI, he says. But a coordinated approach would distribute the benefits of AI more equitably.

“If data can be shared, then everybody will have that opportunity,” he says.

Featured Faculty

David Dranove
Walter J. McNerney Professor of Health Industry Management; Faculty Director of PhD Program; Professor of Strategy

Craig Garthwaite
Professor of Strategy; Herman Smith Research Professor in Hospital and Health Services Management; Director of Healthcare at Kellogg

About the Writer

Roberta Kwok is a freelance science writer based in Kirkland, Washington.

About the Research

Dranove, David, and Craig Garthwaite. 2022. “Artificial Intelligence, the Evolution of the Healthcare Value Chain, and the Future of the Physician.” Working paper.

