AI Chatbots Are Becoming Physician's Assistants


Physicians' medical decisions benefit from chatbots, recent studies find.

Artificial intelligence is transforming patient care and medical diagnostics, and AI-powered chatbots are becoming increasingly proficient at diagnosing diseases and supporting physicians as virtual assistants. Across the United States, medical professionals at various institutions are working to integrate AI into healthcare practice, keeping pace with the technology's rapid advancement.

Jonathan H. Chen, MD, PhD, an assistant professor of medicine, and a team of researchers are investigating how large language models (LLMs), a type of AI technology used by platforms such as IBM Watson, PathAI, and others, can effectively support physicians and improve clinical performance.

"For years, I’ve said that when combined, human plus computer will outperform either one alone," Chen noted. "This study challenges us to think more critically about that relationship. We need to ask ourselves, 'What is a computer good at? What is a human good at?' It may be time to rethink how we blend these skills and identify the tasks best suited for AI."

The research findings were published in Nature Medicine on February 5. The study was co-authored by Chen and Adam Rodman, MD, an assistant professor at Harvard University. Postdoctoral scholars Ethan Goh, MD, and Robert Gallo, MD, served as co-lead authors.

Boosted by chatbots

In October 2024, Chen and Goh led a team that ran a study, published in JAMA Network Open, testing how well the chatbot diagnosed diseases. It found that the chatbot's accuracy was higher than that of the doctors, even when the doctors had access to a chatbot. The current paper digs into the squishier side of medicine, evaluating chatbot and physician performance on questions that fall into a category called "clinical management reasoning."

Chatbots in healthcare are transforming patient care and medical diagnostics.

Goh explains the difference like this: Imagine you're using a map app on your phone to guide you to a certain destination. Using an LLM to diagnose a disease is sort of like using the map to pinpoint the correct location. How you get there is the management reasoning part: do you take back roads because there's traffic? Stay the course, bumper to bumper? Or wait and hope the roads clear up?

In a medical context, these decisions can get tricky. Say a doctor incidentally discovers a hospitalized patient has a sizeable mass in the upper part of the lung. What would the next steps be? The doctor (or chatbot) should recognize that a large nodule in the upper lobe of the lung statistically has a high chance of spreading throughout the body. The doctor could immediately take a biopsy of the mass, schedule the procedure for a later date or order imaging to try to learn more.

Determining which approach best suits the patient comes down to a host of details, starting with the patient's known preferences. Are they reluctant to undergo an invasive procedure? Does the patient's history show a pattern of missed follow-up appointments? Is the hospital's health system reliable at organizing follow-up appointments? What about referrals? These contextual factors are crucial to consider, Chen said.

The team designed a trial to study clinical management reasoning performance in three groups: the chatbot alone, 46 doctors with chatbot support, and 46 doctors with access only to internet search and medical references. They selected five de-identified patient cases and gave them to the chatbot and to the doctors, all of whom provided a written response that detailed what they would do in each case, why and what they considered when making the decision.

In addition, the researchers tapped a group of board-certified doctors to create a rubric for judging whether a medical judgment or decision had been appropriately assessed. The written responses were then scored against that rubric.
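To make the scoring setup concrete, here is a minimal sketch, in Python, of how free-text responses could be tallied against a checklist-style rubric. The rubric items, keyword matching, and function names are illustrative assumptions made for this article; in the study itself, physician raters did the grading, not an automated script.

from dataclasses import dataclass

@dataclass
class RubricItem:
    description: str   # what an appropriate response should address
    keywords: tuple    # crude textual proxy for that item (illustrative only)

# Hypothetical rubric items for the lung-nodule scenario described above.
RUBRIC = (
    RubricItem("Recognizes malignancy risk of an upper-lobe lung nodule", ("malignancy", "cancer")),
    RubricItem("Discusses biopsy as an option", ("biopsy",)),
    RubricItem("Considers the patient's preferences", ("preference", "wishes")),
    RubricItem("Plans reliable follow-up or referral", ("follow-up", "follow up", "referral")),
)

def score_response(response_text: str) -> int:
    """Count how many rubric items a written response appears to address."""
    text = response_text.lower()
    return sum(any(k in text for k in item.keywords) for item in RUBRIC)

if __name__ == "__main__":
    example = ("Given the malignancy risk, I would discuss a biopsy with the patient, "
               "ask about their preferences, and arrange reliable follow-up.")
    print(f"Rubric items addressed: {score_response(example)} of {len(RUBRIC)}")

Keyword matching is far too crude for real clinical text; the sketch is only meant to show the shape of the task: each response earns credit for every rubric item it addresses, and the resulting scores are compared across the three groups.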

To the team's surprise, the chatbot outperformed the doctors who had access only to the internet and medical references, ticking more items on the rubric than the doctors did. But the doctors who were paired with a chatbot performed as well as the chatbot alone.

A future of chatbot doctors?

Exactly what gave the physician-chatbot collaboration a boost is up for debate. Does using the LLM force doctors to be more thoughtful about the case? Or is the LLM providing guidance that the doctors wouldn't have thought of on their own? It's a future direction of exploration, Chen said.

The positive outcomes for chatbots, and for physicians paired with chatbots, raise an ever-popular question: Are AI doctors on their way?

"Perhaps it's a point in AI's favor," Chen said. But rather than replacing physicians, the results suggest that doctors might want to welcome a chatbot assist. "This doesn't mean patients should skip the doctor and go straight to chatbots. Don't do that," he said. "There's a lot of good information out there, but there's also bad information. The skill we all have to develop is discerning what's credible and what's not right. That's more important now than ever."

Researchers from VA Palo Alto Health Care System, Beth Israel Deaconess Medical Center, Harvard University, University of Minnesota, University of Virginia, Microsoft and Kaiser contributed to this work. The study was funded by the Gordon and Betty Moore Foundation, the Stanford Clinical Excellence Research Center and the VA Advanced Fellowship in Medical Informatics. Stanford's Department of Medicine also supported the work.