AI’s role in medicine is growing, but doctors doubt chatbot care

Dr. Sina Bari, a practicing surgeon and AI healthcare leader at data company iMerit, has already seen how generative AI can mislead patients when used without a medical context.

Recently, one of his patients arrived with a printed ChatGPT conversation claiming a prescribed medication carried a 45% risk of pulmonary embolism. After digging into the source, Dr. Bari discovered the statistic came from a study focused on a small subgroup of tuberculosis patients—a scenario that had nothing to do with his patient’s condition.

It was a stark reminder of how confidently AI can surface incorrect or irrelevant medical information.

Yet despite that experience, Dr. Bari wasn’t alarmed when OpenAI announced ChatGPT Health, a dedicated healthcare-focused chatbot, last week. Instead, he was cautiously optimistic.

Why Some Doctors Welcome ChatGPT Health

ChatGPT Health, expected to roll out in the coming weeks, will give users a more private space to discuss health-related questions. OpenAI says conversations within the tool won’t be used to train its underlying AI models—a key distinction from standard ChatGPT interactions.

“I think it’s great,” Dr. Bari said. “This is already happening. Formalizing it, protecting patient information, and putting guardrails around it will make it more powerful for patients.”

The product also allows users to upload medical records and integrate data from apps like Apple Health and MyFitnessPal, enabling more personalized guidance.

That personalization, however, introduces a new set of concerns.

Privacy and Regulation: A Grey Area for AI Health Tools

Security experts warn that ChatGPT Health could blur regulatory lines. Medical data may flow from HIPAA-compliant healthcare systems into platforms that don’t fall under the same regulatory framework.

“All of a sudden, you have medical data transferring from HIPAA-compliant organizations to non-HIPAA-compliant vendors,” said Itai Schwartz, co-founder of data loss prevention firm MIND. “Regulators are going to have to decide how to approach this.”

Still, many in the industry argue that regulation is already lagging behind reality.

More than 230 million people reportedly discuss health topics with ChatGPT every week, making healthcare one of the chatbot’s most common use cases.

“This was one of ChatGPT’s biggest organic use cases,” said Andrew Brackin, a partner at health-tech investor Gradient. “Building a more private, secure, and optimized version for health questions just makes sense.”

The Hallucination Problem Isn’t Going Away

Despite safeguards, AI chatbots still struggle with hallucinations—fabricating or misinterpreting facts—a risk that becomes especially dangerous in medicine.

According to Vectara’s Factual Consistency Evaluation Model, OpenAI’s GPT-5 shows higher hallucination rates than comparable models from Google and Anthropic. That alone is enough to make clinicians uneasy about AI acting as a frontline medical advisor.

But others argue that focusing solely on hallucinations ignores a deeper systemic crisis.

Access to Care Is the Bigger Emergency

Dr. Nigam Shah, professor of medicine at Stanford and chief data scientist at Stanford Health Care, believes the healthcare access gap is more urgent than flawed chatbot advice.

“In many systems, seeing a primary care doctor takes three to six months,” Shah said. “If your choice is waiting half a year or talking to something that isn’t a doctor but can still help—what would you choose?”

For Shah, the more effective path forward isn’t patient-facing chatbots but AI tools embedded directly into healthcare systems.

AI’s Strongest Use Case: Supporting Clinicians, Not Replacing Them

Research consistently shows that administrative work consumes up to 50% of a primary care physician’s time, severely limiting how many patients they can see.

Automating documentation, chart review, and insurance workflows could dramatically expand healthcare capacity—without asking patients to rely on AI alone.

At Stanford, Shah leads the development of ChatEHR, an AI tool built directly into electronic health record systems. It allows clinicians to query patient records conversationally, reducing time spent navigating complex charts.

“Making the medical record more user-friendly means physicians spend less time searching and more time talking to patients,” said Dr. Sneha Jain, an early ChatEHR tester.

Insurers and Health Systems Are Also Turning to AI

Anthropic is pursuing a similar strategy. This week, the company announced Claude for Healthcare, aimed at automating administrative tasks across provider and insurer workflows.

One major target: prior authorization requests, which can take 20 to 30 minutes per case.

“Some of you process hundreds or thousands of these each week,” said Anthropic CPO Mike Krieger at JPMorgan’s Healthcare Conference. “Cutting even 20 minutes from each is a massive productivity gain.”

The Tension at the Heart of AI in Medicine

As AI becomes more embedded in healthcare, an unavoidable tension remains. Doctors are trained to be cautious and patient-first, while technology companies ultimately answer to shareholders.

“I think that tension matters,” Dr. Bari said. “Patients depend on us to be conservative and skeptical to protect them.”

That balance—between innovation and restraint—may determine whether AI becomes a trusted partner in healthcare, or just another tool patients learn to question.

Written by Hajra Naz
