Doctors Think AI Shows Promise—But Worry It Will Mislead Patients
Jul 16, 2025, 11:10 AM

ChatGPT is now a competitor to Doctor Google, according to a new report from Elsevier, an academic publishing company that also develops AI tools for medical professionals such as Scopus AI and Reaxys. The findings, released today, come from a survey conducted earlier this year of 2,206 doctors and nurses across 109 countries. Participants included 268 clinicians from North America, 1,170 from the Asia-Pacific region, 439 from Europe, 164 from Latin America, and 147 from the Middle East and Africa; 18 chose not to share their location.
The survey asked clinicians about their views on AI’s current role in healthcare and its potential future impact. Elsevier distributed the survey via email to healthcare workers who had previously published work or contributed to third-party panels, acknowledging that because the sample was not random, the results may not be universally applicable.
One of the primary concerns expressed by respondents was how patients are using ChatGPT and similar AI tools. Many arrive at appointments with incorrect assumptions based on what they’ve read online. This issue stems from the fact that these models often produce inaccurate information—OpenAI’s GPT-4o and GPT-4o-mini, for instance, reportedly generate false responses 30–50% of the time, according to internal testing.
This misinformation adds to the pressure on already overburdened healthcare professionals. Among time-constrained clinicians in North America, 34% reported dealing with patients who ask many questions, compared with roughly 22% globally.
Jan Herzhoff, president of Elsevier's global healthcare businesses and the study's sponsor, told Forbes that a more alarming trend could be patients opting out of visiting hospitals altogether and relying solely on AI or websites for health advice. He noted that more than half of U.S. clinicians expect most patients to self-diagnose rather than consult a doctor within three years, though data on current trends remains unclear.
Despite these concerns, more clinicians are turning to AI themselves. The percentage of healthcare workers using AI in clinical settings has nearly doubled in the past year—from 26% to 48%. While many remain hopeful about AI’s potential to improve efficiency, few believe their institutions are effectively leveraging it to solve real-world issues.
Most surveyed clinicians anticipate that AI will help them save time, deliver faster and more accurate diagnoses, and ultimately enhance patient care within the next three years. Startups like K Health and Innovaccer, which have raised $384 million and $675 million respectively, are among those building such tools. According to PwC’s Strategy& team, the global AI healthcare market is projected to hit $868 billion by 2030.
Herzhoff emphasized that AI should act as a support system for clinicians, not a replacement. He sees particular promise in using AI for administrative tasks. Currently, clinicians use AI to detect harmful drug interactions before prescribing and to draft letters to patients. Among those who use AI, 50% regularly or always rely on general-purpose tools like ChatGPT or Gemini, compared to just 22% who frequently use specialized medical tools.
Part of the reason for this preference may stem from limited institutional support. Only one-third of respondents said their workplace offers adequate access to digital tools and AI training, while just 29% reported effective AI governance at their institution.
Ensuring that AI systems are trained on high-quality, peer-reviewed content is another top priority for clinicians—70% of doctors and 60% of nurses consider this essential.
Still, adoption faces challenges. Herzhoff acknowledged that although AI has the potential to save time in the long run, learning how to use it effectively—especially mastering prompt engineering—requires effort that many clinicians don’t have the bandwidth for right now. Nearly half (47%) of clinicians said fatigue has affected their ability to provide quality patient care.
“Paradoxically, they don’t have the time, and they don’t find the time to use these tools,” Herzhoff admitted.
While clinicians look forward to using AI to streamline daily tasks, they remain cautious about allowing it to influence clinical decisions. Only 16% currently use AI for decision-making, and 37% say they are unwilling to do so.
As one respondent put it: “AI should give me the information I need to make good decisions. But I won’t hand over responsibility for clinical judgment to an algorithm—I must retain control over final outcomes.”