AI Eroding Cognitive Skills in Doctors: How Bad Is It?
https://www.medscape.com/viewarticle/ai-eroding-cognitive-skills-doctors-how-bad-it-2025a1000q2k

2025 brought a strange convergence: College essays and colonoscopies both demonstrated what can happen when artificial intelligence (AI) leads the work.
First came the college data: An MIT team reported in June that when students used ChatGPT to write essays, they incurred "cognitive debt": users consistently underperformed at neural, linguistic, and behavioral levels, causing a likely decrease in learning skills.
Then came the clinical echo. In a prospective study from Poland published last month in The Lancet Gastroenterology & Hepatology, gastroenterologists who'd grown accustomed to an AI-assisted colonoscopy system appeared to be about 20% worse at spotting polyps and other abnormalities when they subsequently worked on their own. Over just 6 months, the authors observed that clinicians became less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.
For medicine, that mix sparks some uncomfortable questions.
What happens to a doctor's mind when there's always a recommendation engine sitting between thought and action? How quickly do habits of attention fade when the machine is doing the prereading, the sorting, even the first stab at a diagnosis? Is this just a temporary setback while we get used to the tools, or is it the start of a deeper shift in what doctors do?
Like a lot of things AI-related, the answers depend on who you ask.
. . .
1 reply
AI Eroding Cognitive Skills in Doctors: How Bad Is It? (Original Post)
erronis
Monday
OP
erronis
(21,431 posts)

1. A self-reply from the article to present a different perspective.
A Coin With Many Sides
On the surface, any kind of cognitive erosion in physicians because of AI use is alarming. It suggests some disengagement with tasks on a fundamental level and even automation bias: over-reliance on machine systems without even knowing you're doing it.
Or does it? "The study data seems to run counter to what we often see," argues Charlotte Blease, PhD, an associate professor at Uppsala University, Sweden, and author of Dr. Bot: Why Doctors Can Fail Us―and How AI Could Save Lives. "Most research shows doctors are algorithmically averse. They tend to hold their noses at AI outputs and override them, even when the AI is more accurate."
If clinicians aren't defaulting to blind trust, why did performance sag when the AI was removed? One possibility is that attitudes and habits change with sustained exposure. "We may start to see a shift in some domains, where doctors do begin to defer to AI," she says. And that might not be a bad thing: If the technology is consistently better at a narrow technical task, then leaning on it could be desirable. The key, in her view, is finding the "judicious sweet-spot" in critical engagement.
And the social optics can cut the other way. A recent Johns Hopkins Carey Business School randomized experiment with 276 practicing clinicians found that physicians who mainly relied on generative AI for decisions incurred a "competence penalty" in colleagues' eyes. They were viewed as less capable than peers who didn't use AI, with only partial relief when AI was framed as a second opinion.