
As AI rises, has the “Great De-skilling” started to hit healthcare?

As AI becomes more common in healthcare, deskilling, such as impaired diagnostic performance when AI is unavailable, poses growing risks.
By admin
Aug 15, 2025, 2:13 PM

Ask any high school teacher what they think of ChatGPT as a tool in the classroom, and you’ll probably have to buckle in for the rant of a lifetime. Anecdotes abound of students using AI to circumvent the intolerable burden of thinking for themselves by using ChatGPT to write essays and emails (sometimes without even removing their prompts and the chatbot’s direct replies).

Ask any business leader, however, and you’re likely to get a different reply. In the corporate world, busy executives laud LLMs for taking away the boring, time-consuming tasks of communicating with colleagues, organizing schedules, and compiling PowerPoint decks. In one recent survey, 92% of businesses said they are increasing their investment in AI, and the share of employees using AI tools for more of their work than their bosses realize has tripled in recent months.

It’s no wonder, then, that significant numbers of young people now believe there’s nothing wrong with relying on their new digital friend to solve math problems, do research, or complete writing assignments. After all, it’s apparently the latest way to get ahead in their careers, and not being allowed to use it in school might appear to be unfair and contradictory.

 

Chipping away at the ability to think independently

But there’s a big problem looming for a society that outsources its thinking to AI models that are known to hallucinate in potentially dangerous ways.

Recent research from MIT (not yet peer reviewed) shows that young adults (18 to 39 years of age) who used ChatGPT to write essays from SAT prompts showed notably lower brain engagement activity on EEG tests and “consistently underperformed at neural, linguistic, and behavioral levels” compared to participants who used Google or completed their tasks without external reference tools.

As the study continued, ChatGPT users got even lazier, often resorting to pure copy-and-paste by the end of the study. The ChatGPT group also expressed the lowest levels of ownership of their content and were the least likely to recall what they had “written.”

A separate paper from the Swiss Business School backs up these results, pointing to “cognitive offloading” as a culprit in a reduced ability to engage in critical thinking, especially among younger people with developing brains.

 

Conflicting perspectives on AI in healthcare

One would think, then, that an industry that demands rigorous academic achievement and expects terminal degrees for advancement along coveted career paths would side with the exasperated English teachers on this one. But that’s not always the case.

Despite the fact that clinicians have been fighting for years against giving up their autonomy and clinical intuition to “cookbook medicine” and overly restrictive guidelines, AI automation is finding a welcome home in niches across the entire care continuum.

From ambient scribes that automatically create documentation to EHR-embedded tools that write emails to models that “read the room” to encourage behavior change in patients, artificial intelligence is advancing its way into areas of the workflow that previous generations of providers have viewed as sacred components of a compassionate, human-to-human relationship between providers and their patients.

Healthcare organizations have been among the quickest to adopt AI tools, with 63% of healthcare and life sciences organizations already using AI in real-world use cases, according to the latest research from NVIDIA.

There may be benefits in terms of helping clinicians complete an overwhelming number of tasks more quickly and efficiently, but there are also drawbacks – and those negatives are already starting to appear among providers in the form of the same “de-skilling” afflicting high school students.

A new study published in The Lancet Gastroenterology & Hepatology shows that endoscopists in Poland exhibited significantly reduced ability to recognize adenomas during colonoscopies after routinely using AI to help them.

Between September 2021 and March 2022, 1,443 patients underwent non-AI-assisted colonoscopy before and after the introduction of an AI adenoma detection tool.

The adenoma detection rate of standard colonoscopy decreased significantly from 28.4% before AI exposure to 22.4% after, an absolute difference of 6 percentage points. The degree of a clinician’s exposure to AI was one of the primary variables associated with differences in adenoma detection ability, according to the researchers.

 

A dire prediction already coming true?

The Lancet study was small in scale and narrow in scope (no pun intended), but it does provide some initial evidence that fears about relying too heavily on AI could become reality, especially without the right guardrails in place.

Skeptics have already voiced these concerns as healthcare enters the era of “self-referential learning,” where AI models are trained on historical medical decisions, and future medical decisions are based on the outputs of those AI models.  When end-users do not critically question and independently verify the results of AI tools, this loop can swiftly lead to degraded integrity of clinical decisions and a vanishing ability to recognize and correct the drift away from improper decision-making.

For example, a recent industry poll conducted by Wolters Kluwer found that a significant number of healthcare providers are actively afraid that the industry is poised to sink into this harmful feedback loop.

In the survey, 57% of clinicians across job categories said their greatest concern with generative AI was the erosion of clinical skills caused by overreliance, and 55% worried about biases introduced by algorithms that aren’t being appropriately monitored and corrected before they produce some type of harm.

The Lancet study provides some support that these fears may not be unfounded – and that the industry must take meaningful action to address the risks and establish clear, standardized, and enforceable guardrails to prevent de-skilling from spreading.

 

Striking a balance between the benefits and drawbacks of reliance on AI

Leveraging AI’s acknowledged benefits while avoiding its risks will be a major challenge for the healthcare industry, especially as it faces the urgent need to reduce burnout and cope with massive staffing shortages.

Success will start with understanding which tasks can be automated with the fewest risks to cognitive integrity. Filling out forms, summarizing documents, and surfacing high-priority information might be “safer” tasks than actively assisting with clinical decision-making, which would require detailed human review, the ability for a clinician to implement a hard stop, and other important redundancies.

Keeping humans directly in the loop – and training them how to continuously think critically about their role in the loop – will be essential for maximizing the benefits without exacerbating the risks.

While some degree of de-skilling may be inevitable as AI becomes more deeply integrated into daily life, it may still be possible to avoid a wholesale shift in the nature of human cognition if industry leaders make a concerted effort to protect and support meaningful human intelligence alongside the business benefits of the artificial version.


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at [email protected].


 

