Healthcare wants AI. Its cybersecurity isn’t ready.

From ransomware attacks to regulatory blind spots, the infrastructure beneath artificial intelligence in medicine is far more brittle than its champions admit.
By admin
Aug 28, 2025, 2:32 PM

The language of medicine has always shifted with its tools. Stethoscopes gave rise to auscultation, MRIs to a new grammar of shadows and light. Now, with the rise of artificial intelligence, clinicians and policymakers are forced to speak in terms borrowed from software engineers: algorithms, training data, black boxes. A recent review in Cureus examines what happens when those terms collide with a fragile health-care system already beset by cost pressures, inequities, and mounting cyber risks.

The paper, authored by Abdullah Virk and colleagues, makes a simple but unsettling point: the promises of AI in health care are real, but so are the risks. Machine learning models can spot a lung tumor earlier than the human eye, guide robotic arms in operating rooms, or even counsel a patient through a late-night anxiety spiral. Yet each advance comes tethered to an infrastructure that was never built for this kind of digital dependence.

The allure of the algorithm

AI’s rise in medicine has been swift, almost breathless. In radiology, studies suggest algorithms can read images with accuracy that rivals trained specialists. In psychiatry, chatbots like Woebot offer therapy at a scale no clinician could hope to match. Health systems see in AI an answer to staffing shortages, administrative overload, and the exhaustion that drives doctors from practice.

The appeal is obvious: who wouldn’t want a system that promises more care, faster, and for less? But the history of health technology suggests that efficiencies often come with hidden costs. The spread of electronic health records two decades ago was billed as a revolution; instead, it burdened clinicians with “pajama time” data entry and spawned new avenues for billing complexity. AI may prove equally paradoxical.

The weakest link

The Cureus review dwells on the vulnerabilities beneath the glittering promises. These are not hypothetical. The 2024 ransomware attack on Change Healthcare, a critical clearinghouse for insurance claims, left hospitals scrambling and cost billions in damages. A separate incident, the faulty CrowdStrike software update of July 2024, temporarily paralyzed hospital systems nationwide. Both events had little to do with AI itself, but they illuminate the fragility of the digital scaffolding upon which AI rests.

Layered atop those risks are questions about data. HIPAA, the 1996 law that serves as the backbone of U.S. health privacy, was never designed for a world where algorithms can reconstruct identities from supposedly anonymized datasets. Europe’s GDPR goes further, restricting automated decision-making, but even that framework strains under the opacity of modern machine learning. As Virk and his colleagues note, new regulations like the European Union’s AI Act may set a precedent for stricter oversight—but their implementation will be slow, uneven, and politically fraught.

Who’s to blame when machines fail?

Perhaps the most human question raised by the spread of AI is also the oldest: who is responsible when something goes wrong? The Cureus review points out that clinicians remain on the hook for AI-assisted decisions, even when the inner workings of those systems are unknowable. This legal gray zone leaves physicians in a bind: ignore the algorithm, and risk missing an insight; rely on it, and risk liability for its mistakes.

Ethicists have warned that without clearer rules, AI could accelerate a subtle erosion of trust between doctors and patients. When a diagnosis is wrong, patients may find themselves litigating not against a fallible human but against a faceless algorithm designed in another country, trained on data from another population. That distance makes accountability harder, not easier.

The bias in the machine

American medicine has long magnified inequality, and AI is no exception. As Virk's team emphasizes, most training datasets come from the United States and China. Populations from Africa, South America, and much of Southeast Asia are largely absent. That absence matters: an algorithm trained on white, urban patients may misdiagnose a rural Black patient in Mississippi, or a Nigerian immigrant in New York.

The danger is not simply technical. It is moral. If AI promises to democratize medicine but instead reproduces the blind spots of its creators, it will only deepen the divides it claims to bridge.

A blueprint for something better

Still, the review is not entirely grim. It sketches a set of reforms—algorithmic transparency, stronger data-protection laws, continuous vulnerability testing, ethics committees embedded in hospitals, and robust training for health-care staff. These are not radical ideas. They are pragmatic, the kind of guardrails that could prevent the next software bug from turning into a national crisis.

The broader lesson may be this: AI will not remake medicine overnight. It will seep into workflows, decision trees, and patient encounters in ways both mundane and profound. Regulators and clinicians alike must treat that integration as a process, not a panacea.

The stakes

For patients, the stakes are intimate. Imagine an algorithm that catches your cancer early—or misses it entirely. For doctors, the stakes are professional and existential: will AI become a partner that lightens the load, or a shadow presence that undermines their authority? For health systems, the stakes are financial: billions in potential savings balanced against billions in potential losses from cyberattacks.

The Cureus review lands at a moment of uneasy anticipation. AI in health care feels inevitable, but inevitability does not guarantee safety. As history shows, technology rarely changes medicine without also changing the terms of responsibility, privacy, and trust.

If the stethoscope gave physicians new ears, AI threatens to give them a second mind. But whose mind will it be—the doctor’s, the patient’s, or the machine’s?

