AI governance shortfalls among top patient safety risks for 2025
Insufficient AI governance is poised to be a major patient safety concern in 2025, according to ECRI’s annual report on hazards for patients within the health system. As more AI-enabled tools enter administrative and clinical environments, patients face a growing risk of adverse events and poor outcomes from the misuse or misinterpretation of AI-generated insights, the authors warn.
This emerging hazard can compound many of the report’s other top safety concerns, including misdiagnosis and the medical “gaslighting” of patients – particularly when an algorithm’s output conflicts with a clinician’s own assessment of the situation.
“We are currently facing challenges that seemed futuristic and improbable in 1999—the integration of artificial intelligence in clinical settings, the growing threat of cyberattacks on health data, and the viral spread of medical misinformation on social media platforms,” said the report, which is coproduced by the Institute for Safe Medication Practices (ISMP).
“And yet, we are still grappling with challenges that have plagued healthcare teams for years, such as missed diagnoses and healthcare-associated infections. This new era of patient safety requires heightened vigilance, new and adaptive strategies, and a commitment to fostering a culture of safety with health-literate practices that ensure the well-being of patients in an increasingly digital, complex, and interconnected world.”
AI’s role in patient safety needs greater scrutiny
AI carries known risks in the healthcare setting, including bias, hallucinations, and privacy concerns. Yet many health systems are forging ahead with integrating AI tools into their everyday processes – without necessarily establishing robust procedures for governance, monitoring, and remediation should a problem be identified.
As a result, patients could be at risk, ECRI says. Medical errors could increase if AI models are trained on poor-quality data. Staff members could misinterpret results if they aren’t familiar with the limitations of AI-enabled capabilities. And as AI becomes more subtly infused into all aspects of digital health, staff may find it difficult to recognize when AI is at fault for an error, making mistakes harder to trace back and correct at the source.
Failure to develop and implement system-wide governance that prioritizes accuracy, equity, and accountability could be disastrous for patients, the report cautions, and may lead to increased medical errors and undesirable outcomes.
Action steps for improving AI governance
To ensure that health systems are taking an appropriate stance on governance, ECRI suggests the following actions:
- Form a multi-disciplinary committee to evaluate AI-enabled technologies and determine risks
- Ensure that organizational policies are as comprehensive as possible and align with relevant local, state, and federal regulations
- Train staff on the organization’s AI usage policy and ensure a clear chain of escalation should there be any questions or concerns
- Disclose the use of AI to patients and families when it touches their care, particularly when it comes to generative AI tools that assist with documentation or communication
- Engage patient representatives and solicit feedback when designing patient-facing communications and education around AI
- Implement an effective reporting system for AI-related incidents, errors, and adverse events and educate users on what constitutes a reportable event
- Emphasize the continued role of clinical judgment and human experience in the patient care process to prevent over-reliance on AI insights
Examining other top patient safety issues for 2025
In addition to the risks of artificial intelligence, this year’s report cites a number of other challenges facing clinicians and patients.
Diagnostic errors appear on the list in multiple forms. For example, “gaslighting” – dismissing patient and caregiver concerns – tops the list for 2025. Failing to listen to, understand, and act on patient-provided information about their own health could delay appropriate diagnosis and treatment – and create poor experiences for consumers who may feel disrespected or infantilized by their care providers.
Gaslighting can contribute to diagnostic errors among the “big three” diseases prone to being missed: cancer, vascular events, and infections, the report continues.
For example, in a representative sample of closed malpractice claims, missed cancer diagnoses represented 46% of claims in the primary care setting, with the majority (76%) involving “errors in clinical judgment, such as a failure or delay in ordering a diagnostic test (51%) or failure or delay in obtaining a consult or referral (37%).” In the emergency department, the most common misdiagnoses involved major vascular events, such as heart attacks, and infections such as sepsis.
The report also cites cybersecurity threats as a potential source of medical errors, as system outages during an attack could lead to mistakes on the clinic floor.
Other leading concerns for the year include the spread of medical misinformation, the circulation of substandard or falsified prescription drugs, the persistence of healthcare-associated infections in long-term care facilities, and deteriorating working conditions in community pharmacies that could lead to medication errors.
Overall, there is an urgent need for health systems to reexamine their clinical and technical governance processes to ensure strong procedures are in place – and that relevant staff members are consistently adhering to established guidelines.
With AI evolving rapidly, health systems should take an agile stance, continuously evaluating their policies and adjusting them as needed to safeguard patients and support better outcomes.
Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system. She can be reached at [email protected].