AI tops ECRI’s 2025 list of health tech hazards

ECRI’s annual roundup of health technology hazards puts artificial intelligence right at the top of the list.
By Jennifer Bresnick
Dec 10, 2024, 1:55 PM

Artificial intelligence has its uses, but it can also be a very dangerous thing – especially when healthcare organizations rely too heavily on what computer models have to say, according to ECRI’s most recent list of health technology hazards.  

The annual rundown pins AI as the number one potential problem in 2025, cautioning that placing too much trust in AI tools without appropriately scrutinizing the outputs could lead to patient harm and disappointing business outcomes. 

“Biases present in the data used to train the AI model—or mismatches between that data and the target patient population—can lead to disparate health outcomes or inappropriate responses. Additionally, AI systems have been known to produce ‘hallucinations’ (false or misleading responses) and to exhibit changes in performance over time due to factors such as data drift and the ‘brittleness’ of the AI model (an inability to appropriately adapt when confronted with novel conditions),” says the report.  

“Further, AI solutions can yield disappointing results if organizations have unrealistic expectations, fail to define goals, provide insufficient governance and oversight, or don’t adequately prepare their data for use by the AI application.” 

To prevent these types of harms, healthcare organizations need to be careful, considered, and educated when incorporating AI models into administrative or clinical processes. Technology and business leaders will need to develop clear goals for AI use, thoroughly assess and mitigate risks at every point along the workflow, and closely monitor the performance of their models for signs of bias, hallucinations, or degradation of the results over time. 

Doing so will reduce the likelihood that AI plays a detrimental role in outcomes – and may help provider organizations navigate the challenges presented by other items on the list, some of which are also increasingly tied into the AI ecosystem. 

For example, ECRI also warns providers that Hospital at Home and other home-based care programs may fail when patients are not able to correctly use personal monitoring devices and other technologies. Devices may be improperly configured, not work well in the physical home environment, or simply be too complex for patients to understand. 

To implement home care programs safely and effectively, providers need to adhere to key technology management practices, including providing adequate training to end users and caregivers. As AI becomes more deeply infused in devices designed for home care, this training and education may involve helping patients understand how to interact with AI-enabled interfaces, and even how to spot when an AI algorithm isn’t returning expected results. 

The third item for 2025 – cybersecurity vulnerabilities among third-party vendors – also has clear ties back to the potential dangers of the AI-driven ecosystem. While AI tools can be used to thwart cyberattacks, they can also be used to penetrate defenses, especially if healthcare organizations aren’t completely on top of the relationships between their internal systems and those of their business associates. 

ECRI suggests that organizational leaders thoroughly vet their vendors at the beginning of a service agreement, build strong defenses and redundancies into their systems, and regularly conduct cybersecurity drills such as incident response testing to evaluate their resiliency and recovery capabilities.  

AI can assist with these processes by helping to continuously monitor potential threats and provide alerts when attackers try to strike. But just like with clinical and administrative functionalities, organizational leaders must be careful that their AI cybersecurity tools are accurate and trustworthy. 

Other potential pitfalls heading into the New Year include hazards in the physical clinical environment, such as skin injuries from adhesive products, fire risks where supplemental oxygen is in use, and the potential for injury due to tripping over poorly managed infusion lines. 

The full list of health technology hazards for 2025 is available for download here. 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.

