Joint Commission, CHAI release first guidance on responsible AI use in healthcare
The Joint Commission and the Coalition for Healthcare AI (CHAI) have quickly made good on their summer pledge to release industry guidance on responsible AI use in healthcare. Just months after announcing the collaboration, the two organizations have published the first installment of guidance, which focuses on “promot[ing] a shared understanding of responsible deployment and use of AI tools across healthcare organizations.”
“We understand how quickly AI is changing healthcare – and at a scale I’ve never seen in my time as a leader,” said Dr. Jonathan Perlin, president and CEO of The Joint Commission, in an accompanying press release. “From the moment we announced our partnership with CHAI, we knew we wanted our partnership to reflect that fast-paced dynamic, while still delivering a thoughtful and streamlined guidance for healthcare organizations to self-govern with AI.”
The document, simply titled “Guidance on the Responsible Use of AI in Healthcare (RUAIH),” also acknowledges the speed with which the industry is changing and highlights the potential risks of algorithmic bias, data inaccuracies, or unforeseen impacts on the healthcare environment, which can in turn lead to patient safety issues and poor outcomes.
“The rapid pace of AI development can also outstrip the ability of healthcare organizations to keep up with necessary training and updates,” the guidance continues. “This knowledge gap may result in improper use of AI tools, further exacerbating the risk of patient harm. Moreover, overreliance on AI could potentially diminish the role of human judgment in clinical decision-making, leading to a depersonalization of care and potential ethical dilemmas.”
To prevent these situations from occurring, the guidance identifies seven components of responsible AI use in the healthcare setting and provides recommendations for organizations to follow in each area.
AI policies and governance structures
Formal governance structures will be essential for guiding implementation activities and establishing mechanisms for adverse event reporting and other feedback. Governance should be developed by a cross-functional team including representatives from the C-suite, legal/compliance, and cybersecurity, as well as clinical roles such as physicians and nurses. The document also suggests that the fiduciary board of the healthcare organization should receive regular updates on the use and outcomes of AI tools to enforce accountability and transparency.
Patient privacy and transparency
Data privacy is paramount when implementing AI tools, especially in an era of constant cyberattacks. Privacy safeguards should be built into every aspect of the AI ecosystem – as should mechanisms, where appropriate, to disclose AI use to patients and give them the opportunity to provide consent.
Data security and data use protections
Similarly, organizations must prioritize data security and develop a clear understanding of relevant laws and regulations governing the security of data in the AI environment. Both privacy and security protections will intersect with efforts to thoroughly deidentify patient data used for training or reinforcing algorithms. Organizations will need to implement strong approaches to data encryption and access controls, as well as develop meaningful incident response plans.
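As a rough illustration of what building deidentification into the training data pipeline can look like, the sketch below strips direct identifiers from a patient record before it is passed along for model training. The field names and the identifier list are hypothetical assumptions for illustration only; real deidentification must follow applicable standards such as HIPAA Safe Harbor or expert determination.

```python
# Minimal sketch (hypothetical field names): remove direct identifiers
# from a patient record before it is used to train or tune a model.

DIRECT_IDENTIFIERS = {"name", "mrn", "address", "phone", "email", "ssn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "mrn": "12345",
    "name": "Jane Doe",
    "age": 67,
    "diagnosis_code": "I10",
    "note_text": "Patient reports improved blood pressure control.",
}

training_example = deidentify(patient)
print(training_example)  # identifiers dropped; clinical fields retained
```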
Ongoing quality monitoring
Organizations diving into AI should have processes in place to monitor the quality of outputs over time and in various contexts. Regular validation and testing of tools, particularly those that claim to adapt or learn over time, will be crucial for maintaining high reliability and trust.
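One simple way to operationalize that kind of monitoring is to compare a tool’s recent performance against the baseline established at validation and flag meaningful degradation for review. The sketch below is a minimal, assumption-laden example; the metric, threshold, and names are illustrative and not drawn from the guidance itself.

```python
# Minimal sketch: flag performance drift when recent accuracy falls more
# than a tolerance below the accuracy measured at deployment validation.
# Baseline, tolerance, and function names are illustrative assumptions.

from statistics import mean

BASELINE_ACCURACY = 0.91   # accuracy measured during initial validation
DRIFT_TOLERANCE = 0.05     # acceptable drop before review is triggered

def check_for_drift(recent_outcomes: list[bool]) -> bool:
    """recent_outcomes holds True/False for each recent prediction's correctness."""
    recent_accuracy = mean(recent_outcomes)
    return (BASELINE_ACCURACY - recent_accuracy) > DRIFT_TOLERANCE

# Example: 100 recent predictions, 83 correct -> accuracy 0.83, drift flagged
recent = [True] * 83 + [False] * 17
if check_for_drift(recent):
    print("Performance drift detected; route the tool for revalidation.")
```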
Voluntary, blinded reporting of AI safety-related events
If a tool begins to exhibit “drift” or otherwise contributes to identified risks, organizations should have clear and accessible reporting pathways in place. Confidential, blinded reporting mechanisms may be most effective for encouraging responsible whistleblowing and maintaining patient privacy. Organizations should treat AI-related adverse events the same as any other event and take a serious, thorough look at how an algorithm may have contributed to potential or actual harm.
Risk and bias assessment
Better still, organizations should conduct proactive risk and bias assessments in collaboration with developers and vendors to stay a step ahead of potential problems. Bias, hallucinations, and other undesirable behaviors in AI algorithms can be insidious, but developers should be able to explain how to monitor the underlying features of their tools so that flawed outputs do not propagate into the user environment. Leaders can use tools like the CHAI Model Card to collect this information and guide assessment and monitoring activities.
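To make that concrete, the sketch below records model-card-style information that a governance team could use to structure its assessment and monitoring. The fields are illustrative assumptions only and do not reflect the actual CHAI Model Card schema; the example tool and vendor are hypothetical.

```python
# Minimal sketch of capturing model-card-style information for governance
# review. Fields are illustrative assumptions, not the CHAI Model Card schema.

from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    intended_use: str
    training_population: str                    # who the model was trained on
    known_limitations: list[str] = field(default_factory=list)
    bias_evaluations: list[str] = field(default_factory=list)
    monitoring_plan: str = ""

sepsis_alert = AIToolRecord(
    name="Sepsis early-warning model",
    vendor="ExampleVendor (hypothetical)",
    intended_use="Flag adult inpatients at elevated risk of sepsis",
    training_population="Adult inpatients, 2018-2022, single health system",
    known_limitations=["Not validated for pediatric patients"],
    bias_evaluations=["Performance compared across race and sex subgroups"],
    monitoring_plan="Quarterly recalibration review by governance committee",
)
```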
Education and training
Last but not least, organizations should provide appropriate education and training on AI usage to staff members. Both use case-specific education and general training on AI literacy will help to upskill staff and smooth the change management process as workflows adapt to integrate AI capabilities.
Overall, the guidance provides a solid starting point for organizations that are still building their AI toolkits and want to ensure they stay on the right path to success.
The Joint Commission and CHAI are planning to launch an accompanying series of products throughout the remainder of 2025 and into 2026, including targeted playbooks and a voluntary AI certification open to The Joint Commission’s 22,000 accredited and certified healthcare organizations.
“The need is immediate, and we are eager to respond,” said Dr. Brian Anderson, CEO of CHAI. “This guidance and all subsequent playbooks are about keeping pace with the evolving field, not just by defining responsible AI, but by making it usable in hospitals and health systems across the country – no matter their resource level.”
Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system. She can be reached at [email protected].