“Ethical AI” is top of mind as frameworks proliferate
It seems like every other day, an acronym-bearing healthcare stakeholder releases its take on a framework for the responsible development of artificial intelligence (AI). CHIME, CHAI, WHO, FDA, AMA, and others have all joined the party in recent months, sharing their visions for the future of an AI-driven world.
It’s certainly important for developers, purchasers, and end-users to have access to multiple perspectives on how to engage with these game-changing technologies, especially in a high-risk environment where poor implementation can put lives on the line.
But it can also be a little confusing when everyone is jumping in with their own ideas, each hoping to end up in the pilot’s seat of a plane the industry is still building mid-flight.
Fortunately, some degree of consensus is emerging as these leading entities help the industry feel its way forward, with words like “ethics,” “equity,” and “responsibility” featuring frequently in these documents, albeit with varying degrees of specificity around what those terms will actually mean in practice.
Experts generally agree, however, that AI tools should follow the same ethical and moral principles that guide care delivery itself, and that machine learning models should promote the venerable Triple Aim of better outcomes, better experiences, and lower costs.
For example, UNESCO’s Global AI Ethics and Governance Observatory states that “protection of human rights and dignity” is at the core of its work on the matter, which itself is “based on the advancement of fundamental principles such as transparency and fairness [and] always remembering the importance of human oversight of AI systems.”
The American Medical Association (AMA) is also leaning heavily into the topic, citing ethics as one of the three driving forces in the AI ecosystem, alongside equity and evidence. According to the AMA, ethical AI is tied to an environment in which “patients’ rights are respected, and they are empowered to make informed decisions about the use of AI in their care.” Ethical AI can only exist in the context of strong governance, the AMA says, wherein roles and responsibilities are clearly defined.
The World Health Organization (WHO) adds that national governments are primarily responsible for regulating AI models and ensuring the ethical deployment of these tools – although in the real world of patient care, individual healthcare organizations will likely be in the hot seat when it comes to determining the fairness and equity of the specific suite of tools they choose to put in place.
This highly distributed nature of AI implementation means that risks of unintentional bias are everywhere, particularly when training data is not representative and inclusive of real-world populations. While the Office for Civil Rights (OCR) and the Centers for Medicare & Medicaid Services (CMS) are already trying to put guardrails around this issue from a regulatory angle, it will be difficult to monitor and manage on the massive scale required.
As a result, individual organizations need to learn more about how to design an AI system with safety, fairness, and risk mitigation in mind, says the Coalition for Health AI (CHAI). In its extensive checklist for responsible AI implementation, CHAI notes that continuous, human-supported monitoring of outcomes at key points across the design, implementation, and deployment phases will be crucial for identifying and eradicating bias before it causes harm.
CHIME agrees with this assessment, devoting about half of its ten new AI principles to issues of equity, fairness, oversight, and safety. The organization points out that true equity includes ensuring that under-resourced healthcare providers have access to the same level of cutting-edge AI technologies as their peers so as not to exacerbate the growing “digital divide” between providers in underserved communities and their more affluent counterparts.
CHIME also stresses that collaboration, partnerships, and broad discussion of shared technical and ethical standards will be required for developing a future where AI can be reliably used for the good of all healthcare stakeholders, including patients and their families.
Participation from industry, including the Big Tech giants providing much of the underlying infrastructure for AI development, will be an essential part of this process. While many familiar names have already signed public pledges or launched AI ethics projects of their own, healthcare providers, payers, and patient advocacy groups will need to keep the pressure on these companies to hold them accountable for truly ethical actions now and in the future.
Doing so will help to make “ethical AI” a reality across the care continuum so that we can realize the visions laid out by these frameworks, roadmaps, and guidelines. By prioritizing strong ethics, coordinated governance, and responsible implementation of AI tools, the industry can deliver on the promise of a fair, effective technology environment as it moves into the next era of digital maturity.
Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system. She can be reached at jennifer@inklesscreative.com.