FUTURE-AI: A new framework for ethical AI in healthcare
Consensus is a rare thing in modern society, but the healthcare community is doing a relatively good job of agreeing on something very important: the need to establish fair and ethical guidelines for the implementation of artificial intelligence.
Over the past few years, multiple professional societies and industry coalitions have shared their perspectives on how to ensure that AI will be used appropriately to eliminate, rather than reinforce, biases, disparities, and risks to patient safety. These documents have all centered on principles of transparency, person-centeredness, and accountability, with a focus on protecting human rights and maintaining clinical oversight.
Now, a group of more than 100 experts from 50 different countries is adding to the discussion with the FUTURE-AI framework, published in BMJ this month, which they label as “the first structured and holistic guideline for trustworthy and ethical AI in healthcare, established through wide international consensus and covering the entire lifecycle of AI.”
The FUTURE-AI framework draws inspiration from the FAIR principles for data management (findability, accessibility, interoperability, and reusability) and expands upon these ideas to establish six guiding principles for AI in healthcare: fairness, universality, traceability, usability, robustness, and explainability.
- Fairness – AI tools should maintain the same performance across individuals and should support equitable care for all citizens. While perfect fairness may not be fully achievable, tools should be developed with mechanisms to identify, report, evaluate, and actively minimize unfair outcomes whenever possible (a simple illustration of this kind of check follows this list).
- Universality – AI tools should be generalizable beyond their controlled development environment and should function predictably and appropriately in multiple settings. To support this, tools should be built around established standards, such as medical ontologies, data models, and interface and other technical standards, and should be robustly tested under different conditions to ensure that their capabilities transfer.
- Traceability – Transparency is crucial for AI tools that touch patients in any way, and AI developers should prioritize mechanisms for documenting, auditing, and monitoring the complete trajectory of AI tools and the data that supports them. This will increase accountability, help answer still-evolving questions of liability, and help identify and resolve risks quickly.
- Usability – End users should be able to use an AI tool to achieve a clinical goal efficiently and safely in the real-world environment, the experts state. Not only should the models behind a tool be clinically useful and safe for their intended purposes, but its user interfaces should also be designed in an intuitive and accessible manner. Clinical users should be involved in the design and implementation process as much as possible, and organizations should prioritize robust training for all users to ensure appropriate use.
- Robustness – This principle refers to an AI tool’s ability to maintain expected performance and accuracy under a variety of conditions, including the unexpected. This is crucial in an environment where data quality is often less than ideal, and where even small variations in training data, prompts, or reinforcement learning can produce major variations in outputs. Creating robust AI tools requires developers to monitor the quality of source data, train models with representative and inclusive datasets, and continually evaluate results to confirm they remain consistent with expected outcomes (a simple stress-test sketch follows this list).
- Explainability – “Black box” AI tools can pose significant risks to patient safety, and should be avoided in the clinical environment. Instead, developers and adopters should look for tools that provide clinically meaningful information about the logic behind AI decisions. This will enable users to understand the capabilities and limitations of their AI tools, as well as overrule AI suggestions if they do not align with their own clinical experience and knowledge. Developers and users should clearly define explainability needs early in the process and carefully evaluate results throughout the development journey.
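To make the fairness principle more concrete, here is a minimal sketch, not taken from the FUTURE-AI guideline itself, of one way a development team might compare a model’s performance across patient subgroups and flag gaps for review. The column names, grouping variable, and disparity threshold are illustrative assumptions.

```python
# Minimal sketch (not from the FUTURE-AI guideline): checking whether a model's
# performance holds across patient subgroups. Column names, the grouping
# variable, and the 0.05 disparity threshold are illustrative assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc_report(df: pd.DataFrame, group_col: str,
                        label_col: str = "outcome",
                        score_col: str = "model_score",
                        max_gap: float = 0.05) -> pd.DataFrame:
    """Compute AUC per subgroup and flag groups that fall too far below the best."""
    rows = []
    for group, subset in df.groupby(group_col):
        # Skip groups without both outcome classes, where AUC is undefined.
        if subset[label_col].nunique() < 2:
            continue
        auc = roc_auc_score(subset[label_col], subset[score_col])
        rows.append({"group": group, "n": len(subset), "auc": auc})
    report = pd.DataFrame(rows)
    report["flagged"] = report["auc"] < (report["auc"].max() - max_gap)
    return report

# Hypothetical usage on a held-out validation set:
# report = subgroup_auc_report(validation_df, group_col="sex")
# print(report[report["flagged"]])
```

The specific metric and threshold would of course depend on the clinical task; the point is the pattern of measuring, reporting, and flagging subgroup gaps that the fairness principle calls for.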
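The robustness principle can be illustrated in a similar spirit with a hypothetical stress test that perturbs validation inputs and checks how far performance drifts from the clean baseline. The noise levels, tolerance, and scikit-learn-style classifier interface below are assumptions for illustration, not values from the guideline.

```python
# Minimal sketch (not from the FUTURE-AI guideline): stress-testing a trained
# classifier by adding Gaussian noise to inputs and checking how far performance
# drifts from the clean baseline. Noise levels and the allowed drop are
# illustrative assumptions; `model` is assumed to expose predict_proba().
import numpy as np
from sklearn.metrics import roc_auc_score

def robustness_check(model, X: np.ndarray, y: np.ndarray,
                     noise_levels=(0.01, 0.05, 0.1),
                     max_drop: float = 0.03,
                     seed: int = 0) -> list:
    """Report AUC under increasing input noise and flag large performance drops."""
    rng = np.random.default_rng(seed)
    baseline = roc_auc_score(y, model.predict_proba(X)[:, 1])
    results = [{"noise": 0.0, "auc": baseline, "flagged": False}]
    scale = X.std(axis=0)  # perturb relative to each feature's spread
    for level in noise_levels:
        X_noisy = X + rng.normal(0.0, level * scale, size=X.shape)
        auc = roc_auc_score(y, model.predict_proba(X_noisy)[:, 1])
        results.append({"noise": level, "auc": auc,
                        "flagged": baseline - auc > max_drop})
    return results
```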
The coalition of experts also emphasizes involving stakeholders throughout the lengthy and complex development and deployment process. Healthcare professionals, patient representatives, ethicists, data managers, compliance and security professionals, and legal experts should all have a seat at the table to ensure that all perspectives are taken into account.
By adhering to these principles, AI developers and adopters can support an AI ecosystem that is as safe, transparent, ethical, and trustworthy as possible for patients and end-users.
Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system. She can be reached at [email protected].