What factors influence a patient’s acceptance of AI integration in care?
Healthcare is all about discovering what makes people tick – and what will keep them ticking as well as possible for as long as possible. While clinicians are quite knowledgeable about the physical components of this complicated question, the answers are less clear when it comes to the mental and emotional parts of the equation.
What makes one person thrive with constant reminders about medication adherence while the same messages drive others away? Why do certain patients do exceptionally well with navigating the health system solo while others need a great deal of support? And why are some people all-in on integrating artificial intelligence technology into their care experiences while seemingly similar populations reject the use of AI to augment decision-making?
The last question has become increasingly crucial as AI gains prominence as a high-value tool for both administrative and clinical tasks. New healthcare technologies have a history of withering quickly without strong buy-in from all affected groups. And while the AI wave seems strong enough to push all objections aside at the moment, a major patient rebellion could still derail the best-laid plans of AI evangelists.
To get ahead of potential problems, researchers are starting to look into how patients perceive AI technologies when applied to care. Early explorations of patient attitudes show a mix of open-mindedness and caution, with some surveys highlighting low rates of trust in the accuracy of AI models, concern over the blurring lines between AI- and human-generated communications, and a widespread desire to know exactly when, where, and how AI is being used throughout the care process.
Now, a new study by an international research team, published in Nature’s Scientific Reports, is getting even more granular by breaking down the patient personas associated with greater or lesser acceptance of AI in healthcare.
Perhaps somewhat ironically, the team used artificial intelligence to create an AI affinity score to gauge the general population’s AI attitudes.
They found that the overwhelming majority of people (97%) are aware of at least the basics of what AI can do, with nearly 60% having knowingly used AI-powered tools themselves for some tasks in their personal or professional lives.
They also found some significant differences in the way certain demographic groups responded to the idea of AI integration into healthcare.
For example, people who identified as either male or female tended to fall along a typical bell curve of affinity, with the majority of opinions being relatively neutral. But people who identified their gender as “other” were much less likely to think favorably of AI in healthcare. This could be due to previous poor experiences with the healthcare system, and a lack of belief that AI will fundamentally improve structural biases that influence their interactions with care providers.
Educational level also influenced affinity in an interesting way. Participants with an advanced level of education tended to be fairly neutral on the prospect of AI helping to guide their care, while those with a moderate level of educational attainment exhibited a notable bent toward being more positive. Meanwhile, people with the lowest level of education skewed toward being the most mistrustful, except for a curious bump of respondents who exceeded other groups in their enthusiasm for digital augmentation.
Lastly, the team found that global geography played a role. People in Asia showed the most consistent affinity for AI, while North American residents exhibited much more variety in their attitudes, with North Americans more likely than their global counterparts to report mistrust of AI.
“On the surface, it appears that perceptions of AI integration in healthcare generally lean towards neutrality but exhibit variation most significantly on level of education, and regional considerations that play pivotal roles in shaping attitudes on AI generally, especially its integration into healthcare systems. Therefore, it is important to account for these demographic nuances when addressing public perceptions and fostering trust in AI integrations,” the team wrote.
The authors note that positive patient experiences are correlated with better adherence and better outcomes, showcasing the importance of securing trust and acceptance of new tools ostensibly designed to foster improved relationships between providers and patients.
Artificial intelligence is still relatively new to the industry, and there is still a long way to go before it is integrated into the care process in a comprehensive, standardized, and impactful manner. It is important to engage in research like this early in the adoption process to guide the development of digital technologies, ensure alignment with ethics and patient attitudes, and encourage safe, meaningful adoption of AI tools across the care continuum.
Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry. Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system. She can be reached at [email protected].