Majority of patients want notifications when AI is used in healthcare

Only a small fraction of patients are comfortable with the idea of not knowing whether AI has been used as part of their care.
By Jennifer Bresnick
Dec 17, 2024, 2:27 PM

California may be onto something with its recent law requiring healthcare providers to label patient-facing content created by generative AI.

In a new survey conducted by researchers at the University of Michigan and the University of Minnesota, 95% of patients said it was important to know when artificial intelligence is being used as part of their healthcare, and nearly two-thirds (62%) said it was “very true” that they wanted to be told about the role of AI in their interactions with providers or payers.

While the survey was small, with a sample of 100 healthcare consumers, the researchers made an effort to include individuals of different age groups, education levels, and ethnic and racial backgrounds to accurately represent the US population.

They found significant differences in how people of different generations and education levels feel about AI transparency, with younger people and those without advanced higher education appearing much more comfortable not knowing whether AI was involved in their care.

Close to 10% of people aged 18-29 said they didn’t need an AI notification, compared to less than half that share among people aged 45 and older. A similar pattern separated those with a high school diploma, who were less concerned about AI notification, from those with doctorate or professional degrees, who demanded more insight into the role of AI.

“Our findings suggest that notification about AI will be necessary for ethical AI and should be a priority for organizations and policymakers,” the research team stated. “With this signal about the public’s preference for notification, the question for health systems and policymakers is not whether to notify patients but when and how.” 

California is tackling that issue with its new rule requiring all public-facing generative AI content to come with a prominent disclaimer before, during, and after the interaction, depending on the format. Providers must also include information about how to contact an appropriate human expert if the consumer wants to talk with a real person about their needs.

However, the state is among the first to implement these requirements, and as the survey research team notes in their article, there isn’t yet a standard best practice for making consumers aware of how AI is involved in healthcare processes. 

But with these results echoing previous polls in finding strong demand for transparency among consumers, it’s clear that the industry needs to tackle the issue in a coordinated manner, and quickly.

By its nature, AI-generated content is designed to mimic human communication, and it’s increasingly good at fooling even people who are on the lookout for it.

For example, a recent experiment by content technology company Bynder found that only half of people could identify consumer-focused AI-generated content when presented with two articles on the same subject, one written by ChatGPT and the other crafted by an expert copywriter.

And when researchers from the US, UK, and Australia asked healthcare professionals to flag academic journal abstracts written by AI, the numbers were even worse: just 43% of professionals were able to pick out the AI-written abstracts. Surprisingly, performance was about 10% worse among those with prior experience reviewing journal abstracts, raising interesting questions about how AI content can “double bluff” people who might already be suspicious about its origins, further complicating issues of trustworthiness.

In this complex environment, regulators will need to prioritize working closely with technology developers and health system implementers to figure out the best way to be clear about the use of AI-enabled tools, especially as AI content becomes both more prevalent across the healthcare journey and more challenging to identify.

During this phase of rapid evolution in AI-created content, it’s encouraging that patients maintain a strong interest in transparency and understand the value of knowing the source of the materials shared by their providers. Hopefully, this will create sustained positive pressure on industry leaders to embrace transparency as a core principle of AI development and deployment.


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.

