Will CA’s GenAI disclaimer law bring comfort to healthcare consumers?

A new law that requires a disclaimer on generative AI communications aims to allay concerns for healthcare consumers.
By admin
Dec 10, 2024, 2:04 PM

Regulating the rapidly evolving world of artificial intelligence isn’t easy, but California has never shied away from pushing the boundaries of what can be enforced by law at the state level.  

Most recently, Governor Gavin Newsom (D) signed a bill that requires healthcare providers to clearly label all patient-facing communications created by generative AI (GenAI), including written, audio, and video content. 

Disclaimers must be prominently displayed at the beginning of any written communications (physical or digital), as well as at the beginning and end of audio messages and throughout the entirety of a video interaction.  Providers must also include instructions describing how the consumer can get in touch with an appropriate human staff member if they have questions or concerns. 

The only exception is if GenAI-generated content is “read and reviewed by a human licensed or certified health care provider.” This may exclude some of the GenAI patient communication features available in EHR systems like Epic, which allow clinicians to draft messages with the intention of refining and approving them before sending. 

Clinics that violate the law will be subject to state action, while individual physicians found failing to comply will face either the Medical Board of California or the Osteopathic Medical Board, which will decide on appropriate penalties. 

Navigating a divided landscape of public opinion on GenAI

Healthcare leaders generally fall into two schools of thought when it comes to transparency around the use of GenAI and other AI applications in the healthcare setting.  

For some, being open about AI gives consumers a chance to make informed decisions about the content of the message and lets them know that their providers are using every tool at their disposal to communicate fully and speedily. For others, labeling a message as AI-assisted may “cheapen” the content, and push mistrustful patients into feeling like their providers don’t care enough (or don’t know enough) about them to handcraft a response.  

As a result, some health systems already label their AI-generated content, while others feel more comfortable keeping that information to themselves. 

Both views have valid elements, especially as many patients themselves haven’t yet decided to what degree they’re comfortable with AI making its way into their care. 

Surveys consistently reveal deeply split opinions and ongoing uncertainty about how GenAI should be used during the care process. 

Healthcare consumers remain concerned that generative AI models aren’t trustworthy, with nearly half of respondents to a Wolters Kluwer Health poll stating that they would not trust the results if they knew GenAI was being used as part of their care. 

A separate survey by Bain & Company found that while 55% of consumers were more or less comfortable with the idea of GenAI taking notes during appointments, a whopping 79% were not ready for GenAI to play a role in treatment decisions or providing medical advice.  

And late last year, a survey by Medtronic and Morning Consult revealed that only a third of patients would prefer to work with a physician who uses AI, and even fewer (20%) would want their doctor to use AI extensively as part of the care process, indicating that providers, developers, and regulators may have a lot of work to do before earning the trust of the majority of patients. 

Implications for the future of GenAI in healthcare

The California strategy tracks with core elements of many national and international frameworks for the development of ethical AI, including the White House AI Bill of Rights. 

In that document, officials state that consumers “should know that an automated system is being used and understand how and why it contributes to outcomes that impact you” via “accessible plain language documentation, including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.” 

By leading the way in transparency requirements for technologies used in the populous state, California could help shape the next generation of GenAI tools, which will all need to abide by the regulation if they are to be used within its borders. 

Instead of developing different products for California clients and customers in other states, developers may be more likely to simply integrate appropriate transparency features into their overall product offerings, which could bring an end to the debate by default. 

It remains to be seen whether a simple disclaimer makes consumers more comfortable with how GenAI is being integrated into everyday clinical care, but it probably won’t hurt. Clearly, consumers value openness and honesty from their providers, especially in direct communications, and being up front about how content is created and shared will likely strengthen relationships rather than diminish them. 

California will no doubt be an interesting testbed for consumer sentiment on the matter, especially as developers continue to release GenAI tools that reach into new and innovative areas of direct patient communications and care.  


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at jennifer@inklesscreative.com.
