Closing the trust gap: Building patient confidence in AI tools

Clinicians are much more likely than patients to trust AI, and both groups want more certainty about the reliability of AI in real-world settings.
Jun 12, 2025, 11:35 AM

Few modern technologies have seen such rapid uptake as artificial intelligence, which is now inescapably embedded in almost every single digital experience across all industries. For once, healthcare is not an exception to the rule: provider and payer organizations have been just as eager as their peers in other sectors to quickly infuse their existing infrastructure with AI-enabled features. 

However, the healthcare industry still has its own unique challenges to address as it tries to modernize with the aid of AI, chief among which is the fact that patients aren’t just ordinary consumers looking for pleasant interactions with a product or service. They’re looking for immediate, tangible aid to extend their lives while staring down potentially serious injuries or diseases, and are often stressed, scared, and vulnerable when doing so. 

This high-stakes, emotionally charged component of healthcare is what makes AI adoption more problematic than in other areas of daily life. With the vast majority of AI tools having only a few years of testing and validation under their belts, it's easy to see why patients (and their providers) might be nervous about relying too heavily on these constantly evolving technologies for life-and-death decision making. 

For AI to succeed in healthcare, both patients and providers need to trust that these models can do what they promise in a safe, effective, and accurate manner.  

Thus far, that trust has been largely missing from the equation. The vast majority of patients want to know exactly how AI is being used in their care, and close to 50% of respondents in one recent survey stated that they would not trust the results if they knew their provider was using AI to assist them.  

Now, a new survey from Philips shows that not only are patients still skeptical about AI, but there is a significant trust gap between patients and their care providers that could create problems if adoption continues to outpace consumer readiness for an AI-powered health system. 

The data shows that only 48% of patients are optimistic that AI can improve healthcare at all, compared to 63% of healthcare professionals who believe that AI can support better patient outcomes. Patients over the age of 45 are even less sure about AI, with a mere 33% of older respondents expressing positive sentiments about these tools. 

The misalignment between patients and providers (and the executive teams who are pouring billions of dollars into AI infrastructure) could weaken already-tenuous patient-provider relationships and make it more difficult for care teams to effectively collaborate with patients on delivering proactive, coordinated, and holistic care. 

To make sure that providers and patients both get the most out of AI’s potential to revolutionize care processes, healthcare organizations will need to work closely with technology developers and patient advocates to design systems that are truly trustworthy – and develop the educational strategies required to demonstrate the value of AI to wary patient populations. 

Design person-friendly AI tools from the beginning

AI should be designed around the needs of both patients and providers in equal measure, Philips suggests, with multidisciplinary stakeholder involvement from the very start of the process.  

Solutions should complement provider workflows, but also account for the way patients interact with the healthcare system so as to support seamless, intuitive interactions with care teams. This may include clear, accessible explanations of how AI is being used in patient-facing contexts, or the ability to easily reach a human instead of a chatbot when a patient prefers one to complete a transaction. 

Implement strong governance during development and deployment

Governance is crucial for protecting the privacy, security, and safety of patients throughout the AI maturity cycle. Health systems must adhere to stringent principles of AI governance, including ethical and equitable use of AI-enabled tools, a commitment to transparency and explainability, and a clear chain of accountability and remediation when something goes awry. Considering that 85% of healthcare professionals in the Philips survey expressed concern about legal liability for AI use, early investment in robust governance will be essential for helping providers and patients embrace new capabilities. 

Deploy effective communication and education for patients

Governance guardrails will need to be conveyed to patients alongside other important information such as how AI is being used in their care and evidence of its reliability and effectiveness. The survey revealed that patients are open to receiving education about AI from their care teams, primarily from their physicians (79%) and nurses (72%), and feel more comfortable with AI when they know that their providers have oversight of these tools.  

Healthcare organizations will need to capitalize on the trust that patients have in their providers to shore up certainty around how AI is being deployed in the clinical environment. While physicians and nurses may not always have the time or resources to go in-depth with every patient, developing reusable educational materials co-authored by respected clinicians could help to smooth the way for many patient groups.  

Consider the entire healthcare ecosystem when choosing AI tools

Healthcare organizations have been burned in the past by implementing digital tools in isolation, and should take these lessons to heart when creating their AI strategies. Close collaboration internally, as well as with stakeholders across the whole care continuum, will be vital for seeing success with AI investments. Philips recommends that provider organizations collaborate with payers, patient groups, policymakers, regulators, researchers, and technology developers to drive aligned innovation and create a seamless pathway for patient care across disparate systems. 

Closing the trust gaps between patients, providers, and AI technology is possible with the right strategies and a strong commitment to making the transition easy for consumers. By being up front with patients about the value and risks of AI, and showcasing what the organization is doing to make sure the former outweighs the latter, healthcare providers can successfully bring patients along on the journey to an AI-enabled environment that prioritizes safe, equitable, and reliable results. 


Jennifer Bresnick is a journalist and freelance content creator with a decade of experience in the health IT industry.  Her work has focused on leveraging innovative technology tools to create value, improve health equity, and achieve the promises of the learning health system.  She can be reached at [email protected]. 

