Americans cautiously warming to AI in healthcare
A recent survey by the Cleveland Clinic reveals mixed feelings among Americans regarding the use of artificial intelligence (AI) in healthcare, particularly for heart health. While 60% believe AI will improve heart care, only 22% have sought health advice from a chatbot or other AI technology.
Key findings:
- Three in five Americans (60%) believe AI will lead to better heart care.
- 72% believe health advice from a chatbot is accurate, but 89% would still seek a doctor’s confirmation.
- 65% are comfortable receiving heart health advice from AI, but only 22% have used it.
- 50% of Americans use technology like wearables to monitor health, with step count being the most tracked metric.
- 79% of those using health monitoring technology report positive physical or mental changes.
- 53% say they exercise more regularly after using wearables.
- 34% improve their eating habits and 27% prioritize de-stressing.
Overall, the survey highlights the growing acceptance of AI in healthcare, particularly for its potential to improve heart care. However, individuals remain cautious and prioritize doctor confirmation before acting on AI-based advice. The increasing use of health monitoring technology also shows promise in promoting healthier lifestyles. As AI and technology evolve, clear communication and collaboration between healthcare professionals and patients will be key to maximizing their benefits.
Experts Weigh In:
- Samir Kapadia, MD, Chairman of Cardiovascular Medicine at Cleveland Clinic: “AI has the potential to transform healthcare, especially in cardiovascular care. We want to educate patients about its role in assisting, not replacing, healthcare professionals.”
- Ashish Sarraju, MD, Cardiologist at Cleveland Clinic: “AI can help process data from studies like echocardiograms, freeing up doctors for more complex tasks.”
- Dr. Sarraju on patient-generated health data (PGHD): “While wearables provide valuable data, it’s crucial to consult with a physician for interpretation and medical decisions.”
Consensus: Caution on AI in healthcare
The results from the Cleveland Clinic survey join similar findings from Pew Research and Wolters Kluwer Health to paint a picture of American patients leery of AI in healthcare.
In the recent Wolters Kluwer survey, about half of respondents expressed concern that generative AI might produce false information that could affect their care, and said they would not trust advice from doctors using generative AI clinically. A primary driver of distrust, cited by 86% of respondents, was not fully understanding where generative AI models get their data or how their results are validated.
Attitudes toward AI in healthcare appear to have improved over the past year, a period in which AI expanded into nearly every area of life and industry. In the early 2023 Pew survey, 60% of Americans said they felt “uncomfortable” with AI being used as part of their healthcare, citing data security, care quality, and the patient-provider relationship as top concerns; 57% said the human element of healthcare might suffer from an infusion of artificial intelligence technology.
Views were also mixed on whether AI could improve care quality: about 40% said AI might reduce the occurrence of mistakes and errors, while only 30% thought AI would improve care quality and 38% believed it would not make a significant difference.
Moving forward with AI in healthcare
While there appears to be growing acceptance of AI’s potential in heart care, the ongoing skepticism observed in these three surveys paints a complex picture. It highlights the need for a nuanced approach that bridges the gap between providers’ enthusiasm for AI’s efficiency and reduced clinician burden and patients’ concerns about accuracy and human interaction.
As providers wrestle with these two clashing forces, they will need to consider several key elements that could determine successful implementation:
- Transparency and Education: Healthcare providers must actively address concerns about data privacy, AI’s limitations, and its role as an assistant, not a replacement. Educational initiatives can empower patients to understand AI’s potential and limitations.
- Human-Centered Design: Prioritizing human oversight and integrating AI seamlessly into existing workflows can ensure patient trust and minimize disruption.
- Collaborative Development: Involving patients in the development and implementation of AI-powered tools can foster trust and ensure solutions address their genuine needs and concerns.
- Data Governance and Ethics: Robust data governance frameworks and ethical considerations are crucial to ensure responsible AI development and deployment in healthcare.
By fostering open communication, prioritizing human-centered design, and collaborating with patients, the healthcare community can navigate the crossroads of AI responsibly, maximizing its potential to improve heart health and overall care while respecting patient autonomy and trust. Only then can providers truly unlock the benefits of AI while ensuring it serves as a powerful tool that complements, rather than replaces, the irreplaceable human touch.