Why organizational readiness, not tech, determines AI success in healthcare
In the race to adopt artificial intelligence, healthcare leaders are discovering that the technology itself isn’t the biggest hurdle. Instead, the human elements—from organizational readiness to individual trust—often determine whether AI initiatives thrive or falter.
“More than 50% of leaders are not aligned in their priorities,” explained Dr. Angel Mena, Chief Medical Officer at Symplr, during a panel at ViVE 2025. “The reality is that before we implement any AI solution, we truly need to be aligned to solve the problems.”
This misalignment—not technological limitations—often derails promising AI initiatives before they even begin. Without clinical and leadership consensus on priorities, even the most sophisticated AI tools struggle to gain traction.
Building clinical trust at Cedars-Sinai
When Cedars-Sinai Medical Center partnered with K Health in 2023 to develop CS Connect, an AI-powered platform for urgent and primary care, Dr. Shoma Desai and her team encountered significant skepticism.
The concerns spanned several dimensions: quality (“Who are these virtual doctors?”), patient experience (“Are patients going to like talking to an AI chatbot?”), and patient safety (“What if there are hallucinations?”).
Rather than dismissing these concerns, Cedars-Sinai embraced them. “We brought them [clinicians] in at the very beginning. Let’s design this in a way that’s safe,” Dr. Desai noted during the panel. “All of our beta testers were people who had leadership roles alongside regular patients just to make sure they feel comfortable with it.”
Change management at Tampa General
Organizations may officially embrace AI, but individual team members can still resist change, Dr. Nishit Patel, VP and Chief Medical Informatics Officer at Tampa General, shared during the panel. Tampa General had already implemented seven generative AI applications at scale when Dr. Patel introduced an AI tool for processing insurance denials and appeals letters to his team.
While the implementation immediately increased team productivity by 20% and eventually automated 80% of the work, individual experiences varied dramatically.
When investigating why one employee showed decreased productivity, Dr. Patel discovered “truly an existential fear that when a tool like this is coming in and 80% of their work is getting augmented or semi-automated, they’re really worried about their job security.”
This insight revealed the critical distinction between organizational readiness and individual readiness—a factor many implementation strategies overlook.
“I like to compare AI to ultrasound. I’m an emergency physician, and way back when they introduced bedside ultrasound to us, it used to be something that you would send a patient to radiology to do,” Dr. Desai added. “People were trying to learn this new technology. I don’t think everyone understood it. It required a lot of learning and training for people well into their career. But what happened? It made us more efficient. We were using it for clinical purposes, making procedures more safe. At this point, people can’t do without the ultrasound in the emergency department.”
She believes AI will follow the same adoption curve—initially met with uncertainty but eventually becoming an indispensable clinical tool.
Identifying the right problems
Healthcare leaders emphasize starting with the right problems rather than forcing AI into workflows.
Dr. Rebecca Mixon, Chief Medical Officer at Color Health, described how her team had to “find the question under the question” when developing an AI solution for emergency room visits.
Clinicians had asked Color Health for a tool that would decrease emergency room visits. However, when Dr. Mixon’s team dug deeper through conversations with clinical staff and operations personnel, they realized the healthcare system had already implemented a hospital-at-home program designed to keep appropriate patients out of the ER.
The real problem wasn’t a lack of solutions for reducing ER visits broadly. Rather, it was identifying which specific patients would benefit from the existing hospital-at-home program—and identifying those candidates earlier, before they reached the point of needing emergency care.
By understanding the true need, they could “build the tool to deliver the right outcome” rather than solving the wrong problem.
The risk tolerance paradox
Dr. Patel highlighted an interesting contradiction in healthcare’s approach to innovation: “At the bedside, we make clinical decisions 100 times a day with sometimes 30% of the information we need… but when we’re thinking about technology deployment, particularly with AI, we set a very different bar.”
This risk aversion toward technology—despite regular clinical uncertainty—may hamper innovation. Dr. Patel suggests a more balanced approach: “We have to tolerate more risk than perhaps many health systems have historically wanted to do to be able to get the gains that we need to fix these massive problems in healthcare delivery.”
Practical implementation guide
For healthcare organizations looking to implement AI successfully, the panel offered consistent advice:
- Listen to frontline clinicians: They understand workflows and which use cases need support
- Identify clear problems: Start with specific challenges rather than technology-first solutions
- Create structured pilots: Build small implementation projects with clear objectives and metrics
- Accept calculated risk: Perfect shouldn’t be the enemy of better in healthcare innovation
- Address individual concerns: Pay attention to how team members perceive changes to their roles
As healthcare continues its AI journey, the technology itself will increasingly fade into the background. The true differentiator will be how well organizations prepare their people, processes, and culture for transformation.
The panel’s insights suggest that successful AI implementation in healthcare is primarily an organizational challenge rather than a technological one. By addressing the human elements—alignment, trust, individual concerns, and proper problem definition—healthcare organizations can create environments where AI tools enhance rather than disrupt patient care.
“I’m looking at this as a win for providers everywhere who’ve been burdened,” Dr. Desai concluded.