Exploring the skepticism maturity model in healthcare
Bleeding-edge technology has always been met with a healthy degree of skepticism.
In the late 1800s, the popular cry “Get a Horse!” greeted almost every automobile that appeared on the roads.
Many doubted that video would ever “kill the radio star,” as The Buggles later put it in the lyrics of their famous track.
And lest we forget, in 1977 Ken Olsen, founder and CEO of Digital Equipment Corporation, said, “There is no reason for any individual to have a computer in his home.”
While perhaps not to the same retrospective effect, a “skepticism maturity model” is playing out today with emerging technologies in many industries. It’s still unclear whether in 2050 people will look back and marvel at how crazy now-mainstream technologies once seemed. But it is interesting to look at the process by which skepticism leads to acceptance and eventually to dependency.
If I had to pick a technology where this is playing out today, it would be machine learning, artificial intelligence, and cognitive computing. As I’ve written in the past, there is anxiety about how AI will affect the workforce. But the skepticism about these platforms goes much deeper, especially where, unlike recommending products or books to read, their application can literally have life-and-death implications.
If we look at it as the “Healthcare AI Skepticism Maturity Model,” we can learn from vendors in this segment that acceptance evolves with the ability to trust the technology. At the outset, despite the zettabytes of data being curated and analyzed by cognitive platforms, there is still skepticism about whether something critical might have been overlooked in the data collection process. Needless to say, skeptical anxiety is lower in industries where the stakes of slightly-off data are modest. However, when the stakes involve a rare disease or an infant being treated in the neonatal unit, the skepticism-to-accountability ratio increases dramatically. In a litigious society, the risk transcends the medical outcomes.
My discussions with vendors, doctors, and healthcare IT executives tell me that reassurance must be given on a monthly basis, because sophisticated AI platforms learn in exponential increments as a direct result of interaction with the practitioners using them. Essentially, the technology and the user are teaching each other, whether by pointing out false positives or by confirming that the system’s findings were correct.
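For readers who want to picture that mutual-teaching loop, here is a minimal sketch, assuming a Python environment with scikit-learn; the review_case function and the sample case values are hypothetical illustrations of clinician feedback being folded back into an incrementally trained model, not any vendor’s actual platform.

```python
# Hypothetical illustration only: a clinician-in-the-loop feedback cycle in which
# the model proposes a finding, the clinician confirms it or flags a false positive,
# and the corrected label is folded back into incremental training.
# The function name and feature values are invented for this sketch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()  # an incremental learner that supports partial_fit
# Bootstrap with a dummy case so the model knows both classes (0 = benign, 1 = flag).
model.partial_fit(np.zeros((1, 4)), [0], classes=[0, 1])

def review_case(features, clinician_confirms):
    """The system flags a case; the clinician's verdict becomes the training label."""
    x = np.asarray(features, dtype=float).reshape(1, -1)
    system_flagged = bool(model.predict(x)[0])    # what the cognitive platform says
    true_label = 1 if clinician_confirms else 0   # false positives are corrected here
    model.partial_fit(x, [true_label])            # the system learns from the clinician
    return system_flagged, true_label

# Example: the clinician overrules a false positive and the model updates immediately.
review_case([0.9, 0.1, 0.4, 0.7], clinician_confirms=False)
```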
What’s most interesting, yet not surprising, is that there is a generational relationship to these machine learning technologies. It’s a blinding glimpse of the obvious that older people are more resistant to some technologies than their millennial counterparts. There are still many physicians and clinicians who lean more toward “Marcus Welby” than “Grey’s Anatomy.” However, in a segment where years of real-life clinical experience can be the missing diagnostic link in the artificial intelligence algorithm, the implications of trusting the system cannot be overstated.
This human-and-algorithm “swarming” bears directly on the velocity with which buyers accept the prescriptive output and move into the less angst-driven stages of the Skepticism Maturity Model.
The technology branding side of my brain tells me that the barriers to entry in such highly complex, high-stakes machine intelligence sectors are enormous. Add to this the cybersecurity burden of protecting the privacy and accuracy of life-or-death data, and it’s clear this maturity model won’t be shifting any time soon.
But as demographics change with the entry of millennials and Gen Z into the medical profession, and as the sample sizes and diversity of the AI data universes increase, we will see a “less-skeptical maturity model” emerge.