
McKinsey’s 2025 tech trends report finds healthcare caught between AI promise and perils

McKinsey's latest tech trends report covers the biggest developments in healthcare’s AI revolution, including agentic AI and cybersecurity risks.
By admin
Sep 4, 2025, 2:46 PM

This is part one of a two-part series analyzing the findings of the McKinsey Technology Trends Outlook 2025 report.

 

The latest Technology Trends Outlook from McKinsey & Company offers a sweeping view of 13 frontier technologies, from robotics and mobility to bioengineering and quantum computing. For healthcare, three areas in this year’s report stand out as urgent: the rise of agentic AI, the scaling of artificial intelligence across industries, and mounting concerns around digital trust and cybersecurity.

Given how quickly AI-related technology is advancing, McKinsey’s fifth annual outlook does not treat artificial intelligence as a standalone wave, but rather as the accelerant for most other domains. Following the money backs up this position: AI led all investment trends in 2024, with $124.3 billion in equity funding. Once experimental, generative AI has moved into mainstream enterprise use. For healthcare leaders, that means AI is no longer an “if” but a “how fast” and “how responsibly” decision.

 

Digital coworkers are getting up to speed

A newcomer to the annual report, agentic AI is a class of systems that can plan and execute multistep workflows (as opposed to simply generating text). These “virtual coworkers” can be trained to do everything from booking travel to drafting memos to, potentially, handling administrative processes in a hospital revenue cycle.

As McKinsey documents, job postings for agentic AI roles grew exponentially between 2023 and 2024, with companies raising $1.1 billion in equity investment to develop these systems. For healthcare, the implications are both promising and a little unnerving. Imagine an AI agent orchestrating care coordination tasks across electronic health record (EHR) systems, automatically scheduling follow-ups, ordering labs, and flagging anomalies, all without human intervention.
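To make that picture concrete, the sketch below shows, in simplified Python, what a review-gated agentic workflow of this kind might look like. Everything in it is hypothetical: the step names (schedule_followup, order_lab, flag_anomaly), the payloads, and the approval gate are illustrative stand-ins, not a real EHR integration or any vendor’s agent framework.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    """One unit of a multistep plan produced by a (hypothetical) planner."""
    action: str          # e.g. "schedule_followup"
    payload: dict        # parameters for the action
    needs_review: bool   # True if a human must approve before execution


# Illustrative stand-ins for real integrations; an actual agent would call EHR APIs.
def schedule_followup(payload: dict) -> str:
    return f"Follow-up booked for patient {payload['patient_id']} on {payload['date']}"


def order_lab(payload: dict) -> str:
    return f"Lab order drafted: {payload['test']} for patient {payload['patient_id']}"


def flag_anomaly(payload: dict) -> str:
    return f"Anomaly flagged for clinician review: {payload['finding']}"


ACTIONS: dict[str, Callable[[dict], str]] = {
    "schedule_followup": schedule_followup,
    "order_lab": order_lab,
    "flag_anomaly": flag_anomaly,
}


def run_plan(plan: list[Step], approve: Callable[[Step], bool]) -> list[str]:
    """Execute each step in order, routing review-gated steps through a human approver."""
    log = []
    for step in plan:
        if step.needs_review and not approve(step):
            log.append(f"Held for human review, not executed: {step.action}")
            continue
        log.append(ACTIONS[step.action](step.payload))
    return log


if __name__ == "__main__":
    plan = [
        Step("schedule_followup", {"patient_id": "A123", "date": "2025-10-01"}, needs_review=False),
        Step("order_lab", {"patient_id": "A123", "test": "HbA1c"}, needs_review=True),
        Step("flag_anomaly", {"finding": "blood pressure trend above threshold"}, needs_review=False),
    ]
    # Auto-approve everything here; in practice this gate is where human oversight sits.
    for line in run_plan(plan, approve=lambda step: True):
        print(line)
```

The detail worth noting is the needs_review flag: any step that touches a clinical decision is held for a human approver rather than executed automatically, which is precisely the kind of oversight question the report says must be settled before deployments scale.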

Pilot examples outside healthcare suggest extraordinary potential. McKinsey cites a banking use case in which multiagent systems boosted analyst productivity by 60%, and security firm Darktrace, already a healthcare vendor, uses autonomous AI agents to identify cyber intrusions in real time. If workflows like these prove reliable in clinical and administrative environments, agentic AI could become the most disruptive technology in hospitals over the next five years.

It’s not all upside, of course. Agentic AI brings risks around liability, trust, and safety. What happens if an agent makes an unauthorized clinical decision? How much oversight is required to ensure safe use in patient-facing scenarios? The report stresses the urgent need for governance frameworks to address these uncertainties before organizations scale deployments.

 

AI adoption races ahead of hospital integration

Beyond agents, McKinsey’s broader AI findings highlight how quickly generative and multimodal models are maturing. Nearly 80% of organizations surveyed report using AI in at least one function, and 92% of executives plan to invest more in the next three years.

For healthcare, one of the most consequential developments is the rise of smaller, domain-specific models. These compact models, distilled from massive “parent” systems, can run on lower-cost infrastructure. That means hospitals, especially those with limited budgets, may soon have access to advanced AI without needing hyperscale cloud contracts. Models tailored for clinical documentation, radiology, or pathology could be deployed directly within hospital networks, reducing data leakage risks and enabling local control.
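As a rough illustration of what “local control” could look like in practice, the snippet below loads a compact summarization model through the open-source Hugging Face transformers pipeline and runs it against a clinical note entirely on local hardware. The model name is a hypothetical placeholder rather than a checkpoint named in the report, and the note is invented.

```python
# Illustrative sketch only: running a compact, domain-tuned model on local hardware
# via the open-source Hugging Face transformers pipeline API. The model name below
# is a hypothetical placeholder for whatever small clinical model an organization
# licenses or fine-tunes; it is not a checkpoint named in the McKinsey report.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="hospital-org/compact-clinical-summarizer",  # hypothetical local checkpoint
)

note = (
    "Patient seen for follow-up of type 2 diabetes. Reports improved adherence. "
    "HbA1c 7.1%, down from 8.4%. Continue metformin; recheck labs in 3 months."
)

# Inference runs inside the hospital network, so the note never leaves local infrastructure.
print(summarizer(note, max_length=40, min_length=10)[0]["summary_text"])
```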

The report also covers AI’s accelerating role in science. Google DeepMind’s AlphaFold 3 now predicts protein structures and molecular interactions with unprecedented accuracy, an advancement that has direct implications for drug development and personalized medicine. For academic medical centers and biotech partners, this kind of AI-enabled discovery is already shortening the timeline for identifying viable compounds.

Despite the AI-related breakthroughs being made in corporate and academic research, the numbers on the ground are less compelling; only 1% of organizations say their AI adoption is “fully mature.” Many health systems remain stuck in pilots or proofs-of-concept, struggling with integration into legacy systems and the messy realities of retraining staff.

 

Meanwhile: cyber risks grow and trust in AI wavers

AI is tipping the scales of opportunity, but cybersecurity is its counterweight. The McKinsey report groups digital trust and cybersecurity together, spanning everything from basics like identity checks and encryption to cutting-edge defenses such as AI-powered threat detection and blockchain tools.

Equity investment in this category reached $77.8 billion in 2024, up 7% from the prior year. Cyberattacks on healthcare organizations have surged in the past year, with the sector now one of the most targeted in the U.S. economy. Attackers are also beginning to use generative AI to design more convincing phishing campaigns and automate vulnerability scans, and defenders are racing to deploy AI-based monitoring to keep up.

For hospitals, where a ransomware attack can shut down EHRs, delay surgeries, and put patients at risk, the stakes are existential. McKinsey urges them to shore up the basics, such as maintaining asset inventories, enforcing strong authentication, and patching known vulnerabilities, while also testing next-generation defenses.

Trust in AI itself is a separate issue. Public confidence in AI providers has slipped from 61% in 2019 to just 53% in 2024. In healthcare, that skepticism could hinder adoption of AI-powered clinical tools unless organizations demonstrate explainability, fairness, and transparency. McKinsey notes that companies prioritizing digital trust outperform peers financially, suggesting hospitals that invest in trust-building measures may see both reputational and operational benefits.

 

The rule makers are finally starting to make the rules

Governments are moving quickly to catch up to the breakneck pace of AI development. California’s AI Transparency Act, which takes effect in 2026, will require disclosure of AI-generated content, a provision that could spill over into medical documentation and patient communication. Meanwhile, U.S. federal regulators are drafting frameworks for AI governance in healthcare, echoing warnings from the National Academy of Medicine about bias, safety, and oversight.

Globally, the EU’s Markets in Crypto-Assets (MiCA) regulation, while aimed at finance, sets a precedent for how tokenization and blockchain systems might be governed, potentially relevant to health data exchange. At the same time, the rise of quantum computing threatens to upend current encryption standards and raises concerns about the future of health data security.

 

Healthcare is leaping forward. Can it stick the landing?

McKinsey’s 2025 outlook illustrates a rare convergence: AI systems are scaling, agentic models are proving real gains, and regulators are writing rules to stabilize adoption. For healthcare, this is an inflection point. Hospitals that pair innovation with governance and security can move beyond pilots and into lasting transformation. The aim must go beyond efficiency and reach for safer, more equitable care. Healthcare leaders have the opportunity now to define their industry’s direction for the next decade. They must act with urgency or risk squandering the potential created by AI breakthroughs.


Check out Part 2: “Healthcare at the Crossroads: McKinsey’s 2025 Tech Trends Point to Bioengineering, Mobility, and Sustainability”

