What the White House’s AI Action Plan means for digital health
On July 23, 2025, the White House released Winning the AI Race: America’s AI Action Plan, a 28‑page document outlining how the federal government will accelerate U.S. dominance in AI. The plan rests on three pillars: accelerating innovation, building AI infrastructure, and leading in international diplomacy and security. While the tone is geopolitically charged, many of the proposed policies could reshape digital health. Below is a breakdown of what matters most for healthcare CIOs, CMIOs and digital health executives.
A deregulatory stance with consequences for healthcare
One of the plan’s early themes is reducing regulatory barriers so that private‑sector AI can grow. The administration has rescinded the Biden‑era executive order on trustworthy AI and vows to withhold federal funding from states with stringent AI laws. The plan directs the Office of Management and Budget to review and revise regulations across agencies and, if necessary, limit funding to jurisdictions whose rules federal officials believe could impede AI investment. For healthcare organizations, this signals a potential realignment of compliance strategies: states such as Colorado and Utah have enacted AI laws governing “high‑risk” systems used in clinical settings, and legal observers warn that federal efforts to penalize states with stricter rules could create a compliance patchwork that affects eligibility for AI research funding.
Related article: Congress wants to stop states from passing AI laws
The plan also recommends revising the National Institute of Standards and Technology (NIST) AI Risk Management Framework to remove references to misinformation, diversity, equity and inclusion (DEI) and climate change, and it proposes updating federal guidelines so that the government awards contracts only to developers whose systems align with Trump’s “anti-woke” policies.
Removing adoption barriers: Sandboxes and standards
Beyond deregulation, the Action Plan acknowledges that the bottleneck to harnessing AI “is not necessarily the availability of models” but slow adoption within large, established sectors such as healthcare. It proposes establishing regulatory sandboxes, or AI Centers of Excellence, where researchers, start‑ups and established enterprises can rapidly deploy and test AI tools. These sandboxes would be enabled by agencies including the FDA and the Securities and Exchange Commission, giving healthcare innovators a way to trial algorithms without running afoul of existing regulations such as HIPAA. The plan also directs NIST to launch domain‑specific efforts in healthcare, energy and agriculture that convene public, private and academic stakeholders to develop national standards for AI systems. Such standards could influence everything from electronic health record integration to how AI‑driven diagnostics are evaluated.
Building the data foundation
High‑quality data are the lifeblood of AI. The plan argues that such data constitute a national strategic asset and warns that adversaries have raced ahead in amassing vast scientific datasets. To counter this, it calls for the United States to assemble massive, high‑quality datasets suitable for AI research while respecting privacy and civil liberties. Recommended actions include directing the National Science and Technology Council to set minimum data‑quality standards for biological, materials‑science and other scientific data modalities, and requiring federally funded researchers to disclose the non‑proprietary datasets used in their AI models. For digital health, these policies could accelerate the availability of curated, interoperable datasets for training clinical decision‑support tools and large language models.
Related article: Trump administration bets big on patient-controlled data, but privacy risks loom
Infrastructure and export ambitions
The plan’s second pillar focuses on infrastructure. It notes that America’s energy capacity has stagnated and that modern AI requires new chip‑manufacturing plants, the data centers that run them, and the power infrastructure to support both. To address this, the plan proposes streamlined permitting for data centers and semiconductor facilities, leveraging reforms to the National Environmental Policy Act and other statutes. Federal lands could be made available for data center construction, and security guardrails would aim to keep AI infrastructure free of foreign adversarial technology. While this may seem distant from healthcare, modern hospitals increasingly rely on cloud‑based AI services; faster permitting could reduce latency and costs for the AI compute that underpins diagnostic imaging, population‑health analytics and telehealth.
The third pillar emphasizes international leadership. Key policies include a push to deploy U.S. AI technologies abroad by partnering with industry to deliver secure, full‑stack packages of hardware, models and standards to allied nations, alongside a rapid build‑out of data centers and lighter federal oversight to encourage innovation. For digital health companies expanding abroad, this signals an effort to set global norms around AI safety and exports. Conversely, the geopolitical framing, centered on countering Chinese influence in global AI governance, is likely to reinforce trade tensions and affect supply chains for medical device manufacturers.
Industry reactions: Civil liberties groups and policy experts weigh in
The Action Plan drew sharp reactions from civil‑liberties and policy groups. The American Civil Liberties Union (ACLU) argued that the plan disregards Congress’s decision not to preempt state AI laws, an approach that would have barred states and local governments from enforcing their own AI regulations for a decade.
ACLU senior policy counsel Cody Venzke warned that the plan undermines state authority by instructing the Federal Communications Commission to consider overriding state AI laws and by cutting off federal AI funding to states that adopt robust protections. The group also criticized directives to revise the NIST AI Risk Management Framework to eliminate references to diversity, equity and inclusion and to misinformation, arguing that removing those safeguards could prevent developers from addressing discriminatory harms in high‑impact sectors such as employment, education and healthcare.