
Spring Health leads push for AI safety standards in mental health

New council launches open-source framework to evaluate clinical safety and ethics of AI tools in mental health.
By admin
Oct 30, 2025, 4:02 PM

Spring Health, an AI-powered mental health platform for employers, is pulling together an unlikely coalition of academics, clinicians, health systems, and corporate leaders to tackle the lack of standards governing the use of AI in mental health care.

On October 1, the company announced the creation of the AI in Mental Health Safety & Ethics Council, a new group charged with developing the first automated, open-source framework for evaluating whether AI-powered mental health tools are safe, ethical, and clinically sound.

The framework, VERA-MH (Validation of Ethical and Responsible AI in Mental Health), will be made freely available to the community. It is being pitched as a universal yardstick for tools like AI chatbots and therapy apps that currently reach millions of patients with little regulatory oversight and no standardized way to assess whether they provide effective care or cause harm.

“AI in mental health has incredible promise, but ensuring tools are safe, trustworthy, and clinically validated is urgent,” said Dr. Mill Brown, chief medical officer at Spring Health. “This group is establishing the guardrails the industry needs now.”


Leaders across sectors

The council’s 11 founding members span psychiatry, data science, ethics, technology, and employee benefits. They include Nina Vasan, founder and director of Brainstorm: The Stanford Lab for Mental Health Innovation; Tim Hahn, a professor of machine learning and predictive analytics in psychiatry at the University of Münster; and Nicholas Jacobson, a Dartmouth professor and creator of Therabot.

Health system leaders include Don Mordecai, national mental health director at Kaiser Permanente; Doug Nemecek, chief medical officer for behavioral health at Evernorth Health Services; and Hans Hage, chief product officer, consumer, at UnitedHealthcare. Fred Thiele, Microsoft’s vice president of global benefits & mobility, and Lilly Wyttenbach, managing director and head of global wellness at JPMorgan Chase, bring an employer perspective to the council.

Rounding out the roster are Yale psychiatrist John Krystal, Harvard business ethicist Julian De Freitas, and Dr. Nils Opel, Professor of Affective Disorders at Charité – Berlin University Hospital. By blending clinical, academic, and corporate viewpoints, the group is designed to appeal to every corner of the mental health ecosystem: patients, providers, payers, employers, and policymakers.


The oversight problem

AI-driven mental health tools, from chatbots to therapy companions, are proliferating quickly, but there is no shared framework to test their safety or effectiveness. Regulators offer little guidance, leaving health systems and employers to vet vendor claims themselves. This patchwork approach is risky, particularly as large language models (LLMs) grow capable of handling complex, multi-turn conversations about sensitive topics like suicide or medication. Without validation, the risks range from ineffective care to actively harmful advice.

Spring Health, valued at $3.3 billion, has positioned itself as a champion of responsible AI. The company released its own AI principles in April and floated the VERA-MH concept earlier this year, modeling it after benchmarks like GAIA, which evaluates general AI assistants. Unlike previous academic frameworks, such as the 2024 FAITA-Mental Health model, VERA-MH emphasizes automation and cross-sector governance. The goal is to move beyond one-off reviews toward continuous, real-time monitoring of AI systems as they evolve.


Testing tools in real-world scenarios

The council is designing VERA-MH to assess how AI performs in emotionally complex situations where errors can have real consequences. That means testing how systems handle suicide risk, answer medication questions, and navigate cultural nuance. The council envisions an automated system that produces clear, clinically grounded safety metrics that IT teams and benefits leaders can use when making procurement and implementation decisions. By releasing it as open source, they aim to create a common language that can keep pace with the rapid evolution of AI.

For healthcare IT professionals, the implications are significant. Rather than conducting costly, bespoke assessments of vendor claims, organizations could rely on standardized, automated evaluations. Employers, which are increasingly distributing AI-powered mental health tools through benefits programs, would gain a reliable way to judge their safety and impact before rolling them out to employees.


The struggle for broad adoption

Efforts to standardize digital mental health tools are not new. FAITA-Mental Health, a voluntary framework published in 2024, evaluated tools across six domains: credibility, user experience, user agency, equity and inclusivity, transparency, and safety and crisis management.

By contrast, the backers of VERA-MH argue that automated evaluations, multi-stakeholder governance, and industry backing will give it staying power and real-world relevance. Industry observers note parallels with other de facto standards in healthcare IT, such as HITRUST certification, which became a common security benchmark once major players adopted it.

To bolster legitimacy, Spring Health has pledged an independent governance structure. Council members were chosen through a transparent nomination process and will be bound by conflict-of-interest policies. The council will meet regularly and launch VERA-MH through an open development process, inviting additional healthcare, academic, and advocacy groups to join.


Without standards, risks grow for everyone

Mental health access challenges have fueled demand for AI-powered support, from chatbots offering therapy-like conversations to predictive models flagging patients at risk. The market is expanding fast, but quality varies dramatically.

Without guardrails, patients could encounter tools that misdiagnose, miss warning signs, or provide culturally insensitive advice. For payers and providers, the lack of standards creates procurement headaches and liability risks; for employers, it raises questions about whether benefits dollars are being spent responsibly.

If widely embraced, VERA-MH could reshape how these decisions are made. Health systems and insurers might require vendors to meet its benchmarks before integration. Employers might demand compliance before including tools in benefits packages. Regulators, too, could eventually point to VERA-MH as an industry reference point.


Balancing collaboration and progress

The council sees its work as a multi-year, community-driven effort. The plan is to move beyond general principles toward concrete technical, clinical, and operational standards. Members stress that the project is intended to help channel innovation responsibly, not stifle it.

Success is far from guaranteed, and the healthcare industry has a long history of failed attempts to impose voluntary standards. The council’s diverse membership and backing from Spring Health, a billion-dollar industry leader, give it an edge, but VERA-MH will not gain traction unless it can keep pace with AI technology that is now advancing faster than the regulatory process.

