Could our health data be used to make bioweapons?

Google drops AI ethics rules after Trump kills Biden's AI order, raising fears of unrestricted AI in bioweapons development.
By admin
Feb 19, 2025, 11:00 AM

On his first day back in office, President Trump rescinded President Biden’s executive order on AI. The repeal lifted several ethical mandates meant to regulate AI development and curb its potential misuse, particularly by major technology companies.

Two weeks later, Google quietly revised its AI framework, eliminating a longstanding commitment to avoid using AI in ways “likely to cause overall harm.” 

The revision marked a stark departure from Google’s earlier stance. In 2018, faced with public outcry and internal protests, the company had declined to renew its contract with the U.S. Department of Defense for Project Maven—an initiative using AI to enhance drone strike accuracy. At the time, critics warned that such technology would lead to AI-powered autonomous weapons, essentially creating robotic killers.

“Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google,” Margaret Mitchell, former head of Google’s ethical AI team, told Bloomberg. “More problematically, it means Google will probably now work on deploying technology directly that can kill people.”

Google is not alone in reversing course. In early 2024, OpenAI similarly removed its commitment not to use AI for “military and warfare.” These moves suggest a broader industry trend away from AI safety pledges and toward more aggressive, unrestricted AI applications in military settings.

AI and the new age of bioweapons

While much of the focus on AI in warfare has revolved around autonomous weapons, a more insidious threat is emerging: AI-driven bioweapons. Bioweapons are defined as microorganisms—such as viruses, bacteria, or fungi—or their toxic byproducts that are deliberately engineered to cause disease and death. AI’s ability to analyze vast genomic datasets makes it a powerful tool for designing such weapons. Just as AI is used to map genetic vulnerabilities for targeted therapies in medicine, it could also be used to identify weaknesses in human DNA, enabling the creation of highly selective and devastating bioweapons.

According to a study published in Frontiers in Artificial Intelligence, AI’s intersection with biotechnology presents catastrophic risks, particularly in the hands of malicious actors. The convergence of AI with genetic editing tools like CRISPR raises serious biosecurity concerns, as it could enable the rapid development of deadly pathogens with minimal resources. 

For example, AI-driven synthetic biology tools could theoretically be used to design a virus that combines the contagiousness of measles, the lethality of smallpox, and the incubation period of HIV, making it nearly impossible to contain before widespread devastation occurs.

Will AI be used to create bioweapons?

The question is no longer whether AI can be used to develop bioweapons, but whether it will—and whether tech companies will be complicit in their creation. Given Google’s softened ethical stance and the removal of previous safeguards, experts worry that the company and others may now play a direct role in military AI applications, including bioweapons research.

Google’s official statement on February 4 attempted to frame the issue as a matter of national security and global competition: “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

But critics argue that this is little more than a veiled admission that AI will be developed for military purposes under the guise of protecting democracy. In essence, Google and its peers are signaling that they will do as they please—without the ethical constraints they once championed.

The now-repealed executive order sought to impose some oversight by requiring tech companies to disclose information about AI systems that could pose national security risks. With those safeguards gone, the responsibility falls on both public institutions and private companies to self-regulate—something history suggests is unlikely.

What about HIPAA?

These developments raise troubling questions about the convergence of AI, military applications, and healthcare data. Through various health-focused initiatives, Google has amassed vast amounts of sensitive medical information. One initiative in particular, Project Nightingale, famously raised serious questions about data privacy and ethics. 

Project Nightingale was a secret 2019 partnership between Google and Ascension, one of America’s largest healthcare providers, that gave Google access to the complete health records of roughly 50 million Americans without notifying patients or doctors and without giving patients the ability to opt out. The project allowed Google to access deeply personal medical information, including lab results, hospitalization records, diagnoses, and birth dates. While technically legal under HIPAA, which permits hospitals to share data with business associates without patient consent, the project raised serious ethical concerns about patient privacy, consent, and the increasingly blurred lines between Big Tech and healthcare. Patients found out about the partnership along with everybody else when The Wall Street Journal broke the news.

Healthcare professionals and privacy advocates were particularly troubled by Google’s potential to combine this sensitive medical data with its existing vast troves of user information from search, email, and other services – creating incredibly detailed personal profiles that could be used for purposes far beyond direct patient care. The controversy surrounding Project Nightingale highlighted a critical gap between what’s legally permissible under aging healthcare privacy laws and what many consider ethically acceptable in an era of big data and AI.

The World Health Organization (WHO) Guidelines on Ethical Issues in Public Health Surveillance state that “Individuals have an obligation to contribute to surveillance when reliable, valid, complete data sets are required and relevant protection is in place. Under these circumstances, informed consent is not ethically required.”

“The basic argument is that individuals have a moral obligation to contribute when there is low individual risk and high population benefit,” argued Cason Schmit, Assistant Professor of Law at Texas A&M, in a 2019 blog post.

However, no one knew what Project Nightingale actually did, and there was no regulator or oversight body to determine whether it fit the description of “low individual risk and high population benefit.” Google provided only a vague description of Project Nightingale after public outrage.

The WHO created its guidelines in the name of public health, on the premise that our collective contribution to science can curb or cure disease. But there’s a crucial difference between contributing health data to public research institutions bound by strict oversight and sharing it with private-sector behemoths that are actively dismantling their ethical guardrails.

As tech giants continue to accumulate both biological data and advanced AI capabilities without meaningful oversight, the line between healthcare innovation and potential bioweapon development becomes increasingly blurred, leaving the public to grapple with a sobering reality: the same companies that hold our most intimate health information are simultaneously dismantling their own ethical restrictions on how that information can be used.

