The barriers to reading, writing, and editing DNA are falling fast. A scientist can now order synthetic gene sequences from manufacturers and have them within days — soon, it could be common to produce them right in the lab using a benchtop DNA synthesizer. High school students are learning CRISPR gene-editing techniques. Artificial intelligence (AI) platforms trained on biological data are accelerating experimentation and generating sequences that don’t exist in nature. 

The hope is that these developments will lead to new breakthroughs in healthcare, agriculture, energy, and more. The fear is that they will lower the threshold for profound misuse of biotech, while simultaneously increasing the scale of what bad actors can accomplish. The risks aren’t hypothetical — in early February, the FBI raided an alleged illegal biological lab operation in a Las Vegas home.

The challenge for governments and industry is mitigating the current and known risks posed by these emerging capabilities, while still enabling legitimate research and innovation — all while anticipating how to defend against biological threats that might not exist today, but could soon. 

Here’s how to do it.

Step one: Screen everything

Bad actors who want to deploy a bioweapon are going to need a bioweapon to deploy, which means they need to get their hands on a biotoxin or pathogen. In the past, they would either have to find it in nature — isolating anthrax from soil, for example — or steal it from a secure research facility that already did the digging. But now they can try to get synthetic DNA. 

Synthetic DNA is genetic material that is chemically manufactured, rather than extracted from a living organism. Specialized companies produce synthetic gene fragments for researchers, pharmaceutical firms, and biotech startups, who use them to develop everything from vaccines and diagnostics to new therapies and engineered microbes. 

Meticulously screening all requests is one of the best ways to ensure bad actors don’t get access to synthetic DNA.

Screen customers

For synthetic DNA manufacturers, the process starts with making sure they understand who is doing the ordering. A customer being a known scientific entity is a good sign the order isn’t nefarious, says Marcus Graf, a biosecurity expert and current head of the Center for Research and Transfer (ZFT) at OTH Regensburg: “It’s easy if it’s the [National Institutes of Health]. It’s more difficult if it’s called SynBio Good Company.”

If the name pops up on a sanctions list, that’s an obvious red flag. Otherwise, a provider will want to find out: Who is this person? What do they typically research? Do they work for a recognized entity? Some manufacturers might refuse to fill orders for someone who wants to receive nucleotides at a residential address, or who is using a personal, rather than institutional, email.

During customer screening, manufacturers should also consider whether it makes sense for that particular customer to be ordering the requested sequence. If it doesn’t, can they explain their intended use case? 

“If someone has a publication record in fungal biology, and all of a sudden, [they are] ordering viral sequences, that’s worth having a more detailed conversation,” says James Diggans, VP for policy and biosecurity at Twist Bioscience, a leading synthetic DNA manufacturer. “It is really about trying to identify patterns of irregularity — things that just look like they should be evaluated a little more carefully.”
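The customer checks described above are straightforward to express as rules. The sketch below is a hypothetical illustration, not any provider's actual system; the sanctions list, email domains, and flag criteria are all invented for demonstration:

```python
# Hypothetical sketch of customer-screening heuristics. The lists and
# criteria below are illustrative assumptions, not any provider's real system.

PERSONAL_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}
SANCTIONS_LIST = {"blocked entity ltd"}  # stand-in for a real sanctions database

def screen_customer(name: str, email: str, address_type: str,
                    research_field: str, ordered_sequence_type: str) -> list[str]:
    """Return a list of red flags for a synthetic-DNA order."""
    flags = []
    if name.lower() in SANCTIONS_LIST:
        flags.append("customer appears on a sanctions list")
    if email.split("@")[-1].lower() in PERSONAL_EMAIL_DOMAINS:
        flags.append("personal rather than institutional email")
    if address_type == "residential":
        flags.append("delivery to a residential address")
    if research_field != ordered_sequence_type:
        flags.append("order does not match customer's research record")
    return flags

# A fungal biologist suddenly ordering viral sequences, from a home
# address with a personal email, accumulates several flags:
print(screen_customer("SynBio Good Company", "lab@gmail.com",
                      "residential", "fungal biology", "viral"))
```

In a real pipeline, each flag would trigger the kind of follow-up conversation Diggans describes rather than an automatic refusal.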

Screen sequences

Alongside client screening, manufacturers should check the order for “sequences of concern” — ones that could be found in a pathogen or toxin. 

Many companies affiliated with the International Gene Synthesis Consortium (IGSC), an industry-led biosecurity organization that advocates for common screening tools, employ a “traffic light” model for this. Green indicates the sequence is no threat, and red is a flashing warning. Yellow is somewhere in the middle — the sequence might come from a potential pathogen, for example, but is not itself risky. A 2015 report from the J. Craig Venter Institute (JCVI) found that about 5% of orders to IGSC companies were “hits” — that is, a close enough match to a sequence in the pathogen database that the order required extra due diligence.

This is where all that customer verification can come in handy, as it can explain why a researcher needs a potentially dangerous sequence. “[F]or each single sequence, [we] evaluate whether that sequence meets any of the criteria found on any of the lists,” said Kevin Flyangolts, CEO and founder of Aclid, a firm that provides biosecurity screening tools to DNA synthesis companies that belong to the IGSC. “[We] then look across the entire order as well to see if there’s anything that, if we merge together, if we piece together, could meet criteria on one of the lists.”
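One simple way to picture the traffic-light model, and the need to screen a whole order rather than individual fragments, is a toy k-mer comparison against a database of sequences of concern. Everything here is an invented stand-in: the database entry, the k-mer size, and the thresholds. Real screening tools use curated pathogen databases and far more sophisticated alignment.

```python
# Toy illustration of "traffic light" sequence screening. The database
# entry, k-mer size, and thresholds are invented for demonstration;
# real tools rely on curated pathogen databases and proper alignment.

CONCERN_DB = {
    # Made-up stand-in sequence, not a real toxin or pathogen fragment.
    "toxin_fragment": "ATGGTACCGAAACAGTAC",
}

def shared_kmers(query: str, target: str, k: int = 8) -> int:
    """Count length-k substrings the query shares with a target sequence."""
    q = {query[i:i + k] for i in range(len(query) - k + 1)}
    t = {target[i:i + k] for i in range(len(target) - k + 1)}
    return len(q & t)

def classify(query: str) -> str:
    """Traffic-light call: red = strong hit, yellow = partial match, green = clear."""
    best = max(shared_kmers(query, seq) for seq in CONCERN_DB.values())
    if best >= 8:
        return "red"
    if best >= 3:
        return "yellow"
    return "green"

def screen_order(fragments: list[str]) -> str:
    """Screen each fragment and the stitched-together order, since pieces
    that look harmless alone can assemble into a sequence of concern."""
    calls = [classify(f) for f in fragments] + [classify("".join(fragments))]
    for level in ("red", "yellow"):
        if level in calls:
            return level
    return "green"
```

In this toy setup, the fragments `"ATGGTACCG"` and `"AAACAGTAC"` each screen green on their own, but stitched together they reconstruct the flagged sequence and the order comes back red — the piecing-together problem Flyangolts describes.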

Close the global gaps

These screening practices are far from universal. As of December 2025, a global map compiled by the International Biosecurity and Biosafety Initiative for Science (IBBIS) had identified 69 DNA companies that verifiably screen orders and 16 that do not; the practices of more than 600 others remain unknown.

Closing those gaps will require policy, not just voluntary best practices. “It’s now become clear to basically everyone that it will take policy solutions to level the playing field,” says Joshua Monrad, program director and co-founder of Sentinel Bio, a nonprofit organization that researches, funds, and scales solutions to mitigate biosecurity risks and pandemic threats. The goal, he adds, is both widespread adoption of screening processes and higher-quality, more consistent follow-through. 

A handful of governments have made efforts toward formal screening requirements, but it remains a patchwork of guidelines that lack real enforcement mechanisms. 

In 2024, the Biden administration’s White House Office of Science and Technology Policy (OSTP) issued a framework requiring federally funded purchasers of synthetic nucleic acids to buy only from providers that attest they follow customer and sequence screening guidelines — a policy that has had global influence. 

But experts say it still leaves major compliance gaps, relying largely on self-attestation rather than formal verification or certification. The Trump administration issued a May 2025 executive order that sought to revise or replace the 2024 framework and push providers toward “comprehensive, scalable, and verifiable” screening mechanisms, though as of January 2026, updated guidance had yet to be released. Congress, meanwhile, acted this February: Sen. Amy Klobuchar (D-MN) and Sen. Tom Cotton (R-AR) introduced bipartisan legislation to mandate synthetic DNA screening and reinforce that mandate with audits and real penalties, including hundreds of thousands of dollars in fines for both individuals and entities.

Outside the U.S., regulatory approaches remain uneven. In 2024, the United Kingdom issued voluntary screening guidance for synthetic nucleic acid providers, and the government has said it will pursue mandatory screening, if necessary. The European Union’s proposed Biotech Act also outlines screening safeguards. Many countries do follow implicit screening practices because of export controls and license requirements for toxic materials. An informal and voluntary collection of dozens of countries known as the Australia Group also works to coordinate export controls on sensitive dual-use materials and technologies to prevent the spread of biological or chemical weapons.

Experts argue that mandatory, internationally harmonized screening standards would not only lower the risk of catastrophic biotech misuse or accidents, but also level the playing field for commercial DNA providers, reducing the incentive to cut corners to save money. According to the JCVI’s 2015 report, screening a green sequence costs a DNA manufacturer just $1.25, while taking the time to process one flagged as yellow or red increases that figure to $209 or $598, respectively. Nations would benefit economically, too. A December 2025 study from the Centre for Long-Term Resilience found that mandatory screening — by reducing the risk of synthetic DNA misuse — would save the U.K. £150 million (nearly $205 million) per year over two decades, while international harmonization could net the U.K. £970 million (approximately $1.3 billion) per year over that period.

The ultimate aim is for countries and providers to share a common definition of legitimate customers and sequences of concern. Screening won’t eliminate every biosecurity risk, but experts see it as one of the most powerful near-term steps to keep synthetic DNA out of the wrong hands — and to substantially curtail an entire threat vector.

Step two: Address the DIY loophole

Harmonized, worldwide screening of synthetic DNA orders would be a high hurdle for a would-be bioterrorist to surmount, but someday soon, they might not need to order their synthetic DNA from an outside source — they could just make it themselves.

Benchtop DNA synthesizers are mini-fridge-sized machines capable of producing synthetic gene fragments. Currently, they’re expensive, capable of synthesizing only short segments, and most still require an operator with some degree of expertise. These technical and practical obstacles prevent unfettered on-demand, in-house DNA printing. However, costs are coming down, and the technology is only getting better, which is why many experts see securing benchtop synthesizers as a crucial piece of the biosecurity puzzle. “It’s so easy to buy the equipment, find someone well-trained, and look up the protocols to DIY,” says Graf. “The biggest risk is the DIY option.”

One way to minimize this risk is to replicate a version of the screening frameworks used by synthetic DNA manufacturers in the benchtop synthesizers market, and make them mandatory. “There’s a bunch of policy stuff that can be done to make sure that we can reduce some of these risks,” says David Manheim, a biosecurity expert and founder and head of policy research at the Association for Long Term Existence and Resilience (ALTER).

DNA synthesizer companies could be required to know who they’re selling their technology to and how the machines are going to be used. Newer models could be pre-programmed based on research needs, limiting what kind of genetic material a user could produce. Or, they might be programmed with internal blocks that prevent users from printing sequences of concern. The machines could also be set up to alert the manufacturer if, for example, a lab that claimed it planned to use the machine to study plant fungi suddenly started printing sequences that looked like they could be found in the Ebola virus. 
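The kind of firmware-level safeguard described above could look something like the following sketch. The blocklist, declared-use categories, and audit log are illustrative assumptions about how such a control might work, not a description of any real machine:

```python
# Hypothetical sketch of a firmware-level safeguard for a benchtop DNA
# synthesizer. The blocklist, declared-use categories, and audit log are
# illustrative assumptions, not a description of any real machine.

BLOCKED_MOTIFS = ["ATGGTACCGAAACAGTAC"]  # stand-in for sequences of concern

class BenchtopSynthesizer:
    def __init__(self, declared_use: str):
        self.declared_use = declared_use      # e.g., "plant fungi research"
        self.audit_log: list[str] = []

    def synthesize(self, sequence: str, category: str) -> bool:
        """Refuse blocked sequences outright, and log prints that don't
        match the machine's declared use so the manufacturer can be alerted."""
        if any(motif in sequence for motif in BLOCKED_MOTIFS):
            self.audit_log.append("BLOCKED: sequence of concern refused")
            return False
        if category != self.declared_use:
            self.audit_log.append(
                f"ALERT: '{category}' print from a '{self.declared_use}' lab")
        return True
```

Under this scheme, a machine registered to a plant-fungus lab would refuse a blocklisted sequence outright and flag any print that falls outside its declared research category.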

If a bad actor buys a benchtop synthesizer secondhand, that could break the customer-manufacturer chain. But the older the benchtop model, the more complex it will likely be to operate. Unless the person buying the secondhand synthesizer is highly skilled, they’ll probably require some level of customer support. Most rogue actors probably won’t take that risk.

Step three: Plan for AI novelty

For now, it is relatively hard for a bad actor to obtain the kinds of genetic sequences that could be used to synthesize a dangerous toxin or pathogen. But some experts worry that AI could eventually expand what is possible by helping design entirely new biological threats. If those sequences are novel enough, they could slip past manufacturers’ screening protocols and only come to our attention after real harm has been done.

Large language models — the types of AI foundational to chatbots like ChatGPT — have guardrails designed to prevent them from responding to explicitly malicious requests, but research shows they can be circumvented. 

Those AIs aren’t as big a concern for biosecurity experts as AI tools designed specifically for biology, though. These powerful systems are trained on vast biological datasets — including genetic and protein sequences — to predict how molecules fold, function, and interact. In 2024, the scientists behind AlphaFold, Google DeepMind’s protein-structure prediction system, were awarded the Nobel Prize in Chemistry. Such tools have the potential to unlock major breakthroughs in medicine and biotechnology, but experts worry they could also be used to help design novel pathogens or toxins capable of evading existing screening systems.

“AI may lower the knowledge barriers, in terms of what a nefarious person or state might need to do,” says Michael Imperiale, professor emeritus of microbiology at the University of Michigan, who chaired the National Academies of Sciences, Engineering, and Medicine’s study “Biodefense in the Age of Synthetic Biology.”

Forecasting how quickly such capabilities may develop is difficult. A 2025 RAND study found that expert uncertainty grows sharply beyond the late 2020s, particularly if AI systems improve faster than expected. 

For now, the key is to understand current vulnerabilities and institute safeguards — like robust nucleic acid screening — that can keep pace as technology evolves.

Last year, Microsoft, along with Twist and other collaborators from the International Gene Synthesis Consortium, conducted a red-teaming exercise to test whether AI-designed variants of concern — sequences resembling toxins or viral proteins — could evade existing screening systems. The study found that some sequences could. But in many cases, the resulting designs still maintained enough similarity to known pathogens to trigger screening hits, and when researchers identified blind spots, they were able to patch detection software to close the gaps.

Flyangolts, who participated in the study, says companies are constantly “looking and monitoring for how these tools are evolving, but at least right now, we are seeing relatively good results with existing screening systems.”

The pace of change is rapid enough that this may not hold true indefinitely — or even a few years from now. But as Diggans notes, even on short timelines, AI tools are more likely to generate isolated proteins than fully engineered viruses. “That is something that is on a longer-term horizon,” he says. Still, he argues, the world should prepare now by moving beyond using known pathogens as the sole reference point and instead focusing on how proteins behave.

“When something binds to a ribosome and cuts it in a certain way and disables the ribosome like ricin toxin does, that’s bad,” says Diggans. “Whether you’re ricin or something else, if you do that thing, the outcome is bad. And so that’s the kind of thing we should be able to detect.”

And as many experts emphasize, designing a novel sequence is only the beginning. The bad actor then has to actually create it, and biology remains finicky. Even if they can synthesize DNA from an AI-generated design, it may fail as soon as it enters a cell. “Most of the proteins that AI tools generate don’t end up actually working in the lab for one reason or another,” says Flyangolts. “Maybe they preserve the structure, but, for whatever reason, it doesn’t function correctly, or maybe the protein is not super stable.”

Still, researchers are beginning to demonstrate the potential of these tools. In 2025, scientists at Stanford’s ARC Institute used a generative model called Evo 2 to design viable bacteriophages — viruses that infect bacteria — with “substantial evolutionary novelty.” The researchers noted they considered biosecurity in their experiment, and that the model included built-in constraints.

A December 2025 United Nations INHR report recommended further collaboration with AI developers on biosafety, including red-teaming studies, regular safety evaluations, and national frameworks for AI use, alongside renewed calls to mandate screening for synthetic nucleic acids and benchtop devices. Experts have also floated the possibility of embedding a traceable code in novel AI-designed proteins.

“Simply printing out the DNA for that virus doesn’t give you an active virus,” says Patrick Boyle, a bioengineering expert and independent consultant. “The reality is we’re not surrounded by new therapeutics that have raced through clinical trials because they are AI-designed.”

That may be coming, but if we aren’t yet seeing an AI-driven revolution in biology for good, the same limits likely still apply when it comes to biology for bad — at least for now.

Step four: Reinforce the foundations

The most robust screening systems and regulatory safeguards won’t eliminate biosecurity risks entirely. No single layer of defense is foolproof, especially in a world where biotechnology and AI are advancing quickly, potentially transforming the nature of those biothreats. 

Any one safeguard may have gaps, but if enough layers are in place — screening, oversight, surveillance, public health response — then when one fails, another can still catch what slips through.

Advances in biotech will require policies like harmonized screening for synthetic genetic material and emerging DNA synthesis technologies. They will require new checks and safety evaluations for AI-enabled tools, and agreed-upon protocols that evolve as the science does. Policymakers, researchers, and industry will need to stay nimble as the status quo shifts in ways society can’t yet fully predict.

“One is, keep an eye out for when the status quo is changing,” says Manheim. “And the other thing is all of the obvious things that we should be doing to address biorisk. There are lots of them. But having functioning public health is super valuable for responding to outbreaks.”

In other words, the old playbook still matters. The things that work to defend against bioweapons are often the same things that work to defend against infectious disease more broadly: strong surveillance systems, resilient healthcare infrastructure, rapid diagnostics, vaccine capacity, and public trust.

Being prepared for the next pandemic will protect people, whether the source is nature, a lab accident, or deliberate misuse. That means broad access to healthcare and the ability to detect and respond quickly when something unusual begins spreading. It also means sustaining the scientific ecosystem that makes preparedness possible in the first place. Public investment in biomedical research, vaccine development, and disease surveillance is not optional — it is part of the baseline defense. Cuts to research funding or public health capacity can weaken deterrence against both malicious actors and nature itself.

The same is true internationally. Outbreaks do not respect borders, and early warning depends on global cooperation: shared data, coordinated surveillance, and trusted institutions that can respond before local crises become global catastrophes. Reductions in cross-border disease monitoring make new threats harder to trace and contain. Preparedness is not just a matter of national security, but of collective capacity.

Biological weapons are sometimes called the “poor man’s nuke” because they are often considered easier to build than atomic bombs. But biology has always been an inconvenient weapon to wield: what can kill your enemy might also kill you. That uncontrollability is part of the reason almost every country in the world is a party to the Biological Weapons Convention (BWC), an international treaty prohibiting the use and stockpiling of bioweapons.

The BWC turned 50 in 2025, and while it has always been imperfect — most notably because it lacks strong verification mechanisms to guarantee compliance — it has helped sustain a powerful global norm against weaponizing biology. No large-scale, widely confirmed state bioweapon attack has been documented since its creation.

Rapid advances in gene synthesis and AI-enabled biology have raised questions about whether that norm will continue to hold. The challenge is no longer only scientific, but political: international cooperation, transparency, and trust will be necessary to strengthen safeguards like nucleic acid screening and oversight of AI-enabled biodesign.

At the United Nations General Assembly in September 2025, U.S. President Donald Trump spoke about the need for stronger verification mechanisms for the BWC. But that kind of work could be more difficult in a world where international institutions are weakening, and cooperation is in retreat. The U.S. itself has recently withdrawn from dozens of major international agencies, including the World Health Organization. 

The U.S., specifically, has often stood outside the rules-based international order it helped create, and the Trump administration’s desire to work on biosecurity is at odds with its aggressive approach to other global institutions and alliances. This age of “wrecking-ball politics,” as the Munich Security Conference called it, risks destroying the defenses the world already knows how to build: functional public health systems, scientific investment, and international norms and treaties that reinforce the idea that biology should be used to heal, not to harm.

The norms against weaponizing biology are still holding. But in an age of synthetic DNA, decentralized manufacturing, and increasingly powerful AI tools, the new guardrails won’t work if the old ones are pushed to collapse.

This article is part of Big Think’s monthly issue Biology’s New Era.