Chapter 5

Scientific advances and the misuse of biology

[Image: Petri dishes in blue and pink lighting. Source: Wellcome Library, London / CC BY 4.0]

Research in biology and biomedicine is essential to global health. It provides insights into disease agents, their transmission and how we can treat them. But these same insights can also be repurposed to intentionally cause harm. The biological risk landscape is becoming more complicated and more challenging. Some of the trends underpinning this development were underway before COVID-19, but the pandemic has significantly accelerated them.

There are five key trends that are relevant here:

  • Increased numbers of maximum microbiological containment laboratories
  • Growth in high-risk research, such as manipulations of potential pandemic pathogens
  • Exacerbation of the risks posed by technological convergence
  • Increased use of legal and illegal tools – such as industrial espionage, cybertheft, academic infiltration and early-stage investment – to tap into bioinnovation ecosystems
  • The rise of biological disinformation

These trends mean that in the near to medium term, it is technically possible for biological weapons to emerge that are capable of causing greater harm than before, that are more accessible to more people, that can be used for more precisely targeted attacks and that can be harder to attribute.

Emerging research areas with high misuse potential

Not all research is of concern. Various efforts have been made, particularly in the United States, to characterise biological research with particularly high misuse potential.

Examples of such ‘dual-use research of concern’ that have been identified include experiments that:

  • manipulate the pathogenicity, virulence, host specificity, transmissibility, drug resistance or ability to overcome host immunity of pathogens;
  • synthesise pathogens and toxins without cultivating microorganisms or using other natural sources;
  • identify new mechanisms to disrupt the healthy functioning of humans, animals and plants;
  • develop novel means of delivering biological agents and toxins.

Starting in the early 2000s, several high-profile experiments raised concern amongst observers by:

  • making mousepox more deadly (2001);
  • synthesising poliovirus from scratch (2002);
  • reconstructing the extinct 1918 flu virus (2005).

More recent examples highlight the risks of technological convergence. In 2022, for example, a drug development company that uses AI to search for new, non-toxic molecular structures with therapeutic potential demonstrated how easily its algorithm could be repurposed to actively search for toxic molecules instead. Within hours, the retrained model generated tens of thousands of candidate compounds, many of them predicted to be more toxic than known chemical warfare agents.

As a consequence, entire fields of biological research are now raising concerns. These include:

  • ‘gain-of-function’ studies, where potentially pandemic pathogens are artificially mutated and ‘enhanced’ to create even more potent strains of some of the world’s deadliest diseases;
  • synthetic biology, which aims to engineer biology and which is likely to make it possible to create dangerous viruses from scratch in the near future;
  • neurobiology, which may improve the operational performance of troops through neuropharmacological agents that enhance functions such as perception, attention, learning, memory, language, thinking, planning and decision-making; or which may degrade enemy performance through incapacitating biochemical agents or so-called ‘non-lethal’ weapons.

Security risks

There are three principal scenarios that concern the security community:

  1. Under the guise of legitimate research, highly skilled and trained biologists use their knowledge to create biological agents or genetic constructs for illegitimate ends.
  2. Militaries or state-sponsored groups exploit legitimate scientific advances for hostile purposes.
  3. The growth of legitimate, sophisticated life science research and infrastructure increases national capacities to threaten or carry out a biological attack.

Today, responsible science and bioinnovation are as important as ever. It is widely recognised that scientists – especially those doing high-risk life sciences research whose outcomes could, whether accidentally, inadvertently or intentionally, significantly impact society – have a professional obligation to engage with these security concerns.

In a Bulletin of the Atomic Scientists article titled ‘Scientific blinders: learning from the moral failings of Nazi physicists’, Talia Weiss writes:

Scientists and engineers […] today […] may feel they have little in common with physicists working in the service of the German government during WWII. […] Yet researchers working on military and cutting-edge technologies are confronting the same questions that faced nuclear physicists under the Third Reich: As scientists, how can we avoid making (or stumbling into) decisions that do more harm than good? And when is it our responsibility to question, object to, or withdraw from a research project?


These are questions that every responsible scientist must ask themselves.

The role of data and AI

Biological data is becoming increasingly digitised and collated in large datasets. Statistical methods, algorithms, machine learning and computational power are significantly changing how that genomic data is analysed, both in terms of how it is classified and in terms of how it is used to make predictions. What are some of the security risks of these developments?
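
To make this concrete, the sketch below shows, in Python, the kind of workflow such analysis involves: genomic sequences are converted into numerical features, and a standard classifier learns to label them. It is a minimal, purely illustrative example that assumes the scikit-learn library; the sequences and labels are synthetic toy data invented for this illustration.

    # Minimal sketch: classifying genomic sequences with machine learning.
    # The sequences and labels below are synthetic toy data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical DNA fragments with made-up binary labels.
    sequences = ["ATGCGTACGTTAGC", "GGGCCCATATATGC", "ATGAAACGTTTGCA", "CCGGTTAACCGGTA"]
    labels = [1, 0, 1, 0]

    # Represent each sequence by its overlapping 3-mer (trinucleotide) counts.
    vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 3))
    X = vectorizer.fit_transform(sequences)

    # Fit a simple classifier and predict the label of an unseen fragment.
    model = LogisticRegression().fit(X, labels)
    print(model.predict(vectorizer.transform(["ATGCGTAAACGTTT"])))

Real genomic classifiers work on vastly larger datasets and richer features, but the basic pipeline – encode sequences numerically, then fit a statistical model – is the same.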

The integration of AI and machine learning into biology opens up new possibilities for understanding how genetic differences shape the development of living organisms, including ourselves. It also opens up new possibilities for understanding how these differences make us, and the rest of the living world, susceptible to disease, and this comes with risks.

For example, the advanced pattern recognition that AI offers could be used to predict enhancements that make pathogens even more dangerous. Artificial intelligence could make it easier to design bacteria and viruses with enhanced pathogenicity or expanded host ranges. It could also make it easier to design pathogens with altered transmission routes, pathogens that are resistant to available countermeasures, or pathogens that can evade an immune response. Artificial intelligence could also be used to predict and design entirely novel pathogens, tailored, for example, to target mechanisms critical to the immune system or the microbiome. And AI could be used to predict and design new toxic compounds or toxic proteins such as ricin.

Another way in which AI could increase risks in the life sciences is by identifying the key genetic components of disease manifestation and enabling their manipulation. It could also provide insight into the susceptibility of a population, or of subpopulations, to particular diseases – potentially enabling more targeted biological weapons focused on specific genetic groups.

Large language models, or chatbots, pose yet another type of risk. The first biomedical chatbot, BioGPT, was released by Microsoft in January 2023. Trained on millions of biomedical research articles, it aims to support biologists, life scientists and clinicians in advanced research scenarios and could, for example, help to develop new drugs more quickly. By comparing millions of clinical cases, it could also help to identify the best medical treatment for an individual patient.
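
To illustrate how low the barrier to experimenting with such models is, the sketch below loads the publicly released BioGPT checkpoint through the Hugging Face transformers library and generates a text continuation. It is a minimal sketch assuming the microsoft/biogpt checkpoint and the transformers, torch and sacremoses packages; the prompt is an arbitrary example.

    # Minimal sketch: querying the public BioGPT model via Hugging Face
    # transformers (requires the transformers, torch and sacremoses packages).
    from transformers import pipeline, set_seed

    # Download and load the publicly released biomedical language model.
    generator = pipeline("text-generation", model="microsoft/biogpt")
    set_seed(42)  # make the sampled continuation reproducible

    # An arbitrary biomedical prompt; the model continues the sentence.
    outputs = generator("COVID-19 is transmitted", max_length=40, do_sample=True)
    print(outputs[0]["generated_text"])

A few lines like these are enough to run a research-grade biomedical language model on an ordinary computer, which is precisely why accessibility features so prominently in the risks discussed below.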

But these opportunities are also accompanied by biosecurity risks. Chatbots increase accessibility to existing knowledge and capabilities, and as such may lower the barriers to biological misuse. They can also identify specific avenues to biological misuse. They can generate ideas and help plan how to attain and modify pathogens, and they can help plan how to disseminate biological agents.

There are plenty of risks, but at the same time, there are also significant limitations to AI and machine learning in the life sciences. So, while AI and deep learning will significantly impact biology and life sciences, we are still at an early stage and need to better understand potential uses, and limitations, of AI in these fields.

Biological disinformation

Disinformation is a set of carefully constructed false messages leaked to an adversary’s communication system in order to deceive the decision-making elite, specific communities or publics. There are significant geostrategic motives for disinformation campaigns, and these have been around for a long time.

Disinformation is most influential when spread through traditional media or endorsed by groups and individuals with high levels of community respect. Depending on the country, this could be political or religious leaders, judges, members of the military or other trusted members of the target community. The digital age has meant that other routes to reach audiences have become increasingly accessible. Social media, amplified by bots and trolls, has enabled disinformation campaigns to spread throughout global audiences cheaply, remotely and in real time.

Deliberately fanning false narratives has several consequences. It:

  1. foments and exacerbates divisive political fissures;
  2. erodes trust between citizens and elected officials and their institutions;
  3. popularises foreign government policy agendas and narratives;
  4. creates general distrust or confusion over information sources;
  5. undermines citizen confidence in democratic governance.