Developing a culture of security for synthetic biology

I was recently interviewed by Andrew Snyder-Beattie from the Future of Humanity Institute. I am putting edited excerpts from this interview below in the hope of starting a discussion.

Andrew: What should be done to address the risks of synthetic biology? What are the roadblocks preventing these things from being done?

Jean: I think a big part of the problem is that security in relation to synthetic biology has mostly been a public relations exercise. Everyone involved, the government and the practitioners, feels the need to show they are doing something in response to concerns expressed by the general public and the media. However, there is a huge gap in the perception of the risk. The general public is scared and overestimates the risk. Industry and practitioners are in denial because serious threats have never materialized. The government keeps an eye on it, mostly through education and outreach, but probably considers it a minor risk compared to other risks faced by the nation.

I think several mistakes have been made so far in the way this problem has been handled.

  • Too much focus has been placed on catastrophic consequences of synthetic biology, like the rogue scientist who might synthesize a bad bug in his garage and create a pandemic threatening a country or humankind. The problem with this scenario is that it is extremely unlikely to happen, so it is very difficult to get resources to address a virtually non-existent risk.
  • Too much focus has been placed on the gene synthesis industry in the US. These companies have been designated as the scapegoat and are bearing the entire responsibility, which puts them in a very uncomfortable position from both a marketing and a business perspective. This is short-sighted because gene synthesis can be done in any lab. A would-be terrorist would have to be pretty dumb to order a biological weapon from a gene synthesis company. However, many students in top academic institutions have access to the expertise and resources necessary to do gene synthesis in house, and it is not certain that their institutions would have the means to detect such illicit activities.
  • Too much emphasis has been put on gene synthesis itself. Gene synthesis is just one step in a workflow; looking at sequence data circulating on a computer network, or at data produced by sequencing instruments, would provide valuable information as well.

Today life science practitioners lack basic notions of security. If anything, I think there is a culture of denial of security issues. Instead of focusing on catastrophic scenarios, we should focus on making people understand the security issues they face in their daily work. When I get a plasmid from another scientist, do I trust that it is what it is supposed to be, or should I sequence it entirely? Am I willing to bet six months of work on the assumption that the sequence is correct, deciding to sequence it only after I fail to get anything out of it? If experiments don’t work, which assumptions am I going to challenge? My ability to do the work? The intent of the person who sent the plasmid? The ability of the sender’s lab to correctly identify the material it ships out?

Or when I sequence a sample, should I be curious about what else may be in it that I am not aware of? If I get NGS data from a human sample, should I try to identify the reads that don’t map to the human genome, for instance? Should I be worried if my sample includes viral sequences that I did not expect? When I get a bacterial culture, should I try to make sure that it is free of bacteriophage, which could shut down my fermentors for months?
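As a minimal illustration of that first question, unmapped reads are easy to pull out of standard alignment output: in the SAM format produced by common aligners, the flag field's 0x4 bit marks a read as unmapped. The sketch below is a toy, assuming plain SAM text as input; in practice one would use a dedicated tool such as samtools or a library like pysam on BAM files, but the idea is the same.

```python
# Toy sketch: extract reads that did not align to the reference
# (e.g. the human genome) from SAM-format alignment records.
# SAM columns: name, flag, rname, pos, mapq, cigar, rnext, pnext, tlen, seq, qual.
# The flag's 0x4 bit means "segment unmapped"; such reads are candidates
# for a closer look (contamination, unexpected viral sequences, ...).

def unmapped_reads(sam_lines):
    """Yield (read_name, sequence) for records whose 0x4 flag bit is set."""
    for line in sam_lines:
        if line.startswith("@"):          # skip header lines
            continue
        fields = line.rstrip("\n").split("\t")
        name, flag, seq = fields[0], int(fields[1]), fields[9]
        if flag & 0x4:                    # unmapped read
            yield name, seq

# Hypothetical example with two records: one mapped, one unmapped.
records = [
    "@HD\tVN:1.6",
    "read1\t0\tchr1\t100\t60\t4M\t*\t0\t0\tACGT\tFFFF",
    "read2\t4\t*\t0\t0\t*\t*\t0\t0\tTTGC\tFFFF",
]
print(list(unmapped_reads(records)))  # → [('read2', 'TTGC')]
```

The unmapped reads collected this way could then be screened against viral or bacterial sequence databases to answer the "what else is in my sample" question.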

I think we should be totally obsessed with making sure we know exactly what samples we have in our labs. And sequencing makes this increasingly possible.

If we can demonstrate that security results in increased productivity today, then we can make a business case for taking it seriously. And the solutions developed to handle the small problems would go a long way toward limiting the risk of a catastrophic biosecurity accident.

Andrew: In the long run, how do you hope we address the risks of synthetic biology?

Jean: We need to develop a culture of biosecurity in the life science community at large, comparable to the culture of computer security that has emerged over the last 10 years among the general population of computer users. Today, anyone who uses a computer has some basic understanding of the security issues associated with user authentication, computer viruses, or network vulnerabilities. Antivirus software, firewalls, and password management systems have become mass-market products. We want to achieve the same level of security awareness in the life science community at large. We should not focus exclusively on synthetic biology; we need to approach security more broadly than is currently the case. Biosecurity today is mostly focused on policies related to select agents, yet most biology labs don’t work with select agents, and we all have to deal with security issues.

What’s your perspective on the security of the life science research infrastructure? Do you think that synthetic biology raises specific issues? What security issues would you like to see addressed in a future blog post?
