Somewhere a brilliant but troubled biotech engineer is doing CRISPR in his garage. He has all he needs: a computer, a fridge, a centrifuge, some animal cages, and an assortment of microorganisms in tubes, which he has labeled and stored until he’s ready. Today he will use a gene-editing technique to make a deadly, fast-spreading bacterium. Oh, and he plans to unleash it upon the world tomorrow. He just needs to add a few finishing touches.
Why is he doing this?
Maybe he’s gone mad. Maybe he’s lonely and wants to get revenge on the world. Maybe he read Ted Kaczynski’s manifesto and thinks humans are a plague. In some sense, it doesn’t matter. Out of a thousand brilliant gene researchers, he is the one who has broken bad. And nobody really knows what he’s working on in that garage. He is as invisible to his neighbors as he is to the girls he likes.
What on earth are we going to do about this young man?
How do we stop people like him from unleashing mass death?
And if we are going to stop him, who is the “we”?
Today, more and more people have access to technological means to wreak havoc on the world. As more people have access to exponential technologies, some subset of them could be out there in the dark working on the next existential threat.
So what are we going to do?
For most people, the answer is linear, even logical: regulation. It’s plausible enough. Certain kinds of activities are riskier than others, so ordinary people are going to have to trust and empower authorities to provide regulatory oversight. Sounds simple. Advocates of this kind of regulation are not arguing that risky research should be banned. As we stipulated in our own scenario, 999 out of 1,000 researchers are not monsters at all but are up to good things. Some of their work will yield welcome medical breakthroughs.
So maybe some people should be allowed to engage in activities that create existential risks, but those activities should be tightly controlled by regulators in licensed, transparent environments. And, of course, government ought to supply that regulatory oversight; or so goes the rationale.
A handful of people have begun to study existential threats like the ones described above. One such individual is philosopher Nick Bostrom, who, in the policy summary of his paper “The Vulnerable World Hypothesis,” writes:
“In order for civilization to have a general capacity to deal with ‘black ball’ inventions of this type, it would need a system of ubiquitous real-time worldwide surveillance. In some scenarios, such a system would need to be in place before the technology is invented.”
Once a unipolar surveillance regime is in place, Bostrom argues, dangerous materials that could go toward the development of existential threats would have to be supplied by a “small number of closely monitored providers.”
Regulating the Regulators
In a separate article titled “Fawning Over Fauci,” I suggested the media should more thoroughly investigate a situation not very different from the one I imagined in the opening vignette. The major difference is that in this real-world scenario there wasn’t some kid in a garage. There were government-sanctioned scientists in a research center — the Wuhan Institute of Virology — who used largesse dispensed by our own government.
Indeed, one of the best ways to provide oversight in various research endeavors is to control the funding sources for such research. I have suggested that it is plausible that the infectious diseases branch of the National Institutes of Health (NIH/NIAID), run by none other than Anthony Fauci, was responsible for funding research into zoonotic viruses of the sort that includes Covid-19.
In other words, without Fauci and his agency’s regulatory failure, there might have been no pandemic.
Let’s assume that Anthony Fauci and the functionaries at the NIAID presided over the funding of dangerous research, which was to be tightly controlled and regulated (if not outright banned). Let’s stipulate that such research did lead to a pandemic that has already killed millions of people. And as the virus mutates, it evades not only vaccines, but all manner of bureaucratic mandates. It could soon be endemic.
In this scenario, though, all of the criteria for reasonable regulation ought to have been satisfied. Yet we still got mass death. In other words, there was neither a mad scientist nor a monstrous incel, at least not as far as we know. It could have been as simple as bureaucratic incompetence combined with negligence at one of the labs serving at the NIH’s behest.
For now, I’ll leave aside questions about whether or to what extent the Chinese government knew about the research and could have co-opted it for nefarious purposes. Despite the Communist Party’s sorry track record, the most likely explanation is that this was a terrible accident. We simply can’t say. Nor are we ever likely to find anything but lies coming out of Beijing (or Washington for that matter).
But one thing is clear: there is currently no way to regulate the regulators. We have no choice but to live with them, and they are entirely unaccountable. They alone hold the power to take such enormous risks, presumably in the name of science.
The Problem of Power
When it comes to the idea of government, most people suffer from both a great blind spot and a failure of imagination.
The blind spot is a refusal to believe the state is itself the greatest of all existential threats to humanity. Whether in Hollywood’s depiction of corporate baddies or general concerns about gigantism, most people can’t or won’t appreciate the fact that nation-states hold all the records for mass killing. Compare individuals and corporations to that record. It ain’t even close. Yet most people want desperately to believe the state’s job is to protect us. Unicorn governance. Again, the state is the greatest source of violence in human history.
The failure of imagination lies in a widespread inability to see how it might be possible for humanity to mitigate existential threats without the linear model of state control. Whether we’re talking about “reasonable regulation” or “turnkey totalitarianism,” the linear model originates in Hobbes’s Leviathan rationale, which holds most people in its thrall. Simply put, the Leviathan rationale prompts us to entrust a powerful monopoly to protect us and work in our interests.
But then, somehow, we have to oblige that powerful monopoly to stay in its place. The problem is, it rarely does. As Edmund Burke wrote:
In vain you tell me that [government] is good, but that I fall out only with the Abuse. The Thing! The Thing itself is the abuse! Observe, my Lord, I pray you, that grand Error upon which all artificial legislative Power is founded. It was observed, that Men had ungovernable Passions, which made it necessary to guard against the Violence they might offer to each other. They appointed Governors over them for this Reason; but a worse and more perplexing Difficulty arises, how to be defended against the Governors?
Checks and balances last for a while. But as soon as they fail, the proxies of that powerful monopoly seize yet more power. Any remaining checks and balances are crushed under Leviathan’s weight, well, unless Leviathan can no longer swim in an ocean of red ink. By then, it might be too late.
The Nihilism of the Vulnerable World
Thinkers such as Nick Bostrom aren’t wrong about the world’s vulnerability to exponential technologies in the hands of bad actors. What they too often forget is that politics selects for arrogance and sociopathy. Politicians and technocrats are no angels, despite how badly we might wish them to be. Even if we find the occasional wise leader to hold the ring, the ring invariably gets passed along. There is always a sociopath waiting. And that’s why the upshot of turnkey totalitarianism is deeply problematic, even though there are evil geniuses among the citizenry. Acknowledging all this threatens to leave us mired in nihilism. After all, wasn’t it very likely a small group of government technocrats and regulators who unleashed the Covid-19 pandemic?
My friend and mentor, entrepreneur Chris Rufer, reminds us that the best defense against violence isn’t a panopticon or a global superstate.
“The best defense against violence is to minimize the number of people in the world who are willing to use it,” Rufer said. And I think he’s right.
I suspect it can’t hurt to have more people of basic morality checking up on each other, too. I admit, though, that preemptive morality can only reduce the number of black balls in the existential-threat bucket. But that’s something. So we must start to think of morality not as a set of abstract rules but rather as an active, continuous practice to be set alight in everyone.
And we must practice morality even as we admit to ourselves that the risks of our extinction will never be zero.