"The first principle is that you must not fool yourself—and you are the easiest person to fool." This quote from Richard Feynman's 1974 Caltech commencement address is a welcome earworm repeating in my head whenever I'm thinking about probabilities and decision-making.
At the time of this writing, at least as it pertains to science, most of us are only thinking about SARS-CoV-2 and COVID-19. What policies will do more good than harm to individuals and society in the short- and long-term? You are only fooling yourself if you don’t think there’s a high degree of uncertainty about the best path forward.
In these times, alternative and opposing opinions on the problems and solutions surrounding the pandemic need to be heard, not silenced. Yet, popular platforms may not be allowing this to happen. YouTube’s CEO, for example, at one point reportedly said, “anything that goes against WHO recommendations” on the COVID-19 pandemic would be “a violation of our policy,” and those videos would be removed. Twitter also updated its policy, broadening its definition of harmful content to include, “content that goes directly against guidance from authoritative sources of global and local public health information.” Facebook updated its site, “Removing COVID-19 content and accounts from recommendations, unless posted by a credible health organization.”
I completely understand these positions and, on balance, they probably do more good than harm. However, they may come at a cost down the line (or even right now). If history is any guide, censoring opinions that contradict institutional authorities or the conventional wisdom often doesn't end well. Then again, as the German philosopher Hegel put it, the only thing we learn from history is that we learn nothing from history. While COVID-19 presents us with a particularly thorny case of decision-making under scientific uncertainty, this issue is perennial in science. (If you want to read a good article arguing for debate and alternative viewpoints specific to the case of COVID-19, check out this one co-authored by Vinay Prasad and Jeffrey Flier in STAT, the former of whom I am scheduled to interview in the coming months.)
We are our own worst enemies when it comes to identifying any shortcomings in our hypotheses. We are victims of confirmation bias, groupthink, anchoring, and a slew of other cognitive biases. The worst part is that we are often unaware of our biases, which is why we’re the easiest people to fool. As painful as it seems, considering problems and solutions from a perspective that contradicts our own is one of the best ways to enhance our decision-making. But thinking this way, deliberately and methodically, is a practice, and though it’s really hard, it is necessary in order to sharpen our cognitive swords.
In the early 19th century, the Prussian army adopted war games to train its officers. One group of officers developed a battle plan, and another group assumed the role of the opposition, trying to thwart it. The officers played on a tabletop game called Kriegsspiel (literally "war game" in German), resembling the popular board game Risk. Blue game pieces stood in for the home team—the Prussian army—since most Prussian soldiers wore blue uniforms. Red blocks represented the enemy forces—the red team—and the name has stuck ever since.
Today, red teaming refers to a person or team that helps an organization—the blue team—improve, by taking an adversarial or alternative point of view. In the military, it’s war-gaming and real-life simulations, with the red team as the opposition forces. In computer security, the red team assumes the role of hackers, trying to penetrate the blue team’s digital infrastructure. In intelligence, red teams test the validity of an organization’s approach by considering the possibility of alternative hypotheses and performing alternative analyses. A good red team exposes ways in which we may be fooling ourselves.
"In science we need to form parties, as it were, for and against any theory that is being subjected to serious scrutiny," wrote the philosopher of science Karl Popper in 1972. "For we need to have a rational scientific discussion, and discussion does not always lead to a clear-cut resolution." Seeking evidence that contradicts our opinion is a sine qua non in science.
Popper pointed out that a scientist's theory is an attempted solution in which she has invested great hopes. A scientist is often biased in favor of her theory. If she's a genuine scientist, however, it's her duty to try to falsify her theory. Yet she will inevitably defend it against falsification. It's human nature. Popper actually found this defensiveness desirable: it helps distinguish genuine falsifications from illusory ones. A good blue team keeps the red team honest.
Generally, the more we can introduce and consider opposing views in our thinking, the more we can rely on the knowledge we're trying to build. But—and this is a very, very big BUT—not all opposing views are equal. It is crucial to recognize the difference between scientific claims (worthy of debate, though often still proven incorrect over time) and pseudoscientific claims (not worthy of debate, as the very foundations on which they sit are not pulled from the discipline of science or the scientific method); a failure to do so makes the following exercise futile. At no time has this distinction between "good" science worthy of debate and "junk" science worthy of the skip/delete button been simultaneously more important, and more difficult, to appreciate than it is today, when the barrier to propagating both signal (e.g., good science) and noise (e.g., bad science, or pseudoscience) is essentially non-existent. How do we differentiate between reasonable and baseless views? This is trickier than it seems, because if we're not vigilant, we may simply dismiss opposing views as quackery just because they happen to contradict our opinion.
In his 2016 Caltech commencement address, Atul Gawande highlighted five hallmark traits of pseudoscientists: (1) conspiracy, (2) cherry-picking the data, (3) producing fake experts, (4) moving the goalposts, and (5) deploying false analogies and other logical fallacies. "When you see several or all of these tactics deployed," said Gawande, "you know that you're not dealing with a scientific claim anymore." Learning how to dismiss some ideas while embracing others is a topic that deserves far more ink than is spilled here, but now, more than ever, we're awash in ideas and opinions that shouldn't be taken seriously.
Examples of this are legion, and I would suggest you pick one and go through the exercise. Let's consider here the claim that the Apollo moon landings were hoaxes, staged by the U.S. government and NASA. How many of the five boxes above get checked in an effort to explain this claim?
- Conspiracy: The moon landings were faked because of the space race with the Soviet Union, NASA funding and prestige, and/or distracting public attention away from the Vietnam War.
- Cherry-picking: Any appearance of a potential photographic or film oddity is evidence of a hoax, while rebuttals can be ignored or dismissed since they're obscuring the "truth." (Check out this slideshow of several iconic "hoax photos.")
- Fake experts: Amateurs examining pictures, seeking, and finding, anomalies whose interpretation demands comprehensive knowledge of photography and lunar terrain (which they lack).
- Moving the goalposts: NASA should be able to provide pictures of the Apollo landing sites to confirm the event, but when such pictures surface, NASA must've faked them.
- Logical fallacies: If any of the footage (or any other evidence of the moon landings) appears faulty, it must be fabricated—no other possibilities exist.
Alternatively, the tens of thousands of individuals who worked on, or were involved with, the Apollo program did not all conspire to fake six crewed moon landings between 1969 and 1972. The supposed oddities of photographs and film can be logically explained, and the totality of the evidence is consistent with genuine moon landings. These explanations, and this evidence, come from people skilled and knowledgeable in their respective fields (i.e., experts). No moving of goalposts or logical fallacies required. (I wrote about my fascination with conspiracy theorists and the moon landing in a previous email.)
What are some things you can do to incorporate red teaming into your mental models? Deliberately assigning people to a red team, or even red-teaming your own opinion, is a way of "gamifying" adversarial ideas that otherwise may seem too intellectually painful to confront. Getting into the habit of performing a premortem on your ideas—envisioning what can go wrong before the start—is another effective way to test them. Reading literature and consuming media related to good (and bad) critical thinking and reasoning helps. (I included a list of some of my favorite reading materials in this post.) Participating in a journal club can help you consider views that differ from your own, and see things in a new light. (We do this internally and are considering filming our sessions on Zoom for our podcast subscribers.)
Red teaming is a mindset to maintain tonically, rather than an obscure tactic to pull out for special occasions. Charlie Munger, Warren Buffett's right-hand man, encapsulated this mental model in his 2007 USC Law School commencement address: "I'm not entitled to have an opinion on [a] subject unless I can state the arguments against my position better than the people do who are supporting it."