This is Part 2 of 3 of some notes about communication and climate change. Part 1, “Speaking Pidgin”, is here.
Sometime in 2014, Ebola became politicized.
The political right was calling for travel bans and strict quarantines to prevent the spread of the virus in the United States, a virus that scientists insisted posed no serious threat to developed countries. The political left countered that this was all a racist overreaction to the actual threat the disease posed, and that we shouldn’t infringe upon the civil liberties of citizens. In a post in 2014, blogger Scott Alexander wrote:
How did this happen? How did both major political tribes decide, within a month of the virus becoming widely known in the States, not only exactly what their position should be but what insults they should call the other tribe for not agreeing with their position? There are a lot of complicated and well-funded programs in West Africa to disseminate information about the symptoms of Ebola in West Africa, and all I can think of right now is that if the Africans could disseminate useful medical information half as quickly as Americans seem to have disseminated tribal-affiliation-related information, the epidemic would be over tomorrow.
Fast forward to the spring of 2020: after the dust settled from the initial confusion of the pandemic, a different pattern emerged regarding the public’s perception and alignment around COVID-19.
The left was pro-lockdown, pro-mask, and pro-vaccine, with the right taking the opposite stances. Fascinatingly, these stances are the opposite of how the political tribes aligned on Ebola in 2014. How did this happen?
Alexander offered three theories of how the issue became politicized in 2014:
It’s random. Some folks in a given political tribe begin to support one position, so their allies feel the need to support them and their opponents feel the need to oppose them, and everyone falls in line in short order.
“Fear of disease is the root of all conservatism.” Evolutionary biologists, as well as proponents of the Moral Foundations theory discussed in the last post, have found a lot of evidence linking conservatism to regional germ loads and to sensitivity to disgust, respectively. E.g., the most liberal countries in the world are in Northern Europe and the most liberal states in the U.S. are in the North, where germ loads are lower, while conservative and authoritarian politics tend to predominate in Africa and the Southern United States, where germ loads are higher.
“Everything in politics is mutually reinforcing.” There exist political Grand Narratives that inform the world view of each political tribe. Policy stances, then, correlate with these Grand Narratives, either because the factors that push people toward the narrative also push them to accept a given position or because they adopt a given position in order to bolster their overall narrative.
We can clearly rule out a couple of Alexander’s explanations; see this footnote1 for the full debrief. In short: the “fear of disease” Moral Foundations argument fails because the right, far from being germ-averse, repeatedly downplayed the seriousness of the sickness; the Grand Narrative argument fails because the narrative’s emphasis simply shifted to fit the new stance. That leaves the randomness argument, which very well might just be the correct answer, though its causality is hard to isolate.
Whatever the origins of these alignments, the pandemic stances have become so entrenched on both sides that these cultural memes are completely out of control of those in power. We have a surreal video of Republican Senator Lindsey Graham getting booed by supporters for saying “if you haven’t had the vaccine, you ought to think about getting it because if you’re my age… [interrupted by boos] I didn’t tell you to get it! You ought to think about it!”
Meanwhile, on the left, an article from The Atlantic observed in May 2021: “Even as scientific knowledge of COVID-19 has increased, some progressives have continued to embrace policies and behaviors that aren’t supported by evidence, such as banning access to playgrounds, closing beaches, and refusing to reopen schools for in-person learning.” Earlier, the author noted, “Some conservatives refused to wear masks or stay home, because of skepticism about the severity of the disease or a refusal to give up their freedoms. But this is a different story, about progressives who stressed the scientific evidence, and then veered away from it.”
Why did liberals, who love to say that they are “pro-science”, abandon the scientific consensus in favor of policies that are instead “an expression of political identity”?
What happened to the left’s apparently good epistemology, and why has the right lacked good epistemology throughout the whole pandemic?
I see the source of this bad epistemology as the politicization of science. Politicization of science, as defined in a paper by Bolsen and Druckman (2015), occurs when “actors emphasize the inherent uncertainty of science to cast doubt about the existence of scientific consensus.” Politicization creates a bias toward maintaining the status quo by increasing anxiety about the future and undermining consensus.
Politicization of science is doubly pernicious because the promotion of false uncertainty not only creates inaccurate beliefs about the topic at hand, but also degrades overall trust in the scientific method and how people determine what is true or what kind of evidence is credible across the board—it creates a global worsening of epistemology. You can see the mechanism in action in this article about “anti-maskers”:
Most people I talked to noted government officials’ confusing messaging on masks in the pandemic’s early days. They insist that they’re not conspiracy theorists and that they don’t believe the coronavirus is a hoax, but many also expressed doubts about the growing body of scientific knowledge around the virus, opting for cherry-picked and unverified sources of information found on social media rather than traditional news sources. They often said they weren’t political but acknowledged they leaned right.
A related mechanism to politicization is motivated reasoning: people view evidence as stronger when it favors their prior beliefs and discount evidence that opposes them. As framed by Julia Galef in The Scout Mindset: when you see a given piece of evidence, if it confirms your existing position you ask “can I believe this?”, while if it contradicts your existing position you ask “must I believe this?” Ideally, people would instead take the outside view and ask: “is it true?”
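Galef’s two thresholds can be made concrete with a toy sketch (my own illustration, not anything from Galef or the papers cited here): model belief as log-odds, and let a motivated reasoner down-weight any evidence that opposes their current position while accepting confirming evidence at full strength. Fed the exact same evidence stream, a “scout” and a motivated reasoner can land on opposite sides:

```python
def update(log_odds, evidence, discount=1.0):
    """One update in log-odds space.

    Evidence that opposes the current belief (opposite sign) is
    down-weighted by `discount`; confirming evidence counts in full.
    A discount of 1.0 is the unbiased "is it true?" updater.
    """
    if evidence * log_odds < 0:  # disconfirming: "must I believe this?"
        evidence *= discount
    return log_odds + evidence

def final_belief(prior_log_odds, stream, discount):
    belief = prior_log_odds
    for e in stream:
        belief = update(belief, e, discount)
    return belief

# A deterministic evidence stream that, on net, favors the hypothesis:
# each (+0.5, -0.2) pair is worth +0.3 in log-odds.
stream = [0.5, -0.2] * 100

scout = final_belief(-1.0, stream, discount=1.0)
motivated = final_belief(-1.0, stream, discount=0.3)

print(scout)      # ends firmly positive: -1 + 100 * 0.3 ≈ 29
print(motivated)  # ends negative: each pair nets 0.5*0.3 - 0.2 = -0.05
```

Both agents saw identical evidence; the only difference is the asymmetric weighing of disconfirming evidence, and that alone flips the conclusion.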
Some folks on the left have been insisting on restrictions, mandates, and public policy that amount to hygiene theater, like closing playgrounds or extensively cleaning surfaces. Global sales of surface disinfectants surged by 30% to $4.5 billion in 2020, despite scientific understanding since mid-2020 that transmission was primarily airborne and that surfaces posed very low risk. I think this behavior can be partly explained by liberals wanting to continue signaling and therefore engaging in motivated reasoning. Liberals were in favor of the scientific consensus when it confirmed their signaling behavior, but once evidence came out suggesting it was time to hang up the disinfectant spray and hand sanitizer, they asked “must I believe this?” rather than “is this true?”
Finally, let’s talk about climate change.
According to the aforementioned paper by Bolsen and Druckman (2015), people are most likely to seek accuracy in their beliefs, rather than pursuing motivated reasoning, when the consequences of policy choices are more localized. In other words, when people think an issue will directly impact them, they are more likely to seek accuracy rather than confirmation of existing beliefs. If this view is correct, then as the realized and potential impacts of climate change increase in a given community, so should the general level of credence in climate change.
This theory only works, of course, if people attribute the harms they are experiencing to climate change in the first place. I think this caveat may be more important than one might think at first consideration. Consider the following two Yale Climate Opinion Maps:
“Global warming will harm me personally” versus:
“Global warming will harm people in the US.”
A majority of people in every state other than Wyoming, West Virginia, and North Dakota think that global warming will harm people in the US, while only in California, Hawaii, and D.C. do at least 50% of people think that they will personally be harmed.
These graphs do not paint a contradictory picture per se; the majority of the country that thinks people in the US will be harmed by global warming could all think that only Californians, Hawaiians, and Washingtonians will be victims, in which case this would be a relatively accurate aggregate assessment.
With that said, based on my understanding of the expected impacts of climate change, the above is not an accurate view. Folks along the entire eastern seaboard should expect worse tropical storms and hurricanes; folks throughout the entire West should expect worse droughts and fires.
I think these graphs paint a picture of the uphill battle that environmentalists face: most people believe climate change is going to be harmful, but not to them. Until the threat becomes localized, people will continue to fall prey to motivated reasoning, creating inaccurate maps of their world—just like the discordance of the maps above.
I am likely already experiencing harm from climate change. Since moving to Colorado, I get to experience the wrath of Western fire seasons first-hand. Fires appear to have been getting worse since the turn of the century, and symptoms of climate change like changing precipitation patterns, faster-melting snowpack, drought, and higher temperatures make fires more likely and severe. I bring up this personal anecdote because I was having a conversation with my dad recently that aligned with the above theory of localization.
Around the time when Denver had the worst air quality in the world, my dad remarked that, with the smoke reaching parts of the East Coast and my and my sister’s reports of how bad the fires were from CO and CA, it sure did seem like things were getting worse, and maybe this climate change thing had some real merit to it. Localization of threats in action; when it’s a fire in California, it’s the “others” at risk of harm, but once you start to smell the smoke yourself or your kids live there, the threat becomes tangible.
This post has gotten pulled in more directions than I originally intended, so let’s try and summarize what I’ve been rambling about. First, I discussed the political alignment surrounding the 2014 Ebola outbreak and then noted how the magnetic poles of political opinion flipped when it came to the 2020 pandemic.
I underscored how certain pandemic positions on the right and left illustrate failures in how we distinguish what is true, both related to either a distrust or a disregard of science. I blame both on motivated reasoning and the politicization of science. Extending these two concepts to the topic of climate change, it’s clear that the effort to get people to make accurate maps of reality is a difficult one. It’s really hard to get people to ask “is it true?”
In Part I, I hoped that we could create a better discourse, and more accurate reality maps, by reaching across the moral aisle and speaking in a pidgin to communicate with others using their moral language and vocabulary. I think Part II clearly illustrates that it isn’t that simple.
The challenge is also about how people distinguish fact from opinion—we’re in an epistemological breakdown. I’ve been trying to think of how to extend the language metaphor from Part I; if folks speaking different moral languages explains some of the poor communication and disagreement, what does it mean when two people look at the same piece of evidence and draw different conclusions? What does it mean when people are using different thresholds (“can I believe this?” vs “must I believe this?”) to update their beliefs?
The best I can come up with is that it is like two people from very different cultures meeting: even if they share a common language, it doesn’t mean that they will be able to understand each other when coming at a complicated issue because they will have different axioms informing their way of thinking. Speaking the same moral language, then, is necessary but not sufficient. You also need to be able to understand the cultural logic underlying the decisions people make.
In short: I’m not sure how to address the problems caused by the politicization of science. But overall, I am concerned. It seems like the only way to get the above opinion maps to be in accord is for the harms to become so localized that people are forced to seek accuracy in their worldview.
The fires will have to get even worse in Wyoming; the hurricanes will have to get stronger in Florida; Texas will have to endure more energy shocks; the entire West will have to suffer worse droughts. And at that point, when the majority of people believe and want to take action, it will be too late.
Thanks to Saul for a sanity check and Molly for copy editing.
The “fear of disease” argument seems to fail here: conservative media outlets, as well as the President himself, repeatedly downplayed the seriousness of the sickness, consistently comparing it to getting the flu, i.e., not a big deal. The argument that the protectionism of quarantines and lockdowns was more aligned with the right’s Grand Narrative also fails; the emphasis has simply changed. The parts of the Grand Narrative being highlighted went from “scary foreigners bringing a virus from overseas to infect god-fearing Americans” to “scary big government enforcing civil-rights restrictions and harming the economy over a pandemic engineered by scary foreigners.” That leaves the randomness argument… which very well might just be the correct answer.
Of course, there are notable differences between 2014 Ebola and 2020+ COVID-19. For one, the party in power changed from Democrat (2014) to Republican (2020) back to Democrat (2021). Perhaps the right answer is simple realpolitik: party elites promote the stances they think they can use to gain power, and rationalize the logic for those positions to the masses later. I think this is unlikely to be fully true, since that explanation would require propaganda effectiveness to make Rupert Murdoch’s mouth water, but it’s obviously true that political actors try to highlight events that help them and minimize those that hurt them.
Another difference is that the COVID-19 pandemic is far worse than 2014 Ebola: 14,000 deaths from Ebola in 2014 vs. 4.55 million from COVID-19 as of press time. The nature of these diseases and the scale of the threat probably warranted different responses that could still align with the respective Grand Narratives of each political tribe.
Finally, one could (correctly) argue the right’s Grand Narrative has changed post-Trump. Perhaps the cult of personality that formed around Trump, the post-Truth attitude of fake news and anti-science, and the conspiracy theory-riddled atmosphere of QAnon made these responses logically follow from some new Grand Narrative. This explanation is perhaps more likely, but the subjectivity of the “Grand Narrative” explanation makes it hard to test.
If localized harm is the primary driver of public opinion on expected climate impact, how do we reconcile the competing timelines of impact (harm) vs action? If the harms of climate change are lagged, which they certainly seem to be, is there a way to shift public opinion before someone's house is on fire or underwater?
To me, the timelines seem diametrically at odds. Opinion does not begin to change until we reach an extreme threshold of harm; action is then taken to address that harm, but by that point it is likely too late to see an immediate (or, more importantly, meaningful) change.
I do hope/think that there is a solution to this error in reasoning, which seems rooted in flawed human perception of future harm (re: the COVID pandemic). In preparation for a job interview, my brother passed an idea by me: use AR technology to simulate localized air quality under a worsening climate. Unfortunately, the politicization of science (which perhaps extends to a politicization of technology?) does not seem to help the cause here, but if people can begin to see the impacts of climate change as a personal, apolitical, and legitimate threat, maybe we’ve got a shot…