Normalization of Censorship: Evidence From China

Conventional wisdom holds that censorship on the Internet is a somewhat futile endeavor. As John Gilmore put it, "The Net interprets censorship as damage and routes around it." More concretely, we have the Streisand Effect, where Barbra Streisand attempted to remove photos of her house from the Internet and, in the process, only drew more attention to them. Because of this, most political scientists hold that authoritarian regimes censor as little as possible, limiting their censorship to information directly threatening to the government and information encouraging collective action, which could turn into anti-government protests. Censoring more than that is thought to lead to backlash: ordinary Internet users would encounter censorship in their everyday usage, and seeing the heavy hand of the state on a day-to-day basis encourages rebellion.

Yet anecdotal evidence from China indicates otherwise. Casual browsing of Chinese social media suggests that the Chinese government censors far more than the minimum necessary to suppress anti-government and collective action information. Is this perception actually true? And, if so, how does the Chinese government get away with censoring so much without engendering a backlash? These are the questions the author of "Normalization of Censorship: Evidence From China" (my outline) set out to answer.

First, he sought a quantitative answer to the question of whether China censors more than absolutely necessary. Conventional wisdom holds that the Internet behind the Great Firewall is relatively freewheeling, so long as one stays away from criticizing the government or encouraging collective protests. Turning to the WeChatScope database of censored articles, the author demonstrates that this isn't the case. Using a supervised machine learning process, he classified the articles in the database into political and non-political categories. To avoid undercounting political stories, he applied the categories sequentially, with the political categories going first. Furthermore, he explicitly excluded anything having to do with the government or politics from the classification rubrics for the non-political categories. Finally, because the WeChatScope database covers censored articles posted to the newsfeeds of politically focused accounts, the author felt that, if anything, the dataset would be biased in the opposite direction: towards showing that the central government was mostly censoring politics and protest.
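
To make that sequential step concrete, here is a minimal sketch of the labeling logic in Python. The category names and the predict interface are stand-ins of my own invention, not the author's actual trained models:

    POLITICAL = ["politics", "collective_action"]             # checked first
    NONPOLITICAL = ["entertainment", "business", "lifestyle"]

    def classify(article, predict):
        # Trying political categories first means an ambiguous article is
        # counted as political, which avoids undercounting political stories.
        # `predict(article, category)` is a hypothetical stand-in for the
        # paper's supervised classifiers.
        for category in POLITICAL + NONPOLITICAL:
            if predict(article, category):
                return category
        return "uncategorized"

    # Toy usage with a keyword check standing in for the trained models:
    predict = lambda article, cat: cat == "politics" and "protest" in article
    print(classify("coverage of a local protest", predict))  # -> "politics"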

As he expected, the author found that the Chinese government was censoring far more than just politics and protest. Of the 15,872 censored articles that WeChatScope collected from March 2018 to May 2020, only about 40% were related to politics and collective action. The remaining 60% were "harmless" articles, mostly having to do with entertainment or business. Even when he extended the definition of "politics" to include business and the economy, he found that approximately 43% of articles were non-political, still a substantial minority. The idea that you can get away with saying whatever you want on the Chinese Internet, so long as you stay away from criticizing the government, appears to be a myth.
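
For a sense of scale, the reported shares translate into roughly the following counts. This is my own back-of-the-envelope arithmetic from the rounded percentages, not figures quoted from the paper:

    total = 15_872
    political = round(total * 0.40)           # ~6,349 politics and collective action
    harmless = round(total * 0.60)            # ~9,523 entertainment, business, etc.
    broad_nonpolitical = round(total * 0.43)  # ~6,825 even counting business/economy as political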

How can the Chinese government get away with censoring so much without a backlash from its citizens? The author's hypothesis is that by censoring broadly, the Chinese government desensitizes people to seeing stories censored. Furthermore, by censoring a bit of everything, the government dilutes the signal that censorship normally sends: that a particular piece of information is threatening to the government. Another mechanism for normalizing censorship is allowing discussions about censorship itself. For example, fans of Xiao Zhan, a Chinese pop star, often call for their opponents to be censored in arguments on social media. These calls for censorship and other open discussions of what has and hasn't been censored are allowed, which is hypothesized to turn censorship into a routine matter of government policy, rather than an extraordinary step taken to suppress threatening information.

To test this hypothesis, the author conducted a survey. He recruited 612 participants in China from an online platform similar to Amazon's Mechanical Turk and split them into a control group and a test group. Both groups were presented with a sequence of 10 story previews from the WeChatScope dataset, with only some marked as censored. This emulates how WeChat censors stories on its platform: the article headline stays up, but when users click through to read the article, they are presented with a notification indicating that the story has been taken down. The control group only saw censorship marks on stories classified as having to do with politics or protest. The test group saw, in addition to the censorship marks on the political stories, censorship marks on several non-political stories as well. Both groups were then asked about their opinions of the government and whether they thought government control over the Internet was normal. To try to get around social desirability effects, respondents were asked about the performance of the local and central governments separately.
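
As I read the design, the only difference between conditions is which previews carry a censorship mark. A minimal sketch, with hypothetical story fields standing in for the actual WeChatScope stimuli:

    def build_feed(stories, treatment):
        # Returns (title, shows_censorship_mark) pairs for one respondent.
        # `political_mark` and `extra_mark` are hypothetical flags: the first
        # marks the political/protest stories shown as censored to both
        # groups, the second marks the non-political stories shown as
        # censored only to the test (treatment) group.
        feed = []
        for story in stories:
            mark = story["political_mark"] or (treatment and story["extra_mark"])
            feed.append((story["title"], mark))
        return feed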

The author found that the test group, which saw both political and apolitical articles censored, reported higher satisfaction with the performance of the government (both central and local) and greater acceptance of Internet censorship. He found this notable, given the relatively high baseline level of support for the government. He concludes that censoring non-political stories in addition to political stories may make citizens more accepting of censorship, instead of engendering a backlash.

So what do I think? First, I should note that the author himself points out several limitations of the study. He notes that it's unclear whether there's a minimum or maximum limit to the normalization effect. Does normalization kick in as soon as the state censors anything beyond political stories? On the flip side, is there a level of censorship that will engender backlash even when it's applied to all content categories equally? Are there certain categories of non-political content, such as pop culture or pornography, whose censorship can more efficiently lead to normalization? Finally, the way the study presented censorship wasn't exactly the way it works on WeChat. Censored WeChat stories don't have a label on them indicating they've been censored; instead, the user has to click on the story in order to see that it has been taken down. Perhaps seeing censorship labels on stories that they found uninteresting made respondents more approving of censorship.

To these concerns, I add several of my own. My biggest concern is with the survey design. The survey makes heavy use of priming. The hypothesis is that survey respondents primed with examples of censored political and apolitical articles will be more approving of the government than respondents primed with only examples of censored political content. However, as we've already seen, priming studies very often don't replicate. This study has much in common with some of the most notorious examples of priming replication failures, such as the study that found that undergraduates primed with words that made them think of elderly people walked more slowly.

Second, it is well known that there are issues with using online labor platforms, such as Mechanical Turk, to gather survey data. Participants on these platforms are often motivated to fill out as many surveys as they can, as quickly as they can, in order to maximize their earnings. The author argues that, because political surveys are rare in China, there aren't many professional political survey takers on these platforms, and thus the respondents are representative of Chinese Internet users as a whole. I don't find this counterargument convincing. Even if these participants aren't experts at filling out political surveys, they may still have significant experience with surveys in general, and racing through questionnaires as fast as possible may bias their responses.

Finally, even if the author's results hold, it remains to be seen how applicable they are to the world as a whole. The problem of WEIRD psychology is a well known one: findings from Western, Educated, Industrialized, Rich, and Democratic populations often fail to generalize to everyone else. Might a similar generalizability problem apply to findings from China? With regards to censorship in particular, China has never known an uncensored Internet. How applicable are findings from a population that has always known censorship to a hypothetical authoritarian government trying to lock down previously uncensored Internet access?

I found this article frustrating to read. The hypothesis makes a lot of intuitive sense. Desensitization is a well-understood psychological phenomenon, and it stands to reason that when people encounter censorship in a neutral, non-threatening setting, they become inured to it. However, the study design, with its heavy use of priming and lack of realism, does far less to support the hypothesis than the study advertises. Even if the findings are true, it's unclear whether they're broadly applicable or whether they're a China-specific phenomenon that the author has stumbled upon. Still, the fact of widespread Chinese censorship does appear to be well substantiated, and that was my main takeaway from the article. Contrary to prior popular belief, and in line with current anecdotal evidence, China censors far more than the minimum necessary to suppress protests and criticism of the government. Whether this widespread censorship causes acceptance of censorship is, unfortunately, a question that remains to be answered.