{"id":222710,"date":"2020-06-17T18:07:39","date_gmt":"2020-06-18T01:07:39","guid":{"rendered":"https:\/\/chinadigitaltimes.net\/?p=222710"},"modified":"2020-06-19T15:34:24","modified_gmt":"2020-06-19T22:34:24","slug":"social-media-propaganda-campaigns-show-mixed-results","status":"publish","type":"post","link":"https:\/\/chinadigitaltimes.net\/2020\/06\/social-media-propaganda-campaigns-show-mixed-results\/","title":{"rendered":"Social Media Propaganda Campaigns Show Mixed Results"},"content":{"rendered":"

Since the COVID-19 pandemic erupted in Wuhan in January before spreading around the world, the Chinese government has increasingly used foreign social media networks, including several that are blocked in China, to spread propaganda and disinformation about the origins of the virus and its own flawed response to it. Recently, the European Union named China as a source of disinformation around COVID-19. Chinese authorities have used a variety of tactics online to spread their message, from mass Twitter accounts designed to amplify official accounts, to the personal accounts of "wolf warrior" government officials, to state media Facebook pages. While more prevalent than in years past, such messaging has so far gained little traction among global social media users.

Twitter recently shut down more than 170,000 accounts which it said were linked to the Chinese government. The Australian Strategic Policy Institute (ASPI) obtained the takedown dataset from Twitter and wrote a 62-page report analyzing the behavior of those accounts. While much of the discussion of Chinese government disinformation and propaganda campaigns focuses on the coronavirus, ASPI found that the topics most discussed by the removed accounts were the Hong Kong protests and exiled tycoon Guo Wengui, followed by COVID-19 and Taiwan. From the report's synopsis (read the full report here):

This activity largely targeted Chinese-speaking audiences outside of the Chinese mainland (where Twitter is blocked) with the intention of influencing perceptions on key issues, including the Hong Kong protests, exiled Chinese billionaire Guo Wengui and, to a lesser extent Covid-19 and Taiwan.

[…] Our analysis includes a dataset of 23,750 Twitter accounts and 348,608 tweets that occurred from January 2018 to 17 April 2020 (Figure 1). Twitter has attributed this dataset to Chinese state-linked actors and has recently taken the accounts contained within it offline.

[…] Based on the data in the takedown dataset, while these efforts are sufficiently technically sophisticated to persist, they currently lack the linguistic and cultural refinement to drive engagement on Twitter through high-follower networks, and thus far have had relatively low impact on the platform. The operation's targeting of higher value aged accounts as vehicles for amplifying reach, potentially through the influence-for-hire marketplace, is likely to have been a strategy to obfuscate the campaign's state-sponsorship. This suggests that the operators lacked the confidence, capability and credibility to develop high-value personas on the platform. This mode of operation highlights the emerging nexus between state-linked propaganda and the internet's public relations shadow economy, which offers state actors opportunities for outsourcing their disinformation propagation. [Source]

In Forbes, Davey Winder wrote about the action by Twitter:

The fake news network attributed by Twitter to China is, the disclosure said, a new one. It consisted of two interlinked parts: a highly engaged core network of 23,750 accounts and another 150,000 "amplifier" accounts to boost the reach of the disinformation being published. Twitter said that the core network was "largely caught early and failed to achieve considerable traction on the service." Tweeting mostly in a variety of Chinese languages, the fake news network was engaged in a campaign to spread a geopolitical narrative that was favorable to the Communist Party of China.

The kind of fake news narratives among nearly 350,000 analyzed tweets focused on the Hong Kong political situation, but China's handling of the COVID-19 pandemic also featured in the disinformation campaign. [Source]

On Facebook, the Chinese government has taken a more direct approach, posting from state media accounts ("white propaganda") rather than through a shadow network of supporting accounts as on Twitter. For the Harvard Kennedy School's Misinformation Review, Vanessa Molter and Renée DiResta write about Facebook pages set up to spread the Chinese government's message, which together give Beijing an audience of at least 100 million followers. They write that since January, over 33 percent of the content on these pages has been related to COVID-19. They also found three recurring behaviors on these pages: "focusing a significant share of coverage on positive stories, adjusting narratives retroactively, and using ads to spread messaging."

While much of the study of state-sponsored online influence has focused on bots and subversive accounts, this essay focuses instead on the white propaganda capability of the People's Republic of China on social media, and examines how it has been leveraged in an information conflict around the 2020 novel coronavirus pandemic. Understanding how overt online propaganda properties are developed and leveraged to shape international public opinion provides us with a more complete grasp of the narrative manipulation capabilities available to well-resourced state actors, and suggests potential gaps in tech platform misinformation policies.

[…] China has extensive and well-resourced outwardly-focused state media capabilities (Brady, 2015), which it employs for its public diplomacy strategy (Chang and Lin, 2014). These channels, such as the CCP's properties on Facebook (which is banned in China), relay the government's messaging to other countries' governments and citizens. Since 2003, building and buying media properties has been part of the CCP's explicit effort to ensure that it has the capacity to "nudge" foreign governments and other entities into policies or stances favorable to the party. In periods of unrest or crisis, these properties are put to use to propagate state messaging (Shambaugh, 2017).

Understanding the ways in which online propaganda shapes public opinion – particularly given the rising prevalence of social networks as sources of news, and the capabilities that social media offers for targeting, repetition, and audience-building – is critical to understanding how influence and manipulation play out in modern politics (Woolley & Howard, 2017). It is, however, a challenging undertaking because of the difficulty of isolating any particular account or post as the precipitating factor in shaping an opinion. A debate persists on the impact of online disinformation and misinformation even in the literature on the most widely-studied operations, such as those carried out by Russia's Internet Research Agency (IRA). [Source]

Molter highlighted the findings in a Twitter thread:

2/ Chinese state media emphasize happy stories – recovered patients, new hospitals built, and lots of #EverydayHeroes.

Our manually coded subset of Facebook posts by Chinese state media and U.S. media shows a significant slant towards positive stories in the former: pic.twitter.com/uPY4uEGGeZ

— Vanessa Molter (@vanessa_molter) June 11, 2020