Safeguard or Shackle? Public Views on Surveillance

Last week, The New York Times launched The Privacy Project, a major, months-long examination of privacy issues in the U.S. and around the world. Among the first wave of articles was an op-ed by author Jianan Qian on the Chinese public’s responses to surveillance technologies like the facial recognition systems operating in Xinjiang and beyond.

I came to the United States for a master’s degree in 2016, and I am not sure whether the experience of living overseas has increased my sensitivity about matters of privacy or not. I do remember waking up one morning in 2017 to a story that had gone viral in my WeChat feeds overnight: “A BBC reporter challenged China’s surveillance network — and it took only seven minutes to capture him!” I couldn’t tell which terrified me more: China’s all-encompassing network of cameras, or that my countrymen were proudly cheering them on.

[…] Many people in China seem to be happy about the physical security promised by the surveillance network. Our mind-set, long ago, was wired to see safety and freedom as an either-or choice. I remember a conversation I had during my first year in the United States, when I was visiting Chicago. A Chinese student told me he dared not walk alone in the city after 5 p.m. When I told him I’d recently taken a trip to a nearby 7-Eleven at 10 p.m. by myself to get a beer, he was shocked. “Why?” I asked him. “It’s a safe neighborhood.” Still pale, he said, “You only got lucky.”

[…] The other reason that my people seem not to worry about the violation of their privacy is that they believe they are law-abiding citizens. “Only criminals need to be afraid,” they say. But I’ve heard other stories.

In 2018, Wang Qian, a young single mother from Zhejiang Province, hanged herself after she lost her savings on a peer-to-peer lending platform called PPMiao. In her suicide note, which briefly appeared on social media, she confessed her frustrations: Because she’d been a victim of a financial fraud — and therefore had grievances — she was now considered a threat to public safety and order. [Source]

Huawei’s Hong-Eng Ko put the public safety argument more bluntly this week, arguing that “if privacy wins, criminals win.” The acceptance Qian describes is mirrored in widespread public support for China’s various emerging social credit systems as mechanisms of accountability for untrustworthy behavior, as found by Genia Kostka through surveys and Manya Koetse through analysis of social media discussion. Social credit scholar Shazeda Ahmed also touched on the issue in a recent examination of the ecosystem at Logic magazine, noting that “although a few Chinese consumer protection organizations have become attuned to the ways that tech firms’ use of customer data can lead to privacy abuses, they have not taken up the issue of blacklists because the practice is treated as socially acceptable in China. […] People who are wholly unaffected by blacklists may view them favorably, as proof that the government is proactively combating the laolai phenomenon [i.e. “people who have the means to repay debt they owe but choose not to”]. Yet there needs to be a critical analysis of the system that centers the perspectives of those who are most directly affected.”

Ahmed previously examined broader privacy issues in China in a ChinaFile piece with her MERICS colleague Bertram Lang, noting that public attitudes are more mixed when it comes to corporations rather than the government. Baidu CEO Robin Li’s claim that Chinese people are happy to trade privacy for convenience sparked angry responses last year, for example, and there have been persistent concerns about data collection by Chinese tech giants’ apps. In a recent review of lessons drawn from the first year of compiling his ChinAI Newsletter, Oxford University’s Jeffrey Ding also argued that the public mood regarding privacy and AI technologies such as facial recognition is more mixed than often believed:

Chinese people — including regular netizens, data protection officers, philosophy professors — care about AI-related ethics issues, including privacy. Let’s dispel once and for all with this fiction that there are no discussions of AI ethics happening in China. It is perfectly reasonable to highlight differences in Chinese notions of AI ethics or the degree to which privacy is important to Chinese consumers, but it is absolutely dehumanizing to say Chinese people don’t care about privacy.

Chinese tech giants have also clashed over user privacy violations, as when Tencent asked the Ministry of Industry and Information Technology to intervene in its dispute with Huawei over alleged user privacy infringements by the Honor Magic phone. After a yearlong investigation, China’s Shandong Province brought a major personal information infringement case against 57 individuals and 11 big data companies in July 2018, which revealed a debate over how to interpret a new national personal information protection specification. The Nandu Personal Information Protection Research Center has assessed 1,550 websites and apps for the transparency of their privacy policies.

Finally, Chinese thinkers are engaged on broader issues of AI ethics, including the risks of human-level machine intelligence and beyond. Zhao Tingyang, an influential philosopher at the Chinese Academy of Social Sciences, has written a long essay on near-term and long-term AI safety issues, including the prospect of superintelligence. Professor Zhihua Zhou, who leads an impressive lab at Nanjing University, argued in an article for the China Computer Federation that even if strong AI is possible, it is something that AI researchers should stay away from. [Source]

Recent reports on a more mundane system monitoring street cleaners in one Nanjing district demonstrated public unease with surveillance of ordinary citizens, rather than criminal suspects. From Masha Borak at Abacus:

“We’re on this road working, and it positions us. You’re here 20 minutes without moving, and it knows it.”

This is how a street cleaner in one Chinese city introduced a new piece of surveillance equipment that the local sanitation company slapped on her wrist to track her work. The smartwatch doesn’t just track her position. Staying 20 minutes in one place sounds an alarm: “Add oil!” the watch chants, using a popular Chinese phrase of encouragement.

[…] If workers fail to move after the alarm goes off, the team leaders can look up their GPS location on the screen and go out to find them, one worker said.

[…] In 2018, Amazon filed two patents for a wristband that tracks warehouse employees and monitors performance. While Amazon has described it as time-saving, those acquainted with the work conditions in its warehouses may see it as dystopian.

Judging from the online backlash to Nanjing’s new smartwatch initiative, many in China share a similar sentiment. The reactions have been overwhelmingly negative, with one Weibo commentator calling the smartwatches “the shackles of the working people.”

“You can monitor suspects, addicts, or production, but why ‘monitor’ hard-working sanitation staff?” another Weibo commentator asked. [Source]

SupChina’s Jiayun Feng collected several more examples of negative reactions. At What’s On Weibo, Gabi Verberg wrote that the watches’ alarms had been deactivated in response to online criticism:

On Weibo, the hashtag “Smartwatch Automatically Yells ‘Jiayou’” (#智能手表自动喊加油#) received over 2.5 million views, with the majority of commenters strongly rejecting the new approach.

Most commenters on this issue argued that the implementation of the smartwatch is “immoral” and that the Nanjing workers are “treated as criminals.” Many others also pointed out that the workers, often senior citizens, should be able to rest for more than 20 minutes.

In light of the new policy, many people on social media also referred to the infamous fictional character Zhou “Bapi” (周扒皮). In the novel The Killing Wind, the landlord Zhou would stick his head into the henhouse in the middle of the night to stir up the roosters, waking his laborers early so they would start work sooner.

Some netizens came up with an alternative solution, suggesting that the leaders of the company should wear the smartwatches themselves instead. [Source]