Digital Asia Hub Executive Director on Privacy in Asia

On Thursday, February 2nd the Executive Director of the Digital Asia Hub think tank, Malavika Jayaram, delivered a talk entitled “Privacy in Asia” at the Wikimedia Foundation in San Francisco. Jayaram explained emerging attitudes towards privacy and privacy-infringing technological developments primarily in India and China, covering topics such as India’s nationwide biometric identification system and China’s new social credit system. Currently in a pilot testing phase, the social credit system is a big data initiative in which Chinese citizens’ web browsing, online shopping, social media, gaming, educational, and other personal data are used as inputs in deriving credit scores. Not only are these scores meant to someday replace traditional credit scores, but they are also used to confer benefits such as waived deposits for apartment and car rentals.

Jayaram cited a Chinese government planning document that characterized the social credit system as being able to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.” She raised questions about the “unidirectional idea of trust” inherent in such a project, in which citizens place an enormous amount of trust in tech companies and the government to appropriately use and secure their personal data. Moreover, she pointed out that the nontransparent, closed nature of the social credit system could lead to weighting of internally generated government and commercial data that citizens would be unable to view or contest. Arguing that “if it’s free, you are the product,” Jayaram pointed out that it is often the poorest and most vulnerable members of society who bear the highest costs and risks of big data collection initiatives. Contrary to the notion that digital record-keeping measures encourage inclusion, Jayaram finds that the logic behind social credit and similar systems is “if you can’t be counted, you don’t count.”

Jayaram noted that in several conversations with Chinese citizens, her questions about why people accept censorship have been met with an unexpected response: a seemingly positive outcome of Google, Facebook, and other major U.S. tech companies’ services being blocked in China has been that domestic companies were pressured to innovate in order to meet local demand for search engines and social media outlets. By this line of reasoning, Chinese tech companies ultimately produced what are seen in China as superior products compared with those of their foreign competitors, even if they come at the expense of users’ privacy protections. This point of view—along with opinions Jayaram has heard in India and other parts of Asia that privacy is a luxury, a Western value, or part of a trade-off for access to technology—reflects the kind of mentality that she hopes to challenge in an effort to “make privacy cool again.”

One of Jayaram’s and Digital Asia Hub’s goals is to reframe discussions around privacy in newly capitalizing Asian states. Suggested approaches included presenting privacy as a collective rather than an individual value, in accordance with new research on group privacy. Other solutions may involve establishing due process for big data and revising technological default options. An additional consideration specific to China involves the notion of “saving face” as a catalyst for securing digital privacy, which the University of California, Hastings College of the Law researcher Jill Bronfman has written about in the paper “Saving Face: Unfolding the Screen of Chinese Privacy Law.” Jayaram likewise referenced Graham Greenleaf’s schematization of how privacy laws vary across Asia in his book “Asian Data Privacy Laws: Trade and Human Rights Perspectives.” Though she questioned whether Asian societies are becoming complacent in accepting the loss of privacy as part of adopting new information technologies, Jayaram expressed optimism that there will be many opportunities for changing ideas about digital privacy practices across Asia in the future.