Concerns Mount Over Adoption and Export of Biometric Surveillance

At The Financial Times on Friday, Yuan Yang and Madhumita Murgia offered an overview of facial recognition technology’s adoption in China, its public reception, and its widespread export:

What do Uganda’s police force, a Mongolian prison and Zimbabwean airports have in common? All three are in the process of testing facial recognition systems and all three have used Chinese technology to do it. At least 52 governments are doing the same thing according to research by the Carnegie Endowment for International Peace.

[… “Chinese] companies are particularly well-suited to provide [advanced surveillance capabilities],” says [Carnegie fellow Steven] Feldstein, “but also they are willing to go to markets that perhaps western competitors are less willing to go to.”

[… Huawei], which was blacklisted for allegedly posing a threat to US national security this year, has supplied surveillance equipment — including facial recognition — to roughly 230 cities worldwide stretching from western Europe to large swaths of Asia and Sub-Saharan Africa. It supplies more countries with AI video surveillance than anyone else according to Carnegie.

[…] But the question of who is driving the surveillance rollout is not straightforward. “I would beware of the idea that Africa is a blank slate, where the Chinese arrive bringing their oppressive ways,” says Iginio Gagliardone, author of China, Africa, and the Future of the Internet. “Companies are spinning their products to fit the political demands of African elites.” [Source]

Last week, the Australian Strategic Policy Institute released an AI- and surveillance-focused expansion of its ambitious Mapping China’s Tech Giants project, which now tracks the global footprint of 23 companies and other entities through more than 26,000 data points. ASPI’s Danielle Cave commented on the FT report on Twitter, with subsequent tweets highlighting examples from the Philippines, Ecuador, Pakistan, and elsewhere.

At The Wall Street Journal on Friday, Liza Lin and Newley Purnell focused on another new report (also cited in the FT piece) on the global proliferation of video and facial recognition surveillance:

The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total.

Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. The number of surveillance cameras in the U.S. would grow to 85 million by 2021, from 70 million last year, as American schools, malls and offices seek to tighten security on their premises, IHS analyst Oliver Philippou said.

Mr. Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.

[…] Chinese companies Hangzhou Hikvision Digital Technology Co. Ltd. and Dahua Technology Co. are the biggest camera manufacturers by far, accounting for almost 38% of total installations, according to the report. But there are major non-Chinese names in the business as well, including South Korean maker Hanwha Techwin, and Panasonic Corp. of Japan. [Source]

The report notes that the U.S. currently has more security cameras per capita than China, although only 3% of those in the U.S. are part of city surveillance schemes. Both countries ranked poorly in a recent Comparitech study of biometric data handling around the world. China received the worst score of the included countries, while the U.S. ranked fourth from bottom. Comparitech’s Paul Bischoff wrote that these and other low-ranking countries showed "a concerning lack of regard for the privacy of people’s biometric data. Through the collection, use, and storage of biometric data, these countries use biometrics to a severe and invasive extent."

China only managed to scrape back one mark for its lack of a biometric voting system. However, the voting system is very heavily controlled, which perhaps rids the need for biometric voting. It also scored maximum points across all of the other categories for:

  • Using biometrics in passports, ID cards, and bank accounts.
  • Not having a specific law to protect citizens’ biometrics.
  • Its extensive nationwide biometric database is currently being expanded to include DNA.
  • Its widespread and invasive use of facial recognition technology in CCTV cameras. As our previous study, Surveillance States, found, facial recognition cameras are now being used to track and monitor the country’s Muslim minority, Uighurs, among other things. Beijing is also trialing facial recognition technology at security checkpoints on the subway so it can divide travelers into groups, something they’re hoping to expand to include buses, taxis, and other travel services. And, at the time of writing, China has also introduced facial recognition checks for anyone getting a new mobile phone number.
  • Its lack of safeguards for employees in the workplace. Companies have even been permitted to monitor employees’ brain waves for productivity while they’re at work.
  • The majority of countries require a visa to enter China and all of the visas issued contain biometrics. Fingerprints of anyone entering China are also taken. [Source]

At Sixth Tone, Cai Xuejiao reported on recent public opinion data from China:

A survey conducted by Nandu Personal Information Protection Research Center — a think tank affiliated with the Southern Metropolis Daily newspaper — revealed that 80% of respondents were concerned about their personal information being leaked due to a lack of security. The research institute surveyed 6,152 people between October and November to explore public attitudes toward the application of facial recognition at transport hubs, schools, residential complexes, and shopping malls.

[…] Despite the technology’s increasing usage, a majority of survey respondents said they were concerned about financial fraud and “deepfakes,” or manipulated videos that can potentially be used to spread misinformation. More than 73% said they would prefer alternatives to sharing their facial data, and 83% said they wanted a way to access or delete the data.

Last month, a law professor in the eastern Zhejiang province filed a landmark lawsuit against a local safari park for implementing a mandatory face-screening measure. He accused the park of collecting unnecessary personal data and not letting visitors opt out.

[…] Privacy concerns aside, many of the survey respondents — between 60% and 70% — also agreed that facial recognition is convenient and ensures safety. [Source]

For more on privacy concerns in China, see a World Economic Forum post from last month by New York University’s Winston Ma Wenyan—describing the public backlash against corporate data gathering and handling, the beginnings of official regulation, and its likely effectiveness—and more from CDT.

In his ChinAI newsletter this week, Oxford University’s Jeffrey Ding highlights a recent WeChat post by Tsinghua law professor Lao Dongyan, who argues against the adoption of facial recognition on the Beijing subway system. Although "it’s not a piece representative of all discussions of the ethics of facial recognition in China," he writes, "it does go farther than any other piece by a Chinese scholar I’ve seen in its strong opposition to facial recognition technology." From the newsletter’s summary of his full translation:

The essay is structured into four arguments against the use of facial recognition in the Beijing Subway as well as rebuttals to four possible counterarguments. The four arguments:

  • The relevant organizations and institutions have not proven the legitimacy of their collection method for sensitive personal information
  • The legitimacy of the new facial recognition measure is undercut without a hearing of the public’s views (e.g. the Beijing subway undertook a broad solicitation of the public’s views on a fare adjustment a few years earlier)
  • The standards for how the Beijing subway will conduct screenings are not transparent, could be arbitrarily set, and could be discriminatory.
  • There is not enough evidence to show that the use of facial recognition in subways can improve transport efficiency; even if there is evidence to prove this, efficiency itself is not a sufficient basis for implementation.

[…] Her conclusion sticks the landing: “If this society has not yet fallen into a state of persecution and paranoia, it is time to say enough on security issues. The hysterical pursuit of security has brought to society not security at all, but complete suppression and panic." [Source]

Like facial recognition, DNA collection and analysis has been a longstanding focus of concern regarding biometric data, its security, and its potential abuse, particularly in Xinjiang. At The New York Times last week, Sui-Lee Wee and Paul Mozur reported on an area of convergence between the two, as researchers in China and elsewhere attempt to generate accurate facial images from DNA samples:

In the long term, experts say, it may even be possible for the Communist government to feed images produced from a DNA sample into the mass surveillance and facial recognition systems that it is building, tightening its grip on society by improving its ability to track dissidents and protesters as well as criminals.

Some of this research is taking place in labs run by China’s Ministry of Public Security, and at least two Chinese scientists working with the ministry on the technology have received funding from respected institutions in Europe. International scientific journals have published their findings without examining the origin of the DNA used in the studies or vetting the ethical questions raised by collecting such samples in Xinjiang.

[… E]xperts widely question phenotyping’s effectiveness. Currently, it often produces facial images that are too smooth or indistinct to look like the face being replicated. DNA cannot indicate other factors that determine how people look, such as age or weight. DNA can reveal gender and ancestry, but the technology can be hit or miss when it comes to generating an image as specific as a face.

Phenotyping also raises ethical issues, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison. The police could use it to round up large numbers of people who resemble a suspect, or use it to target ethnic groups. And the technology raises fundamental issues of consent from those who never wanted to be in a database to begin with. [Source]

On Twitter, Wee and Mozur described their findings, the reporting process, and Xinjiang officials’ efforts to disrupt it.

In a follow-up report, the two focused on these ethical issues:

Two publishers of prestigious scientific journals, Springer Nature and Wiley, said this week that they would re-evaluate papers they previously published on Tibetans, Uighurs and other minority groups. The papers were written or co-written by scientists backed by the Chinese government, and the two publishers want to make sure the authors got consent from the people they studied.

Springer Nature, which publishes the influential journal Nature, also said that it was toughening its guidelines to make sure scientists get consent, particularly if those people are members of a vulnerable group.

[…] When Western journals publish such papers by Chinese scientists affiliated with the country’s surveillance agencies, it amounts to selling a knife to a friend “knowing that your friend would use the knife to kill his wife,” said Yves Moreau, a professor of engineering at the Catholic University of Leuven in Belgium.

[…] The science world has been responding to the pressure. Thermo Fisher, a maker of equipment for studying genetics, said in February that it would suspend sales to Xinjiang, though it will continue to sell to other parts of China. Still, Dr. Moreau said, the issue initially received little traction among academia. [Source]

Elsewhere, IPVM recently reported on the use of Intel and Nvidia chips in ethnicity detection systems in Xinjiang, noting that "Intel promptly condemned the usage while NVIDIA remains silent to IPVM inquiries."

In an essay at Nature last week, Moreau described China as "the most striking case" in a global trend of DNA database adoption. "With stringent safeguards and oversight," he argued, "it is legitimate for law-enforcement agencies to use DNA-profiling technology. But these uses can easily creep towards human-rights abuses."

A much broader array of stakeholders must engage with the problems that DNA databases present. In particular, governments, policymakers and legislators should tighten regulation and reduce the likelihood of corporations aiding potential human-rights abuses by selling DNA-profiling technology to bad actors — knowingly or negligently. Researchers working on biometric identification technologies should consider more deeply how their inventions could be used. And editors, reviewers and publishers must do more to ensure that published research on biometric identification has been done in an ethical way.

[…] Over the past eight years, three leading forensic genetics journals — International Journal of Legal Medicine (published by Springer Nature), and Forensic Science International and Forensic Science International: Genetics Supplement Series (both published by Elsevier) — have published 40 articles co-authored by members of the Chinese police that describe the DNA profiling of Tibetans and Muslim minorities, including people from Xinjiang. I analysed 529 articles on forensic population genetics in Chinese populations, published between 2011 and 2018 in these journals and others. By my count, Uyghurs and Tibetans are 30–40 times more frequently studied than are people from Han communities, relative to the size of their populations (unpublished data). Half of the studies in my analysis had authors from the police force, military or judiciary. The involvement of such interests should raise red flags to reviewers and editors.

In short, the scientific community in general — and publishers in particular — need to unequivocally affirm that the Declaration of Helsinki (a set of ethical principles regarding human experimentation, developed for the medical community) applies to all biometric identification research (see go.nature.com/34bypbf). Unethical work that has been published in this terrain must be retracted. [Source]

Picking up on Moreau’s mention of Chinese genomics giant BGI, science writer Mara Hvistendahl recapped the company’s history and involvement in Xinjiang on Twitter, concluding that "BGI’s work in Xinjiang deserves a LOT more scrutiny."
