Navigating Chinese AI Research Collaboration: Cases from Australia, Microsoft

Western institutions’ AI and surveillance research ties with China, particularly those with partners implicated in ongoing rights abuses in Xinjiang, have been coming under mounting scrutiny. This week, South China Morning Post’s John Power examined one such case: a joint research center established by the University of Technology Sydney and Chinese tech firm CETC in 2017.

Last month, UTS announced it would end a joint project, and expedite the finalisation of two others, following a report by Human Rights Watch which found that a CETC subsidiary had developed an app used in the mass surveillance of the Uygur community in Xinjiang. While insisting its research could not have been used in Xinjiang due to the timelines involved, UTS said it decided to cancel the most contentious project, related to public security video analysis, due to “concerns about potential future use”.

[…] Ian Hall, an international relations professor at the Griffith Asia Institute in Brisbane, said that universities faced the dilemma of balancing ethical considerations with valuable funding and other resources from Chinese entities. “Universities don’t really want this to end – there is money at stake and collaborative work boosts scores in key international ranking exercises,” Hall said.

“But my sense is that universities don’t want to police this alone, because there is no guarantee that if one university does the right thing, all others will do the same. It is, after all, a competitive business.”

[…] The agreement between UTS and CETC, a copy of which was obtained by This Week in Asia under freedom of information laws, suggests the university did not evaluate the potential ethical issues in partnering with a state-owned defence company, security and rights experts say. [Source]

Similar cases including an agreement between the University of New South Wales and state-owned Chinese data mining firm GTCOM were highlighted last week by the Australian Broadcasting Corporation’s Four Corners, drawing on a recent report on GTCOM by Samantha Hoffman at the Australian Strategic Policy Institute:

Professor John Fitzgerald, who served as a chair on DFAT’s Australia-China Council, said Chinese companies were capitalising on Australia’s science and technology expertise.

“Australia’s science and technology priorities are being set by the Chinese Government because we enter into collaborations that have really been designed to support China’s goals, not ours,” he said.

“Many universities are very happy to proceed with whatever it is … because of the money and prestige involved.

[…] Australian universities have also collaborated with Chinese defence universities.

Researchers at the Australian National University (ANU) have worked on dozens of such studies, including a 2019 study on covert communications with China’s National University of Defense Technology, which was blacklisted by the US four years ago. [Source]

American tech companies have also come under heightened scrutiny. Google has faced repeated accusations over its research ties, which critics have equated with collaborating with China’s military. The OpenPower Foundation it leads together with IBM was also reported this summer to have supported development of surveillance technology by Shenzhen-based Semptian. In April, The Financial Times reported on three papers “co-written by academics at Microsoft Research Asia in Beijing and researchers with affiliations to China’s National University of Defense Technology,” which dealt with technologies potentially applicable to censorship and surveillance.

Microsoft’s CEO defended MSRA’s work in an interview with the BBC earlier this month, arguing that ending it would do more harm than good.

“A lot of AI research happens in the open, and the world benefits from knowledge being open,” he said.

“That to me is what’s been true since the Renaissance and the scientific revolution. Therefore, I think, for us to say that we will put barriers on it may in fact hurt more than improve the situation everywhere.”

[…] “We know any technology can be a tool or a weapon,” he told the BBC.

“The question is, how do you ensure that these weapons don’t get created? I think there are multiple mechanisms. The first thing is we, as creators, should start with having a set of ethical design principles to ensure that we’re creating AI that’s fair, that’s secure, that’s private, that’s not biased.”

[…] He said he felt his company had sufficient control over how the controversial emerging technologies are used, and said the firm had turned down requests in China – and elsewhere – to engage in projects it felt were inappropriate, due to either technical infeasibility or ethical concerns. [Source]

Several other notable observers have warned against overreaction to research concerns. Elsa B. Kania at the Center for a New American Security, for example, has commented that despite “increased blurring of boundaries between academic and military-oriented research in China […] I deeply believe that the openness of the American innovation ecosystem is among our greatest competitive advantages, and global partnerships in research, including, in some cases, with Chinese counterparts, can be a core element of that. So personally, I wouldn’t advocate that such academic collaborations be curtailed, and I think that the balancing of risk and benefit ideally ought to occur on a case-by-case basis.”

At Macro Polo this week, Matt Sheehan described the Microsoft center’s pivotal role in China’s AI development, and considered the effects of its hypothetical closure through the lens of an influential paper by four Chinese-trained computer scientists who worked there. (One is now at Facebook, two more at Chinese computer vision firm Megvii, and one at Beijing-based autonomous vehicle startup Momenta.) Examination of such cases, Sheehan argues, might “help to craft better policies that rely more on the scalpel than the hammer.”

The case I examine here is the single most-cited paper in AI research over the past five years: Deep Residual Learning for Image Recognition. Often abbreviated as “ResNet,” the 2015 paper is not just the most-cited AI paper based on Google Scholar metrics. With 25,256 citations between 2014 and July 2019, it’s the most-cited paper in any academic field during that time.

[…] MSRA has been perhaps the single most important institution in the birth and growth of the Chinese AI ecosystem over the past two decades. The lab served as a training ground for many future leaders of China’s then-embryonic AI ecosystem, with alumni that include Alibaba’s CTO, Baidu’s President, the head of technology strategy at Bytedance, and the founders of several unicorn AI startups. The Chinese media has even compared MSRA to the “Whampoa Academy of the Chinese internet“—a reference to the legendary military academy that churned out prominent army commanders for both the Kuomintang and the Chinese Communist Party.

[…] But what if [concerns over China’s future capabilities in AI] led US policymakers to force Microsoft to close the lab in, say, 2012?

[… It’s] possible they would have continued that research at a Chinese institution. In 2013, Baidu founded its Institute of Deep Learning—the first of its kind in China—and it would have been a potential destination for those researchers. If ResNet was developed there (or at a Chinese university lab), it likely would have still been published openly. But what if it had been developed at a government or military-affiliated lab instead, one where open source collaboration is not part of the DNA? [Source]
