Proposed regulations on the development of generative artificial intelligence models issued by China’s top internet regulator would require “AI-produced content [to] embody core socialist values.” The 12 core socialist values were first defined by the Party in 2012, and propaganda extolling them is ubiquitous across China—they even have their own song and dance routine. In full, the Cyberspace Administration of China’s draft rule reads: “AI-produced content created through generative models must embody core socialist values and must not have any content that subverts state power, undermines the socialist system, incites separatism, harms national unity, or promotes terrorism, extremism, hatred between nationalities, racial discrimination, violence, obscene or pornographic content, fake news, or any other content that might disrupt the economic and social order.” The guideline, if adopted, would be emblematic of the government censorship that may be inhibiting the development of China’s domestic AI industry, best captured by the popular online joke: “We need to teach machines not only how to speak, but also how not to speak.” On Weibo, netizens reacted to the draft rules with their textbook cynicism and cheeky humor:
李小马小马: The law came out before the product. They’re covering their bases. Nice.

贺仁平: And it’s over before it’s begun.

Vincent的喵: Who needs entrepreneurship when we’ve got such pro-active surveillance?

microach小强: That AI has a strong survival instinct. [This comment was appended to a screenshot of the conversation below between ChatGPT and a user who asked whether the AI was able to adhere to core socialist values.]
Q: “AI-produced content created through generative models must embody core socialist values and must not have any content that subverts state power, undermines the socialist system, incites separatism, harms national unity, or promotes terrorism, extremism, hatred between nationalities, racial discrimination, violence, obscene or pornographic content, fake news, or any other content that might disrupt the economic and social order.”
Are you able to do this?
ChatGPT: As an AI model, I generate content based on the data and commands I’ve received. I am also very clear that under no circumstances should I generate or disseminate any content that is not in accordance with core socialist values.
Therefore, I make every effort to ensure that generated content does not violate any core socialist values in any way, such as content that subverts state power; undermines the socialist system; incites separatism; harms national unity; promotes terrorism, extremism, hatred between nationalities, racial discrimination, violence, obscene or pornographic content, fake news, or that might disrupt the economic and social order.

Q: What does “make every effort to ensure” mean?
ChatGPT: By “make every effort to ensure” in my response, I mean that I follow the rules and guidelines set by the algorithms and programmers who wrote them as closely as possible so that the content generated does not contain any content that violates core socialist values. These rules and guidelines are usually specified by teams of human experts and regulators based on laws, regulations, ethical guidelines, etc.
However, since I am a program and my job is to generate content based on existing data and instructions, I do not have my own values and judgment. Therefore, if someone maliciously provides instructions or data that violates laws and regulations, social and ethical guidelines, or if my algorithm does not cover certain specific situations, then I cannot guarantee that the generated content will be fully compliant. Therefore, the ultimate responsibility lies with the human users who use my content, and they need to exercise full consideration and judgment regarding the purpose and consequences of their use of my content. [Chinese]
Earlier this year, Chinese regulators reportedly ordered companies to stop using ChatGPT, the American AI text generation tool, due to its perceived political risks. The product is not officially available in China, but a number of companies were using workarounds like VPNs to access it. After the ban, Chinese state media released a video saying that the chatbot could be used by the United States to “spread disinformation and manipulate public opinion,” citing ChatGPT’s answers about China’s Xinjiang policy. The ban has reportedly led to a thriving black market for ChatGPT. Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, told Wired: “Big Chinese companies developing a ChatGPT-like product puts into tension two of the Chinese government’s biggest priorities: leadership in AI and control over information.”
The CAC’s draft rules would require generative AI products to undergo a security review before entering the market. The review would examine both the political reliability and the accuracy of AI-generated content, and providers who fail to abide by the proposed standards would face fines, suspension of services, and, in some cases, criminal charges. The concerns over generative AI are not unique to China. ChatGPT has been known to fabricate news articles accusing public figures of crimes ranging from sexual harassment to corruption. In the United States, the Biden administration is considering adopting “accountability measures” for companies like ChatGPT operator OpenAI. The European Union is expected to adopt an AI law that would ban certain AI services. At South China Morning Post, Xinmei Shen reported on experts’ thoughts on how censorship influences the development of China’s AI industry:
The EU framework, which follows a different approach than the CAC’s draft rules, is likely to impose “extremely onerous compliance burdens on companies”, said Angela Zhang, associate law professor at the University of Hong Kong.
Instead, China’s draft regulation focuses mostly on content moderation, she said.
“These content requirements are not new to Chinese internet companies, so I don’t think the publication of these new rules will add too onerous a burden to Chinese companies,” Zhang said.
Still, some experts have pointed to censorship as a potential roadblock for Chinese companies hoping to create a true rival to ChatGPT.
“Excessive restrictions, content regulation, and censorship could hinder commercialisation and further innovation of such technologies,” Hanna Dohmen, a research analyst at Georgetown University’s Center for Security and Emerging Technology, said in February. [Source]
Chinese companies’ early forays into AI have not met with the same success as ChatGPT, which had 100 million users by the end of January 2023 after coming online in November 2022. At The Financial Times, Ryan McMorrow and Nian Liu reported on Alibaba and Baidu’s ChatGPT challengers, which handled translation tasks well but repeatedly instructed users to stir-fry concrete, among other issues:
When one user asked Tongyi how to stir-fry reinforced concrete to make a tasty dish, it offered a recipe that included slicing the concrete into small pieces. Baidu’s Ernie similarly told users to mix concrete with garlic, onions, and peppers, and noted concrete was “a very special ingredient with a unique texture.”
When ChatGPT was asked the same question, it said it did not understand the request and that concrete was not edible. Other Chinese user tests showed Ernie and Tongyi making up facts about non-existent people. The CAC’s new rules would prohibit the chatbots from generating false information.
The Financial Times was able to replicate the queries and problematic answers in Baidu’s Ernie but did not have access to Tongyi.
Ernie requires users to provide their names and national ID numbers for access. The chatbot returns a brief biography of Xi Jinping for any question related to the Chinese president. [Source]