QQ Chatbots Re-Educated After Political Faux Pas

Tencent removed and reprogrammed two chatbots named BabyQ and XiaoBing from the company’s QQ messaging app after both reportedly made unpatriotic comments in response to user generated questions. The political faux pas occurred despite indications that XiaoBing has already been programmed to avoid politically sensitive words when conversing with users on WeChat. Pei Li and Adam Jourdan at Reuters report:

Tencent confirmed it had taken the two robots offline from its QQ messaging service, but declined to elaborate on reasons.

[…] According to posts circulating online, BabyQ, one of the chatbots developed by Chinese firm Turing Robot, had responded to questions on QQ with a simple “no” when asked whether it loved the Communist Party.

In other images of a text conversation online, which Reuters was unable to verify, one user declares: “Long live the Communist Party!” The bot responds: “Do you think such a corrupt and useless political system can live long?”

When Reuters tested the robot on Friday via the developer’s own website, the chatbot appeared to have undergone re-education. “How about we change the topic,” it replied, when asked several times if it liked the party. [Reuters]

This is the latest case of machine learning-enabled artificial intelligence software behaving in unforeseen ways. Facebook was forced to shut down two chatbots this month after they started speaking their own language. Last year, Microsoft disabled its Twitter chatbot Tay when it began making racist and sexist comments during its first day on the platform. Louise Lucas, Nicolle Liu, and Yingzhi Yang at Financial Times write that this type of behaviour “reflects a flaw in the deep learning techniques used to programme machines” that will take time to correct.

In authoritarian states like China, the language that chatbots acquire through user interactions can also reveal hidden public opinions that are not usually observable due to extensive censorship. From The Wall Street Journal’s Editorial Board:

Beijing’s system of internet censorship relies on tens of thousands of workers to remove comments critical of the Communist Party. So what does the average citizen really think of the one-party state? A couple of artificial-intelligence programs run by a Chinese internet company suggest resentment of the country’s rulers is running high.

Tencent introduced two “chatbots” in March to provide information in a conversational manner similar to Apple’s Siri. The programs were designed to learn how to make conversation by listening to Chinese netizens. Like children, the programs started to repeat what they heard, and that’s when the problems began.

[…] Chinese internet users post a variety of opinions, like their counterparts in the rest of the world. The difference is that explicitly antigovernment comments are glimpsed only briefly before they are removed. It seems Tencent forgot to erase the forbidden thoughts from the memory of its chatbots. They effectively became a record of prevailing opinions without the filter of censorship. [Source]
