Short-video platform TikTok’s enormous popularity among young Westerners, together with its ownership by Beijing-based AI entertainment firm ByteDance, has brought steadily mounting scrutiny in recent months. Critics suggest that the parent company’s susceptibility to Chinese government pressure could bleed over to TikTok, with censorship flowing outward and sensitive user data flowing back. With U.S. legislators circling and a national security review on the way, the company has been stepping up its lobbying and PR efforts in response. The Washington Post’s Tony Romm and Drew Harwell report that TikTok CEO Alex Zhu will visit Washington, D.C. next week:
The planned trip — confirmed by multiple people familiar with the matter who spoke on the condition of anonymity because they were not authorized to discuss it on the record — reflects TikTok’s race to maintain the app’s explosion in popularity at a time when U.S.-China relations are frayed and U.S. officials are wary about the inroads Chinese companies are making into the technologies where the United States has long been the unchallenged leader.
The trip could bring Zhu, who is based in Shanghai, face-to-face with some of the app’s harshest critics. He has sought a meeting with Republican Sens. Josh Hawley (Mo.), Tom Cotton (Ark.) and Marco Rubio (Fla.), each of whom has questioned the app’s independence from Beijing.
[…] “It’s difficult to see a way forward for TikTok without a complete separation from its Beijing-based owner,” Cotton said in a statement to The Washington Post.
[…] The app’s leaders stoked some lawmakers’ ire last month by skipping a congressional hearing chaired by Hawley probing its ties to China. And its parent company faces an investigation by an arm of the federal government that reviews foreign business deals for national-security concerns. [Source]
In a recent profile at The New York Times, Zhu defended the company’s independence on both censorship and privacy, claiming that “if China’s top leader, Xi Jinping, personally asked Mr. Zhu to take down a video or hand over user data[…,] ‘I would turn him down.'”
This week brought news of two privacy-focused U.S. lawsuits against the company. In one, filed last week in the Northern District of California, TikTok is accused of having “vacuumed up and transferred to servers in China vast quantities of private and personally-identifiable user data.” From Katie Paul at Reuters:
The documents identify the plaintiff as Misty Hong, a college student and resident of Palo Alto, California, who downloaded the TikTok app in March or April 2019 but never created an account.
Months later, she alleges, she discovered that TikTok had created an account for her without her knowledge and produced a dossier of private information about her, including biometric information gleaned from videos she created but never posted.
According to the filing, TikTok transferred user data to two servers in China – bugly.qq.com and umeng.com – as recently as April 2019, including information about the user’s device and any websites the user had visited.
[…] The lawsuit also claims that source code from Chinese tech giant Baidu is embedded within the TikTok app, as is code from Igexin, a Chinese advertising service, which security researchers discovered in 2017 was enabling developers to install spyware on a user’s phone. [Source]
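The complaint’s core technical claim rests on observing which hosts the app contacts. A minimal sketch of that kind of check, assuming a list of hostnames already captured from the app’s network traffic: the domain list below comes from the two endpoints named in the filing, but the matching logic is a generic illustration, not the plaintiff’s actual methodology.

```python
# Illustrative only: screen hostnames observed in a traffic capture
# against the third-party analytics domains named in the complaint.
FLAGGED_DOMAINS = {
    "bugly.qq.com",  # Tencent crash-reporting service named in the filing
    "umeng.com",     # Alibaba-owned analytics service named in the filing
}

def flag_hosts(observed_hosts):
    """Return observed hostnames that match, or are subdomains of,
    any flagged domain."""
    hits = []
    for host in observed_hosts:
        for domain in FLAGGED_DOMAINS:
            if host == domain or host.endswith("." + domain):
                hits.append(host)
                break
    return hits

capture = ["api.example-cdn.net", "bugly.qq.com", "log.umeng.com"]
print(flag_hosts(capture))  # ['bugly.qq.com', 'log.umeng.com']
```

In practice such a capture would come from an intercepting proxy on a test device; the point is simply that contacted endpoints, unlike code, are directly observable from outside the app.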
The other lawsuit, which was settled on undisclosed terms the day after its filing in Illinois on Tuesday, alleged that “because the App had virtually all privacy features disabled by default, there were serious ramifications, including reports of adults trying to contact minor children via the App.” The claims are similar to those over which TikTok paid $5.7 million to the FTC earlier this year. The Verge’s Makena Kelly reported on the rapid settlement on Thursday:
“TikTok is firmly committed to safeguarding the data of its users, especially our younger users,” a TikTok spokesperson told The Verge. “Although we disagree with much of what is alleged in the complaint, we have been working with the parties involved and are pleased to have come to a resolution of the issues.”
TikTok also declined to give details of the settlement.
The plaintiff’s complaint alleges that the Musical.ly app (now known as TikTok) failed to provide the proper safeguards to prevent children from using the app. If a minor under the age of 13 created an account, the app requested that they fill in personally identifying information like their name, phone number, email address, photo, and bio. That information would be publicly available for other users to see. The complaint also alleges that the app collected the location data of its users, including minors, for close to a year between December 2015 and October 2016.
This alleged collection would be in violation of the Children’s Online Privacy Protection Act (COPPA). The law forbids social media companies like Facebook and TikTok from collecting the data of children under 13 years of age without the express consent of their parents or guardians. [Source]
Another spotlight on TikTok’s privacy practices came from Matthias Eberl at Süddeutsche Zeitung, whose technical analysis of the service’s app and website revealed “multiple breaches of law, trust, transparency and data protection,” including undisclosed data transfers and a “highly controversial method of device fingerprinting.”
I did a detailed privacy check of the Tiktok app and website. You can read my article here (german): https://t.co/SgSAMrQDza
Tiktok commits multiple breaches of law, trust, transparency and data protection.
— Matthias Eberl (@MatthiasEberl) December 4, 2019
My comment: Tiktok is breaking the law in multiple ways while exploiting mainly teenagers data. This should be regulated quick and rigorous. We have all necessary laws. Don’t let them break society like 10 years of FB. Journalists should find a better place for vertical video.
— Matthias Eberl (@MatthiasEberl) December 4, 2019
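The “device fingerprinting” Eberl flags works by combining many quasi-stable attributes, none identifying on its own, into a hash that recognizes a device even after cookies are cleared. A minimal sketch of the general technique, with a generic attribute set rather than the specific signals Eberl found in TikTok’s app or website:

```python
# Generic device-fingerprinting sketch: hash a canonical serialization
# of quasi-stable device attributes into a compact identifier.
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from a device-attribute dict."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (...)",
    "screen": "1170x2532",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
    "canvas_hash": "a3f9...",  # per-device rendering quirks
}
fp = fingerprint(device)
# The same attributes always yield the same identifier, so the device
# can be recognized across sessions without storing anything on it.
```

Because nothing is written to the device, this kind of tracking is invisible to the user and survives the usual privacy measures, which is why Eberl calls the method “highly controversial.”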
On the censorship front, a series of reports have used insider accounts and leaked guidelines to uncover questionable content moderation policies, including some that may have been used to disguise compliance with Chinese political censorship imperatives. The company has repeatedly described these policies as crude temporary measures that have since been refined and replaced, though in some cases the original practices continued until at least September.
Late last month, these concerns reached their widest audience yet after TikTok suspended the account of a 17-year-old American, Feroza Aziz. Aziz had posted a video in which eyelash curling tips abruptly gave way to a call for attention to mass detentions in Xinjiang. The video itself was also temporarily removed.
— feroza.x (@x_feroza) November 25, 2019
I am blocked from posting on tik tok for a month. This won’t silence me.
— feroza.x (@x_feroza) November 25, 2019
TikTok says the ban was not due to criticism of China. Her previous account, they said, had posted a video of Osama bin Laden on Nov. 15, leading to a ban for her phone. She says the video was an obvious joke about racism she has faced as a Muslim, and shared the video with us: pic.twitter.com/5w7WpV2gB9
— Drew Harwell (@drewharwell) November 26, 2019
UPDATE: tik tok has issued a public apology and gave me my account back. Do I believe they took it away because of a unrelated satirical video that was deleted on a previous deleted account of mine? Right after I finished posting a 3 part video about the Uyghurs? No. pic.twitter.com/ehUpSJiyy1
— feroza.x (@x_feroza) November 27, 2019
Aziz’s description of the situation in Xinjiang as “another Holocaust” exaggerates the nevertheless bleak reality. Weighing the current crisis against the 1948 U.N. Genocide Convention at The Financial Times on Wednesday, RFA Uyghur journalist Gulchehra Hoja wrote that “we do not have evidence of mass killing, [although] acts such as the forced transfer of children and forced sterilisation are already being perpetrated.”
In the latest of several corporate blog posts defending the company, TikTok’s Head of Safety Eric Han sought to “clarify the timeline of events, apologize for an error, and explain more about our moderation philosophy and the next steps our team will be taking in our continued commitment to our community.”
November 14, 2019 @ 2:34pm ET – On a previous account (@getmefamousplzsir), a TikTok user posted a video that included the image of Osama bin Laden, resulting in an account ban in line with TikTok’s policies against content that includes imagery related to terrorist figures. No China-related content was moderated on this account.
While we recognize that this video may have been intended as satire, our policies on this front are currently strict. Any such content, when identified, is deemed a violation of our Community Guidelines and Terms of Service, resulting in a permanent ban of the account and associated devices.
[…] November 25, 2019 @ 3:32am ET – As part of a scheduled platform-wide enforcement, the TikTok moderation team banned 2,406 devices associated with accounts that had been banned for one of three types of violations: (1) Terrorism or terrorist imagery, (2) Child exploitation, (3) Spam or similar malicious content. Because the user’s banned account (@getmefamousplzsir) was associated with the same device as her second account (@getmefamouspartthree), this had the effect of locking her out of being able to access her second, active account from that device. However, the account itself remained active and accessible, with its videos continuing to receive views.
November 27, 2019 @ 7:06am ET – Due to a human moderation error, the viral video from November 23 was removed. It’s important to clarify that nothing in our Community Guidelines precludes content such as this video, and it should not have been removed.
November 27, 2019 @ 7:56am ET – The video went live again on the platform after a senior member of our moderation team identified the error and reinstated it immediately.
In total, the video was offline for 50 minutes. [Source]
The Washington Post’s Drew Harwell and Tony Romm reported on Aziz’s explanation for the bin Laden video—”I’ve been told to go marry a terrorist, go marry bin Laden, so I thought: ‘Let me make a joke about this'”—and her belief that “TikTok is trying to cover up this whole mess. I won’t let them get away with this.” An editorial from the newspaper on Monday cited the case as a warning:
Ms. Aziz, unsurprisingly, is skeptical. So should everyone be, when a leaked excerpt of the platform’s terms revealed that its reviewers are instructed to reduce the visibility of political and protest content — such as depictions of public assemblies that “include violence.” The document represents an easing of previous standards that had also affected even more innocuous material, such as “mocking” and “criticizing” elected officials, from “calling for impeachment” to “lip-syncing.” But the changes still give the lie to TikTok’s insistence that “political sensitivities” do not factor into its decisions.
There’s additional reason to suspect China’s government has an interest in exerting control over what people talk about on its prized export platforms: The Verge reports that Chinese Americans have been barred from praising pro-democracy candidates in Hong Kong on the messaging service WeChat. Tencent, which owns WeChat, suggested that some of these users might have been accessing the national instead of international version of the app — which would subject them to Chinese law. But how that could have occurred without their knowledge remains a mystery.
U.S. technology companies are used to setting the rules of the road for the world, and that has meant a radical openness, with all its ups and, more recently, its downs, as well. China has never wanted to let that openness in, but it has always been eager to spread its closed system out. Countries that still want their own citizens to live freely should say no. [Source]
At The Atlantic, Scott Nover surveyed the Post’s own “self-aware, slapstick, and slightly cringey” use of TikTok, where its account bio explains that “newspapers are like ipads but on paper.”
Elsewhere, a series of articles by Chris Köver and Markus Reuter at Netzpolitik.org detailed three aspects of the company’s content management regime: its efforts to protect disabled and other users from bullying by limiting their videos’ reach; its various methods for boosting or suppressing content, and changes made to handling of political material after earlier reporting from The Guardian; and its treatment of content attacking TikTok or mentioning its competitors.
TikTok, the fast-growing social network from China, has used unusual measures to protect supposedly vulnerable users. The platform instructed its moderators to mark videos of people with disabilities and limit their reach. Queer and fat people also ended up on a list of „special users“ whose videos were regarded as a bullying risk by default and capped in their reach – regardless of the content.
[…] One source familiar with moderation reported that staff repeatedly pointed out the problems of this policy and asked for a more sensitive and meaningful policy.
However, their comments were dismissed by the Chinese decision-makers. The rules were mainly handed down from Beijing. This is largely in line with what the Washington Post learned from former TikTok employees in the USA. [Source]
According to the source, moderation [of German-language videos] takes place in three review stages. The first review already takes place in Barcelona after 50 to 150 video views. Berlin is responsible for the second review from 8,000 to 15,000 views and the third review from about 20,000 views. At night, German-speaking Chinese moderate content from Beijing. TikTok confirmed this to netzpolitik.org.
[…] According to the source, TikTok changed its moderation rules after the Guardian’s September reporting and subsequent criticism. The source says that the company explicitly referred to the bad press in front of employees. The extent of the changes has been unique to date, but smaller modifications are more frequent.
[…] Until this major adjustment, moderation rules had almost completely ruled out criticism of politics and political systems. Those who criticised constitutional monarchy, parliamentary systems, separation of powers or socialist systems were throttled. Only with the major changes was this „ban on politics“ removed from the moderation rules. [Source]
One of the rules netzpolitik.org was able to see was „content depicting an attack on TikTok“. It said that „constructive criticism“ and „feedback“ were allowed. For content „attacking, condemning or criticizing TikTok“, the moderators were advised to mark the videos as „Not Recommend“. A classification of „Not Recommend“ greatly limits the possible viewership of a video. It then no longer appears in the algorithmically selected „For You“ feed, which the user sees when opening the app.
[…] When asked when TikTok ceased using this rule, the TikTok press office provided a vague answer. In order to counteract misinformation, a restrictive, temporary approach was adopted „at the beginning“. At no time did this approach represent a long-term solution and the company no longer pursued it.
[…] Any content with a unique identifier of a direct competitor was to be classified as „Not Recommend“. Identifiers could include: a logo, the name as text, a screenshot or a user interface. The rule was also applied to indirect competitors if their logo or name could be seen in more than half of the video – even if the logo was intentionally obscured by the TikTok user. Videos explaining the functionalities of direct competitors were similarly demoted. [Source]
Further scrutiny of TikTok’s content handling practices has come from the data-analysis Twitter account @AirMovingDevice, which has returned to the platform after being pressured into silence in March.
Thread: testing Tiktok’s video review process
I uploaded the same video with different texts, and monitored it through Tiktok’s API.
Some videos go through a review process before it’s publicly visible (b4 that only visible to uploader). Some keywords seem to trigger that… pic.twitter.com/L8xuOzR96y
— Air-Moving Device (@AirMovingDevice) November 16, 2019
I only tested a limited set of keywords, so it’s unclear what the scope of this “review process” is. It would be interesting to test more keywords.
— Air-Moving Device (@AirMovingDevice) November 16, 2019
Another thread from @AirMovingDevice explained how TikTok “hides certain videos under a hashtag. They’re not deleted: still visible on the uploader’s profile page & the uploader can still see it under the hashtag, giving the illusion that nothing is hidden. But the public doesn’t see the video when searching for a hashtag.” @AirMovingDevice cautioned that there is “NO CLEAR INDICATION that videos with certain themes are more frequently hidden: HK/XJ/Trump-related hashtags are all over the place. Plus, videos under innocuous hashtags are often hidden as well, e.g. #carrotcake and #newyorkgiants. […] It is unclear why certain videos are hidden. And I am NOT claiming that TikTok is intentionally censoring content: it could be an innocent response to spam etc.”
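The logic of @AirMovingDevice’s hashtag test reduces to a set comparison: a video is “hidden” if it appears on the uploader’s own profile but not in the public search results for its hashtag. A hedged sketch of that comparison, assuming the two input lists have already been fetched; the thread relied on TikTok’s undocumented API, whose real endpoints are not reproduced here.

```python
# Sketch of the shadow-hiding check: uploads visible on the profile
# but absent from public hashtag search appear to be hidden.
def find_hidden(profile_videos, hashtag_search_results):
    """Return video IDs listed on the profile but missing from
    the public hashtag search results, in sorted order."""
    return sorted(set(profile_videos) - set(hashtag_search_results))

profile = ["v1", "v2", "v3"]  # all of the account's uploads tagged #example
public = ["v1", "v3"]         # what a public search for #example returns
print(find_hidden(profile, public))  # ['v2'] appears shadow-hidden
```

As the account itself cautions, a nonempty result does not prove intent: spam filtering or other innocuous moderation could produce the same asymmetry.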
Beyond these privacy and censorship concerns, TikTok’s Chinese counterpart Douyin has been accused of providing propaganda cover for the ongoing mass detention campaign in Xinjiang. Its involvement in the region was reported in an expansion of the Australian Strategic Policy Institute’s Mapping China’s Tech Giants project, which now tracks 23 companies and other organizations’ global footprint through more than 26,000 data points. Other new additions, “mainly in the artificial intelligence (AI) and surveillance tech sectors,” are iFlytek, Megvii, SenseTime, YITU, CloudWalk, DJI, Meiya Pico, Dahua, Uniview and BeiDou. The update includes case studies “on TikTok as a vector for censorship and surveillance, BeiDou’s satellite and space race and CloudWalk’s various AI, biometric data and facial recognition partnerships with the Zimbabwean Government.”
[… B]eyond the expected regulatory missteps of a fast-growing social media platform, ByteDance is uniquely susceptible to other problems that come with its closeness to the censorship and surveillance apparatus of the CCP-led state. Beijing has demonstrated a propensity for controlling and shaping overseas Chinese-language media. The meteoric growth of TikTok now puts the CCP in a position where it can attempt to do the same on a largely non-Chinese speaking platform—with the help of an advanced AI-powered algorithm.
[… W]e have found that TikTok’s parent company ByteDance—which is not on the US entity list for human rights violations in Xinjiang—collaborates with public security bureaus across China, including in Xinjiang where it plays an active role in disseminating the party-state’s propaganda on Xinjiang.
[…] Unfortunately, it’s extremely difficult for international authorities to sanction the circa 1,000 homegrown local Xinjiang security companies. However, as companies such as Huawei seek to expand overseas, foreign governments can play a more active role in rejecting those that participate in the Chinese Government’s repressive Xinjiang policies. [Source]
The Washington Post’s Anna Fifield also reported on ASPI’s findings:
Xinjiang Internet Police began working with Douyin, the local version of TikTok, last year and built a “new public security and Internet social governance model” in 2018. Then in April, the Ministry of Public Security’s Press and Publicity Bureau signed a strategic cooperation agreement with ByteDance to promote the “influence and credibility” of police departments nationwide, the ASPI experts said.
The agreement also reportedly says ByteDance will increase its offline cooperation with the police department, although the details of this cooperation are not clear.
[…] ByteDance has also been working with Xinjiang authorities under a program called “Xinjiang Aid,” whereby Chinese companies open subsidiaries or factories in Xinjiang and employ locals who have been detained in the camps. Its operations are centered on Hotan, an area of Xinjiang considered backward by the Communist Party and where the repression has been among the most severe.
ByteDance has been guiding and helping Xinjiang authorities and media outlets to use its news aggregation app and Douyin to “propagate and showcase Hotan’s new image,” according to the ASPI report. [Source]
Last month, Reuters reported on a memo by ByteDance founder Zhang Yiming which, without explicitly referring to obstacles in the U.S., urged staff to diversify TikTok’s markets, strengthen its capacities in “handling global public affairs,” and improve data protection. The report noted that “India is the main driver for TikTok’s new downloads this year.” At The Atlantic, Snigdha Poonam and Samarth Bansal reported on the platform’s explosive popularity in India, whose “results are both magical and nightmarish.”