The idea that TikTok could be used to spread pro-China messaging to American citizens, especially young people, has been a key factor in Congress’s efforts to ban the app. During congressional testimony in March, FBI Director Christopher Wray warned of China’s ability to “conduct influence operations” on TikTok, saying those efforts would be “extraordinarily difficult” to detect.
The NCRI researchers acknowledged that their study does not show “definitive proof” that the Chinese government or TikTok employees have intentionally manipulated the algorithm; hashtags, they noted, are added to content by users themselves.
The analysis builds on NCRI’s previous findings that TikTok amplifies or demotes content based on whether it aligns with the interests of the Chinese government. That report was cited heavily by US politicians who see the app as a threat to national security. TikTok Chief Executive Officer Shou Zi Chew called that previous report misleading when questioned about the findings during a Senate hearing earlier this year.
TikTok pointed Bloomberg to a critique of that research published by the Cato Institute, a libertarian, free-market think tank. (One of the Cato Institute’s key donors and former board members, Jeffrey S. Yass, is also a significant shareholder in TikTok’s parent company, ByteDance.)
ByteDance and TikTok executives have repeatedly denied allegations that the Chinese government uses the social media app to disseminate propaganda, but those arguments have failed to placate US officials, who this year passed a law requiring ByteDance to divest TikTok or face a ban. TikTok has since sued the US government to overturn the law, arguing that Congress has not substantiated its claims that the app is a national security threat.
The NCRI is an independent non-profit organisation composed of political scientists, security experts and research analysts. The group receives funding from Rutgers University, the British government, and “private philanthropic families,” Finkelstein said.
To conduct the study, researchers collected more than 3400 videos related to the keywords ‘Uighur’, ‘Xinjiang’, ‘Tibet’ and ‘Tiananmen’, terms the researchers consider important to the Chinese government’s messaging. They searched for each keyword on TikTok, Instagram and YouTube and viewed the first 300 or so videos that were displayed.
From there, each video was classified as either pro-China, anti-China, neutral or irrelevant by up to three human reviewers. Researchers pointed out that their classification of content as pro-China or anti-China involved “subjective judgment.” They further cautioned that “although efforts were made to minimise bias, the potential for interpretative differences remains”.
Videos that highlighted Uighurs’ plight in China, mentioned Tibetan liberation or contained imagery based on the massacre at Tiananmen Square were classified as anti-China content by reviewers.
Official CCP promotional messages, messages promoting the narrative that Tibet has been liberated, and patriotic images of Tiananmen Square with no mention of the massacre were considered pro-China content.
The analysis found that, of the three platforms, TikTok returned the highest proportion of pro-China content for searches of the words ‘Tibet’ and ‘Tiananmen’.
More than 25 per cent of search results for ‘Tiananmen’, for example, were considered pro-China, which researchers defined as patriotic songs, travel promotions or scenic representations that make no mention of the 1989 massacre there. In comparison, only about 16 per cent of search results on Instagram were pro-China, and about 8 per cent on YouTube. A spokesperson for Instagram declined to comment. YouTube representatives didn’t immediately respond to a request for comment.
In some cases, Instagram and YouTube showed higher rates of pro-China content than TikTok. For ‘Uighur’ and ‘Xinjiang’, about 50 per cent of searches on YouTube returned pro-China content, compared with less than 25 per cent on TikTok. Researchers attributed the results to a handful of influential accounts created by, or affiliated with, state actors.