Video sharing app TikTok will slap warning labels on videos it suspects contain “misinformation” and discourage users from sharing them. The move brings TikTok’s policies more closely in line with Twitter’s.
TikTok already removes videos that its fact-checkers deem to contain “false” information. However, the Chinese-owned company is expanding this policy, announcing on Wednesday that videos suspected of containing “misinformation,” but not proven to do so, will be restricted.
Starting on Thursday in the US and Canada, and later this month globally, suspect videos will be “flagged as unsubstantiated content,” and viewers attempting to share them will be reminded of this and offered a chance to cancel their share.
Twitter introduced a similar policy in the run-up to the 2020 US presidential election, labeling certain tweets (usually ones raising concerns about voter fraud) as “disputed” and limiting retweets in order to protect “the integrity of the election conversation.” More recently, Twitter unveiled ‘Birdwatch,’ a feature that lets certain verified users add notes to posts they identify as “misinformation.” Amid cries of censorship from conservatives, Twitter reportedly plans further crackdowns, following its permanent suspension of former President Donald Trump from its platform last month.
Under its community guidelines, updated in December, TikTok bans “misinformation that incites hate or prejudice, misinformation related to emergencies that induces panic, medical misinformation, content that misleads community members about elections,” and “conspiratorial content that attacks a specific protected group.”
Even before last year’s election, TikTok banned “misinformation” related to Covid-19 and climate change, and in August partnered with PolitiFact and Lead Stories (both of which have been accused of bias by conservatives) to screen out election-related wrongthink before the vote in November.
In its announcement on Wednesday, TikTok said that its new labeling system decreased the rate at which users shared flagged videos by 24 percent, and reduced ‘likes’ on those videos by 7 percent. Twitter reported a similar drop in sharing when it restricted the retweeting of “misinformation” before the election.