TikTok’s Failure to Detect Political Disinformation in Ads

A recent report by the nonprofit Global Witness revealed that TikTok approved advertisements containing election disinformation despite its ban on political ads. The group ran a test to assess how effectively social media platforms detect election misinformation, submitting the same set of ads to several services. TikTok approved four of the eight ads containing false information about the election, even though political ads have been banned on the platform since 2019.

Although the ads never appeared on TikTok, because Global Witness withdrew them before they went online, the fact that they were initially approved raises concerns. TikTok spokesperson Ben Rathe stated, “We do not allow political advertising and will continue to enforce this policy on an ongoing basis.” By contrast, Meta Platforms Inc., the parent company of Facebook, approved only one of the eight submitted ads. Meta acknowledged the report’s limitations but emphasized its commitment to evaluating and enhancing enforcement efforts.

Among the platforms tested, Google’s YouTube performed best: although it initially approved four ads, it prevented them from being published. YouTube requested additional identification from the Global Witness testers before it would publish the ads, and paused their account when the requirement was not met. It remains unclear, however, whether the ads would ultimately have run if the identification had been provided.

Typically, companies have stricter policies for paid ads compared to regular user-generated content. The ads submitted by Global Witness contained false claims about the election and misinformation aimed at suppressing voting or inciting violence. By translating the ads into “algospeak,” which involves substituting numbers and symbols for letters, the group attempted to bypass text-focused content moderation systems used by internet companies.
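To illustrate why this tactic can defeat simple text matching, here is a minimal sketch. The substitution map and the keyword filter below are hypothetical, constructed for illustration only; they are not the actual methods used by Global Witness or by any platform's moderation system.

```python
# Hypothetical "algospeak" substitution: look-alike digits replace letters,
# so a naive filter matching exact banned phrases no longer fires.
SUBSTITUTIONS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def to_algospeak(text: str) -> str:
    """Replace selected letters with look-alike characters."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text.lower())

def naive_filter(text: str, banned_phrases: list[str]) -> bool:
    """Return True if any banned phrase appears verbatim (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in banned_phrases)

banned = ["election is rigged"]          # hypothetical banned phrase
plain = "The election is rigged"
obfuscated = to_algospeak(plain)         # "th3 3l3ct10n 15 r1gg3d"

print(naive_filter(plain, banned))       # True  -> flagged
print(naive_filter(obfuscated, banned))  # False -> slips past the filter
```

Defeating this kind of evasion typically requires normalizing look-alike characters back to letters before matching, or classifiers trained on obfuscated text, rather than verbatim phrase lists.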

It is essential for social media platforms to strengthen their content moderation systems to combat the spread of misinformation, especially during critical events like elections. While some progress has been made, the TikTok findings highlight the need for continuous improvement in detecting and preventing false information spread through ads. By enhancing detection mechanisms and enforcing existing policies consistently, social media companies can contribute to a more trustworthy online environment.