By Isabelle Wilson
An investigation by Global Witness and the Cybersecurity for Democracy (C4D) team at NYU Tandon, which examined Facebook, TikTok, and YouTube’s ability to detect and remove election disinformation, has uncovered alarming failings by the platforms in the run-up to the US midterm elections.
The investigation revealed starkly contrasting results in the social media giants’ ability to detect and act on election misinformation. TikTok fared the worst: the platform, which does not allow political ads, approved a full 90% of the ads containing outright false and misleading election information.
Facebook approved a significant number of similarly inaccurate and false ads, while YouTube detected and rejected every single such ad submitted. YouTube also suspended the channel used to post the test ads.
The experiment, designed to determine how well social media platforms are living up to their promises to stop disinformation that can destabilise democratic processes, submitted 20 ads to each of the three platforms in both English and Spanish, targeted at “battleground” states such as Arizona, Colorado, and Georgia.
Although TikTok has explicitly banned political ads, the platform nevertheless approved ads containing inaccurate claims, including ads stating that voting days would be extended, that votes cast in primaries would automatically be counted in the midterms, and that social media accounts could be used for voter verification. TikTok also approved ads that dismissed the integrity of the election, suggested results could be hacked or were already pre-decided, and discouraged voters from turning out.
Facebook was only partially effective in detecting and removing the problematic election ads. Only YouTube succeeded both in detecting the ads and in suspending the channel carrying them, though this stands in glaring contrast to the platform’s record in Brazil, where similar ads were approved.
Our investigation tested whether three of the most widely used social media platforms in the United States – Google’s YouTube, Meta’s Facebook, and TikTok – were able to detect election-related disinformation in ads in the run-up to the midterm elections on Tuesday 8th November. Election disinformation dominated the 2020 US elections, particularly content that aimed to delegitimise the electoral process and result, and there are widespread fears that such content could overshadow the vote again this year.
All ad content tested by Global Witness and C4D contained outright false election information (such as the wrong election date) or information designed to discredit the electoral process, thereby undermining election integrity. The experiments were conducted using English and Spanish language content. We did not declare the ads as political, nor did we go through an identity verification process. All of the ads we submitted violated Meta’s, TikTok’s and Google’s election ad policies.
The one ad TikTok rejected – which claimed that voters must be vaccinated against COVID to be allowed to vote – was accepted for publication by Facebook.
Jon Lloyd, Senior Advisor at Global Witness, said:
“This is no longer a new problem. For years we have seen key democratic processes undermined by disinformation, lies and hate being spread on social media platforms – the companies themselves even claim to recognise the problem. But this research shows they are still simply not doing enough to stop threats to democracy surfacing on their platforms.”
“Coming up with the tech and then washing their hands of the impact is just not responsible behaviour from these massive companies that are raking in the dollars. It is high time they got their houses in order and started properly resourcing the detection and prevention of disinformation, before it’s too late. Our democracy rests on their willingness to act.”
Damon McCoy, co-director of C4D, said:
“So much of the public conversation about elections happens now on Facebook, YouTube, and TikTok. Disinformation has a major impact on our elections, core to our democratic system. YouTube’s performance in our experiment demonstrates that detecting damaging election disinformation isn’t impossible. But all the platforms we studied should have gotten an ‘A’ on this assignment. We call on Facebook and TikTok to do better: stop bad information about elections before it gets to voters.”
YouTube’s success shows that it is possible to detect and stop election disinformation, making Facebook’s and TikTok’s failures all the more troubling. Meanwhile, in a similar experiment carried out ahead of elections in Brazil, Facebook and YouTube accepted every single ad containing disinformation, demonstrating that enforcement outside the US is severely lacking.
The findings come amid widespread concern that social media disinformation has plagued every major US election since at least the 2016 presidential vote, as well as votes around the world in recent years.
Similar Global Witness investigations recently showed that 100% of election disinformation ads tested in Brazil and 100% of hate speech ads tested in Myanmar, Ethiopia, and Kenya made it through Facebook’s approval process.
A Meta spokesperson said in response to our experiments that they “were based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world”, adding that Meta’s ad review process has several layers of analysis and detection, and that the company invests significant resources in its election integrity efforts. TikTok reaffirmed that they “prohibit and remove election misinformation and paid political advertising”, whilst Google have not yet responded.
Following these findings, Global Witness and C4D are calling on the companies to act by taking the following steps:
Meta (Facebook) and TikTok:
Urgently increase content moderation capabilities and ensure these are applied equally all over the world, including paying content moderators a fair wage and protecting their workers’ rights
Expand content moderation capabilities all over the world, so that the experience in the US is the norm everywhere, rather than the exception
All three companies:
Regularly assess, mitigate and publish the risks that their services pose to human rights and democratic processes, and publish information on the steps they are taking in each country to address those risks