YouTube’s tougher policies against election misinformation were followed by a sharp drop in the spread of false and misleading videos on Facebook and Twitter, according to new research released Thursday underscoring the video service’s power across social media.
Researchers at the Center for Social Media and Politics at New York University found a significant rise in YouTube videos about election fraud shared on Twitter immediately after the November 3 election. In November, those videos consistently accounted for nearly one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.
But the proportion of election fraud claims shared on Twitter dropped sharply after December 8. That day, YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By December 21, the proportion of election fraud content from YouTube shared on Twitter had dropped below 20 percent for the first time since the election.
The proportion fell further after January 7, when YouTube announced that any channel violating its election misinformation policy would receive a “strike,” and that channels receiving three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.
The trend was replicated on Facebook. A postelection surge in shares of videos containing fraud theories peaked at nearly 18 percent of all video shares on Facebook just before December 8. After YouTube introduced its stricter policies, the proportion declined sharply, before rising slightly ahead of the January 6 riot at the Capitol. After the stricter policies took effect on January 7, the proportion fell again, reaching 4 percent by Inauguration Day.
To reach their conclusions, the researchers collected a random sample of 10 percent of all tweets each day. They then singled out the tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using CrowdTangle, a Facebook-owned social media analytics tool.
From this large data set, the researchers filtered for YouTube videos about the election broadly and about election fraud specifically, using a set of keywords such as “Stop the Steal” and “Sharpiegate.” This allowed the researchers to estimate the volume of YouTube videos about election fraud over time and how that volume shifted in late 2020 and early 2021.
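The keyword-filtering step described above can be sketched roughly as follows. This is an illustrative assumption, not the researchers’ actual code: the function name, the matching logic, and any keywords beyond the two quoted in the article (“Stop the Steal,” “Sharpiegate”) are hypothetical.

```python
# Illustrative sketch of keyword-based filtering of video metadata.
# The keyword list is partly hypothetical; only the first two terms
# are quoted in the article.
ELECTION_FRAUD_KEYWORDS = [
    "stop the steal",
    "sharpiegate",
]

def mentions_election_fraud(title: str, description: str = "") -> bool:
    """Return True if any fraud-related keyword appears in the video's
    title or description (case-insensitive substring match)."""
    text = f"{title} {description}".lower()
    return any(keyword in text for keyword in ELECTION_FRAUD_KEYWORDS)

# Example: flag matching videos in a collection of (title, description) pairs.
videos = [
    ("STOP THE STEAL rally livestream", ""),
    ("Weeknight pasta recipes", "Quick dinners at home"),
]
flagged = [title for title, desc in videos
           if mentions_election_fraud(title, desc)]
```

A simple substring match like this over-counts (it cannot distinguish a video promoting a fraud theory from one debunking it), which is one reason studies of this kind typically pair keyword filters with manual validation.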
Misinformation has proliferated on the major social networks in recent years. YouTube in particular has lagged behind other platforms in cracking down on various types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has tightened its policies, banning all anti-vaccine misinformation and suspending the accounts of prominent anti-vaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.
Megan Brown, a research scientist at the NYU Center for Social Media and Politics, said it is possible that after YouTube banned the content, people could no longer share videos promoting election fraud. It is also possible that interest in election fraud theories dropped considerably after states certified their election results.
But the bottom line is that “we know these platforms are deeply intertwined,” Ms. Brown said. She pointed out that YouTube has been identified as one of the most-shared domains on other platforms, including in both Facebook’s recently released content report and the NYU team’s own research.
“It’s a big part of the information ecosystem,” said Ms. Brown, “so when YouTube’s platform gets healthier, so do others.”