Google’s YouTube unveiled a new policy in its latest attempt to clean up the content of the popular video platform. The policy bans videos “alleging that a group is superior in order to justify discrimination, segregation or exclusion,” as well as those that deny violent events happened, such as the Holocaust or the mass shooting at Sandy Hook Elementary School. The policy covers discrimination based on age, gender, race, caste, religion, sexual orientation and veteran status. With this policy in place, YouTube has begun to remove thousands of videos to rid its site of bigotry, extremism and hate speech.
The New York Times reports that, although “YouTube did not name any specific channels or videos that would be banned … numerous far-right creators began complaining that their videos had been deleted, or had been stripped of ads, presumably a result of the new policy.” Facebook did something similar last month, banning the accounts of Infowars’ Alex Jones and six other controversial users; Twitter also barred Jones. President Trump has also claimed that social media platforms “censor right-wing opinions.”
Judging what violates its policies put YouTube at the center of controversy when right-wing commentator Steven Crowder initially went unpunished for repeatedly insulting Vox reporter Carlos Maza with slurs about his ethnic heritage and sexual orientation. The next day, YouTube reversed itself, saying that Crowder had violated its rules and that it would suspend ads on his channel.
“Making rules is often easier than enforcing them,” notes The New York Times. YouTube’s massive scale also makes it difficult to “track rule violations.” Channels that “post some hateful content, but that do not violate YouTube’s rules with the majority of their videos” would not necessarily be banned, but rather given a strike under YouTube’s three-strike enforcement system. Other channels that “repeatedly brush up against our hate speech policies” will not be able to generate advertising revenue.
YouTube also changed its recommendation algorithms in January, in order to “recommend fewer objectionable videos.” A company spokesperson said this has “resulted in a 50 percent drop in recommendations to such videos in the United States.” Twitter is conducting a study on “whether the removal of content is effective in stemming the tide of radicalization online.”
The Wall Street Journal reports YouTube admitted that some of the objectionable content “has value to researchers and NGOs looking to understand hate in order to combat it, and … [is] exploring options to make it available to them in the future.” It also notes more specifics of why the public is angry “over the prevalence of toxic content online.” According to the Anti-Defamation League, 37 percent of Americans “experienced online hate and harassment in 2018,” with 17 percent of all users “encounter[ing] hate and harassment on YouTube, specifically.”
While Facebook, YouTube and Twitter have all attempted to corral or banish some content, their policies are not in lockstep. As a result, when a doctored video of House Speaker Nancy Pelosi (D-California) in which she appeared to be drunk surfaced, YouTube removed it, but Facebook did not, instead only slowing its spread. YouTube has also said that “some videos could remain up … if they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events.”
YouTube Says Homophobic Taunts Don’t Violate Its Policies, TechCrunch, 6/5/19