YouTube is banning supremacist videos as part of a crackdown on hate content that will see thousands of videos removed from the site.
The online giant has come under growing pressure to police the footage it hosts, following a number of complaints about the amount of far-right content and other extremist videos being shared on the site. The site’s recommendation algorithms, which are designed to serve users videos that will keep them engaged and online, have also been called into question, as they have resulted in extremist videos and misinformation being recommended to unsuspecting audiences.
YouTube says that videos that violate its policies are removed “faster than ever and users are seeing less borderline content and harmful misinformation”. In 2017, it introduced a tougher stance towards videos with supremacist content, limiting recommendations and disabling features like comments and the ability to share the video, which reduced views by an average of 80 per cent. Now, it’s going further by specifically prohibiting videos that allege a group is superior in order to justify discrimination, segregation or exclusion based on qualities such as age, gender, race, caste, religion, sexual orientation or veteran status. This would include, for example, videos that promote or glorify Nazi ideology, or content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.
“We recognize some of this content has value to researchers and NGOs looking to understand hate in order to combat it, and we are exploring options to make it available to them in the future. And as always, context matters, so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events. We will begin enforcing this updated policy today; however, it will take time for our systems to fully ramp up and we’ll be gradually expanding coverage over the next several months,” says YouTube.
YouTube also wants to reduce the spread of content that comes right up to the line. In January, it piloted an update of its systems in the USA to limit recommendations of borderline content and harmful misinformation, such as videos promoting a phoney miracle cure for a serious illness, or claiming the earth is flat. The company says it is looking to bring this updated system to more countries by the end of 2019. Thanks to this change, the number of views this type of content gets from recommendations has dropped by over 50 per cent in the USA.
“Our systems are also getting smarter about what types of videos should get this treatment, and we’ll be able to apply it to even more borderline videos moving forward. As we do this, we’ll also start raising up more authoritative content in recommendations,” says YouTube. “For example, if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the ‘watch next’ panel.”
“The openness of YouTube’s platform has helped creativity and access to information thrive. It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence. We are committed to taking the steps needed to live up to this responsibility today, tomorrow and in the years to come,” concludes the site in a statement made today.