Google’s 4-Step Approach to Controlling Terrorist Content on YouTube


Google has just announced a four-step approach to controlling terrorist content on YouTube, the world’s most popular streaming video platform, in addition to its ongoing efforts and collaborations. After an advertising boycott saw some of Alphabet’s biggest clients pull portions of their business from YouTube, Google has taken a series of steps to address the issue. This is its latest move against extremist and terror-related video content on the platform.

The first step is the use of machine learning to train new “content classifiers” that help moderators decide whether a piece of content is related to terrorism or extremism.
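To illustrate the general idea of a classifier that assists human moderators, here is a minimal sketch. Every name, term, weight, and threshold below is invented for illustration; Google’s actual classifiers are proprietary machine-learning models trained on far richer signals than keyword scores.

```python
# Illustrative sketch only: a toy scorer that routes videos to auto-flag,
# human review, or pass. The terms and thresholds are hypothetical.

def score_metadata(text, term_weights):
    """Sum the weights of known flagged terms appearing in the text."""
    words = text.lower().split()
    return sum(term_weights.get(w, 0.0) for w in words)

def classify(text, term_weights, auto_threshold=0.9, review_threshold=0.4):
    """Route a video: auto-flag, send to a human moderator, or pass."""
    score = score_metadata(text, term_weights)
    if score >= auto_threshold:
        return "flagged"          # high confidence: policy action
    if score >= review_threshold:
        return "human_review"     # nuanced case: a moderator decides
    return "ok"

# Hypothetical weights a trained model might assign to terms.
weights = {"propaganda": 0.5, "recruitment": 0.5, "tutorial": 0.3}

print(classify("weekly cooking tutorial", weights))         # ok
print(classify("recruitment propaganda channel", weights))  # flagged
```

The middle band is the key design point: rather than forcing a binary decision, borderline scores are routed to human reviewers, matching the division of labor the announcement describes.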

Google is also upping the number of independent experts reviewing YouTube content as part of YouTube’s Trusted Flagger program.

“Human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech,” said Google in a blog post, adding that these experts were 90 percent accurate at identifying content that violates YouTube’s current content policies. Fifty new organizations are being added to the 63 already part of the Trusted Flagger program.

The third change concerns content that does not clearly violate YouTube’s terms of service, such as videos containing religious or supremacist content. Google intends to place interstitial warnings on such content, which will essentially let the uploader know that such videos “will not be monetised, recommended or eligible for comments or user endorsements.”

The fourth effort is much broader and aimed at counter-radicalization. Working with Jigsaw, Google plans to expand its existing ‘Creators for Change’ program and implement the “Redirect Method”, which targets “potential ISIS recruits” and “redirects them towards anti-terrorist videos that can change their minds about joining.”

We expect that Google will be rolling out these initiatives on YouTube over the coming months. The world’s top tech companies are already working together to share technologies that can help themselves and smaller players address the problem of extremist content, with Microsoft, Facebook and Twitter already part of an international forum.


Source: Google