Facebook AI removes terror posts even before they are flagged

Using Artificial Intelligence (AI) and Machine Learning (ML) techniques, Facebook now removes nearly all Islamic State (IS) and Al Qaeda-related terror content from its platform before anyone flags it, the social media giant said on Wednesday.

“Today, 99 per cent of the IS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site,” Monika Bickert, Head of Global Policy Management at Facebook, wrote in a blog post.

Facebook does this primarily through automated systems such as photo and video matching and text-based machine learning.
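
The article does not describe these systems in detail, but the photo-matching idea can be sketched with a simple “perceptual hash”: shrink an image, compare each pixel with the mean brightness, and treat near-identical bit patterns as copies of the same picture. The Python below is a minimal, hypothetical illustration under those assumptions, not Facebook’s implementation; production systems use far more robust hashing.

```python
# Minimal, hypothetical sketch of perceptual-hash photo matching (an
# "average hash"): shrink the image, compare each pixel to the mean
# brightness, and compare the resulting bit patterns.
from PIL import Image


def average_hash(img, hash_size=8):
    """Return a 64-bit average hash of a PIL image."""
    small = img.convert("L").resize((hash_size, hash_size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)


def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")


def is_known(img, known_hashes, threshold=5):
    """True if the image is within `threshold` bits of any known hash."""
    h = average_hash(img)
    return any(hamming(h, k) <= threshold for k in known_hashes)


# Demo with synthetic images; in practice the known hashes would come from
# previously removed content or partner-supplied files.
banned = Image.new("L", (64, 64))
banned.putdata([4 * (i % 64) for i in range(64 * 64)])      # horizontal gradient
reupload = banned.resize((128, 128))                        # a resized copy
unrelated = Image.new("L", (64, 64))
unrelated.putdata([4 * (i // 64) for i in range(64 * 64)])  # vertical gradient

known = {average_hash(banned)}
print(is_known(reupload, known))    # True  (copy is caught despite resizing)
print(is_known(unrelated, known))   # False (different image)
```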

“Once we are aware of a piece of terror content, we remove 83 per cent of subsequently uploaded copies within one hour of upload,” added Brian Fishman, Facebook’s Head of Counterterrorism Policy.

Deploying AI for counterterrorism is not as simple as flipping a switch.

A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda.
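
That group-specificity is easiest to see with a text model: a classifier only learns the vocabulary and phrasing it is shown during training. The sketch below, with entirely invented snippets and labels and scikit-learn used purely for illustration (the article does not say what tooling Facebook uses), shows the general shape of such a text classifier.

```python
# Hypothetical sketch of a text classifier; the snippets and labels below
# are invented placeholders, not real training data. The model only learns
# the words it is shown, which is why a classifier tuned to one group's
# propaganda does not automatically transfer to another's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "example propaganda phrase one",     # 1 = violating content
    "example propaganda phrase two",     # 1
    "ordinary news discussion",          # 0 = benign content
    "holiday photos with friends",       # 0
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Shares vocabulary only with the benign examples, so it is likely scored 0.
print(model.predict(["an ordinary post about the news"]))
```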

The company is currently focusing these techniques on content from IS, Al Qaeda and their affiliates.

The use of AI against terrorism is increasingly bearing fruit, but ultimately it must be reinforced with manual review by trained experts, Facebook said.

“To that end, we tap expertise from inside the company and from the outside, partnering with those who can help address extremism across the Internet,” the company noted.

Facebook has announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT), through which it is working with Microsoft, Twitter and YouTube to fight the spread of terrorism and violent extremism across their platforms.

GIFCT has already brought together more than 50 technology companies over the course of three international working sessions.

“Through GIFCT, we also engage with governments around the world and are preparing to jointly commission research on how governments, tech companies and civil society can fight online radicalisation,” Facebook said.

Facebook has also forged partnerships with several organisations with expertise in global terrorism or cyber intelligence to help in its efforts.

These partners include Flashpoint, the Middle East Media Research Institute (MEMRI), the SITE Intelligence Group, and the University of Alabama at Birmingham’s Computer Forensics Research Lab.

These partners flag Facebook Pages, profiles and groups that are potentially associated with terrorist organisations.

“These organisations also send us photo and video files associated with IS and Al Qaeda that they have located elsewhere on the Internet, which we can then run against our algorithms to check for file matches to remove or prevent their upload to Facebook altogether,” the blog post read.
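
As a rough illustration of that workflow, the sketch below ingests partner-supplied files into a set of digests and checks each new upload against it. The file names are hypothetical, and real matching is almost certainly perceptual rather than the exact byte-for-byte hashing shown here, so that re-encoded copies still match.

```python
# Hypothetical sketch of checking uploads against partner-supplied files.
# File names are placeholders; exact SHA-256 digests are used here for
# simplicity, whereas real matching would need to be robust to re-encoding.
import hashlib


def file_digest(path, chunk_size=8192):
    """SHA-256 digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Digests built from photo and video files supplied by partner organisations.
partner_files = ["partner_supplied_video.mp4", "partner_supplied_photo.jpg"]
blocklist = {file_digest(p) for p in partner_files}


def should_block(upload_path):
    """True if the uploaded file matches a partner-supplied file exactly."""
    return file_digest(upload_path) in blocklist
```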
