Can AI stop extremists on social media?

By GRACE SMITH

A few days ago, the United Kingdom government unveiled a new technology, developed for 600,000 British pounds (about $843,834), that detects and flags videos containing extreme jihadist propaganda.

An image from Isis’s Dabiq propaganda magazine.

This is the first major step toward improving the automated flagging of inappropriate videos, which has become a major concern for both viewers and content creators.

This issue was first brought to light in 2016, when jihadist videos reached hundreds of thousands of views on YouTube.

At that point, the platform relied on viewers to flag content, which then went through individual review by YouTube employees. But as content creation has spiked in recent years, the review process has become inefficient and has drawn criticism.

In response, YouTube deployed an imperfect algorithm that flagged anything relating to violent acts, tragic events, or otherwise inappropriate content. As a result, many news-focused channels lost their funding, and creators became unable to discuss tragedies or even curse in videos without risking their income.

ASI Data Science’s new artificial intelligence has proven accurate, incorrectly flagging only 0.005 percent of non-IS-related videos. Major giants like Facebook and Google are meeting with the developers to discuss implementing the technology on their platforms.
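To put that 0.005 percent figure in context, here is a minimal sketch of how a classifier’s false-positive rate is measured; the data, counts, and function name are hypothetical illustrations, not ASI Data Science’s actual method.

```python
# Hypothetical sketch: measuring a content classifier's false-positive rate.
# The numbers below are illustrative, not ASI Data Science's real figures.

def false_positive_rate(predictions, ground_truth):
    """Fraction of benign items the classifier wrongly flags.

    predictions: list of bools (True = flagged by the classifier)
    ground_truth: list of bools (True = actually propaganda)
    """
    benign_flags = [pred for pred, truth in zip(predictions, ground_truth)
                    if not truth]  # classifier verdicts on benign items only
    if not benign_flags:
        return 0.0
    return sum(benign_flags) / len(benign_flags)  # True counts as 1

# Example: 1,000,000 benign videos, 50 of them wrongly flagged.
predictions = [True] * 50 + [False] * 999_950
ground_truth = [False] * 1_000_000  # none are actually propaganda
rate = false_positive_rate(predictions, ground_truth)
print(f"{rate:.3%}")  # prints 0.005%
```

Even a tiny error rate matters at platform scale: 0.005 percent of a million uploads is still 50 legitimate videos wrongly flagged.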

With the reveal of this technology also came the British government’s willingness to pass legislation making it a mandatory part of online platforms. Many social media sites have had major issues with violent, terrorist-focused pages and videos that use the platforms as hosts and even as recruitment grounds.

Sites such as Facebook, Twitter, and YouTube have worked to create blanket solutions but still come under fire for the inaccurate and ineffective results their algorithms produce.

UK man Shafi Mohammed Saleen, a prolific ISIS supporter convicted of spreading the terrorist group’s propaganda on Twitter.

‘Social media companies continue to get beat in part because they rely too heavily on technologists and technical detection to catch bad actors,’ says an expert on terror groups’ use of the internet at the Foreign Policy Research Institute.

As the popularity of social media continues to grow, so does the untraveled “Wild West” of the internet, and we continue to question how to handle it. The improvement in AI recognition seems like a step in the right direction, especially with the cooperation of internet giants like Facebook and Google.