Facebook Expands Anti-Terror Effort Amid Hate Propaganda Scandals

On Tuesday, Facebook disclosed new reforms that seek to reduce the spread of terrorist recruitment, ideological hate, and disinformation campaigns on social media. Given the world’s most popular platform’s reliance on controversial engagement and profitable propaganda, both private and state-sponsored, Facebook still has a long way to go before it can actually curtail its shady business model in favor of the social good.

In the wake of emerging big tech investigations on Capitol Hill, Facebook has decided to grab the media spotlight by announcing its efforts to curb “coordinated inauthentic behavior” and “improve how we combat terrorists, violent extremist groups and hate organizations” on the platform.

These include an expanded definition of terrorist organizations, increased resources for user deradicalization, AI systems to detect and block live videos of shooters, and stricter policies on advertising and media amid disinformation efforts from Iraq, Ukraine, Russia, and private-sector actors who were able to change news headlines without a journalist’s knowledge.

According to CBC News, advertisers were able to rewrite headlines when sharing news stories, giving users the impression the tailored headline was part of the original piece. This policy, which Facebook has announced is under review, underscores how easily the platform can be abused for misinformation, whether for commercial or hate-based ends.

“This is essentially… it would be a situation where Facebook is hosting and allowing and accepting monies for a misleading, essentially, political ad or a political weapon, at that point,” said Jennifer Grygiel, an associate professor of communications at Syracuse University. Given that Facebook has over 2.5 billion users and a monopolistic 75 percent share of the social media market, this political weapon is certainly no joke, especially when placed in the hands of extremists ready to strike at the culture.

From its failure to remove footage of the Christchurch terror attack that killed 51 people to its role in enabling incitement of violence in the Myanmar genocide, Facebook has acknowledged that this history with terror has “strongly influenced” its latest updates, revealing it has also adopted an “industry action plan” with fellow tech giants. Alongside Microsoft, Twitter, Google, and Amazon, Facebook laid out a nine-point plan to address active events and hate agents.

These proposed solutions include “shared technology development” between all companies; undisclosed “crisis protocols” with “industry, governments, and NGOs” so their “data sets” are “shared, processed, and acted upon by all stakeholders with minimal delay”; support for “relevant research” in the areas of “bigotry and hatred”; and “publishing on a regular basis transparency reports regarding detection and removal of terrorist or violent extremist content.” The plan gives no indication whether such a database would extend to banned users who haven’t committed legitimate crimes but merely infringed terms of service.

In a report from The New York Times, Facebook touted its response to terrorist content as a success, claiming it has been able to detect and delete 99 percent of extremist posts, some 26 million pieces of content, before they were even reported. The report notes other comments clarifying that this claim covers only Islamist militants and white supremacists engaged in violence, not separatists or preachers who merely share the hateful ideology.

Even so, the company says it is working with American and British law enforcement officials to improve its detection, using first-person surveillance footage and training programs from western countries to give the AI more informative context. Given that AI is reportedly responsible for Facebook’s self-acclaimed record, compared to its limited team of 350 admins, there is some reason to doubt the programs have been entirely on the ball, doubts that an independent inquiry into admin practices might well confirm.

This isn’t to say Facebook has entirely failed. Since March, Facebook has directed highly engaged users who search for terms associated with white supremacy and white nationalism to resources from Life After Hate, an organization founded by former violent extremists, a program that has since been expanded to Australia and Indonesia following Christchurch. “This is a fight that happens on a daily basis,” Facebook’s Global Director for Counterterrorism Policy Brian Fishman told AAP. “We see bad actors trying to circumvent the techniques that we’ve put in place to identify them, and we change what we’re doing as a response to those techniques.”

Whether those techniques sit within a transparent framework is up to Facebook. Just this past weekend, more than 200 accounts were banned over allegations of promoting white supremacist groups. “While our previous definition focused on acts of violence intended to achieve a political or ideological aim, our new definition more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify,” Facebook stated in the blog post, despite providing no evidence to substantiate these claims before or even after taking action.

While some of these updates have rolled out over the past few months, Facebook remains one of the most secretive platforms when it comes to the very issues it claims have been “widely discussed” by the public. “We know that bad actors will continue to attempt to skirt our detection with more sophisticated efforts,” the company concluded, “and we are committed to advancing our work and sharing more progress.” The decision to police the platform with absolute impunity, free from outside contradiction or any means of verifying it is correct, illustrates the problem with operating these weapons behind closed doors.
