
Facebook Civil Rights Audit Targets Nazi Dog Whistles Over Anti-Trust

It’s no secret that Facebook isn’t perfect. At the beginning of July, the platform released its major civil rights audit, which it claims is the “first step” in strengthening the site’s “long-term accountability.” Despite contributions from more than 90 civil rights organizations and experts on policy, product and enforcement, all steered toward zeroing in on the trend of Nazi dog whistles, the monopoly still has an elephant-sized blind spot that remains unseen by design.

The audit was launched over a year ago, following countless media scandals over data privacy, the enabling of political extremism and propaganda used to sway elections towards elitist candidates. This monopoly force, home to over 2.5 billion users and a 75% market share of all social media, is so big that real reform was bound to fail. Facebook, while conceding some ground to its civil rights watchdogs, shows it wants the unethical fun times to keep on rolling.

The 30-page report is split into four sections on content moderation, advertising, elections and discrimination, and it conveniently focuses on flawed business on the platform rather than the platform’s own inherent business flaws. Facebook critics, myself included, would argue this puts the cart before the horse. However, Facebook’s ignorance of its own power didn’t stop executives from playing both revolutionary reformer and realist warrior (a contradiction they hope you won’t notice). In reality, these fun times also require playing PR accountability dodger over scandals like Cambridge Analytica.

“When you scale a sizeable part of Mount Everest, are you making progress? Yes, but have you reached the summit? No. So we haven’t reached the summit by any means, but we have really put a few stakes in the ground, and gotten the company to understand that these are platform-wide concerns and that civil rights values and principles and laws should be applied across the platform,” Laura Murphy, the civil rights and civil liberties advocate leading Facebook’s audit, told Vice Motherboard in a phone call. 

Sadly, Facebook’s mountain is more of a molehill when compared to its wider issues. These include its unsecured database of 1.5 million emails and passwords, its decision to give advertisers access to users’ “shadow contact info” and device makers access to private profiles, the site’s spyware VPN used to secretly surveil minors, its media suppression of independent outlets following mainstream partnerships, its non-consensual facial recognition database, its political advertising systems with disastrous fraud problems and even its bribing of Wikipedia editors to cover up such scandals from the public. Countless other examples of Facebook’s overreach can be found throughout my profile history.

This is all before any discussion of Facebook’s alleged “political bias.” Even on the most basic concerns surrounding hate speech, audit contributors expressed their distaste for Facebook’s “permissive attitude” towards hateful and abusive content, which is said to largely target “people of color and religious minorities.” They argue the platform needs to expand its ban on explicit white nationalism to also cover content that ideologically supports it, even when the terms “white nationalism” or “white separatism” are not used.

This follows Facebook’s own admission that it could have prevented thousands of needless deaths with stronger content moderation during the Myanmar genocide, the landscape for a state-religious war waged by the government against the minority Muslim Rohingya people. At the time, Facebook only admitted a “need to do more to ensure we are a force for good in Myanmar,” while offering little to no direct solutions. Domestically, Facebook has only agreed to half-measures against fascist-aligned actors while singing the songs of reform.

“Without an active commitment by tech companies to understand these impacts, patterns of discrimination and bias can spread online and become more entrenched,” Murphy writes in the report introduction. “When you consider all of the groups that are subject to discrimination, it is clear that civil rights issues don’t concern one set of people or one community but they affect everyone and speak to our shared values. As such, being responsive to civil rights concerns is not just a moral imperative but a business imperative for all online platforms.”

Where do these imperatives lead exactly? 

For starters, the report concludes that Facebook will combat discrimination by creating a “searchable database” for credit and employment adverts nationwide. According to Vice, this follows several discrimination lawsuits which found that Facebook customers were allowed to target adverts on explicitly racial grounds, violating protections under the Civil Rights Act. Facebook was forced to settle with the National Fair Housing Alliance, the Communications Workers of America and the American Civil Liberties Union; only under that pressure did it agree to reform.

The audit argues the policy could still be gamed by racists despite its good and clear intentions. “The narrow scope of the policy leaves up content that expressly espouses white nationalist ideology without using the term ‘white nationalist.’ As a result, content that would cause the same harm is permitted to remain on the platform,” the report reads. If Facebook stands by its ruling that such ideologies are “inherently hateful,” and enforcement is meant to hurt a market of sneaky racists for the sake of common morality, then the company must decide whether it will judge racist intentions as much as it judges racist words.

This, of course, assumes Facebook will even judge cases manually rather than leave the task to bots. According to company estimates, this centralized network for billions enforces its policies through roughly 7,500 human moderators, each deciding actions on posts surfaced either by the artificial intelligence algorithm or by user reports.

This AI racket can easily suppress porn, spam and fake accounts, relying on an already sketchy practice of metadata searches and content surveillance, but it’s not good at interpreting meaning. If tone-deaf bots can’t parse common English and normal users can’t get their voices heard, cases of hate simply never reach the higher-ups in Facebook enforcement. With basic non-protections like these, who needs enemies?

None of this is to say the audit should be dismissed just because it falls short of complete satisfaction. “It’s still a public record of accountability,” said Rashad Robinson, activist and president of racial justice organization Color Of Change, speaking to Motherboard. “It’s Facebook being open and transparent about what they’ve done, and it allows us, the public, advocacy organizations, and people who do public interest work to examine the space between what they said they’ve done, and what the experience is for everyday people. That is really important.”

“This is why we led the call for the audit in the first place,” added Carolyn Wysinger, another Color Of Change activist who has claimed she’s repeatedly had posts removed by Facebook when discussing racism on the platform. “When people say that something is happening to them, people don’t believe them. If we’re saying, hey, we’re getting disproportionately banned by Facebook, people are like: no you’re not; that’s not true, everybody gets banned.” With this database in hand, showing the evidence to watchdogs and lawmakers upon inquiry, users can at least make better arguments in the public space instead of relying on Facebook’s own selective narratives.

However, the goal of the audit should be to give people equal standing with the platform, not concessions that should have been there from day one. “A real civil rights audit should tell the public exactly where the deficiencies in removing hateful activity and hate content creators have been found and how they can and will be closed. The document released by the company falls far short of that need,” argued Henry Fernandez, a senior fellow at the Center for American Progress and member of Change the Terms, who spoke with Gizmodo after its release.

In turn, Facebook showed it really is putting the cart before the horse. “Getting our policies right is just one part of the solution,” Facebook said in its counter-statement. “We also need to get better at enforcement — both in taking down and leaving up the right content.” It should go without saying that you cannot get better at enforcement without concrete policies to enforce, and even the policies it has are, per the report, “too narrow” for any AI or coder to handle. When a company runs on CEOs and their fluffy words of reform, offering no concrete principles for the team, it is bound to have no ethics by design. Facebook’s power is too important for such carelessness, and people must decide whether that power is better decentralized, nationalized or democratized going forward.

“The murder of 51 Muslims in Christchurch, broadcast all over the world on Facebook Live, made it clear that this is a life and death matter, and still, the company has yet to take serious action to protect our community,” said Madihha Ahussain, Muslim Advocates’ special counsel for anti-Muslim bigotry. “Facebook’s announcement that it will convert an ad hoc, interdepartmental collaboration of current staff tasked with addressing civil rights concerns into a permanent configuration will not result in meaningful change,” she said after an announcement that Facebook intends to “institutionalize” a Civil Rights Task Force. “Facebook’s so-called audit is simply too heavy on platitudes and not comprehensive enough.”

You can read the report for yourself in full here.
