YouTube’s Child Protection Program Has Censorship Problems

Google, the search engine monopoly always watching over us, now has more than 100 government agencies and private sector organizations assisting in the regulation of content posted on its video platform, YouTube. Intended to take down extremist content, from Jihadist material to the child-exploitation videos exposed by #ElsaGate, the Trusted Flaggers program, started in 2012, is now giving political capital to crusaders against the perpetually ill-defined “hate speech.”

The Daily Caller, a staunchly right-wing publication, reports that numerous confidentiality agreements prohibit Google, YouTube’s parent company, from disclosing to the public any dealings between YouTube and these 100-plus agencies and organizations, according to a Google representative who spoke with the outlet in January.

Organizations that have gone public with their involvement in both problematic-content policing and counter-extremist monitoring include the Anti-Defamation League and a European anti-intolerance movement known only as No Hate Speech. The BBC noted in August of last year that these Trusted Flaggers aren’t just company employees but also unspecified law enforcement agencies and child protection charities. Those latter organizations have not been public about their connection to, and influence on, the site, having sworn Google to secrecy with ink, pen and mutually agreed confidentiality.

YouTube’s “Trusted Flaggers” program isn’t the only measure regulating internet content that ranges from extremism to mere words. Our mid-December report on TrigTent detailed how Susan Wojcicki, the chief executive of YouTube, planned to bring on well over 10,000 workers to process the removal of loosely defined extremist content that “endangers children.”

YouTube public policy director Juniper Downs told a Senate committee on Wednesday that 50 of the program’s 113 Trusted Flagger members joined in 2017, during the site’s “Adpocalypse,” when YouTube’s advertisers pressured the company to police content or risk having significant funds and campaigns withdrawn from the platform.

Downs’ account to the Senate committee — describing how Trusted Flaggers are equipped with digital tools allowing them to mass flag content for review by YouTube personnel — sounds awfully similar to a rejected YouTube policy (and now a dead meme) known as “YouTube Heroes.” The YouTube Heroes program applied the same principle of mass flagging content for review, the difference being that these tools were possessed by everyone on the platform. The program, after immediate criticism, was never put in place.

Critics argued from a “who will watch the watchmen” point of view, questioning the mob mentality that could devolve from a community-based system of majority censorship. The Trusted Flaggers program, by contrast, empowers a new private aristocracy, where the few are the final voice on what is considered oh-so-problematic. Among the concerned was University of Toronto professor Jordan B. Peterson.

A fierce critic of political correctness and seemingly authoritarian law, the clinical psychologist rose to prominence through his opposition to the potential consequences of the hate speech provisions in Canada’s Bill C-16, and ended up having one of his videos blocked across 28 countries in early January.

The video — a segment from the H3H3 podcast where Peterson, as a guest, outright rejected white supremacy — was falsely flagged by a legal entity under the ill-defined crime of “hate speech,” with the explanation sent to Peterson saying that he incited “terrorism.”

“Here’s some more ‘explanation’ for the censorship,” he tweeted. “Incitement of hatred, terrorist recruitment, incitement of violence, celebration of terrorism. Even to fall briefly and erroneously into such a category is a chilling event….”

When the removal was questioned by Ethan Klein, host of the same H3H3 podcast, the company responded with a tweet framing the block as a mistake rather than the formal legal complaint it had described to Peterson personally.

When Peterson asked for further clarification, the company did not respond, leaving us in the dark about what will count as the removable content of the week: Jihadi recruitment or a refutation of extremist worldviews? Arguments for white ethnostates or counterarguments for a liberal society? Simply searching for keywords, the typical approach of YouTube’s moderation algorithms, is not a tenable solution. And neither is handing the reins to an illiberal aristocracy with political goals of its own.

Both of these approaches result in outright removal or placement in “restricted” mode, which essentially filters content out of the reach of children and of users who are not signed into an account (the vast majority of users), leading to eventual demonetization that cuts a creator’s financial viability on the platform.

In her testimony before the Senate committee, Downs addressed concerns about combating “offensive” or “inflammatory” content that falls far from the tree of violent extremism and closer to disagreeable political incorrectness and vulgar humor.

“Some borderline videos, such as those containing inflammatory religious or supremacist content without a direct call to violence or a primary purpose of inciting hatred, may not cross these lines for removal. But we understand that these videos may be offensive to many and have developed a new treatment for them,” she said. “Identified borderline content will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes. Initial uses have been positive and have shown a substantial reduction in watch time of those videos.”

This leaves the likes of Peterson, without a platform to address smears, in an isolated zone ruled by either poor artificial intelligence or the digital aristocrats who watch over us.