The #YouTubeWakeUp movement, following recent coverage throughout the mainstream media, has certainly achieved its namesake goal. Once again, the big tech platform has found itself at the center of an administrative crisis after a new series of videos revealed how the website has been enabling a “soft-core pedophile ring” that publicly exploits minors for sexual-market gains. Since the scandal broke last week, YouTube has purged tens of millions of user comments and over 400 channels, showcasing its effort to root out the abusive black market being fostered across its domain.
It was previously reported by Bloomberg that some of YouTube’s biggest advertisers, such as Disney, McDonald’s, Nestlé and “Fortnite” developer Epic Games, began withholding marketing funds from the platform in light of video exposés published by content creator Matt Watson. The report demonstrated how the YouTube algorithm could lead someone from simply searching for bikini hauls featuring consenting adults to falling down a “wormhole” of disturbing videos featuring objectified children.
The material, while not directly sexual in nature, often features children simply talking to a camera, practicing gymnastics, dancing, trying on new clothes or sitting in suggestive positions that stimulate these dirty minds. Some of the videos were hosted on accounts with absolutely no association to the children’s families, meaning content was harvested without the families’ consent or knowledge while their children were exploited to garner an audience of pedophiles desperately organizing around their next fix. Anything goes in order to get their predatory rocks off.
These videos, inadvertently monetized by these brands, were often paired with perverted comment sections where users either lusted over the videos’ underage subjects or outright exchanged information to seemingly trade illegal child pornography. Fall into this rabbit hole and users were flooded with like-minded recommendations.
“Once you are in this loophole, there is nothing but more videos of little girls,” Watson explained. “How has YouTube not seen this? How is it that there are people who are genuinely good, where every algorithm under the sun detects when they swear more than two times or make videos about panic attacks and depression, yet this shit is going on? It’s been in the public consciousness for over two years and nothing is being done. It’s disgusting.”
This scandal didn’t fall on deaf ears, however. “Any content, including comments, that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube,” said YouTube spokeswoman Andrea Faville in a statement last Thursday. “We took immediate action by deleting accounts and channels, reporting illegal activity to authorities and disabling comments on tens of millions of videos that include minors. There’s more to be done, and we continue to work to improve and catch abuse more quickly.”
Following this response, YouTube reportedly began hosting conference calls with its various clients and agency partners to quell concerns about another #Adpocalypse situation, according to inside sources cited by Adweek. This seems credible, as the publication was able to obtain and verify a YouTube memo sent to advertisers detailing the platform’s new policy-based responses to child-exploitative content, echoing exactly the actions now being reported.
YouTube was quick to clarify how admins would enforce its content policies going forward, addressing #Adpocalypse-era criticism from consumers regarding its questionable promise to “increase creator accountability” in monetization. A YouTube spokesperson told The Verge that limits on advertising would only be a “short-term fix” while moderators evaluate whether a piece of content is becoming a hub for predatory behavior. It’s unclear whether this moderation will run through A.I. systems, which overwhelmingly account for the platform’s removal rates, or whether human evaluators will be prioritized in potential child-abuse cases.
Over the last 48 hours, YouTube’s cleanse has coincided with its partnership with the National Center for Missing and Exploited Children, an organization committed to identifying illegal activity against minors that, alongside the platform’s 10,000-strong Trusted Flagger program, is responsible for content oversight. There is a serious effort on YouTube’s part to purge these problems from its site. The concern that remains is how it will affect the do-gooders caught in the firing line.
It appears ad restrictions for “videos that include minors and are at risk of predatory comments” are merely temporary measures compared with the focus on video removals, channel removals, user bans across multiple accounts (pre- and post-termination) and disabled comments meant to shut down both creepy behavior and illegal trading. Could the site actually be prioritizing the safety of its users over the financial interests of disassociation-craving advertisers? Or are we just lucky that these market interests suddenly align against child predators?
There is an argument to be made that the new ad policy, while temporary, is still a universal punishment levied against families uploading child content with no malicious intent. It’s a fair observation, though the innocent aren’t without a means of protest. “Creators can appeal the decision,” the spokesperson explained, “and the limits are supposed to be lifted soon.” The source did not, however, provide a timeline for when the policy will end.
TrigTent has also repeatedly criticized YouTube’s questionable form of due process (or lack thereof) in its handling of political and social content. Unlike those prior instances, however, YouTube has no apparent conflict of interest that could result in unfair judgments against non-predators. Just don’t expect swift justice, considering the number of cases YouTube will surely be overseeing in the meantime. Even in a fair court system, 10 million comments alone are enough to create horrific gridlock in which good actors fall through the cracks. This could be where YouTube’s potential administrative failures lie: in its ineffectiveness at oversight, not its suspect intentions.
“YouTube appearing in the press again for predatory comments and practices of users of its platform is not a surprise,” one of the advertiser sources told Adweek. “When you’re dealing with a platform that generates 300-plus hours of video per minute, the realities of being able to check and verify the content become daunting. For anyone saying human vetting is the only way to go, be prepared for a vastly reduced YouTube with ‘waiting times’ and liberal arguments of censorship and free speech. If people are asking for machine learning to solve their problems, be prepared for issues like this to keep appearing. No amount of investment in people or technology will solve these issues for YouTube; it’s ingrained in the very DNA of the platform.”