"NotJordanPeterson.com" and the Complex Issues Behind Deepfake Legislation

When it comes to greater transparency, the internet both giveth and taketh away. The rise of the Deepfake, a portmanteau of “deep learning” and “fake” describing software that manipulates content into the likeness of others, shows the information age faces a serious monster beyond the control of its own creators.

Earlier this week, NotJordanPeterson.com, a new text-to-speech website simulating the voice of the controversial professor, was pressured into shutting down after shock, awe, and suggestive legal threats from the real Dr. Peterson. Given the surreal accuracy of its results, which included hilarious pantomimes of feminist literature, vulgar rap videos, communist revolutionary talking points, and nonsensical meta-analysis on “the art of sucking dick,” all only a few clicks away, the doctor does raise legitimate criticisms about how Deepfakes could harm our “information ecosphere.”

“Something very strange and disturbing happened to me this week,” Peterson wrote on his website. “If it was just relevant to me, it wouldn’t be that important (except perhaps to me), and I wouldn’t be writing this column about it. But it’s something that is likely more important and more ominous than we can even imagine. Wake up. The sanctity of your voice, and your image, is at serious risk. It’s hard to imagine a more serious challenge to the sense of shared, reliable reality that keeps us linked together in relative peace. The Deep Fake artists need to be stopped, using whatever legal means are necessary, as soon as possible.”

It should be noted that this message, while fundamentally necessary, comes from a questionable ambassador. Peterson, notorious for framing himself as the “intellectual dark web” hero of free speech, has cultivated a hypocritical reputation of his own: filing lawsuits against his critics, repeatedly doxing students who participated in protests, organizing an elusive fellowship program in which criticism is suppressed by his social media moderators, and proposing a now-scrapped McCarthy-esque hit list of colleges with alleged neo-Marxist content.

This makes Peterson legitimately easy for some to dismiss… and makes potential illegitimate content against him all the more dangerous to a transparent discourse. While the 1,300-word blog post suggests more bad-taste lawsuits are in order, condemning Deepfakes as a threat to his own personal reputation while taking aim at his preferred socio-political targets, the “social justice warriors” and “postmodern neo-Marxists,” it also raises necessary questions: how this technology harms individual privacy and bodily property rights, how it de-verifies online audio and video records, and how that affects due process in the court of public opinion.

Peterson seems to be one of the few celebrity subjects speaking out about this invasive issue, even if one takes the more cynical motive of self-interest rather than the greater good of digital transparency as his primary concern. Nevertheless, some credit is due despite the counter-concerns on free expression grounds. Thus far, Deepfake technology has only ever made a name for itself as a niche in the meme, cinema, and pornography markets, but to deny the potential for political propaganda loses sight of how easily conspiracy, fraudulence, and “gotcha” culture can spread. Give bad actors the tools and defamation is now a readily available product at the click of a mouse.

This is not to say it’s an either/or case of malicious deception versus artistic liberty. There are programs like Faux Rogan, a new Joe Rogan audio-video Deepfake where people can vote on what’s real or fake; channels like Ctrl Shift Face, the YouTube page seamlessly swapping subjects’ faces in real time; sites like Mr. DeepFakes, the pornographic database of fake celebrity faces projected onto real porn stars; and, lest we forget, the BuzzFeed video that made this entire brand of technology famous by projecting a fake PSA from actor Jordan Peele onto former President Barack Obama. All of these examples have a case for free expression at the expense of the digital image of others.

The discussion requires serious consideration of definitions, principles, and political policy (in that order). According to VICE News, Peterson has expressed sympathies for bringing about new laws like Rep. Yvette Clarke (D-NY) and her proposed DEEPFAKES Accountability Act, though such solutions raise alarms as to how due process, the First Amendment, and Big Tech’s Section 230 protections will adapt to these new technologies.

In a report from the Electronic Frontier Foundation, the bill states “there is an exception for parodies, satires, and entertainment — so long as a reasonable person would not mistake the ‘falsified material activity’ as authentic” yet it “doesn’t specify who has the burden of proof” if such a case were taken to court, “which could lead to a chilling effect for creators.” Make the wrong joke, publish an ill-considered satire, or change the course of politics by falsifying the public record, whether through genuine AI generation or conventionally edited CGI video, and the fear of unwanted censorship begins to fester while the value of deception becomes all the more pervasive.

We also have to ask whether these definitions are limited to genuine AI Deepfakes. As noted by Reclaim The Net, a famous video that showed Rep. Nancy Pelosi appearing to slur her words was wrongly condemned as a Deepfake, with users across social media urging its removal, despite it being mere user-generated fakery. Does requiring “mandatory labeling, watermarking, or audio disclosures” extend to all mediums? Are simple offenses worth fines upwards of $150,000 plus potential criminal penalties? What constitutes a difference in the eyes of the law? Why are there further upper-class exemptions for officers and employees of the US government? How do they determine a video is actually a Deepfake? Were these questions simply overlooked by Jordan “whatever legal means are necessary” Peterson? Or has he stepped outside his bounds on an issue bigger than himself?

These are similar criticisms to those from Hao Li, one of the world’s most prolific Deepfake artists, questioning the same “off-the-shelf deception” he helped create. “I realized you could now basically create anything, even things that don’t even exist,” Li said in a Technology Review interview. “Even I can’t tell which ones are fake. We’re sitting in front of a problem [since] videos are just pixels with a certain color value. We are witnessing an arms race between digital manipulations and the ability to detect those with advancements of AI-based algorithms catalyzing both sides. When that point comes, we need to be aware that not every video we see is true.”

To put it simply, even the creators of this technology are losing sight of their own creations. How can we expect average people to keep a cool head in this marketplace of intentional misinformation? By blurring the line between fantasy and reality, using generative adversarial networks (GANs) and the facial-recognition tracking long applied to ordinary photography, Deepfake software can freely harvest, “learn” from, and recompose data so convincingly that it amounts to automated bodily forgery, whether just for memes or as a means to a socio-political end.
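The adversarial idea behind GANs is simple enough to sketch in a few lines: a generator learns to produce samples a discriminator can no longer tell apart from real data, while the discriminator learns to catch it. The toy below is a minimal sketch in plain NumPy, with made-up one-parameter scalar models over a 1-D distribution, not code from any actual Deepfake tool:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# "Real" data: samples from the target distribution the generator must imitate.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator G(z) = a*z + b maps noise to a sample (two trainable scalars).
# Discriminator D(x) = sigmoid(w*x + c) estimates the probability x is real.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.01

for step in range(2000):
    z = rng.normal(size=32)
    fake = a * z + b
    real = real_batch(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the binary cross-entropy loss w.r.t. w and c).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    gc = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # Generator update: push D(G(z)) toward 1, i.e. fool the discriminator
    # (chain rule through the discriminator's logit w*(a*z + b) + c).
    d_fake = sigmoid(w * fake + c)
    ga = np.mean((d_fake - 1) * w * z)
    gb = np.mean((d_fake - 1) * w)
    a -= lr * ga
    b -= lr * gb

# After training, the generator's bias b has drifted toward the real mean.
```

Real Deepfake pipelines swap these scalars for deep convolutional networks over face imagery, but the alternating push-pull between the two loss gradients is the same arms race Li describes.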
