Can Google’s Anti-Deepfake Trials Prevent the New Disinformation Age?

As researchers begin trials on countering Deepfakes, the technology that grafts one person’s likeness onto footage of another, experts are debating whether the fight is already a lost cause. The fate of the information age, it turns out, rests in the hands of companies like Google, the tech industry’s monopolistic serial abuser, now conducting experiments of its own.

According to a recent report from The New York Times, Google’s top scientists hired over a dozen actors to sit at tables, walk down hallways and cross streets while talking into a video camera. Using artificial intelligence and photo manipulation, researchers then seamlessly swapped the actors’ faces with those of hundreds of male and female subjects, using the altered videos to train systems to better detect signs of future Deepfake manipulation. According to sources cited by the Times, the company’s current fear is that this technology could be used to sway the 2020 presidential election and other democratic elections in the future.

There are valid concerns about institutions like Google, which are under no obligation to play online fact-checker, using their vast power to order, construct and dismantle their own “perfect” Deepfakes. As these AI tools become streamlined paths toward digital forgery, whether we’re talking about innocent face-swap video apps like Zao or the more pervasive fake-porn industry of Mr. DeepFakes, there’s an alarming difference between individuals spreading false smut and the corruption of our highest governments and corporations. “And you can already see a material effect that Deepfakes have had,” argues Nick Dufour, a Google engineer overseeing the company’s research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

While such rhetoric may seem hyperbolic, this kind of misinformation economy has already produced real-world examples. In August, the creators of the website NotJordanPeterson.com built high-functioning text-to-speech software simulating the voice of Dr. Jordan Peterson, one of the most controversial academics and political pundits, giving anyone on the internet the ability to make convincing audio clips of him. This led to “Peterson” clips reciting feminist literature, vulgar rap videos, communist revolutionary talking points, and nonsensical meta-analysis on “the art of sucking dick”, showcasing the potential for dangerously accurate content made from nothing.

“Something very strange and disturbing happened to me this week,” Peterson wrote on his website at the time, suggesting a copyright threat was in order against the site’s owners. “If it was just relevant to me, it wouldn’t be that important (except perhaps to me), and I wouldn’t be writing this column about it. But it’s something that is likely more important and more ominous than we can even imagine. Wake up. The sanctity of your voice, and your image, is at serious risk. It’s hard to imagine a more serious challenge to the sense of shared, reliable reality that keeps us linked together in relative peace. The Deep Fake artists need to be stopped, using whatever legal means are necessary, as soon as possible.”

In a larger political context, such tools can be just as effective at selling the public on doctored propaganda as on creative comedy. As a journalist, I consider audio and video evidence a valuable resource in verifying someone’s record. If Deepfake software can alter or generate a person’s statements at will, the temptation to use such technology to refute or contest video evidence is huge, especially for those running for political office with a vested interest in controlling the narrative about their public record. This goes double for the most divisive candidates.

The Times report cites several other cases where “video evidence” coming out of countries like proto-fascist Brazil, Gabon and communist China is simply unverifiable by current media standards. “The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video”, the report states. “Opponents claimed it had been faked. Experts call that confusion ‘the liar’s dividend’, already challenging our assumptions about what is real and what is not”. It’s not unreasonable to see how the United States and its largest corporations could fight their battles through manufactured truth, amplifying the common “he said, she said” problem in the audio-visual realm. For experts in the field, Pandora’s box has already been opened.

According to a statement from Google, “any academic or corporate researcher could [already] download its collection of synthetic videos and use them to build tools for identifying Deepfakes,” providing the online world “essentially a syllabus of digital trickery” with which to either advance or deconstruct the tech’s viability. Other researchers, such as the engineers at Canada’s Dessa, claim their Deepfake detector could identify Google’s faked videos with “almost perfect accuracy”, while noting that in tests against Deepfake videos found elsewhere on the internet it “failed more than 40 percent of the time”. If Google’s monopoly force cannot compete with independent forgers, we remain far from truly reliable Deepfake debunking.
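For readers curious what “building tools for identifying Deepfakes” from such a corpus looks like in practice, below is a minimal sketch of a frame-level detector. Everything in it is illustrative: the folder layout, the PyTorch/ResNet-18 backbone and the hyperparameters are my own assumptions, not a description of Google’s corpus or Dessa’s detector.

```python
# A minimal frame-level Deepfake classifier sketch (PyTorch).
# Paths, labels and hyperparameters are hypothetical, chosen only
# to illustrate the general approach described above.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumes video frames have been pre-extracted into
# frames/train/real/ and frames/train/fake/ folders.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("frames/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer
# with a two-class head: real vs. fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for _ in range(3):  # a few epochs, for illustration only
    for frames, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```

The shape of the sketch is the point: detection is ordinary supervised classification over examples of known forgeries, which helps explain why a detector can score near-perfectly on one lab’s videos yet stumble on forgery methods absent from its training data.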

There’s a reason why candidates like Sen. Marco Rubio (R-FL) have taken to calling Deepfakes the “modern equivalent of nuclear weapons”. Our most basic forms of evidence are being called into question at a rate beyond our understanding, let alone our regulatory ability. VICE News has reported that Deepfake critics like Peterson and Rubio have expressed sympathy for proposals like the DEEPFAKES Accountability Act. The legislation, however, offers no clear path to its solutions without harming the First Amendment and Big Tech’s Section 230 protections.

According to a report from the Electronic Frontier Foundation, the bill includes “an exception for parodies, satires, and entertainment — so long as a reasonable person would not mistake the ‘falsified material activity’ as authentic”, yet it “doesn’t specify who has the burden of proof” should such a case reach court, “which could lead to a chilling effect for creators.” As I wrote about the bill at the time, this could turn those making the wrong joke or pushing ill-considered satire into criminals for “falsifying the public record”, whether through AI-generated videos or edited CGI. The lack of clear definitions makes censorship and deception all the more predatory.

This was expanded upon by Reclaim The Net, which noted how a famous video that appeared to show Rep. Nancy Pelosi slurring her words was falsely condemned as a Deepfake. Is mocking a politician a crime if it’s done by AI rather than by editing? Is it worth the proposed fines of up to $150,000 plus potential criminal penalties? Why do these laws provide two-tier justice exemptions for officers and employees of the U.S. government? And how can we expect to track down these tricksters without a huge expansion of the surveillance state? Until these questions are answered, perhaps swallowing the black pill is the only current course of action. This is not just the insight of a humble newsman seeing a new information cold war, but of the very people who created this problem in the first place.

“I realized you could now basically create anything, even things that don’t even exist,” argued Hao Li, one of the world’s most prolific Deepfake artists, in a Technology Review interview. “Even I can’t tell which ones are fake. We’re sitting in front of a problem [since] videos are just pixels with a certain color value. We are witnessing an arms race between digital manipulations and the ability to detect those with advancements of AI-based algorithms catalyzing both sides. When that point comes, we need to be aware that not every video we see is true… Soon, it’s going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions.” Now it’s just a matter of whether these solutions can keep up, and work, before the weapons are ready to fire.
