'Deepfakes' Are Coming. Few Are Convinced Big Tech Is On Top Of It.

Social media and video-hosting websites are struggling to deal with extremely realistic “deepfake” clips that are starting to appear online.

The new technology makes it possible to create images and videos that are so convincing it is difficult to know what is real and what has been manipulated. Facebook, Twitter, YouTube and Reddit have only begun to develop countermeasures.

Many fear that the doctored videos could spark social unrest and influence the outcome of elections. Bobby Chesney, a University of Texas School of Law professor and associate dean, warned that "the opportunity for malicious liars is going to grow by leaps and bounds."

Despite the threat, social-media platforms and video websites have not fully detailed how they intend to respond. When CNN asked what they are doing to identify and root out fake videos, “none of them offered much more than vague explanations along the lines of Facebook's promise to 'protect the community from real-world harm,'” according to the television network.

Facebook officials claim they are consulting with researchers who have looked into possible solutions to the problem. One of the experts confirmed that he has been working with Google. At the same time, however, “deepfake” creators are honing their skills to thwart any attempts to detect doctored images.

The technique typically involves grafting one person's face or voice onto video of someone else. Pornographers have long used it to superimpose the faces of celebrities onto nude bodies. Now there could be political ramifications as well, as BuzzFeed demonstrated earlier this year when it posted a fake video of Barack Obama. The news site mapped the former president's face onto a clip of filmmaker Jordan Peele ridiculing Donald Trump.
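For readers curious how the face-swapping works under the hood, below is a minimal, hypothetical PyTorch sketch of the shared-encoder, two-decoder autoencoder design commonly credited to early deepfake tools. Every class name, layer size, and variable here is illustrative, not taken from any actual tool.

```python
# Sketch of the autoencoder design behind early face-swap tools: one shared
# encoder learns a common face representation, and a separate decoder is
# trained per identity. Swapping = encode a frame of A, decode with B.
# All names and dimensions are illustrative, not from any real tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),  # shared latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's faces
decoder_b = Decoder()  # trained only on person B's faces

# After training, the swap is: encode a frame of person A,
# then decode with B's decoder to render B's face in A's pose.
frame_of_a = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(frame_of_a))
```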

Deepfakes have the potential to sow confusion and spread misleading information about social issues and political candidates. Many worry that the videos could cause more chaos than the fake news that circulated on social media during the 2016 White House race. Federal investigators looking into Russia's meddling in that election have accused Kremlin-linked hackers of spreading false information, including lies about Hillary Clinton.

The new technology is significantly more advanced than photoshopping, which is often easy to spot. Most Americans are unaware of how vulnerable they are to being fooled, as they still tend to believe what they see. While there has been growing skepticism about doctored photos, many people do not subject videos to the same scrutiny.

As more social-media users understand what is happening, they could become increasingly cynical about politics. Blurring the line between what is real and what is fake may lead to voters simply believing what they want to believe without any substantiated evidence.

Social-media firms are "still at the early stages of addressing 2016-era misinformation," and "it's very likely there won't be any real infrastructure in place" as voters cast ballots in this fall's mid-term congressional elections, Aviv Ovadya of the Center for Social Media Responsibility told CNN. Jonathon Morgan, CEO of New Knowledge, predicted that "troll farms" will use artificial intelligence to make deepfakes designed to appeal to targeted groups.

Sam Woolley of the Digital Intelligence Lab at the Institute for the Future pointed out that no social-media platform has developed rules concerning manipulated videos. He cautioned that because deepfakes could be used to defame anyone, they “have potentially larger implications for society and democracy.”

Woolley suggested that Facebook, Twitter and other companies will eventually figure out how to shield their users from false and misleading videos. “It's not a question of if they take action on deepfakes, or if they begin to moderate this content; it's a question of when,” he said. “At the moment, the movement is pretty slim.”

The U.S. military has been working on the issue. Concerned that the nation's enemies could use deepfakes to falsely depict American soldiers committing atrocities, the Pentagon's Defense Advanced Research Projects Agency (DARPA) recently launched a media-forensics effort. Within four years, the program is expected to produce methods of detecting doctored photos and videos.

Among the tools being analyzed are algorithms that assess biometric data. An algorithm used by Gfycat screens faces in videos one frame at a time, but the technology site Motherboard has questioned the method's effectiveness. Gfycat acknowledged that its approach is slow, sometimes taking several days to examine a clip.
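Gfycat has not published its code, so the following Python sketch only illustrates the general frame-by-frame screening idea the article describes, using OpenCV's stock face detector; the function name and parameters are assumptions.

```python
# Illustrative only: Gfycat's actual method is not public. This shows the
# general frame-by-frame idea the article describes: walk a clip one frame
# at a time and extract each detected face for downstream comparison.
import cv2

def faces_per_frame(video_path):
    """Yield (frame_index, face_crop) for every face found in every frame."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of clip
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            yield index, frame[y:y + h, x:x + w]
        index += 1
    cap.release()

# Examining every frame of a long clip this way is part of why such
# screening can take days, as Gfycat acknowledged.
```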

Another technique measures the pulses of individuals depicted in videos, according to Satya Venneti of Carnegie Mellon University. Blood flow produces subtle, periodic color changes in the skin of real faces, and she explained that people have "widely varying heart-rate signals," a pattern that a synthesized face may fail to reproduce consistently.
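Venneti's method is not public; the sketch below shows the widely documented remote-photoplethysmography idea it resembles: track the mean green-channel value of a skin region frame by frame, then find the dominant frequency in the plausible heart-rate band. All names, thresholds, and the sample data are illustrative.

```python
# Sketch of remote photoplethysmography (rPPG): blood flow causes tiny
# periodic color changes in skin, so the mean green-channel value of a
# face region, tracked over time, carries a pulse signal.
import numpy as np

def estimate_pulse_hz(green_means, fps):
    """Estimate heart rate from per-frame mean green values of a skin region.

    green_means: 1-D array, one mean green-channel value per frame.
    fps: frames per second of the clip.
    """
    signal = green_means - np.mean(green_means)  # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Keep only plausible human heart rates (~42-240 beats per minute).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# Example: a clean 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 seconds.
t = np.arange(300) / 30.0
synthetic = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 100
print(estimate_pulse_hz(synthetic, fps=30) * 60, "bpm")  # ~72
```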

Siwei Lyu, director of the Computer Vision and Machine Learning Lab at the University at Albany, SUNY, recommends watching for blinking. "If a video is 30 seconds and you never see the person blink, that is suspicious," he told CNN. Lyu said that after he published a research paper on the approach, deepfake creators made adjustments. That is why he and other researchers are continuing their search for answers.
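Lyu's published detector is a learned model, but the same blinking cue can be illustrated with the classic eye-aspect-ratio heuristic from the facial-landmark literature. Landmark extraction (for example, with dlib) is assumed to happen upstream, and the threshold values here are illustrative.

```python
# Eye-aspect-ratio (EAR) blink counting: EAR drops sharply when the eye
# closes, so a 30-second clip whose EAR series never dips is the
# suspicious case Lyu describes. Assumes six (x, y) landmarks per eye,
# per frame, from an upstream landmark detector.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # inner upper/lower lid distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # outer upper/lower lid distance
    h = np.linalg.norm(eye[0] - eye[3])   # corner-to-corner width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.21, min_frames=2):
    """Count blinks in a per-frame EAR series."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1                   # eye currently closed
        else:
            if run >= min_frames:      # closed long enough to be a blink
                blinks += 1
            run = 0
    return blinks
```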
