Fake news is real — A.I. is going to make it much worse


President Donald Trump points at CNN’s Jim Acosta and accuses him of “fake news” while taking questions during a news conference following Tuesday’s midterm congressional elections at the White House in Washington, U.S., November 7, 2018.

Kevin Lemarque | Reuters

“The Boy Who Cried Wolf” has long been a staple on nursery room shelves for a reason: It teaches kids that raising false alarms too often can leave people deaf to a warning when the threat finally becomes real.

President Donald Trump has been warning about “fake news” throughout his political career, casting a dark cloud over the journalism profession. Now the real wolf may be just around the corner, and industry experts have reason to be alarmed.

The threat is called “deepfaking,” a product of advances in AI and machine learning that lets computers produce completely false yet remarkably realistic videos depicting events that never happened or people saying things they never said. A viral video starring Jordan Peele and “Barack Obama” warned against this technology in 2018, but the message was not enough to keep a deepfaked Jim Carrey from “starring” in “The Shining” earlier this week.

The danger goes far beyond manipulating 1980s thrillers. Deepfake technology is allowing organizations that produce fake news to augment their “reporting” with seemingly legitimate videos, blurring the line between reality and fiction like never before — and placing the reputation of journalists and the media at greater risk.


Ben Zhao, a computer science professor at the University of Chicago, thinks the age of getting news on social media makes consumers very susceptible to this sort of manipulation.

“What the last couple years has shown is basically fake news is quite compelling even in [the] absence of actual proof. … So the bar is low,” Zhao said.

The bar to produce a convincing doctored video is lower than people might assume.

Earlier this year a clip purporting to show Democratic leader Nancy Pelosi slurring her words when speaking to the press was shared widely on social media, including at one point by Trump’s attorney Rudy Giuliani. However, closer inspection revealed that the video had been slowed to 75% of its normal speed to achieve this slurring effect, according to the Washington Post. Even with the real video now widely accessible, Hany Farid, a professor at UC Berkeley’s School of Information and a digital forensics expert, said he still regularly receives emails from people insisting the slowed video is the legitimate one.

“Even in these relatively simple cases, we are struggling to sort of set the record straight,” Farid said.
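
For a sense of just how low that bar is: a speed-altered clip like the Pelosi video takes a single command to produce. The Python sketch below assumes the free ffmpeg command-line tool is installed and uses hypothetical file names; the exact pipeline the Pelosi clip's creator actually used is unknown.

```python
# A minimal sketch, assuming the ffmpeg command-line tool is installed.
# File names are hypothetical; this is not the tool the Pelosi clip's
# creator actually used, which is unknown.
import subprocess

def slow_video(src: str, dst: str, factor: float = 0.75) -> None:
    """Re-encode src so it plays at `factor` of its original speed."""
    subprocess.run([
        "ffmpeg", "-i", src,
        # Stretch video timestamps and slow the audio by the same factor.
        "-filter_complex",
        f"[0:v]setpts=PTS/{factor}[v];[0:a]atempo={factor}[a]",
        "-map", "[v]", "-map", "[a]",
        dst,
    ], check=True)

slow_video("speech.mp4", "speech_slowed.mp4")
```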

It would take a significant amount of expertise for a fake news outlet to produce a completely fabricated video of Oprah Winfrey endorsing Trump, but researchers say the technology is improving every day. At the University of Washington, computer vision researchers are developing this technology for positive, or at least benign, uses like making video conferencing more realistic and letting students talk to famous historical figures. But the same research raises questions about potential dangers, since attackers' techniques are expected to keep improving.

How to detect a deepfake

To make one of these fake videos, computers digest thousands of still images of a subject to help researchers build a 3-D model of the person. This method has some limitations, according to Zhao, who noted the subjects in many deepfake videos today never blink, since almost all photographs are taken with a person’s eyes open.
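
That blink cue can even be checked programmatically. One approach common in the research literature (not necessarily Zhao's own method) is the “eye aspect ratio,” a ratio computed from six landmarks around the eye that collapses toward zero whenever the eyelid closes. The sketch below assumes per-frame eye landmarks are supplied by an external face-landmark detector, and its threshold is illustrative rather than calibrated.

```python
# A rough sketch of the "eye aspect ratio" blink cue. Assumes eye landmarks
# per frame come from an external face-landmark detector; the 0.2 threshold
# is illustrative, not a calibrated value.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around one eye.
    The ratio drops toward zero as the eyelid closes."""
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def never_blinks(eye_landmarks_per_frame, threshold: float = 0.2) -> bool:
    """True if the eye never closes across the clip -- one weak deepfake cue."""
    return min(eye_aspect_ratio(e) for e in eye_landmarks_per_frame) > threshold
```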

However, Farid said these holes in the technology are being filled incredibly rapidly.

“If you asked me this question six months ago, I would’ve said, ‘Yeah, [the technology] is super cool, but there’s a lot of artifacts, and if you’re paying attention, you can probably tell that there’s something wrong,'” Farid said. “But I would say we are … quickly but surely getting to the point where the average person is going to have trouble distinguishing.”

In fact, Zhao said researchers believe the shortcomings that make deepfake videos look slightly off to the eye can readily be fixed with better technology and better hardware.

“The minute that someone says, ‘Here’s a research paper telling you about how to detect this kind of fake video,’ that is when the attackers look at the paper and say, ‘Thank you for pointing out my flaw. I will take that into account in my next-generation video, and I will go find enough input … so that the next generation of my video will not have the same problem,'” Zhao said.

Once we live in an age where videos and images and audio can’t be trusted … well, then everything can be fake.

Hany Farid

professor at UC Berkeley’s School of Information

One of the more recent developments in this field is generating speech for a video. To replicate the voice of a figure such as Trump, computers can now simply analyze hundreds of hours of him speaking. Researchers can then type out whatever they want Trump to say, and the computer will make it sound as if he actually said it. Facebook, Google and Microsoft have all more or less perfected this technology, according to Farid.
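
None of those companies' systems are detailed here, so the skeleton below is entirely hypothetical (every class and method name is invented); it simply restates the workflow Farid describes: learn a voice from hours of a speaker's audio, then synthesize arbitrary typed text in that voice.

```python
# Entirely hypothetical interface: none of these names correspond to a real
# library. It only restates the workflow described above.
class VoiceCloner:
    def fit(self, audio_clips: list) -> None:
        """Learn a voice from hundreds of hours of the target speaker."""
        raise NotImplementedError  # stands in for training a speech model

    def synthesize(self, text: str) -> bytes:
        """Return a waveform of the learned voice 'saying' the typed text."""
        raise NotImplementedError

# Usage, conceptually:
#   cloner = VoiceCloner()
#   cloner.fit(hours_of_recorded_speeches)
#   audio = cloner.synthesize("Words the speaker never actually said.")
```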

Manipulated videos of this sort aren’t exactly new — Forrest Gump didn’t actually meet JFK, after all. However, Farid says this technology is hitting its stride, and that is what makes the danger new.

“To me the threat is not so much ‘Oh, there’s this new phenomenon called deepfakes,'” Farid said. “It’s the injection of that technology into an existing environment of mistrust, misinformation, social media, a highly polarized electorate, and now I think there’s a real sort of amplification factor because when you hear people say things, it raises the level of belief to a whole new level.”

The prospect of widespread availability of this technology is raising eyebrows, too. Tech-savvy hobbyists have long been using deepfakes to manufacture pornography, a consistent and comically predictable trend for new technology. But Zhao believes it is only a matter of time before the research-caliber technology gets packaged and released for mass-video manipulation in much broader contexts.

“At some point someone will basically take all these technologies and integrate and do the legwork to build a sort of fairly sophisticated single model, one-stop shop … and when that thing hits and becomes easily accessible to many, then I think you’ll see this becoming much more prevalent,” Zhao said. “And there’s nothing really stopping that right now.”

Facing a massive consumer trust issue

When this happens, the journalism industry is going to face a massive consumer trust issue, according to Zhao. He fears it will be hard for top-tier media outlets to distinguish a real video from a doctored one, let alone news consumers who haphazardly stumble across the video on Twitter.

“Once we live in an age where videos and images and audio can’t be trusted … well, then everything can be fake,” Farid said. “We can have different opinions, but we can’t have different facts. And I think that’s sort of the world we’re entering into when we can’t believe anything that we see.”

Zhao has spent a great deal of time speaking with prosecutors and judges (the legal profession is another sector where the implications are huge), as well as reporters and other professors, to get a sense of every nuance of the issue. Yet despite his clear understanding of the danger deepfakes pose, he is still unsure how news outlets will go about reacting to the threat.

“Certainly, I think what can happen is … there will be even less trust in sort of mainstream media, the main news outlets, legitimate journalists [that] sort of react and report real-time stories because there is a sense that anything that they have seen … could be in fact made up,” Zhao said.

Then it becomes a question of how the press deals with disputes over reality.

“And if it’s someone’s word, an actual eyewitness’ word versus a video, which do you believe, and how do you as an organization go about verifying the authenticity or the illegitimacy of a particular audio or video?” Zhao asked.

Defeating the deepfakes

Part of this solution may be found in the ledger technology that provides the digital infrastructure to support cryptocurrencies like bitcoin — the blockchain. Many industries are touting blockchain as a sort of technological Tylenol. Though few understand exactly how it works, many swear it will solve their problems.

Farid said companies like photo and video verification platform Truepic, which he advises, are using the blockchain to create and store digital signatures for authentically shot videos as they are being recorded, making them much easier to verify later. Both Zhao and Farid hope social platforms like Facebook and Twitter will then promote verified videos over unverified ones, helping halt the spread of deepfakes.
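
Truepic’s internal design isn’t described here, but the general idea is simple enough to sketch: fingerprint footage the moment it is captured, record that fingerprint in an append-only log, and later verify a file by recomputing its hash. The Python sketch below is a conceptual stand-in, with an ordinary list playing the role of the blockchain ledger.

```python
# Conceptual sketch only, not Truepic's actual system. A plain list stands
# in for an append-only blockchain ledger.
import hashlib

ledger = []  # each entry is the SHA-256 fingerprint of an original recording

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def register_at_capture(path: str) -> None:
    """Called by the recording device the moment footage is saved."""
    ledger.append(sha256_of(path))

def is_authentic(path: str) -> bool:
    """True only if the file is bit-for-bit identical to a registered original."""
    return sha256_of(path) in ledger
```

One design consequence worth noting: because any re-encoding changes the hash, a scheme like this flags altered copies but can say nothing about footage that was never registered in the first place.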

“The person creating the fake always has the upper hand,” Farid said. “Playing defense is really, really hard. So I think in the end our goal is not to eliminate these things, but it’s to manage the threat.”

Until this happens, Zhao said, the fight against genuinely fake news may start not on a ledger but with stronger consumer awareness and with journalists banding together to better verify sources through third parties.

“One of the hopes that I have for defeating this type of content is that people are just so inundated with news coverage and information about these types of videos that they become fundamentally much more skeptical about what a video means and they will look closer,” Zhao said. “There has to be that level of scrutiny by the consumer for us to have any chance of fighting back against this type of fake content.”

A woman in Washington, D.C., views a manipulated video that changes what is said by President Donald Trump and former president Barack Obama, illustrating how deepfake technology can deceive viewers.

ROB LEVER | AFP | Getty Images

Nicholas Diakopoulos, an assistant professor in Northwestern University’s School of Communication and an expert on the future of journalism, said via email that the best solutions involve a mix of educational and sociotechnical advances.

“There are a variety of perceptual cues that can be tip-offs to a deepfake and we should be teaching those broadly to the public,” he said.

Diakopoulos has referenced Farid’s work on photo forensics among the ideas outlined in an article he wrote for the Columbia Journalism Review last year. He also cited a research project called FaceForensics that uses machine learning to detect, with 98.1% accuracy, whether a video of a face is real. Another technique under study analyzes blood flow in video of a person’s face, checking whether pixels periodically get redder as the heart pumps blood.
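
The blood-flow idea in particular lends itself to a concrete sketch: average the redness of the face region in each frame, then look for a periodic component at plausible heart rates (roughly 0.7 to 4 Hz). The code below assumes pre-cropped face frames and uses an illustrative peak threshold; it is a simplification of the research technique, not a production detector.

```python
# A simplified sketch of the blood-flow cue: real skin reddens slightly with
# each heartbeat, so mean red intensity should pulse in the heart-rate band.
# Assumes pre-cropped RGB face frames; the peak threshold is illustrative.
import numpy as np

def has_pulse(face_frames: np.ndarray, fps: float, peak_ratio: float = 3.0) -> bool:
    """face_frames: (n_frames, height, width, 3) RGB crops of the face."""
    red = face_frames[..., 0].mean(axis=(1, 2))  # mean redness per frame
    red = red - red.mean()                       # drop the constant offset
    spectrum = np.abs(np.fft.rfft(red))
    freqs = np.fft.rfftfreq(len(red), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)       # ~40-240 beats per minute
    if not band.any():
        return False                             # clip too short to tell
    # A genuine pulse shows up as a peak standing well above the noise floor.
    return spectrum[band].max() > peak_ratio * np.median(spectrum[1:])
```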

“On the sociotechnical side, we need to develop advanced forensics techniques that can help debunk synthesized media when put into the hands of trained professionals,” he told CNBC. “Rapid response teams of journalists should be trained and ready to use these tools during the 2020 elections so they can debunk disinformation as quickly as possible.”

Diakopoulos has studied the implications of deepfakes for the 2020 elections specifically. He also has written papers on how journalists need to think when “reporting in a machine reality.”

And he remains optimistic.

“If news organizations develop clear and transparent policies of their efforts using such tools to ensure the veracity of the content they publish, this should help buttress their trustworthiness. In an era when we can’t believe our eyes when we see something online, news organizations that are properly prepared, equipped and staffed are poised to become even more trusted sources of information.”
