Fake News 2.0
Is it about to get worse?
July 4, 2017, Bruno Jacobsen
Could so-called “fake news” be about to become a lot more real? Last year, Kenny Jones and Derrick Bonafilia from Williams College wrote code that uses a type of machine-learning algorithm to create art that is practically indistinguishable from “real” art. The technique can also be applied to videos and other images—which makes us wonder: How will we be able to tell what's real from what's not in the future?
According to TechCrunch, the two students, under the guidance of professor Andrea Danyluk, taught themselves machine learning and read about 50 papers on Generative Adversarial Networks (GANs) before completing the project in under a year. Their success has landed them jobs as software engineers at Facebook.
Earlier, in February, YouTuber Mario Klingemann gave us an early sample of what GANs can do with video. In it, a young Françoise Hardy, the famous French singer of the 1960s, appears to defend Donald Trump by evoking “alternative facts.”
Of course, the video is an obvious fake: it was created in a few days on a desktop computer. The voice is that of Trump’s adviser Kellyanne Conway, and the scene it depicts never actually happened. But it is a good example of this technology’s potential to distort reality, especially given how fast AI progresses.
Traditionally, machine learning requires a great deal of human effort, much of it spent labeling training data. GANs circumvent much of that need: by pitting two neural networks against each other, they let a system learn to generate convincing new data from existing examples.
Think of it as a “cop vs. a counterfeiter.” If we imagine images as dollar bills, the counterfeiter (the generator network) tries to create realistic-looking bills, while the cop (the discriminator network) tries to determine whether each bill is real or fake. The game goes on for a while, and eventually the counterfeiter learns to create bills that are indistinguishable from the real deal.
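The cop-vs.-counterfeiter game can be sketched in a few dozen lines of NumPy. Everything below is an illustrative assumption rather than anything from the article: the “bills” are just numbers drawn from a Gaussian, the generator and discriminator are single linear units, and the hyperparameters are picked for the demo. Real GANs use deep networks and images, but the adversarial training loop is the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real bills": samples from a Gaussian the counterfeiter must mimic.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Counterfeiter G(z) = a*z + b and cop D(x) = sigmoid(w*x + c),
# both deliberately tiny (one linear unit each).
a, b = 1.0, 0.0       # generator parameters
w, c = 0.5, 0.0       # discriminator parameters
lr, batch, steps = 0.03, 64, 2000

for _ in range(steps):
    # --- Cop's turn: learn to tell real bills from fakes ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    fake = a * rng.standard_normal(batch) + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Gradients of the loss -log D(real) - log(1 - D(fake))
    gw = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    gc = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Counterfeiter's turn: fool the updated cop ---
    z = rng.standard_normal(batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # Gradient of -log D(fake) w.r.t. each fake, chained into a and b
    dfake = -(1 - d_fake) * w
    a -= lr * np.mean(dfake * z)
    b -= lr * np.mean(dfake)

fakes = a * rng.standard_normal(10_000) + b
print(f"generated mean ~ {fakes.mean():.2f} (real mean {REAL_MEAN})")
```

Even this toy version shows the dynamic: the counterfeiter’s output drifts toward the real distribution as the cop sharpens its test, though tiny GANs like this one tend to oscillate around the target rather than settle exactly on it.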
As a technological feat, this is wonderful progress and should be pursued. But why might it be a problem when it comes to reporting false news or events?
As The Economist puts it, “images and sound recordings retain for many an inherent trustworthiness. GANs are part of a technological wave that threatens this credibility.” And it’s true. Today, “fake news” is by and large reduced to false reporting based on erroneous testimony, data manipulation or misinterpretation, outright media bias, or honest mistakes made with good intentions. When confronted with these, we ask for more solid evidence, such as clear footage or audio. But what happens when these, too, can be easily faked?
Some suggestions include using cryptography to verify that footage or images come from trusted organizations, with unique keys that only the source devices or the signing organization possess. These measures will likely ensure that traditionally reliable news outlets can continue to provide “real news,” though those outlets must also put processes in place to guarantee the veracity of their content. The rest of us will just need to become a little more skeptical of what we lay eyes on.
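To make the idea concrete, here is a minimal sketch of how an outlet might attach a cryptographic tag to footage so tampering is detectable. This is purely hypothetical and simplified: it uses a symmetric message authentication code (HMAC) from Python’s standard library, whereas the schemes described above would use public-key signatures (e.g. Ed25519), so that anyone can verify a clip without holding the outlet’s secret key.

```python
import hashlib
import hmac
import os

def sign_footage(footage: bytes, secret_key: bytes) -> str:
    """Return a hex tag binding the footage to the key holder."""
    return hmac.new(secret_key, footage, hashlib.sha256).hexdigest()

def verify_footage(footage: bytes, tag: str, secret_key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = sign_footage(footage, secret_key)
    return hmac.compare_digest(expected, tag)

key = os.urandom(32)               # the outlet's secret signing key
clip = b"raw video bytes ..."      # stand-in for real footage

tag = sign_footage(clip, key)
print(verify_footage(clip, tag, key))                 # True: untouched
print(verify_footage(clip + b"edit", tag, key))       # False: tampered
```

Any single flipped bit in the footage invalidates the tag, which is what makes such schemes useful against doctored video, provided the key genuinely stays with the source device or organization.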
Do you have any comments? Let us know in the section below!