The Fake Obama Video: Will this Be the Next Development in Fake News?

AI could make Obama say pretty much anything.

Image Courtesy of Marco Verch; License: (CC BY 2.0)

Have you ever thought about having the ability to make people say whatever you want? You could get the chance to make Trump say that he loves Hillary Clinton, or get Christopher Nolan to finally say that Cobb was in a dream at the end of “Inception.” Well, with the help of some new technology, soon we may all have that ability.

Earlier this month, computer scientists at the University of Washington, with the help of an artificially intelligent neural network, created video footage of former President Barack Obama that can be manipulated to match any audio recording.

The researchers used AI to model Obama’s face, training the model on 14 hours of video and audio from the former president’s weekly addresses to produce a “synthetic” video replica. They were even able to pair the model with audio from an interview Obama gave in 1990, and could theoretically insert the voice of an Obama impersonator.
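The published system is considerably more sophisticated, but the core idea, learning a mapping from audio features to mouth shapes and then compositing those shapes onto existing video, can be sketched in a few lines of code. The toy model below is a hypothetical stand-in, not the researchers’ actual network, and it trains on random placeholder tensors rather than real footage.

```python
# Conceptual sketch only: a tiny recurrent model that maps a sequence of
# audio features (e.g., MFCCs) to mouth-landmark coordinates per video frame.
# The real system is far more involved; the data here is random placeholder.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):                    # hypothetical model name
    def __init__(self, n_audio_features=28, n_landmark_coords=36):
        super().__init__()
        self.rnn = nn.LSTM(n_audio_features, 128, batch_first=True)
        self.head = nn.Linear(128, n_landmark_coords)  # 18 (x, y) mouth points

    def forward(self, audio_features):            # (batch, time, features)
        hidden, _ = self.rnn(audio_features)
        return self.head(hidden)                  # (batch, time, coords)

model = AudioToMouth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training data: 8 clips, 100 frames each.
audio = torch.randn(8, 100, 28)     # would come from the speech recordings
mouths = torch.randn(8, 100, 36)    # landmarks tracked in the video footage

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(audio), mouths)
    loss.backward()
    optimizer.step()
```

In a real pipeline, the audio features would be extracted from the speech recording and the landmark targets from face tracking on the hours of weekly-address video; the trained model’s output would then drive the mouth region of the synthetic face.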

In an interview with tech website Digital Trends, Dr. Supasorn Suwajanakorn, a researcher on the project, said:

Unlike prior work, we never require the subject to be scanned or a speech database that consists of videos of many people saying predetermined sentences. We learn this from just existing footage. This has the potential to scale to anyone with minimal effort.

So, in the near future it may take only minimal effort to create fake videos. While the current technology requires many hours of video and audio to produce a forgery, it may be only a matter of years before just a few recordings are needed.

Earlier this year a German artist, Mario Klingemann, released a video depicting French singer Francoise Hardy as a 20-year-old answering questions from someone offscreen.

However, just like in the Obama video, that isn’t Hardy’s real voice. Nor is that what she looks like anymore–she’s now 73. Instead, Klingemann used the voice of Kellyanne Conway from the infamous interview in which she introduced the term “alternative facts.”

Granted, the quality of this video is nowhere near that of the fake Obama video. But it’s important to note that this video wasn’t created by a team of scientists, but by just one guy. Perhaps most impressively, it took him only a few days and required no digital editing software.

Klingemann made the video by feeding old music video clips of Hardy from her twenties into a generative adversarial network (GAN). A GAN is a machine-learning technique, developed back in 2014, that pits two neural networks against each other: a generator that produces synthetic content and a discriminator that tries to tell the fakes apart from real examples. Once the system has been trained on enough of a person’s audio, you can tell it what words to say, and it will render them using the speech patterns of that individual’s voice.
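To make the adversarial idea concrete, here is a minimal GAN training loop in PyTorch fitted to toy one-dimensional data. It is a sketch of the generator-versus-discriminator dynamic described above, not Klingemann’s setup or any production face or voice model; all sizes and names are illustrative.

```python
# Minimal GAN sketch: a generator learns to mimic a simple Gaussian
# distribution while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    # "Real" data the generator should learn to imitate.
    real = torch.randn(64, 1) * 0.5 + 3.0
    fake = generator(torch.randn(64, 8))

    # Discriminator step: push real examples toward 1, fakes toward 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), real_label) + \
             loss_fn(discriminator(fake.detach()), fake_label)
    d_loss.backward()
    d_opt.step()

    # Generator step: try to fool the discriminator into calling fakes real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), real_label)
    g_loss.backward()
    g_opt.step()
```

The same push and pull, with the generator producing frames or audio instead of single numbers, is what lets these systems turn out increasingly convincing forgeries.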

Reconfiguring audio is relatively easy, but generating convincing images is a much more complicated process. Ian Goodfellow, the inventor of GANs and a recent addition to Google’s AI division, has made progress in improving image generation. When asked how long until generated “YouTube fakes” arrive on the internet, he estimated it would be about three years before anyone with a computer and minimal coding experience could have access to this technology.

However, there have been breakthroughs in how to combat this trend as well. Analyzing the metadata of photos, videos, and audio recordings can often reveal how, when, and where the content was created, and can help indicate whether it has been doctored.
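As a rough illustration of what metadata analysis looks like in practice, the snippet below reads a photo’s EXIF tags with the Pillow library; fields such as the capture time and the editing software recorded in the file are the kind of clues an examiner would check for inconsistencies. Pillow is simply a convenient choice here, not a tool named in the reporting, and the file path is a placeholder.

```python
# Read basic EXIF metadata from an image with Pillow.
# Missing or inconsistent fields (e.g., an editing tool listed under
# "Software", or a DateTime that conflicts with other evidence) can
# hint that a file has been altered.
from PIL import Image, ExifTags

def read_exif(path):
    with Image.open(path) as img:
        exif = img.getexif()
    # Map numeric EXIF tag IDs to human-readable names.
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

if __name__ == "__main__":
    metadata = read_exif("suspect_photo.jpg")   # placeholder path
    for field in ("DateTime", "Software", "Make", "Model"):
        print(field, "->", metadata.get(field, "not present"))
```

Metadata can of course be stripped or forged itself, so checks like this are a starting point for verification rather than proof on their own.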

But in this day and age, it’s becoming harder to convince people that what they see and read may not be true. A significant number of people still believe the debunked theory that DNC staffer Seth Rich was killed because he leaked information pertaining to Hillary Clinton. If such conspiracy theories run rampant, how are we going to convince people that videos they can see with their own eyes simply aren’t real?

Fake videos could have a profound effect beyond what we can imagine. Think about this: late last year, a man walked into a D.C. pizza restaurant with a rifle because of a conspiracy theory spread on social media. Imagine what the reaction to “Pizzagate” could have been if a fake video had somehow been involved.

James Levinson
James Levinson is an editorial intern at Law Street Media and a native of the greater New York City region. He is currently a rising junior at George Washington University, where he is pursuing a B.A. in Political Communications and Economics. Contact James at staff@LawStreetMedia.com.
