Samsung’s AI Lab Can Create Fake Video Footage From a Single Headshot

The research is an unsettling demonstration of the future of 'deepfake' technology

Samsung AI brought this Einstein portrait to life. (Credit: Egor Zakharov)

Deepfake technology may be reaching a point where a single Facebook profile picture is enough material for someone to literally put words in your mouth.

Researchers at Samsung’s artificial intelligence lab in Russia published a paper this week outlining a technique by which a portrait image can be made to appear as if its subject is speaking in (semi-)realistic video footage. The ability to fabricate such clips from photographs is not new, but methods to do so have previously relied on training machines with large data sets of images of the subject in question.

While this tech has plenty of innocuous uses in fields like design and content creation, experts also worry that its potential to spread misinformation could herald a future in which separating truth from fiction online becomes impossible. Until now, however, the need for many images of a given person has limited targets to celebrities, politicians and other figures photographed extensively in the media.

Deepfake creators typically employ what’s called a Generative Adversarial Network, a setup that pits two neural networks against each other. One generates images meant to resemble a training set of real photos, while the other tries to distinguish those fabrications from the genuine images; training continues until the second network can no longer reliably tell the difference.
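For readers curious what that adversarial setup looks like in code, here is a minimal sketch of the idea in PyTorch. The tiny fully connected networks and random stand-in "images" are illustrative placeholders only, not the models described in the Samsung paper.

```python
# Minimal GAN training-loop sketch (illustrative placeholders, not the paper's models).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # hypothetical sizes for flattened images

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real_images = torch.rand(32, image_dim) * 2 - 1  # stand-in for a batch of real photos
    noise = torch.randn(32, latent_dim)
    fake_images = generator(noise)

    # 1) Train the discriminator to tell real images from generated ones.
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```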


Instead of training this system on multiple images of a single person, Samsung’s researchers fed it a trove of diverse headshots from which it learned to map broad facial features and expressions. That pre-training tuned the software so that it was flexible enough to approximate the facial movements of anyone from the Mona Lisa to Salvador Dalí.
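In broad strokes, that approach amounts to two stages: pre-train a generator across many identities, then rapidly adapt it to a new face from one or a few photos. The sketch below illustrates that two-stage idea in PyTorch; the model, variable names and toy dimensions are hypothetical stand-ins, not the researchers’ actual code.

```python
# Schematic two-stage sketch: broad pre-training on many faces, then few-shot
# adaptation to a single new subject. All names and sizes are hypothetical.
import torch
import torch.nn as nn

class TalkingHeadGenerator(nn.Module):
    """Toy stand-in: maps a facial-landmark vector to a flattened image."""
    def __init__(self, landmark_dim=68 * 2, image_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(landmark_dim, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Tanh(),
        )

    def forward(self, landmarks):
        return self.net(landmarks)

model = TalkingHeadGenerator()
loss_fn = nn.L1Loss()

# Stage 1: pre-training across a large corpus of different identities.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(10_000):
    # Placeholder batch: landmark poses and target frames from many random people.
    landmarks = torch.randn(16, 68 * 2)
    target_frames = torch.rand(16, 64 * 64 * 3) * 2 - 1
    loss = loss_fn(model(landmarks), target_frames)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: few-shot fine-tuning on a single new subject (even one photo).
opt_ft = torch.optim.Adam(model.parameters(), lr=1e-5)
one_shot_landmarks = torch.randn(1, 68 * 2)          # landmarks from the single photo
one_shot_frame = torch.rand(1, 64 * 64 * 3) * 2 - 1  # the photo itself
for step in range(200):
    loss = loss_fn(model(one_shot_landmarks), one_shot_frame)
    opt_ft.zero_grad(); loss.backward(); opt_ft.step()

# After fine-tuning, feeding in landmark sequences extracted from a "driving"
# video would animate the new face frame by frame.
```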

The results aren’t perfect; a fake clip of Marilyn Monroe doesn’t capture her distinctive mole, for instance, and periodic flashes of distortion in facial movement can betray the artificiality of the footage. Still, the clips are often realistic enough that a casual viewer might not notice.

Brands and agencies have already begun to explore less sinister uses for deepfake technology. Earlier this month, the Dalí Museum in Florida partnered with Goodby, Silverstein & Partners to create a deepfake version of the artist to take selfies with visitors. R/GA and video startup Synthesia also produced a video last month in which soccer superstar David Beckham appeared to speak nine different languages for a campaign against malaria.

“It’s one of those things where the technology itself is neither good nor bad; it’s what you do with it,” R/GA’s global chief technology officer, Nick Coronges, told Adweek in an earlier interview.


Patrick Kulp (@patrickkulp, patrick.kulp@adweek.com) is an emerging tech reporter at Adweek.
Published May 23, 2019