Still images come to life in an AI model developed by Samsung at its Moscow research lab, ZDNET reports. The researchers claim that with 32 or more reference frames (about a second of video), the results can be highly realistic.

Even a single reference image, such as a rare photo, is enough to generate speech-like sequences.


Samsung suggests several applications: improved video calls, better player presence in multi-player games, and more believable character special effects in film and television.

Others note that it adds to the arsenal for deepfakes: deceptive imagery intended to misrepresent the statements or actions of well-known personalities, including politicians and other authority figures.

The system works by extracting features from the source picture and mapping them to expressions and movements drawn from a huge database of images. It uses an AI technology known as generative adversarial networks, often more simply called GANs.

The research paper calls the approach ‘few-shot adversarial learning.’
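For readers who want a concrete picture of the setup, here is a minimal, illustrative sketch (not Samsung's code): an embedder distills a handful of reference frames into an identity vector, a generator renders a new frame from that vector plus a target pose image, and a discriminator scores realism and drives the adversarial training. All layer sizes, network names, and the 64x64 resolution are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    """Averages per-frame identity features over the K reference frames."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, frames):               # frames: (K, 3, 64, 64)
        return self.net(frames).mean(dim=0)  # identity vector: (dim,)

class Generator(nn.Module):
    """Renders a frame for a target pose, conditioned on the identity vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64 + dim, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, pose, identity):        # pose: (1, 3, 64, 64)
        feat = self.down(pose)                 # (1, 64, 16, 16)
        idmap = identity.view(1, -1, 1, 1).expand(-1, -1, 16, 16)
        return self.up(torch.cat([feat, idmap], dim=1))

class Discriminator(nn.Module):
    """Scores whether a frame looks like a real frame of the person."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, frame):
        return self.net(frame)

# One adversarial step on dummy data: K=8 reference frames of one person,
# a target pose image, and the matching real target frame.
E, G, D = Embedder(), Generator(), Discriminator()
refs = torch.rand(8, 3, 64, 64)
pose, real = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)

identity = E(refs)
fake = G(pose, identity)

# Discriminator learns to separate real from generated frames;
# the generator is rewarded when its output is judged real.
d_loss = F.binary_cross_entropy_with_logits(D(real), torch.ones(1, 1)) + \
         F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(1, 1))
g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(1, 1))
print(f"D loss: {d_loss.item():.3f}  G loss: {g_loss.item():.3f}")
```

The "few-shot" aspect is that the embedder lets the generator adapt to a new person from only a handful of frames, rather than retraining the whole model per identity.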

OUR TAKE

  • This is a great illustration of the double-edged sword of many AI advances. The same technology that improves ‘video presence’ in consumer products also makes deepfakes easier to create.

SEE FULL STORY

Samsung uses AI to transform photos into talking head videos
ZDNET | May 23, 2019 | by Campbell Kwan
