‘What if you could manipulate the facial features of a historical figure, a politician, or a CEO realistically and convincingly using nothing but a webcam and an illustrated or photographic still image?’ 

A new technique known as facial reenactment enables any image of a face to be put in motion by a third party. VentureBeat reports that the resulting animation can portray any emotion or action, or what Synced calls ‘any humanly possible face-based task.’

The source of the movement is known as the ‘driver face.’

The original image is the ‘target face.’ The technique is known as ‘few-shot’ modelling, using machine learning to extract and then extrapolate the facial features.

The target face can be as little as a single frame.

Previous state-of-the-art systems required more frames to act as training data — for instance, a few seconds’ worth (video typically runs at 30 frames per second).
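MarioNETte itself is a neural model, and its actual architecture is far more involved, but the core idea — transferring a driver face’s motion onto a target face’s features — can be illustrated with a toy landmark-based sketch (an assumption-laden simplification, not the authors’ method):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points."""
    # Design matrix [x, y, 1]; solve A @ M = dst for the 3x2 affine matrix M.
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def reenact(target_pts, driver_ref_pts, driver_cur_pts):
    """Transfer the driver's motion (reference frame -> current frame)
    onto the target face's landmarks."""
    M = fit_affine(driver_ref_pts, driver_cur_pts)
    A = np.hstack([target_pts, np.ones((len(target_pts), 1))])
    return A @ M

# Toy landmarks: the driver's head shifts right by 2 units between frames.
driver_ref = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
driver_cur = driver_ref + np.array([2., 0.])
target = np.array([[5., 5.], [6., 5.], [5., 6.]])

moved = reenact(target, driver_ref, driver_cur)
print(moved)  # each target landmark shifted right by 2
```

A real system replaces the hand-picked landmarks with learned facial keypoints and replaces the affine warp with a generative network that renders a photorealistic frame while preserving the target’s identity — which is exactly the few-shot identity-preservation problem the paper addresses.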

Potential uses include motion tracking for film/TV production. Potential abuses include more plentiful deepfakes.

Compare the images in the first two columns (target, driver) to those in the last two columns (how the target is manipulated by the driver, and then a subsequent frame).

The new system is called MarioNETte. It was developed by a Seoul-based research lab, Hyperconnect.


  • Researchers train AI to map a person’s facial movements to any target headshot
    VENTURE BEAT | November 27, 2019 | by Kyle Wiggers


  • MarioNETte: Few-Shot Identity Preservation in Facial Reenactment
    SYNCED | November 26, 2019 | ‘Starting with a dynamic driver face, researchers can manipulate any target face — from today’s celebrities to historical figures, including any age, ethnicity or gender — to perform any humanly possible face-based task.’