This workflow is designed to replicate the movement from a reference video so that your own avatar or actor performs the same action, then apply a voice file with realistic lipsync and facial expressions.
As its name says, it is perfect for creating User Generated Content to easily promote brands on social media without having to handle real-life recording sessions. It is also perfect for creating consistent, realistic social media hosts in no time.
Basically, it uses MimicMotion to create the animated avatar/picture video, then LatentSync to perform the lipsync, and finally ReActor to upscale the face (as LatentSync tends to blur the mouth and surrounding areas).
VFI is used to interpolate the missing frames after the MimicMotion pass: only 1 frame out of 2 of the driving motion video is fed to MimicMotion to speed up its processing (the most time-consuming part of the workflow), and VFI then restores the original frame rate.
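To illustrate the speed-up idea, here is a minimal, hypothetical sketch of the subsample-then-interpolate trick. Frames are stand-in floats; the actual VFI node blends real images with a motion-aware model, and the function names here are illustrative, not part of any of the tools above.

```python
# Toy model of the "1 frame out of 2" trick around the slow
# motion-transfer step. Real VFI predicts motion-aware in-between
# frames; here a simple linear blend stands in for it.

def subsample(frames, step=2):
    """Keep 1 frame out of `step` before the slow motion-transfer pass."""
    return frames[::step]

def interpolate(frames):
    """Re-insert a midpoint between each pair of kept frames,
    restoring the original frame rate (here: a plain average)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a + b) / 2)  # synthetic in-between frame
    out.append(frames[-1])
    return out

driving = [0.0, 1.0, 2.0, 3.0, 4.0]
kept = subsample(driving)      # half as many frames to process -> [0.0, 2.0, 4.0]
restored = interpolate(kept)   # back to full length -> [0.0, 1.0, 2.0, 3.0, 4.0]
```

The motion-transfer step runs on `kept`, so it sees only half the frames; interpolation afterwards is cheap compared to rerunning the heavy model on every frame.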