ComfyUI workflow using the LatentSync nodes from https://github.com/ShmuelRonen/ComfyUI-LatentSyncWrapper to add audio lip sync to an existing video - with loop extending to match the video source to the audio length (rough sketch of that idea below). Ready to run. There is no technical reason you couldn't link this to the end of an LTX or Hunyuan video gen workflow - but for me it's better to keep them separate and only pass the best quality final videos into this feed. On Large Pro + GPU, the first run (23 second output) took 340 seconds, though I think that included downloading some model files that will stay cached from now on; a 10 second output took 93 seconds to generate.
GitHub - ShmuelRonen/ComfyUI-LatentSyncWrapper: This node provides lip-sync capabilities in ComfyUI using ByteDance's LatentSync model. It allows you to synchronize video lips with audio input.
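For anyone wondering what "loop extending" means here: the source video's frames get repeated until they cover the full audio length, then trimmed to an exact match. This is just a standalone Python sketch of that idea (the function name, fps value and frame list are made up for illustration, not the wrapper node's actual inputs or API):

```python
# Rough sketch of the "loop extend" idea: repeat the source video's frames
# until they cover the audio duration, then trim to an exact frame count.
# Names and values here are hypothetical, not the LatentSync node's API.

def loop_extend_frames(frames, fps, audio_seconds):
    """Repeat `frames` until the clip is at least `audio_seconds` long, then trim."""
    if not frames:
        raise ValueError("need at least one source frame")
    target_frames = int(round(audio_seconds * fps))
    looped = []
    while len(looped) < target_frames:
        looped.extend(frames)          # loop the whole source clip again
    return looped[:target_frames]      # cut to exactly the audio length

# Example: a 5 second clip at 25 fps looped to cover 12 seconds of audio.
source = list(range(5 * 25))           # stand-in for decoded frames
extended = loop_extend_frames(source, fps=25, audio_seconds=12)
print(len(extended))                   # 300 frames = 12 s at 25 fps
```

In the actual workflow this happens on the image batch before it hits the lip-sync node, so the audio never runs past the end of the video.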