Introduction
An advanced image-to-video generation workflow built on Tencent’s HunyuanVideo-I2V framework, enhanced by Jukka Seppänen’s (@Kijaidesign) ComfyUI nodes. This workflow excels at generating high-quality videos from a single source image while preserving temporal consistency and smooth motion dynamics.
Following the successful open-sourcing of HunyuanVideo, Tencent has released HunyuanVideo-I2V, a new image-to-video generation framework intended to accelerate exploration by the open-source community.
The HunyuanVideo-I2V repository contains the official PyTorch model definitions, pre-trained weights, and inference/sampling code; more visualizations are available on the Tencent project page. Tencent has also released LoRA training code for customizable special effects, which can be used to create more interesting video effects.
https://github.com/kijai/ComfyUI-HunyuanVideoWrapper
https://github.com/Tencent/HunyuanVideo-I2V
Recommended machine: Ultra
Workflow Overview
How to use this workflow
Step 1: Load Image
Step 2: Adjust Video parameters
Set the generation resolution. Results are very poor below 540. Note that this parameter controls the resolution (quality) of the output video, not the size of the input image.
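As a rough sketch of one way to choose that value (not part of the workflow itself; the 540 floor and the multiple-of-16 rounding are my assumptions, not values read from the nodes), you could compute a generation size that keeps the source aspect ratio while respecting the quality note above:

```python
# Hypothetical helper: pick a generation resolution for the output video.
# Assumptions: the short side should be >= 540, and dimensions are snapped
# to multiples of 16 (a common requirement for video diffusion models).
def pick_resolution(src_width: int, src_height: int, min_short_side: int = 540) -> tuple[int, int]:
    short = min(src_width, src_height)
    scale = max(1.0, min_short_side / short)      # only scale up, never drop below 540
    w = int(round(src_width * scale / 16)) * 16   # snap width to a multiple of 16
    h = int(round(src_height * scale / 16)) * 16  # snap height to a multiple of 16
    return w, h

# Examples:
print(pick_resolution(1280, 720))  # -> (1280, 720), already above the 540 floor
print(pick_resolution(640, 360))   # -> (960, 544), upscaled to reach 540 on the short side
```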
Step 3: Input the Prompt
There is no need to describe the entire image in detail; just enter the key information, such as camera movement, action, and so on.
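For example, a prompt as short as the following is usually enough (an illustrative prompt, not one shipped with the workflow):

"Slow dolly-in, the woman turns her head toward the camera and smiles, hair swaying gently in the wind, soft cinematic lighting."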
Step 4: Set the number of sampler steps
In my testing, steps=30 produced very good results for anime-style (2D) videos, but the faces of real people came out poorly; at steps=50, facial detail on real people gradually became clear, though there is still a small chance you will need to regenerate to get a good result.
Step 5: Get Video
You can change the video length by setting frame_rate or num_frames (in the WanVideo Empty Embeds node). Video length in seconds = num_frames / frame_rate.
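As a quick worked example of that formula (the specific numbers are illustrative, not defaults read from the node):

```python
# duration = num_frames / frame_rate, as stated above.
num_frames = 97   # example value; set this in the WanVideo Empty Embeds node
frame_rate = 24   # example value (frames per second)

duration_seconds = num_frames / frame_rate
print(f"{duration_seconds:.2f} s")  # -> 4.04 s
```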