Introduction
VACE is an all-in-one model designed for video creation and editing. It encompasses various tasks, including reference-to-video generation (R2V), video-to-video editing (V2V), and masked video-to-video editing (MV2V), allowing users to compose these tasks freely. This functionality enables users to explore diverse possibilities and streamlines their workflows effectively, offering a range of capabilities, such as Move-Anything, Swap-Anything, Reference-Anything, Expand-Anything, Animate-Anything, and more.
This template provides three workflows: 'control', 'prompt word to replace object', and 'replace objects with reference images'.
https://github.com/ali-vilab/VACE?tab=readme-ov-file
https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview
https://huggingface.co/Kijai/WanVideo_comfy
Recommended machine: Ultra-PRO
Workflow Overview
How to use this workflow
Part 1: control
Select the 'control' workflow.
Step 1: Upload and adjust videos
1. Upload a video.
2. Adjust the 'force_rate' parameter to set the frame rate of the uploaded video, and the 'frame_load_cap' parameter to set its length. The formula is: frame_load_cap / force_rate = video length (in seconds).
3. Adjust the 'width' and 'height' parameters to set the video resolution, and the 'crop' parameter to choose whether to crop when resizing. If you select 'disable', the video is only scaled, not cropped.
4. On the AIO Aux Preprocessor node, you can choose an OpenPose or Depth preprocessor.
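The frame-length formula in step 2 can be sketched as follows (the parameter names mirror the node fields; the numeric values are only illustrative):

```python
# Sketch of the video-loader math: frame_load_cap / force_rate = length in seconds.

def video_length_seconds(frame_load_cap: int, force_rate: float) -> float:
    """Length of the loaded clip in seconds."""
    return frame_load_cap / force_rate

def frames_for_duration(seconds: float, force_rate: float) -> int:
    """Invert the formula: how many frames to load for a target duration."""
    return round(seconds * force_rate)

print(video_length_seconds(frame_load_cap=81, force_rate=16))  # 5.0625 seconds
print(frames_for_duration(seconds=5, force_rate=16))           # 80 frames
```

In practice you usually pick a target duration first and solve for frame_load_cap, as the second helper does.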
Step 2: Input prompt words and speed up video generation
1. Enabling the WanVideo TeaCache node speeds up video generation but may reduce video quality.
2. Enter your prompt.
Step 3: Adjust the output video
1. In my testing, steps=30 produced very good results for anime-style (2D) videos, but photorealistic faces looked poor; at steps=50, facial detail on photorealistic subjects gradually became clear, though there was still a small chance of a bad generation that needed a re-roll.
2. Adjust the 'width' and 'height' parameters to set the output resolution, the 'frame_rate' parameter to set the output frame rate, and the 'num_frame' parameter to set the output length. The formula is: num_frame / frame_rate = video length (in seconds).
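To keep the output clip the same duration as the loaded input, the two formulas above can be combined. A minimal sketch (parameter names follow the workflow nodes; values are illustrative):

```python
# Input length:  frame_load_cap / force_rate   (seconds)
# Output length: num_frame / frame_rate        (seconds)
# Setting the two equal gives num_frame = (frame_load_cap / force_rate) * frame_rate.

def matching_num_frame(frame_load_cap: int, force_rate: float,
                       frame_rate: float) -> int:
    """num_frame that makes the output clip as long as the input clip."""
    input_seconds = frame_load_cap / force_rate
    return round(input_seconds * frame_rate)

# 81 frames loaded at force_rate=16 is ~5.06 s; at an output
# frame_rate of 16 that is again 81 frames.
print(matching_num_frame(81, 16, 16))  # 81
```

If the output frame_rate differs from force_rate, num_frame scales proportionally (e.g. 48 frames at 16 fps in becomes 72 frames at 24 fps out).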
Part 2: prompt word to replace object
Select the 'prompt word to replace object' workflow.
Step 1: Upload and adjust videos
1. Upload a video.
2. Adjust the 'force_rate' parameter to set the frame rate of the uploaded video, and the 'frame_load_cap' parameter to set its length. The formula is: frame_load_cap / force_rate = video length (in seconds).
3. Set the 'prompt' parameter to the name of the object to be replaced, such as person, goods, or fruit; you can also use the generic term 'subject', which works in most cases.
4. After selecting the object to be replaced, enter the replacement object in the 'WanVideo TextEncode' node. For example, if the uploaded video contains a lemon and you enter the prompt 'apple', the lemon in the generated video will be replaced with an apple.
Step 2: Input prompt words and speed up video generation
1. Enabling the WanVideo TeaCache node speeds up video generation but may reduce video quality.
2. Enter your prompt.
Step 3: Adjust the output video
1. In my testing, steps=30 produced very good results for anime-style (2D) videos, but photorealistic faces looked poor; at steps=50, facial detail on photorealistic subjects gradually became clear, though there was still a small chance of a bad generation that needed a re-roll.
2. Adjust the 'width' and 'height' parameters to set the output resolution, the 'frame_rate' parameter to set the output frame rate, and the 'num_frame' parameter to set the output length. The formula is: num_frame / frame_rate = video length (in seconds).
Part 3: Replace objects with reference images
Select the 'Replace objects with reference images' workflow.
Step 1: Upload and adjust videos
1. Upload a video.
2. Upload one or more reference images.
3. Adjust the 'force_rate' parameter to set the frame rate of the uploaded video, and the 'frame_load_cap' parameter to set its length. The formula is: frame_load_cap / force_rate = video length (in seconds).
4. Set the 'prompt' parameter to the name of the object to be replaced, such as person, goods, or fruit; you can also use the generic term 'subject', which works in most cases.
5. After selecting the object to be replaced, enter the replacement object in the 'WanVideo TextEncode' node. For example, if the uploaded video contains a lemon and you enter the prompt 'apple', the lemon in the generated video will be replaced with an apple.
Step 2: Input prompt words and speed up video generation
1. Enabling the WanVideo TeaCache node speeds up video generation but may reduce video quality.
2. Enter your prompt.
Step 3: Adjust the output video
1. In my testing, steps=30 produced very good results for anime-style (2D) videos, but photorealistic faces looked poor; at steps=50, facial detail on photorealistic subjects gradually became clear, though there was still a small chance of a bad generation that needed a re-roll.
2. Adjust the 'width' and 'height' parameters to set the output resolution, the 'frame_rate' parameter to set the output frame rate, and the 'num_frame' parameter to set the output length. The formula is: num_frame / frame_rate = video length (in seconds).