This is a video-to-video workflow. Upload a reference video with the motion you like, then describe the character and background you want in the prompt, and a new video with the same motion is generated. Finally, load the newly generated video together with a portrait image into the face-swap workflow to produce an action video of the character you specify.
This workflow mainly uses Kijai's Wan2.1-Fun-Control model, which supports Canny, Depth, Pose, MLSD, and other control modes:
https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2.1-Fun-Control-14B_fp8_e4m3fn.safetensors
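If you prefer to fetch the model from Python instead of downloading it in the browser, a minimal sketch using huggingface_hub is shown below. The repo id and filename come from the link above; the destination folder is an assumption and should be adjusted to wherever your ComfyUI install expects diffusion models.

```python
# Minimal download sketch, assuming ComfyUI lives at ./ComfyUI
# and that its diffusion models go in models/diffusion_models (adjust if your setup differs).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Kijai/WanVideo_comfy",
    filename="Wan2.1-Fun-Control-14B_fp8_e4m3fn.safetensors",
    local_dir="ComfyUI/models/diffusion_models",  # assumed target folder
)
print("Model saved to:", model_path)
```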
Face swapping uses the ReActor node:
https://github.com/Gourieff/ComfyUI-ReActor
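As a rough install sketch, the node pack can be cloned into ComfyUI's custom_nodes folder and its Python dependencies installed. The ComfyUI path below is an assumption; you can also install the node through ComfyUI-Manager instead.

```python
# Install sketch, assuming ComfyUI lives at ./ComfyUI (adjust the path for your setup).
import subprocess
from pathlib import Path

custom_nodes = Path("ComfyUI/custom_nodes")
repo_dir = custom_nodes / "ComfyUI-ReActor"

# Clone the ReActor node pack if it is not already present
if not repo_dir.exists():
    subprocess.run(
        ["git", "clone", "https://github.com/Gourieff/ComfyUI-ReActor", str(repo_dir)],
        check=True,
    )

# Install the node's Python dependencies (assumes a requirements.txt in the repo)
requirements = repo_dir / "requirements.txt"
if requirements.exists():
    subprocess.run(["pip", "install", "-r", str(requirements)], check=True)
```

Restart ComfyUI after installing so the ReActor nodes are picked up.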