FramePack Image to Video (I2V)
Super easy to use! All setup and optimizations have been completed for you. Just drop in your image and write a prompt describing what you want animated!
This includes a custom fine-tuned Hunyuan model with better prompt adherence and less censorship than the original.
Use at least ULTRA for the best experience.
Thanks to Kijai for making an excellent wrapper to get this working in ComfyUI: https://github.com/kijai/ComfyUI-FramePackWrapper
More info from https://github.com/lllyasviel/FramePack:
Links: Paper, Project Page
FramePack is a next-frame (next-frame-section) prediction neural network structure that generates videos progressively.
FramePack compresses input contexts to a constant length so that the generation workload is invariant to video length.
FramePack can process a very large number of frames with 13B models even on laptop GPUs.
FramePack can be trained with a much larger batch size, similar to the batch size for image diffusion training.
Video diffusion, but feels like image diffusion.
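The constant-length context claim above can be illustrated with a toy calculation: if each older frame is downsampled by a progressively larger factor before being added to the context, the total token count forms a converging geometric series and stays bounded no matter how long the video gets. This is only a sketch of the idea — the kernel schedule and token counts here are illustrative, not FramePack's actual configuration.

```python
# Toy illustration of FramePack-style context compression:
# the frame i steps back from the newest is downsampled by 2**i per
# spatial axis, so its token count shrinks by 4**i. The per-frame
# token budget (1536) is an illustrative number, not the real model's.

def context_tokens(num_past_frames: int,
                   tokens_per_frame: int = 1536) -> int:
    """Total context tokens contributed by all past frames."""
    total = 0
    for i in range(num_past_frames):
        total += tokens_per_frame // (4 ** i)  # 4x fewer tokens per step back
    return total

# The series 1 + 1/4 + 1/16 + ... converges to 4/3, so the context
# approaches a constant (~2048 tokens here) instead of growing
# linearly with video length.
for n in (1, 4, 16, 64):
    print(n, context_tokens(n))
```

Because the workload per generated frame-section is bounded by this fixed context, generation cost stays flat as the video grows — which is what lets long videos run on modest GPUs.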