Free diffusion-pipe Online

Diffusion-Pipe is an advanced training script optimized for diffusion models such as HunyuanVideo, Wan2.1 Video, and FLUX. Designed for efficient LoRA training, it uses a pipeline-parallel architecture, enabling the training of large-scale models that exceed the memory of a single GPU. The framework streamlines the LoRA training process for models such as HunyuanVideo and Wan2.1, making it an essential tool for AI developers focused on character-consistent video generation and customized AI video and image styles.

Quick Start of diffusion-pipe

How to Train Your Wan2.1 LoRA with Diffusion-Pipe

  • 1. Access Diffusion-Pipe within MimicPC: Log in to your MimicPC account, click "Add New App," and select "diffusion-pipe" version 1.0.2. This pre-installed version is optimized for Wan2.1 LoRA training, eliminating complex setups.
  • 2. Name Your Dataset and Select Base Model: In Diffusion-Pipe, create a simple English dataset name (e.g., "test_wan") for your project. Then, select "Wan21" as the base model. Click "CREATE DATASET" to initialize the dataset structure.
  • 3. Upload Dataset and Caption Files: Upload your dataset (sample images, videos, .txt caption files, or other data formats supported by Wan2.1) to the "Dataset Configurations" section. If training involves text-to-image, upload the corresponding caption files as well. Ensure all files are correctly formatted and error-free (see the caption-check sketch after this list).
  • 4. Configure Model Path and Settings: In the "DATASET DIRECTORY" section, specify the Model Path where the trained LoRA will be saved. Next, locate the base model file inside the "models" directory: open the "wan" folder, click the base model file (e.g., "T2V-480P-1.3B"), and select "Copy Full Path." Paste the copied path into the "Official checkpoint Path" field of the "Model Configurations" section. Configure the Tensor Data Type (e.g., FP16, FP32) and Output Type to suit your project; note that the selected Output Type must match the base model you will load in ComfyUI for compatibility (see the path and dtype sketch after this list).
  • 5. Adjust Training Parameters: Fine-tune the training parameters in the "Training Parameters Configuration" section. Typically, you'll adjust the training steps in the "Epochs" tab, setting them to 1000 or more for a more refined LoRA, and set the save frequency to every 200-500 epochs so that intermediate LoRAs are written periodically (see the checkpoint-count sketch after this list). Save your configuration.
  • 6. Start Training and Monitor: Click the "Start Training" button to begin the LoRA training process, and monitor progress through the provided logs. Training time depends on dataset size and model complexity. Upon completion, the trained LoRA file will be located in the output folder.
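
Before uploading (step 3), it can help to confirm that every image or video has a matching caption. The sketch below is a minimal, hypothetical pre-upload check in Python; the folder name, the extension list, and the one-.txt-caption-per-media-file convention are assumptions for illustration, not MimicPC requirements.

# Hypothetical pre-upload check: confirm each image/video in the dataset
# folder has a .txt caption with the same basename. Folder name and
# extensions are illustrative only.
from pathlib import Path

DATASET_DIR = Path("test_wan")  # example dataset name from step 2
MEDIA_EXTS = {".jpg", ".jpeg", ".png", ".webp", ".mp4"}

def check_captions(dataset_dir: Path) -> None:
    media = [p for p in dataset_dir.iterdir() if p.suffix.lower() in MEDIA_EXTS]
    missing = [p.name for p in media if not p.with_suffix(".txt").exists()]
    print(f"{len(media)} media files found, {len(missing)} missing captions")
    for name in missing:
        print("  no caption for:", name)

check_captions(DATASET_DIR)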
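
For step 4, a quick way to catch a typo in the copied checkpoint path is to test it before saving the configuration. This is a small illustrative sketch that assumes PyTorch is installed; the example path and the dtype labels mirror the values mentioned in the step and are not an official MimicPC or diffusion-pipe API.

# Illustrative sanity check for step 4: does the copied checkpoint path
# exist, and which torch dtype does the chosen Tensor Data Type map to?
from pathlib import Path
import torch

CKPT_PATH = Path("models/wan/T2V-480P-1.3B")  # result of "Copy Full Path"
TENSOR_DTYPE = "FP16"                         # value chosen in the GUI

DTYPE_MAP = {"FP16": torch.float16, "FP32": torch.float32}

if not CKPT_PATH.exists():
    raise FileNotFoundError(f"checkpoint path not found: {CKPT_PATH}")
print("training dtype:", DTYPE_MAP[TENSOR_DTYPE])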
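
Finally, the numbers in step 5 determine how many intermediate LoRA files will appear in the output folder. A rough sketch of that arithmetic, using example values from the step (1000 epochs, save frequency of 250, which falls in the suggested 200-500 range):

# Rough arithmetic for step 5: how many intermediate LoRAs to expect.
epochs = 1000      # value set in the "Epochs" tab
save_every = 250   # save frequency within the suggested 200-500 range
checkpoints = epochs // save_every
save_points = [i * save_every for i in range(1, checkpoints + 1)]
print(f"{checkpoints} LoRA checkpoints, saved at epochs {save_points}")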

How to Train Wan2.1 LoRA with Diffusion-Pipe in 6 Steps


FAQ About diffusion-pipe