Introduction
The OminiControl workflow provides a powerful, flexible way to enhance Diffusion Transformer models like FLUX with precise control over the generation process.
OminiControl
Universal Control 🌐: A unified control framework that supports both subject-driven control and spatial control (such as edge-guided and in-painting generation).
Minimal Design 🚀: Injects control signals while preserving original model structure. Only introduces 0.1% additional parameters to the base model.
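To put the 0.1% figure in perspective, here is a quick back-of-the-envelope estimate. The ~12B base-model parameter count is an assumption for illustration, not a number from this document:

```python
# Rough parameter-overhead estimate for OminiControl on a FLUX-scale model.
# The 12B base size is an assumed figure for illustration only.
base_params = 12_000_000_000      # approx. FLUX.1 parameter count (assumption)
overhead_ratio = 0.001            # 0.1% additional parameters (from the text)

extra_params = int(base_params * overhead_ratio)
print(f"Added parameters: ~{extra_params / 1e6:.0f}M")  # ~12M
```

Even at this scale, the control module stays in the low tens of millions of parameters, which is why it loads and trains cheaply alongside the frozen base model.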
Limitations
The model's subject-driven generation primarily works with objects rather than human subjects due to the absence of human data in training.
The subject-driven generation model may not work well with FLUX.1-dev.
The released model currently only supports a resolution of 512x512.
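Because the released model only accepts 512x512 inputs, larger or non-square images usually need a center crop before resizing. A minimal stdlib sketch of the crop-box math (the helper name is illustrative, not part of OminiControl):

```python
def center_crop_box(width: int, height: int):
    """Return a (left, top, right, bottom) box that center-crops the
    largest possible square, ready to be resized to 512x512."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# Example: a 1024x768 image crops to a centered 768x768 square,
# which is then resized to 512x512.
print(center_crop_box(1024, 768))  # (128, 0, 896, 768)
```

The resulting box can be passed directly to an image library's crop call (e.g. Pillow's `Image.crop`) before resizing.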
Read more and download: https://github.com/Yuanshi9815/OminiControl.git
Workflow Overview
How to use this workflow?
Step 1: Input Image and Text Prompts
- Load Image: Upload the image you want to modify.
- Prompt: Input a descriptive prompt to guide the generation process.
Select Control Type:
- Fill: Mask specific areas for localized edits.
- Subject: Subject-driven generation from a reference image.
- Spatial: Use spatial controls like edges, depth, or color.
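For the Fill control, the masked region is where new content is generated while unmasked pixels are kept from the input. A pure-Python sketch of that compositing idea, simplified to 2D grayscale lists as a stand-in for image tensors:

```python
def composite_fill(original, generated, mask):
    """Keep original pixels where mask is 0, take generated pixels where
    mask is 1. All inputs are equally sized 2D lists (simplified images)."""
    return [
        [g if m else o for o, g, m in zip(orow, grow, mrow)]
        for orow, grow, mrow in zip(original, generated, mask)
    ]

original = [[10, 10], [10, 10]]
generated = [[99, 99], [99, 99]]
mask = [[0, 1], [1, 0]]  # edit only the anti-diagonal pixels
print(composite_fill(original, generated, mask))  # [[10, 99], [99, 10]]
```

This is why a tight mask matters: everything outside it survives the edit unchanged.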
Step 2: Adjust Parameters
- Seed: Set a fixed seed for consistent results or randomize for variation.
- Control Type: Choose canny, depth, or coloring for spatial alignment.
- Mask Settings: For Fill control, apply a mask to the target area and input the desired modification.
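The effect of the Seed parameter can be illustrated with Python's stdlib RNG: the same seed always reproduces the same sampling noise, while a random seed gives variation. Diffusion samplers apply the same principle to their noise generators:

```python
import random

def sample_noise(seed=None, n=4):
    """Draw n pseudo-random values; a stand-in for the sampler's initial noise."""
    rng = random.Random(seed)
    return [round(rng.random(), 3) for _ in range(n)]

fixed_a = sample_noise(seed=42)
fixed_b = sample_noise(seed=42)
print(fixed_a == fixed_b)  # True: identical seed, identical noise, identical image
```

Fix the seed while tuning the prompt or mask so that changes in the output come from your edits, not from new noise.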
Step 3: Generate and Export Image
Use the Save Image node to export the generated image.