Introduction
HiDream-I1, an open-source image generation foundation model by HiDream.ai, has quickly gained prominence for its powerful performance and innovative design. With 17 billion parameters, it surpasses the widely popular Flux (12 billion parameters) and has outperformed Flux on the Hugging Face Text-to-Image Model Leaderboard, making it a focal point in the industry.
With roughly 42% more parameters than Flux, HiDream-I1 delivers stronger image generation capabilities. Whether in detail rendering or overall quality, it produces exceptional results and stands out in direct comparisons with Flux.
HiDream-I1 excels across diverse styles, including abstract and cartoon art, with an industry-leading HPS v2.1 score that reflects its strength in human aesthetic preference evaluations.
HiDream-I1 accurately interprets complex user instructions, earning top scores in benchmarks like GenEval and DPG, and providing an unparalleled creative experience.
https://huggingface.co/HiDream-ai/HiDream-I1-Dev
https://github.com/lum3on/comfyui_HiDream-Sampler
https://github.com/SanDiegoDude/ComfyUI-HiDream-Sampler/
https://github.com/hykilpikonna/HiDream-I1-nf4
Recommended machine: Ultra
Workflow Overview
How to use this workflow
Part 1 : Text2Img
Step 1 : Input the Prompt
Step 2 : Set the sampler parameters
- model_type: Selects the base model. Each model type you switch to must be downloaded separately, so changing it freely will consume a lot of local storage.
- resolution: Sets the output image size. The node only offers its built-in preset sizes; arbitrary aspect ratios cannot be entered.
- num_images: Sets how many images are generated per run.
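If you drive ComfyUI through its HTTP API rather than the graph editor, the three parameters above end up as inputs on the sampler node. The sketch below shows one way to build that node entry; the class name `HiDreamSampler`, the preset resolution strings, and the field names are assumptions for illustration, not a verified API.

```python
# Hypothetical sketch of filling in the Step 2 sampler parameters for a
# ComfyUI API prompt. Node/field names are assumptions, not verified.
RESOLUTIONS = ["1024x1024", "768x1360", "1360x768"]  # node presets only; free ratios unsupported

def build_sampler_inputs(prompt: str, model_type: str = "dev",
                         resolution: str = "1024x1024", num_images: int = 1) -> dict:
    """Return one node entry for a ComfyUI-style API prompt dictionary."""
    if resolution not in RESOLUTIONS:
        # The node restricts you to its preset sizes, so validate up front.
        raise ValueError(f"resolution must be one of the node presets: {RESOLUTIONS}")
    if num_images < 1:
        raise ValueError("num_images must be at least 1")
    return {
        "class_type": "HiDreamSampler",
        "inputs": {
            "prompt": prompt,
            "model_type": model_type,   # switching types triggers a separate model download
            "resolution": resolution,
            "num_images": num_images,
        },
    }
```

This keeps the node's constraints (preset-only resolutions, at least one output image) explicit in code instead of failing later inside the workflow.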
Step 3 : Get Image
Part 2 : Img2Img
Step 1 : Create an Image
To help users obtain image material quickly, the image generated by Text2Img is wired directly into the Img2Img node. Users who already have image assets can instead add a Load Image node and connect it.
Step 2 : Input the Prompt
HiDream's Img2Img generates by referencing the style of the source image (similar to IPAdapter or inpainting), so the prompt you enter only steers the direction of the redraw; it does not provide structural control the way Canny or Depth does.
Step 3 : Set the sampler parameters
- model_type: Selects the base model. Each model type you switch to must be downloaded separately, so changing it freely will consume a lot of local storage.
- resolution: Sets the output image size. The node only offers its built-in preset sizes; arbitrary aspect ratios cannot be entered.
- denoising_strength: Controls how strongly the image is redrawn; lower values stay closer to the source image, higher values deviate further from it.
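A common img2img convention (used by diffusers-style pipelines, and assumed here for HiDream as well) is that denoising_strength decides how far into the noise schedule the source image is pushed: only the last `strength × total_steps` denoising steps actually run, which is why low values preserve the source and high values redraw it. A minimal sketch of that relationship:

```python
def steps_to_run(total_steps: int, denoising_strength: float) -> int:
    """Number of scheduler steps that actually execute for a given strength.

    Assumption: HiDream's Img2Img follows the usual convention where the
    source image is noised part-way, so only the final portion of the
    schedule is denoised. strength=0 keeps the source untouched;
    strength=1 is equivalent to generating from pure noise.
    """
    s = min(max(denoising_strength, 0.0), 1.0)  # clamp to the valid [0, 1] range
    return int(total_steps * s)
```

For example, with a 30-step schedule, a strength of 0.5 would run only the last 15 steps, keeping most of the source composition intact.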