
SD3.5 Large Controlnets - Depth

MimicPC · 11/29/2024
Tags: ComfyUI, New&Hot, Image Editing, SD & SDXL

Introduction

On 10/26, Stability AI announced new capabilities for Stable Diffusion 3.5 Large with the release of three ControlNets: Blur, Canny, and Depth.

These models are free for both commercial and non-commercial use, offered under the permissive Stability AI Community License, providing powerful tools for creators and developers.


This Depth workflow uses depth maps to guide image generation, providing spatial and compositional information derived from your uploaded image and prompts. It is ideal for architectural designs, 3D asset texturing, and scenarios requiring precise spatial control and depth-aware rendering.


Depth


Uses depth maps, generated by DepthFM, to guide image generation. Great for architectural renderings, texturing 3D assets, and other use cases that require exact control over the composition of an image.
Read more: https://huggingface.co/stabilityai/stable-diffusion-3.5-controlnets
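
For readers who prefer a script to the ComfyUI graph, a roughly equivalent setup can be sketched with Hugging Face diffusers, which provides SD3ControlNetModel and StableDiffusion3ControlNetPipeline. This is only a sketch, not part of the MimicPC workflow; the repo id for the diffusers-format Depth ControlNet weights is an assumption, so point it at whichever checkpoint you actually have.

```python
# Hypothetical diffusers equivalent of the ComfyUI setup (not this workflow itself).
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline

# The Depth ControlNet repo id is an assumption; swap in your local checkpoint if needed.
controlnet = SD3ControlNetModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large-controlnet-depth",
    torch_dtype=torch.float16,
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```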


Workflow Overview

How to Use this Workflow

1. Upload your Image
Upload your main image, which will also set the image dimensions for the empty latent.
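
As a minimal sketch of what this step implies (assuming a local file named input.png as a stand-in for your upload), the image dimensions can be read with Pillow and rounded to the multiples of 16 that SD3.5 latents generally expect:

```python
from PIL import Image

init_image = Image.open("input.png").convert("RGB")  # placeholder path for your uploaded image

# The workflow reuses these dimensions for the empty latent;
# round down to multiples of 16 to stay compatible with SD3.5.
width, height = init_image.size
width, height = (width // 16) * 16, (height // 16) * 16
```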


2. Depth Node
The Depth node uses depth maps to provide spatial and compositional guidance, ensuring precise depth and layout in the generated image.
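
Inside ComfyUI, this preprocessing is handled by the workflow's depth nodes (Stability AI cites DepthFM). Outside ComfyUI, one possible stand-in is a depth-estimation pipeline from transformers; the model id below is only an example and is not what this workflow ships with.

```python
from transformers import pipeline
from PIL import Image

# Example depth estimator used as a stand-in for the workflow's depth preprocessor node.
depth_estimator = pipeline(
    "depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",  # example model id, not part of this workflow
)

init_image = Image.open("input.png").convert("RGB")  # placeholder path
depth_map = depth_estimator(init_image)["depth"]      # PIL image of the predicted depth
depth_map.save("depth_map.png")
```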


3. Enter your Prompt
Add prompts tailored to the desired style and depth-aware variations you want to achieve.

4. Output
Generate your depth-guided image with accurate spatial details and tailored stylization.
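
Tying the earlier sketches together (with pipe, depth_map, width, and height already defined), a depth-guided generation call might look like the following; the prompt, conditioning scale, and sampler settings are illustrative values rather than this workflow's defaults.

```python
# Assumes `pipe`, `depth_map`, `width`, and `height` from the earlier sketches.
image = pipe(
    prompt="a modern glass house in a pine forest at golden hour",  # example prompt
    negative_prompt="blurry, low quality",
    control_image=depth_map,
    controlnet_conditioning_scale=0.8,  # illustrative: how strongly the depth map steers composition
    width=width,
    height=height,
    num_inference_steps=28,
    guidance_scale=4.5,
).images[0]
image.save("depth_guided_output.png")
```

Lowering controlnet_conditioning_scale loosens the depth constraint and gives the prompt more influence over the composition.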


Details
APP: ComfyUI (v0.3.6)
Update Time: 11/29/2024
File Space: 1.3 GB
Models: 0
Extensions: 5