ControlNet offers ways to control images generated by Stable Diffusion that prompts alone cannot provide. However, its extensive features and intricate settings can overwhelm many users.
To demystify ControlNet, this article presents a concise overview of its key functionalities. Additionally, it introduces the sd-webui-controlnet extension, a highly reliable open-source tool with a significant following on GitHub. Detailed installation and usage instructions are provided to streamline the integration process.
Mikubill/sd-webui-controlnet: WebUI extension for ControlNet (github.com)
What is ControlNet?
ControlNet is a neural network that controls image generation by adding extra conditions to Stable Diffusion. The name also refers to ControlNet for Stable Diffusion Web UI, the extension that brings this capability to the Stable Diffusion Web UI.
ControlNet constrains generation so that the output does not deviate significantly from features extracted from a reference image, such as its pose or composition. In other words, it generates new images that preserve those extracted features.
With ControlNet, users can generate images that follow a specific pose or keep the lines of a drawing intact, opening up a diverse range of expressive possibilities.
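For a concrete picture of what “additional conditions” means, here is a minimal sketch using the diffusers library rather than the Web UI extension this article covers. The checkpoint IDs and file paths are illustrative assumptions; substitute the SD 1.5 base model and edge map you actually use.

```python
# Minimal sketch: ControlNet conditioning with the diffusers library.
# Assumes an SD 1.5 base model and the ControlNet 1.1 Canny weights;
# substitute the checkpoint IDs you actually use.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# An edge map extracted from a reference image (placeholder path).
canny_image = load_image("canny.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint works here
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The edge map constrains the layout; the prompt fills in everything else.
image = pipe("a watercolor illustration of a cat", image=canny_image).images[0]
image.save("result.png")
```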
Exploring ControlNet: Unlocking its Capabilities
ControlNet offers a myriad of applications, including:
- Pose Definition: Generating images that follow a pose extracted from a reference image or specified as a stick figure.
- Illustration Style and Texture Modification: Enabling the alteration of illustration styles and textures to create diverse visual outputs.
- Color Enhancement in Line Drawings: Adding color to line drawings to enhance visual appeal and realism.
In the following examples, the reference image on the left is compared to the result generated using ControlNet on the right, showcasing its transformative potential.
Example 1 (ControlNet OpenPose):
Example 2 (ControlNet Segmentation):
Example 3 (ControlNet Lineart):
Introducing the ControlNet Feature Extraction Model
The ControlNet model integrates Stable Diffusion with various feature extraction models to achieve precise image control. By harnessing these models, ControlNet ensures that generated images maintain specific features.
Feature extraction is the foundation for manipulating images in diverse ways, and a wide range of feature extraction models is available, covering most common use cases. Some notable models include:
- OpenPose for Pose Control: OpenPose, originally developed for human pose estimation, plays a central role in ControlNet for pose control. It extracts poses from images so that new images can be generated while preserving the pose. Specifying a pose through prompts alone is difficult, so this saves considerable effort. It can also generate images from stick figures, letting you dictate an ideal pose even without a reference photo.
- Depth Model for Extracting Depth from Images: A depth model extracts depth information from an image, giving control over its spatial layout. This grasp of the 3D structure helps generate images with consistent depth, and it proves especially useful when altering the texture of objects, such as furniture, within an image.
- Canny Model, Soft Edge, Scribble for Extracting Edges from Images: The Canny model extracts edges from an image, effectively turning it into a line drawing; this allows recoloring an illustration or coloring a monochrome line drawing while preserving the original linework. Soft Edge and Scribble extract only the major lines and are commonly used for tasks such as converting an illustration’s texture. (A code sketch of these extractors follows the example below.)
Example of feature extraction:
- Original image
- OpenPose
- Depth
- Canny
- Soft Edge
- Scribble
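To reproduce extractions like these outside the Web UI, the controlnet_aux package (pip install controlnet-aux) provides standalone versions of the same family of annotators. A minimal sketch, assuming a local reference image; the file names are placeholders:

```python
# Minimal sketch: running ControlNet-style feature extractors standalone
# with the controlnet_aux package. File names are placeholders.
from PIL import Image
from controlnet_aux import CannyDetector, MidasDetector, OpenposeDetector

source = Image.open("reference.png")

openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
depth = MidasDetector.from_pretrained("lllyasviel/Annotators")
canny = CannyDetector()  # purely algorithmic, no weights to download

openpose(source).save("openpose.png")  # stick-figure pose map
depth(source).save("depth.png")        # grayscale depth map
canny(source).save("canny.png")        # black-and-white edge map
```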
Introducing the ControlNet Extension for Stable Diffusion Web UI
The ControlNet feature is integrated into the Stable Diffusion Web UI through the ControlNet extension. This article serves as a guide to using ControlNet directly within the Stable Diffusion Web UI. The general procedure is outlined below:
Installation Steps
- Check if ControlNet is already installed
- Add the ControlNet extension to Stable Diffusion Web UI
- Download the feature extraction models
1. Confirming ControlNet Isn’t Installed
Begin by checking whether ControlNet is already installed; it is sometimes pulled in during the installation of the Stable Diffusion Web UI or of other extensions. To verify, check for the presence of the ControlNet menu at the top of the screen.
If ControlNet is already installed, skip ahead to step 3: “Downloading Feature Extraction Models”. Otherwise, continue with step 2.
2. Adding the ControlNet Extension to Stable Diffusion Web UI
To install the ControlNet extension “sd-webui-controlnet”, follow these steps:
- Navigate to the Extensions tab within the Stable Diffusion Web UI.
- Switch to the Install from URL tab.
- In the URL for extensions’ git repository field, enter: https://github.com/Mikubill/sd-webui-controlnet.
- Click the Install button.
- After installation completes, switch to the Installed tab.
- Click “Apply and restart UI” so the changes take effect.
Once the UI restarts, if the ControlNet menu is displayed as illustrated below, the installation has completed successfully.
For Windows users encountering the error “ModuleNotFoundError: No module named ‘pywintypes’” when ControlNet loads, run “pip install pypiwin32” in the command prompt to install the required package.
3. Downloading Feature Extraction Models
The ControlNet extension does not include feature extraction models. Therefore, it is necessary to download and place the feature extraction models in the appropriate folder.
You can download the feature extraction models from the Hugging Face repository below:
lllyasviel/ControlNet-v1-1 at main (huggingface.co)
The .pth files are the model files, and the .yaml files are the model structure definition files. Please download both types of files.
You can download all of the models, but note that the files are large and may take a while. For efficiency, it is recommended to download them in the following order of priority:
- control_v11p_sd15_openpose.pth, control_v11p_sd15_openpose.yaml
- control_v11f1p_sd15_depth.pth, control_v11f1p_sd15_depth.yaml
- The Canny, Scribble, and Soft Edge models (control_v11p_sd15_canny, control_v11p_sd15_scribble, control_v11p_sd15_softedge)
- Others
Place the downloaded files under “stable-diffusion-webui/models/ControlNet”.
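If you prefer to script the download, the huggingface_hub package can fetch the files straight into that folder. A minimal sketch, assuming a standard Web UI directory layout and a recent huggingface_hub version (with local_dir support):

```python
# Minimal sketch: downloading the highest-priority ControlNet models with
# huggingface_hub. Adjust TARGET to match your Web UI install location.
from huggingface_hub import hf_hub_download

REPO = "lllyasviel/ControlNet-v1-1"
TARGET = "stable-diffusion-webui/models/ControlNet"

for filename in [
    "control_v11p_sd15_openpose.pth", "control_v11p_sd15_openpose.yaml",
    "control_v11f1p_sd15_depth.pth", "control_v11f1p_sd15_depth.yaml",
]:
    hf_hub_download(repo_id=REPO, filename=filename, local_dir=TARGET)
```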
Using ControlNet OpenPose
To get started, let’s use the ControlNet we just installed, beginning with OpenPose. We will use a photo of a person sitting in a formal kneeling position from the free stock photos provided by Pakutaso, and then generate an image of a schoolgirl sitting in the same pose.
Performing Feature Extraction (1/3)
Let’s start with feature extraction.
Feature extraction:
- Open the ControlNet menu.
- Set the image.
- Choose OpenPose for the Control Type.
- Click the feature extraction button.
If the generated image looks like a stick figure as shown below, the feature extraction was successful.
Generating an Image from Extracted Features (2/3)
- Check the “Enable” checkbox in the ControlNet menu.
- Configure desired image generation settings (similar to using txt2img)
- Prompt: “Photo of Japanese girl sitting on floor in a classroom, school uniform”
- Negative Prompt: “EasyNegative”
- Width: 768, Height: 512, Batch size: 6
- Click the image generation button (similar to using txt2img)
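The same run can also be scripted through the Web UI’s HTTP API (launch the UI with the --api flag). A hedged sketch: the ControlNet unit fields have changed across sd-webui-controlnet versions (older releases use “input_image” instead of “image”, for example), so check the API documentation for your installed version.

```python
# Hedged sketch: txt2img with a ControlNet OpenPose unit via the Web UI API.
# Field names follow recent sd-webui-controlnet versions; verify against
# your installed version's API docs.
import base64
import requests

with open("pose_reference.png", "rb") as f:  # placeholder input image
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "Photo of Japanese girl sitting on floor in a classroom, school uniform",
    "negative_prompt": "EasyNegative",
    "width": 768,
    "height": 512,
    "batch_size": 6,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "image": image_b64,
                "module": "openpose",                   # preprocessor
                "model": "control_v11p_sd15_openpose",  # downloaded model
            }]
        }
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
images = resp.json()["images"]  # list of base64-encoded PNGs
```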
Images Generated by OpenPose (3/3)
Input Image:
Prompt:
“Photo of Japanese girl sitting on the floor in a classroom, school uniform”
Result:
The generated images closely resemble the input image in terms of pose. By using ControlNet and OpenPose, we can extract poses and generate images in the same pose. I hope you find this explanation helpful.
For more detailed explanations, please refer to the following article.
How to Use ControlNet OpenPose.
Various Functions of ControlNet
Controlling Detailed Features (Preserving Facial Features, Clothing, and Atmosphere)
Learn how to use the ControlNet Tile function and explore specific examples of its application, such as converting between anime and live-action, image correction, and upscaling. This article provides a comprehensive guide to using ControlNet Tile, a feature that may be less familiar compared to OpenPose and Canny. Visit the link for more information: How to use ControlNet Tile and specific examples (mutual conversion between anime and live-action, correction, upscaling, etc.): ControlNet 1.1 New Features
Composition and Shape Control (Generating Images with Consistent Composition)
Discover how to use ControlNet Segmentation to generate images with the same composition. With ControlNet’s numerous functions, it can be challenging to determine which one to use. This article focuses on ControlNet Segmentation and provides practical examples to help you understand its usage. Read more here: How to use ControlNet Segmentation and explanation. Generating images with the same composition.
Maintain consistent composition and three-dimensional structure while generating images using ControlNet NormalMap. This article explores the application of ControlNet NormalMap and provides insight into its usage. Follow the link to learn more: ControlNet NormalMap. Generating images while maintaining composition and 3D structure.
Hand Correction with ControlNet Depth
Explore different methods of hand correction with this article, which covers various approaches and explains their effectiveness. Not every method works in every situation, but knowing when each applies can greatly improve your hand corrections. Read more here: Results of verifying all 6 methods for hand correction…
Line Drawing Extraction (Useful for Coloring, Live-action Adaptation, and Illustration)
Learn how to use ControlNet Soft Edge to change colors, adapt images into a live-action style, create animations, and add colors to line drawings. This article dives into practical examples of using ControlNet Soft Edge. Find out more by visiting the link: ControlNet Soft Edge for color changing, live-actionization, animation, and coloring.
Discover how to use ControlNet Scribble for coloring, live-action adaptation, animation, and adding colors to line drawings. This article provides practical examples to help you understand the potential uses of ControlNet Scribble. Follow the link for more information: ControlNet Scribble for color changing, live-actionization, animation, and coloring.
Achieve high-quality image generation, coloring, and line drawing with ControlNet 1.1 Canny. This article focuses on ControlNet Canny, explaining its features and providing valuable insights into its usage. Click on the link below to learn more: High-quality image generation from Canny, coloring, and line drawing with ControlNet 1.1.
Learn about the new features of ControlNet 1.1 Lineart and Anime Lineart in this article. Gain a better understanding of how to utilize Lineart and Anime Lineart for high-quality image generation from coloring and line drawing. Visit the link for more details: New features of ControlNet 1.1 Lineart. High-quality image generation from coloring and line drawing.
Other Features
Discover how to use ControlNet Inpaint, a powerful feature introduced in ControlNet 1.1. This article provides a comprehensive guide on how to utilize ControlNet Inpaint effectively, comparing it to three other processors. Click on the link to learn more: How to use ControlNet Inpaint and comparison of 3 processors. ControlNet 1.1 New Features
Explore ControlNet Shuffle, a new feature introduced in ControlNet 1.1, and learn how to make the most of its capabilities. This article provides a detailed explanation and practical examples of ControlNet Shuffle. Visit the link for more information: How to use ControlNet Shuffle, a new feature in ControlNet 1.1.
Learn about ControlNet Instruct Pix2Pix, a new feature introduced in ControlNet 1.1, and its usage by trying it out. This article provides insights into how to use ControlNet Instruct Pix2Pix effectively. Click on the link below to find out more: Trying out the new feature “instruct pix2pix (ip2p)” of ControlNet 1.1.
Discover how to free yourself from pose-related challenges by using ControlNet OpenPose. This article explains how to make the most of ControlNet OpenPose and overcome difficulties in achieving complex poses. Follow the link for more information: How to use ControlNet OpenPose. Free yourself from pose-related troubles.