With the release of ControlNet 1.1, interest in ControlNet as an essential extension for high-quality image generation has risen noticeably. In this article, we will walk through the enhancements and additions that ControlNet 1.1 brings. By the end, you will have a clear picture of the following points:
- The reasons behind the increased attention towards ControlNet 1.1.
- How the newly introduced features can be utilized.
- Detailed information on the capabilities of both existing and new features.
Summary of ControlNet 1.1 Update
Improvement 1: Addition of new features
An important aspect of the ControlNet 1.1 update is the expanded range of models compatible with ControlNet. This broadens the types of features that can be extracted from reference images, making ControlNet more capable overall.
Improvement 2: Enhancement of existing models
ControlNet 1.1 also improves the performance of existing features. By retraining on higher-quality data, the accuracy of the existing models has been significantly improved.
Improvement 3: Introduction of Reference Only mode
ControlNet 1.1 introduces the new Reference Only mode. This mode eliminates the need for a feature-extraction model, as it simply references the image directly. Despite being such a simple function, Reference Only is remarkably versatile; more details are given later in this article.
Advantages of ControlNet 1.1 Update
The ControlNet 1.1 update brings clear benefits to image generation with Stable Diffusion. In particular, powerful new models such as Tile, Lineart, and Anime Lineart greatly improve the generation process. By building on existing images rather than relying solely on prompts, ControlNet reduces trial and error and markedly improves generation efficiency. Compared with earlier versions, ControlNet 1.1 can also extract a wider range of features, broadening the selection of usable reference images.
How to Use ControlNet 1.1
Installation of ControlNet:
If ControlNet is not yet installed, please install it following the usual procedures and download the corresponding model. For detailed instructions on installing ControlNet, please refer to the article provided.
ControlNet is already installed:
If you already have ControlNet installed, you can update it by following these two steps:
- Update the source code.
- Download the model.
Updating the source code:
To update the source code, perform the following steps:
- Navigate to the Extensions tab and click on “Check for updates” to obtain the latest information on available extensions.
- Select the extensions you wish to update. If there are any extensions you do not want to update, simply uncheck them.
- Click on “Apply and quit” to update the extensions. Once the Stable Diffusion Web UI has closed, relaunch it.
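If you prefer to update from the command line instead of the Extensions tab, a git pull in the extension folder achieves the same thing. Below is a minimal sketch in Python; the path assumes the default AUTOMATIC1111 folder layout (sd-webui-controlnet), so adjust it to your install.

```python
# Minimal sketch: update the ControlNet extension via git.
# Assumes the default AUTOMATIC1111 layout; adjust ext_dir to your install.
import subprocess
from pathlib import Path

ext_dir = Path("stable-diffusion-webui/extensions/sd-webui-controlnet")

if ext_dir.is_dir():
    # Pull the latest source, then restart the Web UI to load it.
    subprocess.run(["git", "-C", str(ext_dir), "pull"], check=True)
else:
    print(f"ControlNet extension not found at {ext_dir}")
```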
Downloading the model:
The page provided below contains all the models available for ControlNet 1.1. For each ControlNet functionality, there is a corresponding model file (.pth) and configuration file (.yaml).
Please note that each model file is about 1.4 GB, so downloading all of them consumes a significant amount of disk space. It is sufficient to download only the model files you need; be sure to also download the matching yaml files.
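If you would rather script the download, the ControlNet 1.1 files are hosted in the lllyasviel/ControlNet-v1-1 repository on Hugging Face. Here is a hedged sketch using the huggingface_hub library; the model names follow the control_v11*_sd15_* pattern, but check the repository for the exact file names, and point local_dir at your Web UI's ControlNet model folder.

```python
# Sketch: download only the ControlNet 1.1 models you need, plus their yaml files.
from huggingface_hub import hf_hub_download

REPO = "lllyasviel/ControlNet-v1-1"
MODELS = ["control_v11p_sd15_openpose", "control_v11p_sd15_lineart"]  # pick what you need
DEST = "stable-diffusion-webui/models/ControlNet"  # adjust to your install

for name in MODELS:
    for ext in (".pth", ".yaml"):  # each .pth needs its matching .yaml
        hf_hub_download(repo_id=REPO, filename=name + ext, local_dir=DEST)
```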
New additions to ControlNet 1.1:
“Tile (Experimental Feature)”:
This technique enhances image resolution by tiling an image and reconstructing it tile by tile. It can be used for upscaling, image correction, and texture transformation. Examples can be found in the article ControlNet Tile Usage and Specific Cases (Anime and Live Action Interconversion, Correction, Upscaling, etc.): ControlNet 1.1 New Features.
“Lineart”:
This technique extracts line drawings from images and generates new images based on those line drawings. It can be used for style transformation and illustration generation. Examples can be found in the article ControlNet 1.1 New Features Lineart, Coloring, High-Quality Image Generation from Line Drawings.
“Anime Lineart”:
Similar to “Lineart,” this technique specializes in extracting line drawings from illustrations. It is used for coloring and realistic conversion of line art and illustrations. Examples can be found in the article ControlNet 1.1 New Features Lineart, Coloring, High-Quality Image Generation from Line Drawings.
“Shuffle (Experimental Feature)”:
Shuffle allows you to generate images based on the color schemes of reference images. It can be useful for creating new images with similar color patterns. Examples can be found in the article ControlNet 1.1 New Features Shuffle Usage.
“Instruct Pix2Pix (Experimental Feature)”:
This technique lets you specify edits to an image with instruction-style prompts. It is best suited to whole-image transformations rather than fine local edits. More information can be found in the article Trying ControlNet 1.1’s New Feature Instruct Pix2Pix (ip2p).
“Inpaint”:
Inpaint is used to partially modify and change images. It utilizes the latest inpainting techniques and can be used for tasks like hairstyle transformation and removal of unwanted objects. More details can be found in the article ControlNet Inpaint Usage and Comparison of 3 Processors.
“Reference Only”:
With this feature, you can generate variations of an image while preserving its key features, steering the changes with the prompt. It is especially useful for keeping facial features consistent while changing poses. Examples can be found in the article ControlNet Reference Only! Free Image Modification. Detailed Usage.
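All of these models can also be driven programmatically once the Web UI is started with the --api flag. The following is a sketch, not the definitive interface: the payload layout matches the sd-webui-controlnet extension's API, but key names have changed between versions, and the model name must match a file you actually downloaded.

```python
# Sketch: txt2img with a ControlNet unit via the Web UI API (launch the UI with --api).
import base64
import requests

with open("lineart_input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "female student, colored illustration",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": image_b64,              # reference image
                "module": "lineart",                   # preprocessor
                "model": "control_v11p_sd15_lineart",  # must match a downloaded model
            }]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
```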
Improved model in ControlNet 1.1
Image Segmentation
Perform image segmentation to extract the overall structure of an image.
Extracted segmentation images:
Generated based on composition (Right: Reference image, Left: Image generated with the prompt “female student”):
Depth Analysis
The Depth model extracts a depth map, adding depth information that segmentation cannot capture. This enables the following:
Texture Conversion (Right: Reference image, Left: Image generated with the prompt “wooden room”):
Generated based on composition (Right: Reference image, Left: Image generated with the prompt “female student”):
Normal Map Extraction
The Normal model extracts a normal map, allowing finer surface detail to be captured and the underlying structure of the image to be reproduced.
Texture Conversion (Right: Reference image, Left: Image generated with the prompt “wooden room”):
Generated based on composition (Right: Reference image, Left: Image generated with the prompt “female student”):
Line Segment Detection (MLSD)
This model extracts only straight lines and can be used for texture conversion of inanimate objects.
Canny Edge Detection
This method uses Canny edge detection to extract fine lines.
Canny Edge Extraction Result:
Regenerated image:
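Because Canny is a classic computer-vision algorithm, you can preview exactly what this preprocessor feeds to ControlNet using OpenCV. A minimal sketch; the 100/200 thresholds mirror the Web UI's defaults, and raising them keeps fewer edges.

```python
# Sketch: preview the Canny preprocessing step with OpenCV.
import cv2

img = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
# Low/high thresholds (100/200 are the Web UI defaults); tune to taste.
edges = cv2.Canny(img, 100, 200)
cv2.imwrite("canny_edges.png", edges)
```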
Soft Edge Detection (formerly HED)
Soft Edge produces coarser results than Canny, but it reliably captures the important lines.
Soft Edge Extraction Result:
Regenerated image:
Scribble-based Editing
Scribble can extract rough lines, similar to Soft Edge.
Scribble Extraction Result:
Regenerated image:
Pose Detection (OpenPose)
OpenPose is a model that extracts body, face, and hand poses from images.
Pose extraction from a human image:
This can be used to generate images with the same poses.
Left: Reference image, Right: Generated image.
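The same pose-guided workflow can also be reproduced outside the Web UI with the diffusers and controlnet_aux libraries. A sketch under the assumption that you use the publicly hosted lllyasviel checkpoints, an SD 1.5 base model, and a CUDA GPU:

```python
# Sketch: pose-guided generation with diffusers + controlnet_aux.
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# 1) Extract the pose from a reference image.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = openpose(load_image("reference.png"))

# 2) Generate a new image constrained to that pose.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe("female student", image=pose, num_inference_steps=20).images[0]
image.save("openpose_result.png")
```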
Which feature should you begin with?
If you are interested in testing out the latest additions, I suggest starting with the following two:
- Tile
- Lineart
In terms of the features that are already available, I recommend the following three:
- OpenPose
- Normal
- Soft Edge
Conclusion
ControlNet is a crucial tool for mastering image generation going forward. By building on existing images instead of relying solely on prompts, it makes the generation process far more efficient. Although it may seem challenging at first, once you are familiar with ControlNet it is hard to go back to generating images without it. I highly recommend it to anyone interested in exploring its potential.