ControlNet offers so many features that it can be hard to decide which ones to use. To help anyone wrestling with that question, this article walks through practical examples of ControlNet, focusing on ControlNet Canny and addressing the following questions:
- What exactly is ControlNet Canny?
- What are some concrete examples of using ControlNet Canny?
- How does it differ from ControlNet Lineart, SoftEdge, Scribble, and similar methods?
What is ControlNet Canny?
ControlNet Canny is a model within the ControlNet framework that conditions generation on line information extracted with the Canny edge detection algorithm. Among the available line-extraction methods, Canny is especially good at picking up fine detail, which makes it effective for generating new images while preserving the original image’s composition and intricate details.
How it differs from generating images from a prompt alone
When generating images from prompts, the process relies on textual input to produce an image.
In contrast, ControlNet Canny extracts line information from the original image and then generates the new image by, in effect, coloring in those lines, so the result keeps the contours of the original.
Consequently, ControlNet Canny can produce high-quality images that accurately reproduce the contours of faces and hands. Setting up ControlNet takes a little extra effort, but viewed across the whole workflow it saves time by cutting down on trial and error.
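To make this concrete, the short Python sketch below runs the same kind of Canny edge extraction that the ControlNet preprocessor performs, using OpenCV. The file names are placeholders, and the thresholds are just reasonable starting values.

```python
import cv2

# Load the reference image (path is a placeholder).
image = cv2.imread("reference.png")

# Convert to grayscale and run Canny edge detection.
# Lower thresholds keep more fine detail; higher thresholds
# keep only the strongest contours.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

# Save the edge map. This white-line-on-black image is the kind of
# line information that ControlNet Canny conditions generation on.
cv2.imwrite("canny_edges.png", edges)
```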
Capabilities of ControlNet Canny
- Changing an image’s texture and color
- Converting real photos to illustrations and vice versa
- Coloring line drawings
Changing Image Texture and Color
The reference image is shown on the left, and the image with the modified texture using Canny is shown on the right. In this example, the prompt was to tan the skin.


Converting Between Real Photos and Illustrations
The reference image is shown on the left, and the image that has been illustrated using Canny is shown on the right.


The reference image is shown on the left, and the image that has been realized using Canny is shown on the right.


Coloring Line Drawings
The left image is a line drawing, and the right image is the result of coloring it using Canny.


How to Use ControlNet Canny
Getting Ready for ControlNet Canny
ControlNet Canny is a feature of ControlNet and an extension of Stable Diffusion Web UI. Therefore, before using ControlNet Canny, you need to have ControlNet installed. If you haven’t installed it yet, please refer to the following article for instructions on how to install ControlNet.
Installing ControlNet Canny
To use ControlNet Canny, you need to download the ControlNet Model. Follow the link below and download the following two files, then place them in stable-diffusion-webui/models/ControlNet.
- control_v11p_sd15_canny.pth
- control_v11p_sd15_canny.yaml
lllyasviel/ControlNet-v1-1 at main (huggingface.co)
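If you prefer to fetch the files from a script instead of the browser, a minimal sketch using the huggingface_hub package (assuming it is installed) could look like this; adjust the destination path if your Web UI lives elsewhere.

```python
from huggingface_hub import hf_hub_download

# Download the Canny model and its config into the Web UI's
# ControlNet model folder.
for filename in ["control_v11p_sd15_canny.pth", "control_v11p_sd15_canny.yaml"]:
    hf_hub_download(
        repo_id="lllyasviel/ControlNet-v1-1",
        filename=filename,
        local_dir="stable-diffusion-webui/models/ControlNet",
    )
```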
Basic Usage of ControlNet Canny
Follow these steps to configure the ControlNet menu:
- Enter the desired prompt.
- Click on the “▼” to expand the ControlNet menu.
- Set the image in the ControlNet menu screen.
- Check the “Enable” box.
- Select “Canny” for Control Type.
- Click the feature extraction button “💥” to extract features.
- Click “Generate” to create the image.
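The same steps can also be driven from code. The sketch below is one possible way to call the Web UI's txt2img API with a ControlNet unit; it assumes the Web UI was launched with the --api flag, and the exact field names accepted by the ControlNet extension can vary between versions.

```python
import base64
import requests

URL = "http://127.0.0.1:7860"  # default local Web UI address

# Encode the reference image that ControlNet will extract edges from.
with open("reference.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "1 girl, a 20-year-old pretty Japanese girl in a classroom school uniform, blackboard",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "enabled": True,                     # same as checking "Enable"
                    "module": "canny",                   # the Canny preprocessor
                    "model": "control_v11p_sd15_canny",  # the model downloaded earlier
                    "image": image_b64,
                }
            ]
        }
    },
}

response = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload)
result = response.json()  # generated images come back as base64 strings
```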

Specific Steps
Changing Texture, Color, and Quality
ControlNet Canny can be used to modify an image’s texture, color, and quality, including aspects such as skin texture and tone. For instance, if the skin in an image looks rough, you can give it a cleaner, more beautiful appearance, change its color, or even add a sweat effect.
In this example, we will demonstrate how to simulate a tan on the skin.
Follow these steps on the ControlNet menu screen:
- Drag and drop the image onto the ControlNet menu screen.
- Check the “Enable” box.
- Select “Canny” as the Control Type.
- Click the “💥” button for feature extraction.

Next, enter the prompt for the generated image:
Prompt: (brown skin, tanned skin: 2), 1 girl, a 20-year-old pretty Japanese girl in a classroom school uniform, blackboard
If you need to adjust any additional settings for image generation, such as image size, make the necessary changes.
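As a side note, roughly the same tan workflow can be reproduced outside the Web UI with the diffusers library. The sketch below is only an approximation: it uses the generic Stable Diffusion 1.5 checkpoint rather than the model behind the screenshots, and the file names are placeholders.

```python
import cv2
import numpy as np
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

# Extract Canny edges from the reference image (placeholder path).
image = cv2.imread("reference.png")
edges = cv2.Canny(image, 100, 200)
edges = np.concatenate([edges[:, :, None]] * 3, axis=2)  # expand to 3 channels
control_image = Image.fromarray(edges)

# Load the Canny ControlNet and attach it to a Stable Diffusion 1.5 pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Same tan prompt as above; the edge map keeps the composition fixed.
prompt = (
    "(brown skin, tanned skin: 2), 1 girl, a 20-year-old pretty Japanese girl "
    "in a classroom school uniform, blackboard"
)
result = pipe(prompt, image=control_image, num_inference_steps=20).images[0]
result.save("tanned.png")
```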

The following images are the generated results, showcasing the tan effect:


Conversion between Real Photos and Illustrations
Converting Real Photos to Illustrations
Now, let’s learn how to convert a real photo into an illustration using ControlNet. The settings for ControlNet remain the same as before.
Here are the steps:
- To create an illustration, switch the model to an illustration-style model.
- Enter the prompt for the generated image.
- Generate the image.
Model: AnythingV5Ink_ink.safetensors [a1535d0a42]
Prompt: 1 girl, a 20-year-old pretty Japanese girl in a classroom school uniform, blackboard
If you need to adjust any additional settings for image generation, such as image size, make the necessary changes.
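If you are scripting this with the API sketch shown earlier, switching checkpoints for a single request can typically be done through the override_settings field; the field names follow the standard Web UI API and may differ on your install.

```python
# Switch to the illustration-style checkpoint for this request only.
payload["override_settings"] = {
    "sd_model_checkpoint": "AnythingV5Ink_ink"
}
payload["prompt"] = (
    "1 girl, a 20-year-old pretty Japanese girl in a classroom school uniform, blackboard"
)
```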

The following images are the generated results, showcasing the conversion to an illustration:


Converting Illustrations to Real Photos
Now, let’s explore converting an illustration into a real photo using ControlNet. The settings for ControlNet are different this time, so please pay attention to the following instructions. Follow these steps to perform the settings and feature extraction with ControlNet:
- Drag and drop the image onto the ControlNet menu screen.
- Check the “Enable” box.
- Select “Canny” as the Control Type.
- Click the “💥” button for feature extraction.
- Choose “My Prompt is more important” as the Control Mode. This option prioritizes the prompt during generation so that illustration-specific features, such as the overly large eyes typical of anime-style line art, are not carried over too strongly.
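For readers using the API sketch shown earlier, the Control Mode is typically passed as a control_mode field on the ControlNet unit; the field name and accepted strings here are assumptions that may differ between extension versions.

```python
# ControlNet unit for illustration-to-photo conversion; field names
# follow the extension's API and may vary between versions.
controlnet_unit = {
    "enabled": True,
    "module": "canny",
    "model": "control_v11p_sd15_canny",
    "image": image_b64,                             # base64 reference illustration
    "control_mode": "My prompt is more important",  # prioritize the prompt over the edges
}
```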

Next, adjust the settings for image generation:
- Switch the model to a real photo-style model.
- Enter the prompt for the generated image.
- Generate the image.
Model: beautifulRealistic_brav5.safetensors [ac68270450]
Prompt: 1 girl, a 20-year-old pretty Japanese girl in a classroom school uniform, blackboard
If you need to adjust any additional settings for image generation, such as image size, make the necessary changes.

The following images are the generated results, showcasing the conversion to a real photo:


Coloring Line Drawings
Now, let’s discuss coloring line drawings. When coloring a black-and-white line drawing, you can use the “invert(from white bg & black line)” preprocessor, which simply inverts the drawing’s black and white values. Note, however, that this preprocessing step requires a strict distinction between black and white: lines that look completely black to the human eye but are actually slightly brighter may not be recognized as lines.
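The inversion itself is simple. The sketch below shows the effect of this preprocessing in NumPy, together with an optional thresholding step you can apply beforehand if your lines are only "almost" black; the file names are placeholders.

```python
import cv2
import numpy as np

# Load the line drawing as grayscale (path is a placeholder).
drawing = cv2.imread("line_drawing.png", cv2.IMREAD_GRAYSCALE)

# Optional: force a strict black/white split first, since lines that
# are only "almost black" may not be recognized as lines.
binary = np.where(drawing < 128, 0, 255).astype(np.uint8)

# Invert: black lines on a white background become white lines on
# black, which matches the format of a Canny edge map.
inverted = 255 - binary

cv2.imwrite("inverted_drawing.png", inverted)
```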
Follow these steps:
- Drag and drop the image onto the ControlNet menu screen.
- Check the “Enable” box.
- Select “Canny” as the Control Type.
- Set the Preprocessor to “invert(from white bg & black line)”.
- Click the “💥” button for feature extraction.

Next, adjust the settings for image generation:
- Switch the model to an illustration-style model.
- Enter the prompt for the generated image.
- Generate the image.
Model: AnythingV5Ink_ink.safetensors [a1535d0a42]
Prompt: 1 girl, suits, question
If you need to adjust any additional settings for image generation, such as image size, make the necessary changes.

The following images are the generated results, showcasing the colorization of line drawings:


Differences Between ControlNet Canny, Lineart, SoftEdge, and Scribble
ControlNet offers several other line-based methods in addition to ControlNet Canny:
- ControlNet Lineart: This model can extract line drawings from images, focusing on strong lines and ignoring weaker ones. It allows for the extraction of important lines.
- ControlNet SoftEdge: This model generates images using soft edges, preserving details effectively. It is suitable for color changes and other applications.
- ControlNet Scribble: This model can generate refined illustrations from rough sketches or loose drawings, and can even produce illustrations in the style of the chosen model from little more than a stick figure.
All of these methods extract lines, but the level of detail varies: the amount of fine detail in the extracted lines decreases in the order Canny, Lineart, SoftEdge, Scribble.
Which method to choose depends on which aspects of the image’s lines you want to emphasize. That said, Lineart is particularly useful because its deep learning model can discard less important lines while keeping the important ones, so if you are unsure which method to use, Lineart is a good default.




To learn more about ControlNet 1.1’s new features, such as Lineart, color filling, and high-quality image generation from line drawings, you can refer to this article: