ControlNet Scribble: Changing colors, converting to anime or realistic styles, and colorizing line art

Are you interested in learning about ControlNet Scribble? This article walks through practical examples of ControlNet, with a focus on ControlNet Scribble. By reading it, you will be able to answer the following questions:

  • What exactly is ControlNet Scribble?
  • What are some specific applications of ControlNet Scribble?
  • How does ControlNet Scribble distinguish itself from similar techniques such as ControlNet Lineart, SoftEdge, and Canny?

What is ControlNet Scribble?

ControlNet Scribble is one of the ControlNet models. It extracts rough contours and sketch-like lines from a reference image and uses them to control image generation. Because it can extract line drawings in a variety of styles, it is a versatile option. Note, however, that ControlNet Scribble does not detect fine lines, so it is better suited to preserving a rough composition than to reproducing fine detail.

The difference between “Generating images from prompts” and using “ControlNet Scribble”

When generating images from prompts, the images are created based on the provided text input.

On the other hand, when utilizing ControlNet Scribble, it extracts the contours and sketch information from the source image and generates an image that fills the contents of those contours. This allows for the image to be generated while preserving the rough structure.

Therefore, ControlNet Scribble makes it easy to control the intended composition and shape, so you do not have to generate image after image hoping to land on the right layout.

Here are some examples of what can be achieved with ControlNet Scribble:

  • Modifying the texture, color, and overall appearance of an image.
  • Transforming real-life photographs into illustrations or turning illustrations into realistic images.
  • Adding color to line drawings.

Texture, Color, and Feel Transformation:

The image on the left is the original reference image, and the image on the right is the same image with a changed texture, accomplished using Scribble. The prompt given was to tan the skin.

Illustrating Real-Life Photographs or Converting Illustrations:

The image on the left is the reference photograph, and the image on the right is the same image rendered as an illustration using Scribble.

The image on the left is the reference illustration, and the image on the right is the same image transformed into a realistic photograph using Scribble.

Colorizing Line Drawings:

The image on the left is the original line drawing, and the image on the right is the same drawing with colors applied using Scribble.

ControlNet Scribble Usage

Preparing ControlNet Scribble

ControlNet Scribble is a feature of ControlNet, an extension for the Stable Diffusion Web UI, so ControlNet itself must be installed before you can use it. If you haven't installed ControlNet yet, please refer to the following article for installation instructions.

What is ControlNet? What it can do. Detailed explanation of how to introduce it to Stable Diffusion Web UI

Installing ControlNet Scribble

To use ControlNet Scribble, you will need the ControlNet Model. Download the following two files from the link below and place them in stable-diffusion-webui/models/ControlNet.

  • control_v11p_sd15_scribble.pth
  • control_v11p_sd15_scribble.yaml

lllyasviel/ControlNet-v1-1 (huggingface.co)
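If you prefer to fetch the files from a script rather than through the browser, the following sketch uses the huggingface_hub library; the destination path assumes a standard Web UI installation directory.

```python
# Minimal download sketch using huggingface_hub (local_dir assumes a standard Web UI install).
from huggingface_hub import hf_hub_download

for filename in ["control_v11p_sd15_scribble.pth", "control_v11p_sd15_scribble.yaml"]:
    hf_hub_download(
        repo_id="lllyasviel/ControlNet-v1-1",
        filename=filename,
        local_dir="stable-diffusion-webui/models/ControlNet",  # where the ControlNet extension looks for models
    )
```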

Using ControlNet Scribble

Follow the steps below to configure the ControlNet menu for ControlNet Scribble usage:

  1. Enter prompts for image generation.
  2. Click on the “▼” to open the ControlNet menu.
  3. Set the reference image in the ControlNet menu screen.
  4. Check “Enable” to activate ControlNet.
  5. Select “Scribble/Sketch” (or “Scribble” depending on the version) as the Control Type. This will set the Preprocessor and ControlNet Model.
  6. Click the feature extraction button “💥” to perform feature extraction. The results of feature extraction will be displayed after preprocessing is applied.
  7. ControlNet Scribble is now applied. Click “Generate” to generate the image.
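The steps above are for the Web UI, but the same Scribble workflow can also be reproduced in a script with the diffusers and controlnet_aux libraries. The sketch below is a minimal example, not the Web UI's internal implementation; the base model, file names, and prompt are placeholders.

```python
# Minimal ControlNet Scribble sketch with diffusers (model names and file paths are placeholders).
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
from controlnet_aux import HEDdetector

# 1. Load the Scribble ControlNet and attach it to an SD 1.5 base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# 2. Preprocess the reference image; scribble_hed corresponds to HEDdetector with scribble=True.
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
reference = load_image("reference.png")
scribble = hed(reference, scribble=True)

# 3. Generate with the extracted scribble as the control image.
result = pipe(
    "1girl, a 20 years old pretty Japanese girl in classroom",
    image=scribble,
    num_inference_steps=20,
).images[0]
result.save("scribble_output.png")
```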

Specific Usage of ControlNet Scribble

In this section, we provide step-by-step instructions for the examples introduced earlier.

Changing Texture, Color, and Appearance

With ControlNet Scribble, you have the ability to modify the texture, color, and appearance of an image. For instance, you can change the texture of the skin. If the generated image has unappealing skin, you can make it smoother and clearer, alter the skin color, or even make it look sweaty.

Let’s try tanning the skin this time.

Follow these settings in the ControlNet menu screen:

  1. Drag and drop the image into the ControlNet menu screen.
  2. Enable the “Enable” option.
  3. Choose “Scribble/Sketch” in the Control Type (or simply “Scribble” depending on the version).
  4. Click the feature extraction button “💥”.

Afterwards, input the prompt for generating the image.

Prompt: (brown skin, tanned skin: 2), 1girl, a 20 years old pretty Japanese girl in classroom.school uniform,blackboard

If you wish to make any other changes to the image generation settings, such as image size, you can modify them accordingly.

The generated images showcasing the tanned skin are as follows:

Converting Realistic Images to Illustrations and Illustrations to Realistic Images

Converting Realistic Images to Illustrations

Next, let’s attempt converting a realistic image into an illustration. The settings in ControlNet remain the same.

  1. Since we are transforming into an illustration, switch to an illustration-based model.
  2. Enter the prompt for generating the image.
  3. Generate the image.

Model: AnythingV5Ink_ink.safetensors [a1535d0a42]

Prompt: 1girl, a 20 years old pretty Japanese girl in classroom.school uniform,blackboard

If you would like to adjust any other settings for image generation, please make the necessary modifications.
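If you are following along with diffusers instead of the Web UI, switching to an illustration-style model corresponds to loading a different base checkpoint. The snippet below is a sketch; the file path is illustrative and assumes the checkpoint named above has been downloaded locally.

```python
# Sketch: using a single-file .safetensors checkpoint as the base model for the Scribble pipeline.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_single_file(
    "models/Stable-diffusion/AnythingV5Ink_ink.safetensors",  # illustration-style checkpoint (path is illustrative)
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```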

The generated images in the form of illustrations are displayed below:

Converting Illustrations to Realistic Images

Now, let’s try converting an illustration to a realistic image. Please note that the settings in the ControlNet menu differ from the previous ones. Follow these settings and perform feature extraction in ControlNet:

  1. Drag and drop the image into the ControlNet menu screen.
  2. Enable the “Enable” option.
  3. Choose “Scribble/Sketch” in the Control Type (or simply “Scribble” depending on the version).
  4. Click the feature extraction button “💥”.
  5. Select “My Prompt is more important” as the Control Mode. When an image is generated strictly from the sketch, the balance of the face and body can come out distorted, so this setting prioritizes the prompt while still following the sketch.
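If you are scripting the same idea with diffusers, there is no switch with exactly this name, but a similar effect can be approximated by lowering the ControlNet conditioning strength, as in this sketch (it reuses the pipe and scribble objects from the earlier example):

```python
# Rough approximation of "My Prompt is more important": weaken the ControlNet conditioning
# so the prompt has more influence than the extracted sketch.
result = pipe(
    "1girl, a 20 years old pretty Japanese girl in classroom",
    image=scribble,
    controlnet_conditioning_scale=0.6,  # values below 1.0 let the prompt override the sketch more
    num_inference_steps=20,
).images[0]
```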

Next, adjust the settings for image generation:

  1. Switch to a realistic image-based model.
  2. Enter the prompt for generating the image.
  3. Generate the image.

Model: beautifulRealistic_brav5.safetensors [ac68270450]

Prompt: 1girl, a 20 years old pretty Japanese girl in classroom.school uniform,blackboard

If you wish to adjust any other settings for image generation, please make the necessary modifications.

The generated images are presented below:

Coloring Line Drawings

Next, let’s discuss coloring line drawings. When coloring a black-and-white line drawing, use “invert (from white bg & black line)” as the preprocessor. This option simply flips the colors of the line drawing, turning white into black and black into white. Note, however, that the distinction between black and white is strict: if the lines are even slightly light gray rather than pure black, they may not be recognized as line art, so exercise caution.
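To see what the invert preprocessor does, and why light-gray lines can be lost, here is a small sketch with Pillow that inverts a white-background line drawing and then applies a strict black-and-white threshold (the threshold value of 127 is an assumption for illustration):

```python
# Sketch: invert a white-background, black-line drawing and binarize it strictly.
from PIL import Image, ImageOps

lineart = Image.open("lineart.png").convert("L")  # grayscale line drawing on a white background
inverted = ImageOps.invert(lineart)               # white -> black, black -> white

# A strict threshold: only pixels that were clearly dark lines survive, which is why
# faint, light-gray lines may not be recognized as line art.
binary = inverted.point(lambda p: 255 if p > 127 else 0)
binary.save("inverted_lineart.png")
```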

Follow these steps in the ControlNet menu screen:

  1. Drag and drop the image into the ControlNet menu screen.
  2. Enable the “Enable” option.
  3. Choose “Scribble/Sketch” in the Control Type (or simply “Scribble” depending on the version).
  4. Set the preprocessor to “invert(from white bg & black line)”.
  5. Click the feature extraction button “💥”.

Next, adjust the settings for image generation:

  1. Switch to an illustration-based model.
  2. Enter the prompt for generating the image.
  3. Generate the image.

Model: AnythingV5Ink_ink.safetensors [a1535d0a42]

Prompt: 1girl, suits, question

If you wish to adjust any other settings for image generation, please make the necessary modifications.

The results are as follows:

About ControlNet Scribble Configuration

About the Preprocessors

  • scribble_hed: Holistically-Nested Edge Detection (HED) is an edge detector that generates contours similar to those drawn by humans. HED is suitable for image recoloring and style changes.
  • scribble_pidinet: Pixel Difference Network (Pidinet) detects edges of curves and straight lines. It is similar to HED but typically produces cleaner lines with less detail.
  • scribble_xdog: eXtended Difference of Gaussians (XDoG) is an edge detection method. Adjust the XDoG threshold and check the preprocessor output to control how much detail is kept.
  • t2ia_sketch_pidi: This preprocessor is a variation of scribble_pidinet that extracts rougher sketches than scribble_pidinet does.
  • invert (from white bg & black line): This Preprocessor inverts images with a white background and black lines. Select this Preprocessor if you want to directly use black and white sketches as input images.
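Several of these preprocessors are also available outside the Web UI in the controlnet_aux package. The sketch below compares scribble_hed and scribble_pidinet on the same image; exact parameter names may vary slightly between package versions.

```python
# Sketch: running the scribble-style preprocessors from controlnet_aux on one reference image.
from controlnet_aux import HEDdetector, PidiNetDetector
from diffusers.utils import load_image

reference = load_image("reference.png")

hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
pidi = PidiNetDetector.from_pretrained("lllyasviel/Annotators")

hed_scribble = hed(reference, scribble=True)    # scribble_hed: human-like, hand-drawn contours
pidi_scribble = pidi(reference, scribble=True)  # scribble_pidinet: cleaner lines with less detail

hed_scribble.save("scribble_hed.png")
pidi_scribble.save("scribble_pidinet.png")
```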

FAQ

What makes ControlNet Scribble different from similar techniques?

ControlNet offers several techniques in addition to ControlNet Scribble:

  • ControlNet Canny: This model applies Canny Edge Detection to images, enabling the extraction of fine lines.
  • ControlNet Lineart: This model focuses on extracting line drawings from images, emphasizing strong lines while ignoring weaker ones.
  • ControlNet SoftEdge: With this model, images can be generated using soft contours, making it suitable for tasks like color modification while preserving details.
  • ControlNet Scribble: This model specializes in generating refined images from rough sketches or scribbles, even when the input is simple and lacks detail.

All of these techniques involve line extraction, but there are differences in the level of detail. The following images demonstrate line extraction using the Canny, Lineart, SoftEdge, and Scribble techniques, in that order:

To select the appropriate technique, consider the specific lines in the image that you want to emphasize and focus on.
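If you want to compare the level of detail yourself, you can run the corresponding detectors from controlnet_aux side by side. The sketch below is one way to do it; using HED for both SoftEdge and Scribble is an assumption based on the Web UI's default preprocessors.

```python
# Sketch: extracting lines from one image with Canny, Lineart, SoftEdge (HED), and Scribble (HED).
from controlnet_aux import CannyDetector, HEDdetector, LineartDetector
from diffusers.utils import load_image

reference = load_image("reference.png")
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")

maps = {
    "canny": CannyDetector()(reference, low_threshold=100, high_threshold=200),  # fine lines
    "lineart": lineart(reference),               # strong lines, weak ones ignored
    "softedge": hed(reference),                  # soft contours, detail preserved
    "scribble": hed(reference, scribble=True),   # rough composition only
}
for name, image in maps.items():
    image.save(f"{name}.png")
```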

For more information about the new features in ControlNet 1.1, please refer to the following article: ControlNet 1.1 New Features – Lineart, Colorization, Line Drawing for High-Quality Image Generation
