Evaluation of 6 Methods for Hand Correction…

In this article, we will discuss various methods for hand correction. It’s important to note that while these methods can be effective in specific situations, they may not work universally.

Many articles tend to highlight success stories, which sometimes overshadow the actual effectiveness of these methods. Unfortunately, I fell into this trap and wasted a significant amount of time by blindly following such articles.

However, you don’t need to worry. This article aims to be transparent and provide an honest appraisal of which methods are useful in different situations and when they may be less effective. Our goal is to ensure that you don’t waste your valuable time.

Below is a table featuring the different methods and our recommendations for each:

Method | Recommendation
Control through prompts | Worth specifying; low cost, modest effect
Control through embeddings | Recommended; badhandv4 gave the best results
Correction with LoRA | Use sparingly; strong settings degrade quality
Correction with ADetailer | Recommended; subtle, natural corrections
Control through ControlNet and Depth | Not recommended; cumbersome and low accuracy
Control through ControlNet and OpenPose | Not recommended; high effort, low accuracy

Controlling through prompts

This approach uses prompts, especially negative prompts, to keep hands from becoming deformed. The effect is not dramatic, but specifying these prompts helps keep results from getting worse, so they are worth including.

The prompts used for hand correction are listed below; a short sketch of how they can be combined follows the lists:

Positive prompts

  • five fingers

Negative prompts

  • deformed hand
  • extra fingers
  • bad fingers
  • missing fingers
  • fewer digits, extra digit
  • liquid fingers
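
As a rough reference, here is how these prompts might be combined in a script. This is a minimal sketch using the diffusers library, not something specific to this article; the model name and output file name are placeholders.

```python
from diffusers import StableDiffusionPipeline
import torch

# Placeholder checkpoint; any Stable Diffusion 1.5 based model works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "school uniform, Photo of Japanese girl, peace sign hand, five fingers"
negative_prompt = (
    "deformed hand, extra fingers, bad fingers, missing fingers, "
    "fewer digits, extra digit, liquid fingers"
)

image = pipe(prompt=prompt, negative_prompt=negative_prompt).images[0]
image.save("peace_sign.png")
```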

Controlling with embeddings

Now, let’s look at control with embeddings. We will test it using the popular hand-correction negative embedding ‘badhandv4’.

badhandv4 – AnimeIllustDiffusion – badhandv4 | Stable Diffusion Embedding | Civitai

To begin, let’s generate an image without ‘badhandv4’, using the following prompt and negative prompt:

school uniform,Photo of Japanese girl.peace sign hand
EasyNegative

Next, let’s try adding ‘badhandv4’ to the negative prompt:

school uniform,Photo of Japanese girl.peace sign hand
EasyNegative,badhandv4

It appears to have shown some improvement. Interestingly, not only the fingers but also the facial details have been corrected. This could be a valuable addition to the process.
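
In AUTOMATIC1111 it is enough to place the downloaded embedding file in the embeddings folder and write its name in the negative prompt. For readers generating images from a script instead, here is a hedged sketch of loading negative embeddings with diffusers; the file paths are assumptions about where the Civitai downloads were saved.

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumed local paths for the embeddings downloaded from Civitai.
pipe.load_textual_inversion("embeddings/EasyNegative.safetensors", token="EasyNegative")
pipe.load_textual_inversion("embeddings/badhandv4.pt", token="badhandv4")

image = pipe(
    prompt="school uniform, Photo of Japanese girl, peace sign hand",
    negative_prompt="EasyNegative, badhandv4",  # embedding tokens go in the negative prompt
).images[0]
```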

Let’s also try ‘bad-hands-5’.

Bad-Hands-5 – Bad-Hands-5 | Stable Diffusion Embedding | Civitai

school uniform,Photo of Japanese girl.peace sign hand
EasyNegative,bad-hands-5

This also appears to show some improvement, but personally, I find ‘badhandv4’ to be more visually appealing.

Correction with LoRA

Now, let’s attempt correction using LoRA. We have a LoRA specifically designed for the peace sign, so we can give it a try.

LoRA Peace Sign✌ – v0.3 | Stable Diffusion LoRA | Civitai

This LoRA is designed to improve how accurately peace signs are drawn. It is used in the usual AUTOMATIC1111 way, by adding its LoRA tag to the prompt.

Prompt: “school uniform, Photo of Japanese girl, peace sign hand”

Negative Prompt: “EasyNegative”

Here are the results after applying LoRA correction:

As the quality seems to deteriorate with the LoRA applied, we can try lowering the strength of the correction:

Prompt: “school uniform, Photo of Japanese girl, peace sign hand”

However, it is not advisable to rely on LoRA correction too heavily.
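
In AUTOMATIC1111 the LoRA is applied by adding a tag such as <lora:peace_sign:0.6> to the prompt, where the name is whatever you saved the LoRA file as and the last number is the strength. For a scripted workflow, a comparable sketch with diffusers could look like the following; the weight file name is hypothetical.

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical file name for the peace-sign LoRA downloaded from Civitai.
pipe.load_lora_weights(".", weight_name="peace_sign_v03.safetensors")

image = pipe(
    prompt="school uniform, Photo of Japanese girl, peace sign hand",
    negative_prompt="EasyNegative",
    # Lower the scale if quality deteriorates with the LoRA applied.
    cross_attention_kwargs={"scale": 0.6},
).images[0]
```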

Correction with ADetailer

This section explains how to use ADetailer, an extension designed to prevent distortions in faces and hands. For installation and usage instructions, see: “ADetailer Installation and 5 Ways to Use It (Correction of Facial, Hand, Body Distortions) and Thorough Explanation of Mechanism”.

The settings for ADetailer are shown below:

ADetailer Settings

The images below demonstrate the results of using ADetailer to correct hand distortions. As you can see, the corrections are subtle and leave the appearance mostly unchanged.
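
ADetailer is normally configured from the web UI, but it can also be driven through the AUTOMATIC1111 API. The sketch below assumes the web UI is running locally with the --api flag; the alwayson_scripts argument format and the detector model names follow the ADetailer extension's JSON interface as I understand it, so treat them as assumptions and check the extension's documentation.

```python
import requests

payload = {
    "prompt": "school uniform, Photo of Japanese girl, peace sign hand",
    "negative_prompt": "EasyNegative",
    "alwayson_scripts": {
        "ADetailer": {
            "args": [
                {"ad_model": "hand_yolov8n.pt"},  # detect and inpaint hands
                {"ad_model": "face_yolov8n.pt"},  # optional second pass for faces
            ]
        }
    },
}

# Send the request to the local AUTOMATIC1111 API.
response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()
```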

Correction with ControlNet Depth

To use this method, you need ControlNet installed and the Depth model downloaded. Note, however, that it may not yield good results, so it is worth assessing the outcomes shown below before investing time in it.

ControlNet is a tool enabling image manipulation in Stable Diffusion without relying solely on prompts. While it offers diverse functionalities, selecting the appropriate method might pose a challenge.

To incorporate depth into image generation, create a depth map with the Depth Library. A depth map encodes how far each part of the scene is from the viewer. ControlNet can then generate images that respect this depth information, which makes it possible to constrain structures such as hands.

To modify the depth map, install the Depth Library extension. If you haven’t installed it yet, follow the installation instructions provided in the repository, available at https://github.com/jexom/sd-webui-depth-lib.

Return to the “txt2img” screen, open the ControlNet tab, and load the depth map image you created. Select Depth as the Control Type and leave the Preprocessor set to “none”, since the image is already a depth map. Finally, set the Starting Control Step to 0.45 so that ControlNet takes effect from the 45% mark of the generation.
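
The steps above use the web UI. For reference, a roughly equivalent setup with diffusers is sketched below; the depth-map file name is a placeholder for the image exported from the Depth Library, and control_guidance_start plays the role of the Starting Control Step.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Depth map created with the Depth Library extension (placeholder file name).
depth_map = load_image("hand_depth_map.png")

image = pipe(
    prompt="school uniform, Photo of Japanese girl, peace sign hand",
    negative_prompt="EasyNegative",
    image=depth_map,
    control_guidance_start=0.45,  # start ControlNet guidance at the 45% mark
).images[0]
```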

The resulting images generated using ControlNet control via the Depth Library are as follows:

However, this control has its limits. Preparing the depth map is cumbersome and the accuracy is relatively low, so controlling generation with hand-made depth maps is not a highly recommended approach.

Correction with ControlNet OpenPose

The next method involves using OpenPose. To begin, ensure that you have ControlNet installed and that you have downloaded the OpenPose model. If you don’t have them, please refer to the following article for instructions on how to prepare:

What is ControlNet and what can it do?

Additionally, you need to add the “sd-webui-openpose-editor” extension.

huchenlei/sd-webui-openpose-editor: Openpose editor for ControlNet. Full hand/face support. (github.com)

Now that you are ready, let’s delve into the control using OpenPose. This method involves creating images with specific pose restrictions using ControlNet. By incorporating the hand structure in the pose, you can control the shape of the hand.

To begin, open the ControlNet tab in the “txt2img” section and set a source image. Extract the pose with OpenPose, then open it in the OpenPose editor’s editing screen.

Next, edit the extracted pose to adjust it to the desired pose.

  1. Add any missing hand joints and adjust their positions.
  2. Send the pose to ControlNet and set it.

Experiment with the timing of the ControlNet control as well; in this case a value of 0.25 was used. In addition, to give the ControlNet constraints more weight, select “ControlNet is more important”.
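
For completeness, a hedged diffusers equivalent of this OpenPose setup is sketched below. The pose image file name is a placeholder for the pose exported from the OpenPose editor, the 0.25 timing value is applied here as control_guidance_start (an assumption about which step it refers to), and diffusers has no direct counterpart to “ControlNet is more important”, so the conditioning scale is raised instead.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Edited pose exported from the OpenPose editor (placeholder file name).
pose_image = load_image("edited_pose.png")

image = pipe(
    prompt="school uniform, Photo of Japanese girl, peace sign hand",
    negative_prompt="EasyNegative",
    image=pose_image,
    control_guidance_start=0.25,        # timing value from the web UI setting (assumption)
    controlnet_conditioning_scale=1.2,  # stronger pose guidance, in the spirit of
                                        # "ControlNet is more important"
).images[0]
```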

The result may not be satisfactory: adjusting the pose takes significant effort, and the accuracy of the outcome is not very high, so this method is not highly recommended.
