How to create consistent character with Stable Diffusion in ComfyUI

Stable Diffusion Art
24 May 2024 · 12:37

TLDR: This tutorial demonstrates how to create a consistent character with Stable Diffusion in ComfyUI. It guides viewers through downloading the workflow, installing missing custom nodes, and setting up the environment. The process uses ControlNet Canny to fix the composition, an IP-Adapter to transfer the face, and a FaceDetailer to render facial features at high resolution. Customization tips and a trick that saves rendering time by using a fixed seed are also shared, making this a comprehensive guide to character creation in ComfyUI.

Takeaways

  • 😀 The video tutorial demonstrates how to create a consistent character using Stable Diffusion in ComfyUI.
  • 🔗 The necessary resources and links, including the workflow and model downloads, are provided in the description.
  • 💻 The first step is to download and load the workflow into ComfyUI, which may require installing missing custom nodes.
  • 📁 It's important to download and test specific models, with the recommendation to start with Proto XL as it's compatible with the workflow.
  • 🖼️ The workflow uses ControlNet to fix the composition of the image, which requires uploading a composition image.
  • 👤 An IP-Adapter image is used to copy the face and hair, with IP-Adapter FaceID Plus V2 extracting these features accurately.
  • 🔧 The FaceDetailer, part of the Impact Pack, automatically fixes faces in the image, improving rendering quality.
  • 🎨 Customization of the character is possible by adjusting the prompt and using the IP adapter to control the final image's features.
  • ⏸️ To save time, the FaceDetailer can be muted during certain steps, and the workflow can be adjusted using ComfyUI keyboard shortcuts.
  • 🔄 A fixed seed can be used to avoid re-running certain parts of the workflow, streamlining the image generation process.
  • 🖼️ The final high-resolution image with fixed faces can be saved locally after running the workflow.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is showing viewers how to create a consistent character using Stable Diffusion in ComfyUI.

  • Where can I find the resources mentioned in the video?

    -The resources, including the workflow and model download links, can be found in the description below the video.

  • What is the first step to start using the workflow in ComfyUI?

    -The first step is to go to the provided link in the description, download the workflow, and then load it into ComfyUI.

  • What happens if some nodes are missing after loading the workflow in ComfyUI?

    -If some nodes are missing, you need to go to the manager, install the missing custom nodes, and then restart ComfyUI.

  • Why is it recommended to start with the Proto XL model in this workflow?

    -It is recommended to start with the Proto XL model because it has been tested and confirmed to work well with this specific workflow.

  • What is the purpose of using a control net in the workflow?

    -ControlNet is used to fix the composition of the image: it extracts the outline from an uploaded composition image so the character's pose and layout stay consistent.

  • What is the role of the IP adapter in the workflow?

    -The IP-Adapter copies the face and hair from a reference image, ensuring that only these features are transferred and not the rest of the image.

  • Why is the FaceDetailer used in ComfyUI instead of ADetailer in Automatic1111?

    -The FaceDetailer serves a similar function to the ADetailer extension in Automatic1111: it detects faces and performs automatic inpainting to fix them at a higher resolution.

  • How can I customize the workflow to change the hair color of the character?

    -To customize the hair color, you can mute the FaceDetailer, change the prompt to exclude any specific color control, and then rerun the FaceDetailer after making the desired changes.

  • What is the benefit of using a fixed seed in ComfyUI?

    -Using a fixed seed saves time in rendering because if the seed hasn't changed, ComfyUI will use the cached result instead of rerunning the sampler.

  • How can I save the final image created by the workflow?

    -After running the workflow and achieving the desired result, you can right-click on the image and save it to your local storage.

Outlines

00:00

😀 Introduction to Creating a Consistent Character with Stable Diffusion in ComfyUI

The speaker introduces a tutorial on creating a consistent character using Stable Diffusion in ComfyUI. They mention that resources and download links for the workflow, models, and installation instructions are provided in the description. The process begins with downloading the workflow from the provided link, installing missing custom nodes if prompted, and ensuring ComfyUI is set up correctly with the necessary components. The tutorial covers using ControlNet Canny to fix the composition of the image and an IP-Adapter to transfer facial features.

05:14

🔍 Detailed Explanation of the Workflow and Customization Tips

The speaker provides a detailed explanation of the workflow, emphasizing the FaceDetailer from the Impact Pack, which automatically fixes faces in the image; this is crucial because the faces are small relative to the image resolution. They discuss the importance of selecting the right model, starting with Proto XL, and uploading both the ControlNet image and the IP-Adapter image. Customization tips are shared, including muting the FaceDetailer to save time and changing the prompt to alter the character's features, such as hair color. The speaker also explains that image conditioning and prompt conditioning in the IP-Adapter are independent and complementary.

10:16

🛠️ Final Workflow Insights and Conclusion

The speaker concludes the tutorial by summarizing the workflow and providing final insights. They remind viewers to use a fixed seed for rendering to save time and to rerun only the FaceDetailer node if necessary. Faces are fixed at high resolution using the FaceDetailer, and viewers are encouraged to save their final images locally. The speaker also mentions a previous video on a similar workflow for Automatic1111 and invites viewers to check it out. The tutorial ends with a call to like and subscribe for more content.

Keywords

💡Stable Diffusion

Stable Diffusion is a type of machine learning model that generates images from textual descriptions. In the context of the video, it's used to create consistent character images. The script mentions using Stable Diffusion in ComfyUI, indicating that the software environment is being utilized to harness the capabilities of this AI model for generating character images based on provided prompts and reference images.
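
For readers who want a concrete picture of text-to-image generation outside ComfyUI, here is a minimal, hedged sketch using the diffusers library; the model id, prompt, and filenames are illustrative placeholders, and a CUDA GPU is assumed.

```python
# Minimal text-to-image sketch with diffusers (not the ComfyUI workflow itself).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A fixed generator seed makes the render reproducible (see the Seed keyword).
generator = torch.Generator("cuda").manual_seed(42)
image = pipe("portrait photo of a woman in a coffee shop",
             generator=generator).images[0]
image.save("character.png")
```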

💡ComfyUI

ComfyUI is the user interface being used in the video to demonstrate the process of creating consistent character images with Stable Diffusion. It is a platform where users can manage and manipulate workflows involving AI models like Stable Diffusion. The script describes downloading and loading workflows, as well as installing missing custom nodes, which are essential for the workflow to function properly within ComfyUI.

💡Workflow

A workflow in this context refers to a series of steps or processes that are followed in a particular order to achieve a certain outcome. The video script explains how to download a specific workflow designed for creating consistent characters and load it into ComfyUI. The workflow involves various steps like using control nets and IP adapters to ensure the character's features are accurately rendered.
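
Concretely, a ComfyUI workflow saved in the "API format" is plain JSON: nodes keyed by id, each with a class_type and its inputs. A rough sketch of queueing one for execution over ComfyUI's local HTTP API, assuming a default install on port 8188 and a workflow exported via "Save (API Format)"; the filename is hypothetical.

```python
# Queue a saved workflow through ComfyUI's HTTP API (default port 8188).
import json
import urllib.request

with open("consistent_character_workflow_api.json") as f:
    workflow = json.load(f)  # nodes keyed by id, each with class_type + inputs

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # ComfyUI renders it as if queued from the UI
```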

💡Control Net

Control Net is a tool used within the workflow to fix the composition of the image. It extracts the outline or structure of a provided composition image, which is then used to guide the generation process in Stable Diffusion. The script mentions uploading a composition image to the Control Net preprocessor to ensure the character's pose and structure are consistent.
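
The Canny preprocessor's job can be sketched in a few lines: it reduces the composition image to an edge map, which ControlNet then uses to constrain the pose and layout. An illustration with OpenCV, using typical threshold values; the filenames are placeholders.

```python
# Extract the outline the way the Canny preprocessor does.
import cv2

gray = cv2.cvtColor(cv2.imread("composition.png"), cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)   # low/high hysteresis thresholds
cv2.imwrite("composition_canny.png", edges)
```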

💡IP Adapter

IP Adapter is a component of the workflow that is used to copy specific features from an image, such as a face, onto another image. The script describes using an IP Adapter to transfer the facial features from one image to the character image being generated, ensuring that the character's face matches the desired appearance.
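
A hedged equivalent outside ComfyUI, using the diffusers IP-Adapter integration: the generation is conditioned on a reference image so the face carries over, and the scale balances image conditioning against the prompt. Model and weight names follow the library's published examples; the reference image path is a placeholder.

```python
# Condition generation on a reference face with an IP-Adapter (diffusers).
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # higher = follow the reference face more closely

face = load_image("reference_face.png")
image = pipe("a woman in a coffee shop", ip_adapter_image=face).images[0]
image.save("character_with_face.png")
```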

💡Phase ID

Phase ID, specifically Phase ID Plus V2 mentioned in the script, is a tool within the IP Adapter that extracts the face and hair from an image. It is used to ensure that only the desired facial features are copied over, without including unnecessary background or other elements from the source image.

💡Face Detailer

Face Detailer is a custom node in ComfyUI that is used to automatically fix facial features in images generated by Stable Diffusion. The script explains that because the faces in the character image are small, the Face Detailer is necessary to accurately render the facial details. It detects faces and performs inpainting to correct any imperfections in the facial features.
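
Conceptually, the FaceDetailer automates a detect-crop-enhance-paste loop. A simplified sketch of that loop with Pillow: the bounding box stands in for a face detector's output, and the actual inpainting pass is elided.

```python
# What FaceDetailer automates: crop the face, work on it large, paste it back.
from PIL import Image

image = Image.open("character.png")
box = (180, 60, 340, 220)          # (left, top, right, bottom) from a face detector

face = image.crop(box)
face = face.resize((512, 512))     # upscale so the sampler has pixels to work with
# ... run an inpainting/img2img pass on `face` here to redraw the features ...
face = face.resize((box[2] - box[0], box[3] - box[1]))
image.paste(face, box[:2])         # reintegrate the fixed face
image.save("character_fixed.png")
```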

💡Custom Nodes

Custom Nodes are additional components or plugins that can be installed in ComfyUI to extend its functionality. The video script mentions installing missing custom nodes that are required for the workflow to work properly. These nodes include the Face Detailer and possibly others that assist in the image generation process.
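
The ComfyUI Manager normally installs these with one click, as shown in the video. As a manual fallback, a node pack can be cloned into the custom_nodes folder; a sketch assuming git is available and ComfyUI lives at ~/ComfyUI (adjust the path for your install).

```python
# Manual custom-node install: clone the pack into ComfyUI/custom_nodes.
import subprocess
from pathlib import Path

custom_nodes = Path.home() / "ComfyUI" / "custom_nodes"   # adjust to your install
repo = "https://github.com/ltdrdata/ComfyUI-Impact-Pack"  # provides FaceDetailer

subprocess.run(["git", "clone", repo], cwd=custom_nodes, check=True)
# Restart ComfyUI afterwards so the new nodes are registered.
```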

💡Seed

In the context of AI image generation, a seed is a numerical value that helps to produce a specific outcome when generating images. The script suggests using a fixed seed in ComfyUI to save time on rendering, as it allows the system to reuse a previous result if the seed remains unchanged.
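
The time saving comes from caching: ComfyUI re-executes a node only when one of its inputs (such as the seed) changes. A toy sketch of that behavior, with a sleep standing in for a slow render.

```python
# Toy model of ComfyUI's node caching: unchanged inputs skip the re-render.
import time

def expensive_sample(seed, prompt):
    time.sleep(2)                      # stand-in for a slow diffusion render
    return f"image(seed={seed}, prompt={prompt!r})"

cache = {}

def sample(seed, prompt):
    key = (seed, prompt)
    if key not in cache:               # an input changed -> rerun the sampler
        cache[key] = expensive_sample(seed, prompt)
    return cache[key]                  # unchanged inputs -> instant cached result

sample(42, "a woman in a coffee shop")   # slow: first render
sample(42, "a woman in a coffee shop")   # fast: served from the cache
```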

💡Prompt

A prompt is the textual description or instruction given to the AI model to guide the image generation process. The script discusses changing the prompt to customize the character's features, such as hair color, and how it works in conjunction with the IP-Adapter image to control the final appearance of the generated character.

💡Inpainting

Inpainting is a process where missing or damaged parts of an image are filled in or restored. In the context of the video, the Face Detailer performs inpainting to fix small facial features that may not render correctly at low resolutions. It crops, enhances, and then reintegrates the fixed facial features back into the character image.
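
A hedged sketch of a standalone inpainting call using the diffusers library, where white pixels in the mask mark the region to redraw (the face) and black pixels are kept as-is; the model id and file paths are placeholders.

```python
# Redraw only the masked face region with an inpainting pipeline.
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("character.png")
mask = load_image("face_mask.png")   # white = repaint, black = keep
fixed = pipe("detailed face, sharp eyes",
             image=image, mask_image=mask).images[0]
fixed.save("character_inpainted.png")
```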

Highlights

Tutorial on creating a consistent character with Stable Diffusion in ComfyUI.

Resources and download links for workflow, models, and installation provided in the description.

Instructions to download and load the workflow in ComfyUI.

Steps to install missing custom nodes if required by the workflow.

The importance of installing ComfyUI Manager for workflow management.

Recommendation to start with the Proto XL model for first-time users.

Explanation of using ControlNet Canny to fix image composition.

Demonstration of uploading a composition image to the control net preprocessor.

Use of an IP-Adapter to copy the face from a reference image.

Details on how IP-Adapter FaceID Plus V2 extracts the face and hair from an image.

Description of the workflow's process from running to rendering.

Introduction to the FaceDetailer for automatic face fixing in ComfyUI.

Technique of using a fixed seed for rendering to save time.

Customization tips, including muting the FaceDetailer while making global composition changes.

How to change the prompt to alter hair color and other features.

Advantages of using both an image and a prompt as independent and complementary conditioning.

Final steps to save the high-resolution, fixed character image.

Summary of the workflow for creating consistent characters in ComfyUI.

Link to a similar workflow for Automatic1111 provided for interested viewers.