2024 ComfyUI Guide: Get started with Stable Diffusion NOW

Incite AI
18 Jan 2024 · 13:07

TLDR: This video tutorial offers a comprehensive guide to getting started with Comfy UI, a powerful stable diffusion interface for creating art. It covers installation prerequisites, downloading and extracting the software, setting up the environment for Nvidia or CPU use, and introduces the Comfy UI manager for ease of use. The video also demonstrates the workflow, from model installation to generating images, and highlights the ability to customize and upscale images using various nodes and models within the UI.


  • 😀 Comfy UI is a powerful and sometimes intimidating tool for creating art with stable diffusion, but it's accessible with the right guidance.
  • 🛠️ To start with Comfy UI, ensure you have Python and git installed, and follow the provided installation instructions from the GitHub page.
  • 📦 Download the portable Windows version of Comfy UI and extract it to a convenient location, keeping in mind the need for ample disk space for models.
  • 💻 The UI is designed to work with Nvidia GPUs or CPUs for image generation, with AMD users needing to refer to GitHub for specific instructions.
  • 🔄 Use the 'update comfy UI' batch file to keep the software up to date by checking the GitHub page regularly.
  • 🔧 Install the 'comfy UI manager' tool within the UI to streamline the process and make it easier to manage nodes and models.
  • 📈 Understand the importance of selecting the right stable diffusion models and knowing where to place them within the Comfy UI directory structure.
  • 🖼️ Learn how to set up the generation process by configuring nodes like 'load checkpoint', 'clip text', 'empty latent image', and 'K sampler'.
  • 🔄 Discover the ability to customize and save your workflow, as well as troubleshoot issues with the refresh button and comfy UI manager.
  • 🎨 Explore the creative potential of Comfy UI by experimenting with different nodes, models, and settings to create unique images.
  • 🔧 The video provides a step-by-step guide to creating a complex workflow, including upscaling and using multiple models in a single process.

Q & A

  • What is the purpose of Comfy UI in the context of the video?

    -Comfy UI is a user interface for creating art with stable diffusion models. The video provides a guide on how to get started with Comfy UI, including installation and basic usage.

  • What are the prerequisites for installing Comfy UI as mentioned in the video?

    -The prerequisites for installing Comfy UI are having Python and git installed on your system. For assistance with this, the video refers viewers to an 'automatic 1111 install' video.

  • How can one download and extract Comfy UI from GitHub?

    -To download Comfy UI, one needs to visit Comfy's GitHub page and download the zip file. After downloading, it should be extracted using preferred compression software to a desired location.

  • What is the significance of having enough disk space when using Comfy UI?

    -Having enough disk space is important because you will likely download many checkpoint models during the use of Comfy UI, which can take up significant storage.
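Since checkpoint models commonly run several gigabytes each, it can be worth checking free space before downloading. A minimal sketch using Python's standard library (the 20 GB threshold is an illustrative default, not a Comfy UI requirement):

```python
import shutil

def enough_disk_space(path=".", needed_gb=20):
    """Return True if the drive holding `path` has at least `needed_gb`
    gigabytes free. Checkpoint models are often 2-7 GB each, so check
    before queueing up several downloads."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= needed_gb

print(enough_disk_space("."))  # True/False depending on your drive
```

`shutil.disk_usage` works on any path on the target drive, so you can point it directly at the Comfy UI models folder.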

  • Is Comfy UI compatible with all types of GPUs?

    -Comfy UI is compatible with Nvidia GPUs and CPUs for image generation. However, for AMD GPUs, users need to refer to the GitHub documentation for more information.

  • How does one launch Comfy UI?

    -Comfy UI can be launched by running either the 'run CPU' or 'run Nvidia GPU' batch files located in the extracted Comfy UI folder, depending on the user's hardware.

  • What is the Comfy UI Manager and how is it installed?

    -The Comfy UI Manager is a tool that simplifies tasks within the UI. It is installed by opening a command prompt in Comfy UI's custom_nodes folder, typing 'git clone' followed by the GitHub link for the Comfy UI Manager, and pressing enter.

  • What is the role of the 'load checkpoint' node in the Comfy UI workflow?

    -The 'load checkpoint' node is used to select the base checkpoint model for the stable diffusion process. It is the starting point of the workflow chain and only has an output.

  • How does the 'CLIP text' node function in Comfy UI?

    -The 'CLIP text' node converts the text prompt into a format that stable diffusion can understand and use to guide the image generation process. There are typically two CLIP text nodes for positive and negative prompts.

  • What is the purpose of the 'image size' and 'batch size' settings in the Comfy UI workflow?

    -The 'image size' setting determines the dimensions of the generated image, while the 'batch size' determines how many images are generated simultaneously in one run.
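Stable Diffusion samples in a latent space at 1/8 of the pixel resolution, which is why image dimensions are normally multiples of 8. A rough sketch of the tensor shape the 'empty latent image' node produces (the 8× factor and 4 channels apply to standard SD-style VAEs; the function name is illustrative):

```python
def latent_shape(width, height, batch_size=1, channels=4, factor=8):
    """Shape of the latent tensor for an 'empty latent image':
    (batch, channels, height // 8, width // 8) for standard SD VAEs."""
    if width % factor or height % factor:
        raise ValueError("width and height should be multiples of 8")
    return (batch_size, channels, height // factor, width // factor)

print(latent_shape(512, 512))                 # (1, 4, 64, 64)
print(latent_shape(768, 512, batch_size=4))   # (4, 4, 64, 96)
```

Raising the batch size multiplies VRAM use roughly linearly, since each latent in the batch is denoised in parallel.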

  • How can one save and load workflows in Comfy UI?

    -Workflows can be saved using the 'save' button in the control panel, and loaded using the 'load' button. This allows users to reuse and modify previous workflows.

  • What is the function of the 'upscale latent' node in Comfy UI?

    -The 'upscale latent' node is used to increase the resolution of an image within the workflow. It requires connection to a sampler node to finalize the upscaled image.
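Latent upscaling simply resizes the latent grid before it is passed to another sampler for further denoising. A toy nearest-neighbour sketch on a 2D grid (Comfy UI's node operates on 4-channel latent tensors and offers several interpolation modes; this only illustrates the idea):

```python
def upscale_latent(latent, scale=2):
    """Nearest-neighbour upscale of a 2D grid: each value is repeated
    `scale` times horizontally and each row `scale` times vertically."""
    return [[v for v in row for _ in range(scale)]
            for row in latent for _ in range(scale)]

print(upscale_latent([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```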

  • How can Comfy UI store the workflow that created an image?

    -Comfy UI has the capability to store the workflow information directly into the image file itself, allowing users to recreate the same workflow by dragging the image back into Comfy UI.
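A sketch of pulling that embedded workflow back out with Pillow, assuming it is stored in the PNG's text chunks under a 'workflow' (or 'prompt') key, as current Comfy UI builds do:

```python
import json
from PIL import Image

def read_workflow(path):
    """Return the workflow JSON embedded in a Comfy UI-generated PNG,
    or None if the image carries no workflow metadata."""
    with Image.open(path) as im:
        text = getattr(im, "text", {})  # PNG tEXt/iTXt chunks as a dict
    raw = text.get("workflow") or text.get("prompt")
    return json.loads(raw) if raw else None
```

Dragging the image into the Comfy UI canvas does the same lookup for you; this is just a way to inspect the stored graph programmatically.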



🎨 Getting Started with Comfy UI for Art Generation

This paragraph introduces Comfy UI, a powerful yet complex stable diffusion UI for creating art. The video aims to guide users through the process of making amazing art with Comfy UI, assuming they have Python and git installed. It suggests watching an 'automatic 1111 install' video for assistance and instructs viewers to download Comfy UI from GitHub, extract it, and prepare for downloading checkpoint models. The video also mentions the portable Windows version's compatibility with Nvidia GPUs and the need to check GitHub documentation for AMD users. It outlines the steps to launch Comfy UI, including using batch files and updating the software, and introduces the Comfy UI manager, a tool installed via command line for easier UI management.


🔌 Understanding Comfy UI's Workflow and Nodes

This paragraph delves into the intricacies of Comfy UI's workflow, explaining the concept of nodes and their inputs and outputs. It describes the process of selecting a base checkpoint model and setting up CLIP text nodes for positive and negative prompts. The video provides a step-by-step guide on configuring the image size, batch size, and other parameters in the workflow. It also explains advanced features like the sampler node, CFG scale, and how to choose different samplers and schedulers to achieve desired results. The paragraph emphasizes the flexibility of the workflow, allowing users to customize their workspace, add new nodes, and even retrieve the workflow from an image generated by stable diffusion using Comfy UI.


🚀 Advanced Techniques and Customization in Comfy UI

The final paragraph showcases the advanced capabilities of Comfy UI by demonstrating how to create a complex workflow that involves multiple models, LoRAs, and an upscaling process. It illustrates the process of adding nodes to the workflow, such as loaders for LoRAs, an upscale latent node, and additional samplers. The video provides a practical example of chaining different models and LoRAs, followed by an upscaling step, to create a highly customized image generation process. It also touches on troubleshooting, using the Comfy UI manager to install missing nodes, and updating models directly from within Comfy UI. The paragraph concludes with an invitation for viewers to share their tips and to subscribe for more content.



💡Comfy UI

Comfy UI refers to a user interface that is designed to be easy and pleasant to use. In the context of the video, Comfy UI is a specific tool for creating art with stable diffusion models. It is described as powerful and initially intimidating but ultimately designed to simplify the process of generating images. The script mentions installing and using Comfy UI to make art, indicating its central role in the video's tutorial.

💡Stable Diffusion

Stable Diffusion is a term used to describe a type of machine learning model capable of generating images from textual descriptions. It is a key concept in the video as the entire process revolves around using Comfy UI to work with Stable Diffusion models. The script discusses downloading models and using them within Comfy UI to create art, highlighting the significance of Stable Diffusion in the video's theme.


💡Python

Python is a widely used high-level programming language known for its readability and versatility. In the video script, Python is mentioned as a prerequisite for using Comfy UI, suggesting that some level of programming knowledge or access to a Python environment is necessary to utilize the tool effectively.


💡Git

Git is a version control system used for tracking changes in source code during software development. The script mentions Git in the context of installing Comfy UI, implying that it is used to manage and download the necessary files from the Comfy UI's GitHub repository.

💡Checkpoint Models

In the context of machine learning, a checkpoint refers to a snapshot of the model's state, saved during training to allow for recovery or evaluation. The script discusses downloading checkpoint models for use with Comfy UI, emphasizing the importance of these models in the image generation process.

💡Nvidia GPU

Nvidia GPU refers to a graphics processing unit manufactured by Nvidia Corporation, known for its use in gaming and professional applications that require high-performance graphics rendering. The video mentions that the portable Windows version of Comfy UI works with an Nvidia GPU, indicating that it leverages the GPU's processing power for generating images.

💡Batch Files

A batch file is a type of script file in DOS, OS/2 and Windows that contains a series of commands to be executed by the command-line interpreter. In the script, batch files named 'run CPU' and 'run Nvidia GPU' are mentioned as the method to launch Comfy UI, showing their role in starting the image generation process.


💡CLIP

CLIP stands for Contrastive Language–Image Pre-training, a neural network model that connects an image with a text description. The script explains that CLIP converts text prompts into a form that Stable Diffusion can understand, which is crucial for guiding the image generation process within Comfy UI.


💡Sampler

In the context of the video, a sampler is a node in Comfy UI that handles the process of generating an image from noise based on the provided model and prompt. The script describes different settings and options for the sampler, such as steps and CFG scale, which affect the final image outcome.
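The CFG scale mentioned here controls classifier-free guidance, which blends the model's conditional and unconditional noise predictions at every sampling step. A minimal sketch of the underlying formula (the function name and scalar inputs are illustrative; real samplers apply this to tensors):

```python
def apply_cfg(noise_uncond, noise_cond, cfg_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional output toward the prompt-conditioned one.
    cfg_scale = 1.0 ignores guidance; higher values follow the prompt
    more strongly (at the cost of less variety and possible artifacts)."""
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)

print(apply_cfg(0.0, 1.0, 7.5))  # 7.5
```

This is why a very high CFG scale can produce oversaturated, "burned" images: the guided prediction is extrapolated far beyond what the model saw in training.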


💡VAE

VAE stands for Variational Autoencoder, a type of neural network that learns to compress data and then reconstruct it. In the video, the VAE decode node is mentioned as part of the workflow, responsible for decoding the information from previous nodes into the final image output.

💡Custom Nodes

Custom nodes in Comfy UI are additional functionalities that can be installed to enhance or modify the image generation process. The script describes installing the Comfy UI manager, which facilitates the installation of custom nodes, and using them to create more complex workflows.


💡Upscaling

Upscaling in the context of image processing refers to increasing the resolution of an image while maintaining or improving its quality. The video script provides an example of using an upscaling node within the Comfy UI workflow to enhance the resolution of a generated image, demonstrating a way to refine the output.


Comfy UI is a powerful and complex tool for creating art with stable diffusion.

Installing Comfy UI requires Python and git; an 'automatic 1111 install' video is referenced for help with these prerequisites.

Comfy UI is portable and can be used on an Nvidia GPU or CPU.

AMD users need to refer to GitHub documentation for specific instructions.

The update process for Comfy UI involves checking the GitHub page and running an update batch file.

Comfy UI Manager is a tool that simplifies operations within the UI.

Stable diffusion models can be downloaded from sources like Hugging Face or CivitAI.

Existing models from automatic 1111 can be used in Comfy UI.

The control panel in Comfy UI allows for generation settings like batch count and seed control.

The workflow in Comfy UI consists of interconnected nodes for image generation.

Nodes have specific functions like load checkpoint, CLIP text, and sampler for image creation.

The sampler node is crucial for the generation process, with various options affecting the outcome.

A VAE (Variational Autoencoder) decodes the latent information into the final image.

Comfy UI allows users to see the stages of generation and rearrange nodes for customization.

Workflows can be saved and loaded within Comfy UI for easy reuse.

Comfy UI Manager can resolve errors by installing missing custom nodes.

Models can be installed directly from within Comfy UI using the manager.

Comfy UI's ability to store the workflow within the image itself allows for easy replication.

A demonstration of Comfy UI's power includes upscaling and model mixing in a single workflow.

The video concludes with an invitation to share Comfy UI tips and subscribe for more content.