How to Install and Use Stable Diffusion (June 2023) - automatic1111 Tutorial

Albert Bozesan
26 Jun 2023 · 18:03

TL;DR: In this tutorial, Albert introduces viewers to Stable Diffusion, an AI image-generating tool. He emphasizes the benefits of the Auto1111 web UI and the ControlNet extension, which offers significant advantages over competitors. The tutorial guides users through installation on Windows with supported NVIDIA GPUs, model selection from CivitAI, and the use of various settings for image generation. Albert also explores extensions like ControlNet for advanced features, demonstrating how it can enhance and refine the image generation process, and encourages users to experiment and unleash their creativity with Stable Diffusion.


  • 🚀 Stable Diffusion is an AI image-generating tool that has gained popularity and is now accessible through the Auto1111 web UI.
  • 🌐 The ControlNet extension is a significant advantage of Stable Diffusion, offering features that outperform competitors like Midjourney and DALL-E.
  • 🆓 Stable Diffusion is completely free to use and runs locally on your computer, so no data is sent to the cloud and no subscription is required.
  • 💻 Installing Stable Diffusion requires Windows and an NVIDIA GPU from at least the 20 series.
  • 🛠️ The installation process uses Python 3.10.6, Git, and a clone of the Stable Diffusion WebUI repository from GitHub.
  • 🔍 Users can select and download models from CivitAI, which influence the style and quality of the generated images.
  • 🎨 The UI accepts positive and negative prompts to guide the AI toward the desired image, with options to adjust various settings for optimal results.
  • 🖼️ ControlNet extends the capabilities of Stable Diffusion by letting users incorporate depth, edges, and poses from reference images into the generated content.
  • 🔄 The img2img tab lets users refine generated images by adjusting settings like denoising strength and using inpainting for specific edits.
  • 📈 The video's sponsor offers interactive courses and exercises in math, computer science, AI, and neural networks.
  • 🎥 Albert Bozesan's YouTube channel provides in-depth tutorials and tips for using Stable Diffusion and other creative tools.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is the installation and usage of Stable Diffusion, an AI image generating software, with a focus on the Auto1111 web UI and the ControlNet extension.

  • Why did Albert decide to hold off on creating the tutorial initially?

    -Albert initially held off on creating the tutorial until it became clear what the best way to use Stable Diffusion would be.

  • What is the key advantage of Stable Diffusion over its competition, according to Albert?

    -The key advantage of Stable Diffusion over its competition, as mentioned by Albert, is the ControlNet extension, which he believes will outperform competitors like Midjourney and DALL-E.

  • What are the benefits of using Stable Diffusion?

    -The benefits of using Stable Diffusion include being completely free to use, running locally on the user's computer, no data being sent to the cloud, and having a large open-source community contributing to its development.

  • What are the system requirements for running Stable Diffusion?

    -Stable Diffusion runs best on NVIDIA GPUs of at least the 20 series, and the tutorial is based on a Windows operating system.

  • How can one contribute to the Stable Diffusion community?

    -One can contribute to the Stable Diffusion community by developing updates, sharing experiences and solutions on platforms like the Stable Diffusion subreddit, and participating in discussions and problem-solving.

  • What is the process for installing the Auto1111 web UI?

    -To install the Auto1111 web UI, install Python 3.10.6 with the "Add Python to PATH" option checked, install Git, and then clone the Stable Diffusion WebUI repository from GitHub by entering the provided URL in the Command Prompt.
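Before cloning, it can help to confirm the two prerequisites the tutorial names are actually in place. This small Python snippet (not from the video, just a convenience sketch) checks the interpreter version and that Git is on the PATH:

```python
import shutil
import sys

def check_prerequisites():
    """Return a list of problems with the two prerequisites the
    tutorial names: Python 3.10.x and Git. Empty list means ready."""
    issues = []
    if sys.version_info[:2] != (3, 10):
        issues.append(
            f"Python 3.10.x expected, found "
            f"{sys.version_info.major}.{sys.version_info.minor}"
        )
    if shutil.which("git") is None:
        issues.append("Git not found on PATH")
    return issues

if __name__ == "__main__":
    for issue in check_prerequisites():
        print("WARNING:", issue)
```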

  • How does one select and use models in Stable Diffusion?

    -Users can select and use models by visiting CivitAI, choosing a model based on ratings and preferences, downloading the model and its associated VAE, and placing them in the appropriate folders within the web UI's models directory.
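The folder placement can be sketched in Python. The `WEBUI_ROOT` path and the extension-to-folder mapping below are assumptions based on the web UI's default directory layout, so adjust them to your own install:

```python
from pathlib import Path

# Hypothetical location of your stable-diffusion-webui clone.
WEBUI_ROOT = Path("stable-diffusion-webui")

# Default Auto1111 layout: checkpoints and VAEs live in separate folders.
DESTINATIONS = {
    ".vae.pt": WEBUI_ROOT / "models" / "VAE",                    # VAE files
    ".safetensors": WEBUI_ROOT / "models" / "Stable-diffusion",  # checkpoints
    ".ckpt": WEBUI_ROOT / "models" / "Stable-diffusion",         # checkpoints
}

def destination_for(filename: str) -> Path:
    """Return the folder a downloaded file belongs in, by extension.
    The .vae.pt suffix is checked first so VAEs are not mistaken
    for checkpoints."""
    for suffix, folder in DESTINATIONS.items():
        if filename.endswith(suffix):
            return folder
    raise ValueError(f"Unrecognized model file: {filename}")
```

For example, `destination_for("cyberrealistic.safetensors")` points at the checkpoint folder, while a `.vae.pt` file lands in `models/VAE`.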

  • What are the functions of the positive and negative prompts in Stable Diffusion?

    -The positive prompt is used to describe the desired image, including medium, subject, and details. The negative prompt is used to specify what the user does not want to see in the generated image, which is crucial for defining the quality and style of the output.

  • How does the ControlNet extension enhance the capabilities of Stable Diffusion?

    -The ControlNet extension enhances Stable Diffusion by adding features such as depth mapping, edge detection (Canny), and pose recognition (OpenPose), letting users more accurately control the composition and details of the generated images.

  • What is the purpose of the img2img tab in the UI, and how is it used?

    -The img2img tab in the UI is used for generating additional variations of a preferred image while retaining its general colors and themes. It allows users to adjust settings like denoising strength and sampler to refine the results.
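By default the web UI scales the number of img2img sampling steps by the denoising strength, which is why low strengths stay close to the source image. This hypothetical helper sketches that relationship; the exact rounding behavior is an assumption:

```python
def img2img_effective_steps(sampling_steps: int, denoising_strength: float) -> int:
    """Approximate how many of the requested steps img2img actually runs.

    At strength near 0.0 the source image comes back almost unchanged;
    at 1.0 the full step count runs and the source is largely ignored.
    """
    if not 0.0 <= denoising_strength <= 1.0:
        raise ValueError("denoising_strength must be between 0 and 1")
    return max(1, round(sampling_steps * denoising_strength))
```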

  • How does the inpainting feature work in Stable Diffusion?

    -The inpainting feature lets users make specific edits to an image by drawing a mask over the areas they want to change. Users can remove or alter elements and adjust settings like sampling steps and denoising strength for more detailed and accurate results.



🖥️ Introduction to Stable Diffusion and Auto1111 Web UI

Albert introduces the video by expressing his excitement to share a tutorial on Stable Diffusion, an AI image generating software. He discusses the evolution of the software and highlights the Auto1111 web UI as the best method to utilize it. Albert emphasizes the benefits of Stable Diffusion, such as being free, running locally, and having a robust open-source community. He provides a link to resources used in the video and outlines the system requirements, specifically mentioning the need for an NVIDIA GPU and the use of Windows. Albert advises viewers to check the video description for links and to seek help from the community if they encounter issues.


πŸ› οΈ Installation and Initial Setup

The paragraph details the process of installing Stable Diffusion using the Auto1111 web UI. Albert instructs viewers to install Python 3.10.6 and Git, which are necessary for the setup. He guides viewers through downloading the WebUI repository and running the installation batch file, then explains how to access the UI through a web browser and the importance of selecting and installing models from CivitAI to influence the image generation. He provides tips on choosing a versatile model and downloading the necessary VAE files, and concludes with instructions on setting up the UI with the chosen model and VAE.


🎨 Image Generation Process and Prompting

Albert explains the process of generating images with Stable Diffusion. He outlines the structure of positive and negative prompts, emphasizing their importance in achieving desired results. Albert suggests using the DPM samplers for a balance of quality and speed and discusses the significance of sampling steps, width, height, and CFG scale. He also mentions the Restore Faces feature and its impact on facial details. The paragraph concludes with a discussion on batch size and batch count, and Albert demonstrates the generation of an image using the settings discussed.
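For readers who prefer scripting these settings, the web UI also exposes a local HTTP API when launched with the `--api` flag. The sketch below assembles a txt2img request mirroring the kinds of settings discussed above; the specific values are illustrative starting points, not Albert's exact choices:

```python
import json
import urllib.request

def build_txt2img_payload(prompt: str, negative_prompt: str) -> dict:
    """Assemble a txt2img request for the web UI's built-in API."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "sampler_name": "DPM++ 2M Karras",  # a DPM-family sampler
        "steps": 25,                        # sampling steps
        "width": 512,
        "height": 512,
        "cfg_scale": 7,                     # prompt adherence vs. freedom
        "restore_faces": True,
        "batch_size": 1,                    # images per batch
        "n_iter": 1,                        # batch count
    }

def submit(payload: dict, host: str = "http://127.0.0.1:7860") -> dict:
    """POST to a locally running web UI started with --api."""
    req = urllib.request.Request(
        host + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # generated images arrive base64-encoded
```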


🌐 Exploring Extensions and Advanced Features

In this section, Albert introduces the concept of extensions for Stable Diffusion, specifically focusing on ControlNet. He explains how to install and use ControlNet to enhance the image generation process. Albert demonstrates the use of different models like depth, canny, and openpose within ControlNet to manipulate various aspects of the generated images. He also touches on the issue of bias in AI models and how it can affect the results. The paragraph concludes with Albert showing how to refine the generated images using inpainting and other UI features.
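The ControlNet extension hooks into the same local API through the `alwayson_scripts` field of a txt2img or img2img payload. The sketch below shows the general shape of one ControlNet unit; the field names and the module/model strings are assumptions and must match the preprocessors and models installed on your machine:

```python
def add_controlnet_unit(payload: dict, image_b64: str,
                        module: str, model: str, weight: float = 1.0) -> dict:
    """Attach one ControlNet unit to a generation payload.

    module is the preprocessor (e.g. "depth", "canny", "openpose");
    model is the matching ControlNet model file name.
    """
    unit = {
        "input_image": image_b64,  # base64-encoded reference image
        "module": module,          # preprocessor applied to the reference
        "model": model,            # ControlNet model that reads the result
        "weight": weight,          # how strongly the unit steers generation
    }
    scripts = payload.setdefault("alwayson_scripts", {})
    scripts.setdefault("controlnet", {"args": []})["args"].append(unit)
    return payload
```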

📚 Wrap-up and Additional Resources

Albert concludes the tutorial by summarizing the key points covered in the video and encourages viewers to experiment with Stable Diffusion. He mentions the availability of more in-depth tutorials on his channel and invites viewers to subscribe, like, and comment with suggestions for future content. Albert signs off, reiterating his enthusiasm for Stable Diffusion and expressing his hope that viewers will enjoy using the software.



💡Stable Diffusion

Stable Diffusion is an AI image-generating software that uses machine learning models to create images from textual descriptions. It is noted for its ability to produce high-quality, detailed images and is considered a significant tool in the realm of AI and art. In the video, the presenter discusses the process of installing and using Stable Diffusion, highlighting its benefits over other AI image generation tools.

💡Auto1111 web UI

Auto1111 web UI is the recommended interface for using Stable Diffusion, as mentioned in the video. It is a user-friendly interface that allows users to interact with the AI software through a web browser, making the process of generating images more accessible and straightforward. The presenter emphasizes that this UI is currently the best way to utilize Stable Diffusion.

💡ControlNet extension

The ControlNet extension is an additional feature for Stable Diffusion that enhances its capabilities. It allows users to have more control over the generation process by using specific models to recognize and incorporate elements such as depth, outlines, and poses from reference images. This extension is highlighted as a key advantage that sets Stable Diffusion apart from its competitors.

💡Open source community

The open source community refers to a group of developers and contributors who work collaboratively on software projects, sharing their knowledge and improvements. In the context of the video, the presenter mentions that Stable Diffusion is supported by a large open source community that actively develops and updates the tool, leading to faster and more regular improvements compared to commercial alternatives.


💡NVIDIA GPUs

NVIDIA GPUs, or Graphics Processing Units, are specialized hardware components designed for handling complex graphical and computational tasks. The video specifies that Stable Diffusion runs best on NVIDIA GPUs from the 20 series or later, indicating that a powerful GPU is necessary for optimal performance and image generation capabilities.


💡Python

Python is a widely used high-level programming language known for its readability and ease of use. In the video, the presenter instructs viewers to install a specific version of Python, 3.10.6, to ensure compatibility with the Stable Diffusion software. This highlights the importance of having the correct software environment for the AI tool to function properly.


💡Git

Git is a version control system that allows developers to manage and track changes in their code. In the tutorial, the presenter mentions the need to install Git to facilitate the installation of the Stable Diffusion WebUI and to receive updates. This underscores the dynamic nature of AI development, where continuous updates and community contributions are essential for improvement.


💡CivitAI

CivitAI is a website mentioned in the video that hosts user-created models for Stable Diffusion. These models can enhance the AI's image generation capabilities by improving general quality, altering art styles, or specializing in specific subjects. The presenter advises viewers to visit CivitAI to select models that suit their needs, demonstrating the importance of community resources in leveraging the full potential of AI tools.


💡Prompts

Prompts are textual descriptions or phrases that guide the AI in generating specific images. In the context of the video, the presenter discusses the process of crafting positive and negative prompts to direct the output of the AI image generation. This is a critical aspect of using Stable Diffusion, as it allows users to communicate their creative vision to the AI.

💡Sampling method

The sampling method refers to the algorithmic technique used by the AI to generate images from the prompts. The video mentions different samplers like DPM, DPM2, and DPM++, each with its advantages and disadvantages in terms of accuracy and speed. Understanding and choosing the appropriate sampling method is essential for achieving desired results in AI-generated images.

💡CFG scale

CFG scale, or classifier-free guidance scale, is a parameter in Stable Diffusion that controls how strictly the AI adheres to the prompt. A lower CFG scale allows the AI more freedom, potentially resulting in more creative but less accurate images, while a higher CFG scale makes the AI stick closer to the prompt, possibly at the cost of aesthetic quality. The video discusses the importance of balancing creativity with accuracy through adjusting the CFG scale.
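The underlying mechanism blends the model's unconditional and prompt-conditioned noise predictions, pushing the result away from the former and toward the latter. A toy sketch, with scalars standing in for the model's tensors:

```python
def cfg_guided(uncond: float, cond: float, cfg_scale: float) -> float:
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the prompt-conditioned one by cfg_scale."""
    return uncond + cfg_scale * (cond - uncond)
```

At `cfg_scale=1` this is just the conditioned prediction; larger scales exaggerate the prompt's influence, which is why very high values can degrade image aesthetics.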


💡Inpainting

Inpainting is a process within the context of AI image generation where users can edit or modify specific parts of an image. The video describes how the Cyberrealistic model has a special version for inpainting, allowing users to make changes such as removing or altering elements within an image. This feature exemplifies the level of control and customization that users can achieve with Stable Diffusion.


Introduction to Stable Diffusion, an AI image generating software.

Auto1111 web UI is identified as the best way to use Stable Diffusion currently.

ControlNet extension is introduced as a key advantage over competitors like Midjourney and DALL-E.

Stable Diffusion is completely free and runs locally on your computer, ensuring no data is sent to the cloud.

The software is open source, with a large community contributing to its development.

Installation prerequisites include having an NVIDIA GPU from at least the 20 series and using Windows.

Python 3.10.6 and Git are required for installation.

A step-by-step guide on installing Stable Diffusion WebUI from GitHub repository is provided.

Models from CivitAI can be used to influence the generated images significantly.

The importance of selecting appropriate models and VAE files is emphasized.

Instructions on setting up the UI with desired models and VAE files are given.

Explanation of how to use positive and negative prompts to guide image generation.

Details on various sampling methods and their trade-offs in quality and speed.

Guidance on optimal settings for width, height, and CFG scale for best image results.

Introduction to ControlNet extension and its ability to use depth, canny, and openpose models for more precise image generation.

Demonstration of how ControlNet can utilize reference images for composition, detail, and pose.

Explanation of the img2img tab for refining generated images and adjusting denoising strength.

Inpainting technique for making specific edits to generated images, such as removing or changing elements.

Final thoughts on exploring and experimenting with Stable Diffusion for creativity and learning.