bye midjourney! SDXL 1.0 - How to install Stable Diffusion XL 1.0 (Automatic1111 & ComfyUI Tutorial)
TLDR: The tutorial introduces the release of Stable Diffusion XL 1.0, a tool for generating diverse and high-quality images for free. It outlines three methods for using SDXL: running it locally with the Automatic1111 web UI, running it locally with ComfyUI (which supports the refiner model), and using it online via clipdrop.co. The video demonstrates the installation process, image generation with different prompts, and the use of the refiner model for enhanced detail. It promises future content on advanced features and optimization for low-VRAM devices.
Takeaways
- 🌟 Stable Diffusion XL 1.0 (SDXL 1.0) is a free tool for generating various styles and types of images.
- 🖼️ The tool can produce realistic and anime-style images, as well as other interesting content.
- 🛠️ There are three methods to use SDXL 1.0: locally with Automatic1111, locally with ComfyUI, and online for free.
- 📥 To run locally, download the base model and optionally the refiner model from the Hugging Face repository by Stability AI.
- 🔗 Use the pre-built file from the official GitHub repository of Automatic1111 for installation without needing to install Python or Git.
- 🔄 Extract the downloaded files and place the SDXL base model in the models/Stable-diffusion folder inside the Automatic1111 web UI directory.
- 🔧 Run the 'update.bat' and 'run.bat' scripts to update the code and start the server for the Automatic1111 web UI (see the sketch after this list).
- 🖼️ Adjust image resolution to 1024x1024 for better results, as SDXL was primarily trained on this size.
- 🔄 For ComfyUI, download and extract the stable build, then place the SDXL models in the correct folder and run the 'run_nvidia_gpu.bat'.
- 🔍 The refiner model can be used in ComfyUI to add more details to generated images, improving face, background, and texture quality.
- 🌐 If local running is not possible, use clipdrop.co for free online image generation with a limit of 400 free generations.
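A minimal sketch of the Automatic1111 route described above, assuming the pre-built release package (sd.webui.zip) and its default folder layout; exact folder names can differ depending on where and how you extract the archive:

```bat
:: Assumed layout after extracting the pre-built package (sd.webui.zip)
:: from the Automatic1111 GitHub releases; adjust paths as needed.
cd sd.webui

:: Copy the downloaded SDXL base checkpoint into the web UI's model folder.
copy "%USERPROFILE%\Downloads\sd_xl_base_1.0.safetensors" webui\models\Stable-diffusion\

:: Pull the latest web UI code (needed for SDXL support), then start the server.
call update.bat
call run.bat
```

Once the server is running, the UI is reachable in the browser at http://127.0.0.1:7860 by default.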
Q & A
What is Stable Diffusion XL 1.0 (SDXL 1.0) and what can it be used for?
- Stable Diffusion XL 1.0 is an AI model that can generate a wide variety of images in different styles, types, subjects, and backgrounds. It is used for creating realistic and anime-style images, among other types, and is available for free.
How many methods are presented in the script for using SDXL 1.0?
- The script presents three methods for using SDXL 1.0: running it locally with Automatic1111, running it locally with ComfyUI, and using it online for free.
What is the first step to run SDXL 1.0 locally using automatic1111?
- The first step is to download the SDXL 1.0 base model from the Hugging Face repository published by Stability AI. Optionally, the refiner model can also be downloaded.
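For those who prefer the command line to the browser download buttons, the same .safetensors files can be pulled directly from the Stability AI repositories on Hugging Face. The URLs below are the standard resolve links for those repositories and assume the file names have not changed:

```bat
:: SDXL 1.0 base model (required)
curl -L -O https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors

:: SDXL 1.0 refiner model (optional; used later with ComfyUI)
curl -L -O https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
```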
Where can I find the pre-built file for automatic1111 to run SDXL 1.0 without installing Python or Git?
- The pre-built file can be found on the official Automatic1111 GitHub repository, under its 'Installing and running' instructions for Windows 10/11 with NVIDIA GPUs.
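As a concrete pointer (the package and release names reflect what was current around the time of the video and may change), the package is attached to a release on the Automatic1111 repository and can be extracted with the archiver built into recent Windows versions:

```bat
:: Pre-built package (no Python or Git required):
::   https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre
:: Download sd.webui.zip from that page, then extract it, for example with:
tar -xf sd.webui.zip
```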
What is the purpose of the refiner model in SDXL 1.0?
- The refiner model in SDXL 1.0 is used to add more details to the generated images, enhancing the quality of faces, backgrounds, and other fine details. It is optional and is not required when using Automatic1111.
How does the script suggest changing the image resolution for better generation results with SDXL 1.0?
- The script suggests changing the width and height to 1024x1024, which is the primary training size for SDXL, to generate better images. Other native resolutions can also be used for different types of images like landscapes.
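For reference, the aspect-ratio buckets most often cited as SDXL's native training resolutions are listed below (all roughly one megapixel); treat the list as indicative rather than official or exhaustive:

```text
1024 x 1024   (1:1, the default)
1152 x 896    and   896 x 1152
1216 x 832    and   832 x 1216
1344 x 768    and   768 x 1344
1536 x 640    and   640 x 1536
```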
What is ComfyUI and how does it differ from automatic1111 in terms of SDXL 1.0 support?
- ComfyUI is another UI for running SDXL 1.0 locally. It has had the best SDXL support since the model's launch and natively supports the SDXL refiner, which is an advantage over Automatic1111.
How can I access SDXL 1.0 if I am unable to run it locally?
- If running SDXL 1.0 locally is not possible, the script suggests using it online for free on clipdrop.co, which is a product by Stability AI, the company behind SDXL.
What is the process of using the refiner model with ComfyUI as described in the script?
- To use the refiner model with ComfyUI, one needs to download and install ComfyUI, place the SDXL 1.0 base and refiner models into the appropriate folder, and then run the UI. After loading the refiner model, one can generate images and compare the results with and without the refiner.
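A minimal sketch of that file placement, assuming the portable Windows build of ComfyUI and its default folder names (adjust if your extracted folder is named differently):

```bat
:: Both SDXL checkpoints go into ComfyUI's checkpoints folder
:: inside the extracted portable build.
copy sd_xl_base_1.0.safetensors    ComfyUI_windows_portable\ComfyUI\models\checkpoints\
copy sd_xl_refiner_1.0.safetensors ComfyUI_windows_portable\ComfyUI\models\checkpoints\

:: Launch ComfyUI on an NVIDIA GPU; a CPU launcher ships in the same folder.
cd ComfyUI_windows_portable
run_nvidia_gpu.bat
```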
What are some of the additional features or topics the script promises to cover in future videos?
- The script mentions that future videos will cover topics such as LoRA training, Dreambooth, ControlNet, and running SDXL on low-VRAM devices.
Outlines
🖼️ Introducing SDXL 1.0: Free Image Generation Tool
The script introduces the release of SDXL 1.0, a tool for generating a wide variety of images in different styles, types, subjects, and backgrounds at no cost. It showcases examples of realistic and anime-style images, as well as interesting random creations. The tutorial then outlines three methods for using SDXL: running it locally with the Automatic1111 UI, running it locally with ComfyUI, and using it online for free if local running is not feasible. The process includes downloading the base model and optionally the refiner model from the Hugging Face repository, using the pre-built files for Automatic1111, and setting up the environment on Windows with NVIDIA GPUs. The script also demonstrates how to generate images with specific prompts and resolutions, noting that SDXL performs best with 1024x1024 images.
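As an illustration of that generation step (these are placeholder settings, not the exact prompts or sampler used in the video), a first test in the Automatic1111 txt2img tab might look like:

```text
Positive prompt : a photo of an astronaut riding a horse on Mars, highly detailed, sharp focus
Negative prompt : blurry, low quality, deformed, watermark
Width x Height  : 1024 x 1024
Sampling method : DPM++ 2M Karras, 25 steps
```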
🔧 Setting Up and Using ComfyUI with SDXL 1.0
This paragraph explains how to set up and use ComfyUI as an alternative to Automatic1111 for running SDXL 1.0, especially when using the refiner model. It details the process of downloading and extracting ComfyUI, placing the SDXL 1.0 base and refiner models in the correct folder, and launching ComfyUI with a batch file. The script then guides the viewer through generating images with positive and negative prompts, adjusting the sampler, and changing image resolutions. It highlights the benefits of using the refiner model for additional image details and provides a comparison of image quality with and without the refiner. The paragraph concludes with a mention of an online option, clipdrop.co, for those who cannot run SDXL locally, and a teaser for future videos on related topics.
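One common way to wire the base-plus-refiner workflow in ComfyUI (a rough sketch of the node chain, not the exact graph shown in the video) is to let the base model handle most of the denoising steps and hand the remaining steps to the refiner:

```text
Load Checkpoint (sd_xl_base_1.0.safetensors)
  -> CLIP Text Encode (positive prompt)  +  CLIP Text Encode (negative prompt)
  -> KSampler (Advanced): e.g. run steps 0-20 of 25, return the latent with leftover noise
Load Checkpoint (sd_xl_refiner_1.0.safetensors)
  -> KSampler (Advanced): continue the same latent for steps 20-25
  -> VAE Decode -> Save Image
```

The exact split is a matter of taste; the refiner is commonly given roughly the last 20 to 30 percent of the steps.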
Keywords
💡Stable Diffusion XL 1.0
💡Automatic1111
💡ComfyUI
💡Refiner Model
💡Huggingface Repository
💡GitHub Repository
💡Image Generation
💡Positive Prompt
💡Negative Prompt
💡Resolution
💡Sampler
💡Clipdrop.co
Highlights
Stable Diffusion XL 1.0 (SDXL 1.0) is released and offers free image generation with various styles, types, subjects, and backgrounds.
The model can generate realistic and anime-style images, as well as interesting random content.
Three methods for using SDXL 1.0 are presented: local installation with Automatic1111, ComfyUI, and online use.
For local use, the base model of SDXL 1.0 is required, with an optional refiner model available for enhanced capabilities.
The Hugging Face repository by Stability AI is the source for downloading the .safetensors files of the base and refiner models.
Automatic1111's GitHub repository provides a pre-built file for easy installation without the need for Python or Git.
Instructions are given for downloading, extracting, and setting up the Automatic1111 UI for SDXL 1.0.
The update.bat script updates the code to the latest version supporting SDXL, while run.bat installs dependencies and starts the server.
Users can generate images by entering prompts, with recommendations to use 1024x1024 resolution for optimal results.
ComfyUI is highlighted as an alternative with support for the SDXL refiner and better performance for some users.
ComfyUI's GitHub repository offers direct download links for the latest stable and unstable builds.
After extracting ComfyUI, users place the SDXL 1.0 models in the specified folder and run the application using batch files.
ComfyUI allows users to select models and change settings such as sampler and image resolution for customized image generation.
The refiner model in ComfyUI can be loaded to enhance image details, demonstrated with a comparison of generated images.
For users unable to run SDXL 1.0 locally, clipdrop.co is suggested as an online platform with a limited number of free generations.
A future tutorial on running SDXL on Colab is teased, addressing the need for optimized resources for this model.
The video promises further exploration of SDXL 1.0 with topics like LoRA training, Dreambooth, ControlNet, and running on low VRAM devices.