Generate THE BEST AI Anime Images For FREE! (MeinaMix, CetusMix & More - Stable Diffusion)

Preston Ch.
9 Apr 2023 · 11:21

TLDR: This video tutorial shows viewers how to set up AI image generation with the Stable Diffusion program for free, with adjustments for slow PCs or graphics cards with limited VRAM. It covers downloading the necessary software, selecting models and VAE files to style the images, and configuring settings for optimal image quality. For those unable to generate images locally, the guide suggests an online alternative (named in the video) and shares tips for writing compelling prompts and generating high-quality AI images.


  • 🖼️ The video provides a guide on setting up AI image generation using the stable diffusion program for free.
  • 💻 Users with slow PCs or limited VRAM can still generate images through workarounds or an online alternative mentioned in the video.
  • 📋 Requirements include a stable diffusion program, Python, and Git, with specific instructions given for installation and setup.
  • 🔗 Links for downloading necessary software and models are provided in the video description.
  • 🎨 The AI generator uses models or checkpoints to create images in specific styles, and users can choose based on preference.
  • 🖌️ Users can download models and VAE (color correction) files to customize their image generation process.
  • 📂 A dedicated folder for the AI generator is recommended for organization and ease of access.
  • 🔄 An automatic update feature can be enabled by editing the webui-user.bat file to include 'git pull'.
  • 🛠️ Settings adjustments for sampling method, steps, resolution, and CFG scale can enhance image quality and generation speed.
  • 🔍 Upscaling images with tools like R-ESRGAN can improve clarity and reduce blurriness.
  • 📈 The video encourages experimentation with settings and prompts to achieve desired image outcomes.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is setting up a program called 'Stable Diffusion' to generate AI images from text or other images, with a step-by-step guide on how to do this for free.

  • What are the system requirements for running 'stable diffusion'?

    -Running 'Stable Diffusion' requires a graphics card with a reasonable amount of VRAM (preferably more than 4GB) and roughly 20GB of free disk space for the program and its models.

  • What are the first two dependencies that need to be installed for 'stable diffusion'?

    -The first two dependencies that need to be installed are Python and Git; during the Python installation, the 'Add Python 3.10 to PATH' option must be enabled.
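A quick sanity check after installing the two dependencies (this script is a sketch, not from the video) confirms the Python version and that Git is reachable on the PATH:

```python
import shutil
import sys

# The video recommends Python 3.10; anything older than 3.8 is unlikely to work.
print("Python", sys.version.split()[0])
assert sys.version_info >= (3, 8), "Python too old for the web UI"

# shutil.which returns None if git is not on the PATH.
git_path = shutil.which("git")
print("Git found at:", git_path if git_path else "NOT FOUND - install Git first")
```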

  • What is the workaround for users with slow or low VRAM graphics cards?

    -The workaround for users with slow or low-VRAM graphics cards is to use a website (named in the video) that generates high-quality AI images without needing a powerful PC.

  • How does one obtain the AI models for 'stable diffusion'?

    -The AI models for 'Stable Diffusion' can be obtained from a model-sharing website (named in the video), where users can choose from a variety of models, or checkpoints, that generate images in specific styles.

  • What is the purpose of the VAE file in the setup process?

    -The VAE file handles color correction in the AI image generation process. The video recommends a specific VAE file that works well with most models for optimal results.

  • What does the 'git pull' command do in the webui-user.bat file?

    -The 'git pull' command makes the AI generator update itself to the most recent version every time it is opened, reducing the chances of encountering issues with the program.
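With the edit applied, the launch script would look roughly like this (a sketch based on the default webui-user.bat that ships with the AUTOMATIC1111 web UI; 'git pull' runs before the UI starts, so the program updates itself on every launch):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

git pull

call webui.bat
```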

  • How can the image quality be improved if the initial generated image is blurry?

    -If the initial generated image is blurry, it can be improved with the 'upscale' feature, choosing R-ESRGAN 4x+ or R-ESRGAN 4x+ Anime6B depending on the type of image being generated.

  • What are some of the settings that can be adjusted for better image generation?

    -Settings that can be adjusted for better image generation include the sampling method, sampling steps, width and height (resolution), the CFG scale (how strictly the image follows the prompt), and 'Hires. fix' for better quality if the user has a good graphics card.
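For readers who prefer scripting these settings, the web UI also exposes them over an HTTP API when launched with the --api flag. The payload below mirrors the options discussed; the field names follow the web UI's /sdapi/v1/txt2img endpoint, but the prompt and values are only examples:

```python
import json

# Example settings for the txt2img endpoint of the Stable Diffusion web UI.
payload = {
    "prompt": "1girl, masterpiece, best quality",   # example prompt
    "negative_prompt": "lowres, blurry",
    "sampler_name": "DPM++ 2M Karras",  # a 'DPM' sampler, as the video suggests
    "steps": 25,                        # sampling steps
    "width": 512,
    "height": 768,
    "cfg_scale": 7,                     # how strictly to follow the prompt
    "enable_hr": True,                  # 'Hires. fix'
}
print(json.dumps(payload, indent=2))
```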

  • What should a user do if they encounter additional issues during the setup or image generation process?

    -If a user encounters additional issues, they can check the comments section of the video for solutions others have shared, or they can reach out to the video creator via Discord for further assistance.

  • Are there any additional resources available for learning how to use 'stable diffusion' effectively?

    -Yes, the video creator has made a separate video that goes in-depth on how to use 'stable diffusion' effectively, including tips on creating better prompts and custom characters.



🖼️ Setting Up AI Image Generation

This paragraph introduces the process of setting up AI image generation with a program called Stable Diffusion. It explains that the video will cover the entire setup process for free and provides solutions for users with slow PCs or limited VRAM on their graphics cards. The paragraph emphasizes the importance of having a compatible PC and offers an alternative online platform (named in the video) for those who cannot meet the hardware requirements. It also mentions the program's large storage requirement and proceeds to detail the initial steps for downloading Python and Git, which are necessary for running the AI generator.


🔧 Installation and Configuration

The second paragraph delves into the technical steps required to install and configure the AI image generation tools. It guides the user through downloading Python and Git, setting up the environment variables, and creating a dedicated folder for the Stable Diffusion program. The paragraph also explains how to clone the Stable Diffusion repository from GitHub and where to find different models, known as checkpoints, on a model-sharing website (named in the video). It provides specific instructions for downloading a model and a VAE (color correction) file from Hugging Face, and concludes with the steps to set up the Stable Diffusion web UI for ease of use.
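Assuming the repository in question is the widely used AUTOMATIC1111 web UI (an assumption, but one consistent with the webui-user.bat launch script described elsewhere in the video), the clone step would be run from inside the dedicated folder:

```shell
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
```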


🎨 Customizing and Optimizing AI Image Generation

The final paragraph focuses on customizing the AI image generation process and optimizing it for different hardware configurations. It explains how to modify settings in the Stable Diffusion web UI to improve image quality and reduce blurriness using upscaling techniques. The paragraph provides tips on adjusting parameters such as the sampling method, steps, and CFG scale for better image detail and adherence to prompts. It also addresses the additional configuration needed for users with weak or AMD graphics cards and offers a way to reduce VRAM usage. The paragraph concludes with a brief overview of how to generate an image with the finished setup and how to refine the AI's output through trial and error or by using the creator's additional resources.



💡AI-generated images

AI-generated images refer to visual content created by artificial intelligence algorithms based on given inputs or prompts. In the video, the main theme revolves around teaching viewers how to set up a system to create such images using AI, specifically mentioning the creation of images from text descriptions and the use of models to produce content in various styles.


💡VRAM

VRAM, or Video RAM, is the dedicated memory used by graphics processing units (GPUs) to store image data that they process. The amount of VRAM a graphics card has is crucial for rendering high-resolution images, as it determines the maximum amount of texture data the GPU can manage at one time. In the context of the video, having 4GB or less of VRAM may limit the resolution of the AI-generated images, and the creator offers a workaround for such scenarios.
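As a concrete example of such a workaround, the AUTOMATIC1111 web UI accepts memory-saving launch flags through the COMMANDLINE_ARGS line in webui-user.bat. The --medvram and --lowvram flags are real web UI options, but which one suits a given card is a judgment call, so treat this as a sketch:

```bat
rem Reduces VRAM usage at some cost in generation speed.
rem For cards with very little memory, use --lowvram instead.
set COMMANDLINE_ARGS=--medvram
```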

💡Stable Diffusion

Stable Diffusion is a program mentioned in the video that is capable of creating images from text descriptions or transforming one image into another style. It operates on Python and requires the use of Git for downloading and managing its components. The program is central to the video's tutorial, as it is the primary tool used for generating AI images.


💡Python

Python is a high-level, interpreted programming language known for its readability and ease of use. In the context of the video, Python serves as the underlying programming language that the AI image generator, Stable Diffusion, operates on. It is essential for users to have Python installed on their system to run the Stable Diffusion program.


💡Git

Git is a distributed version control system designed to handle the tracking and management of code changes, particularly in software development. In the video, Git is used to clone and download the Stable Diffusion program and its associated files from a GitHub repository, which is a common practice for obtaining and updating open-source software.

💡Models or Checkpoints

In the context of AI-generated images, models or checkpoints refer to the large datasets of images or neural network states that define the style and quality of the generated content. These models are essential for the AI to produce images in a specific style or manner. The video emphasizes the importance of selecting the right model to achieve the desired output.

💡VAE file

VAE file, short for Variational Autoencoder file, is a component used in the AI image generation process to decode the generated image and apply color correction. A good VAE makes colors more vibrant and accurate, which is an important step in achieving high-quality AI-generated images.

💡Web UI

Web UI stands for Web User Interface, which in this context refers to the graphical interface of the Stable Diffusion program that users interact with to generate images. It provides a visual way for users to input text prompts, select models, and adjust settings for the AI image generation process.

💡Sampling method

The sampling method is the algorithm that determines how the AI denoises the image at each step of the generation process. Different sampling methods trade off speed, detail, and consistency in the resulting images. The video suggests using methods with 'DPM' in their names for optimal results.

💡CFG scale

CFG scale, or Classifier-Free Guidance scale, is a parameter in AI image generation that adjusts how closely the generated image adheres to the user's input prompts. A higher CFG scale means the AI will more strictly follow the prompts, potentially sacrificing some creativity for a more literal representation of the input.
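Mechanically, classifier-free guidance blends two noise predictions, one made with the prompt and one without, pushing the result toward the prompted direction. A toy sketch of that blend (the real sampler operates on large latent tensors, not 4-element arrays):

```python
import numpy as np

def cfg_combine(uncond_pred, cond_pred, cfg_scale):
    """Classifier-free guidance: move the denoising prediction
    toward the prompt-conditioned direction by cfg_scale."""
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

uncond = np.zeros(4)  # toy "no prompt" noise prediction
cond = np.ones(4)     # toy "with prompt" noise prediction
print(cfg_combine(uncond, cond, 1.0))  # scale 1 -> exactly the conditioned prediction
print(cfg_combine(uncond, cond, 7.0))  # higher scale -> stronger pull toward the prompt
```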


💡Upscaling

Upscaling is the process of increasing the resolution of an image while attempting to maintain or improve its quality. In the context of AI-generated images, upscaling is used to make lower-resolution images clearer and more detailed. The video mentions using specific upscaling tools such as the R-ESRGAN family, including an anime-specific variant, to enhance the quality of the output.
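R-ESRGAN upscalers are trained neural networks, but the basic operation, resizing to a higher resolution, can be sketched with a plain resampling filter. Pillow's bicubic resize here is only a stand-in for the actual model, which invents detail rather than interpolating:

```python
from PIL import Image

def upscale_4x(image: Image.Image) -> Image.Image:
    """Quadruple the resolution with bicubic resampling.
    Real ESRGAN-style upscalers use a trained network instead."""
    w, h = image.size
    return image.resize((w * 4, h * 4), Image.BICUBIC)

img = Image.new("RGB", (128, 128), "white")  # stand-in for a generated image
big = upscale_4x(img)
print(big.size)  # (512, 512)
```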


AI-generated images are showcased in the video, demonstrating the capabilities of the technology.

The video provides a step-by-step guide on setting up a free AI image generation platform, making it accessible to beginners.

A workaround is presented for users with slow PCs or limited VRAM, ensuring that they can still utilize the AI image generation tools.

The importance of downloading and installing Python and Git is emphasized, as the Stable Diffusion AI generator needs both to function.

The video explains how to download and install the Stable Diffusion program, which is built on Python and uses Git for version control.

A dedicated folder for the AI generator is recommended to keep the setup organized and efficient.

The process of cloning the Stable Diffusion repository from GitHub is detailed, a crucial step for obtaining the AI generator.

The selection and use of models, or checkpoints, are discussed, highlighting the variety of styles available for image generation.

The video introduces the concept of VAE files for color correction, which can enhance the quality of the AI-generated images.

Instructions for automatically updating the AI generator are provided to ensure users always have the latest version.

The video addresses the needs of users with weak or AMD graphics cards, offering specific configuration adjustments.

A detailed explanation of the web UI and its setup process is given, guiding users through the interface.

The importance of adjusting settings like sampling method, steps, and CFG scale for optimal image quality is discussed.

The video demonstrates how to upscale AI-generated images using specific tools to improve their clarity and detail.

The creator offers additional resources and support for users facing issues, including a separate video and Discord assistance.