【Stable Diffusion】How to Train a LoRA Model on Google Colab, Using Kohya LoRA Dreambooth【Generative AI】

Shinano Matsumoto・晴れ時々ガジェット
22 Feb 2023 · 13:41

TLDR: This tutorial outlines the process of creating a Lora model on Google Colab, specifically with the Kohya LoRA Dreambooth tool. It details the steps from model selection through customization and training on a varied set of images to refine the model's output, emphasizing unique prompts and diverse image sets to improve learning. It also provides troubleshooting tips, explains how to use the trained model to generate images, and highlights the iterative nature of the process for achieving the desired results.


  • 📚 The tutorial is about creating a Lora model using Google Colab and the KOHYA SD script.
  • 🔧 The base Lora model file is 4.8 MB, but it is increased to 8 MB for this tutorial.
  • 🛠️ The process involves using the KOHYA tool, which simplifies the model training process.
  • 🔗 A link to the Dreambooth notebook is provided in the video description for easy access.
  • 💡 The training process requires the user to input a unique prompt for the model.
  • 🖼️ Users should provide diverse images for the model to learn from, avoiding repetition in poses or outfits.
  • 📌 It's important to maintain high accuracy in tagging and avoid unrelated files to ensure the model learns correctly.
  • 📈 The tutorial explains how to adjust settings such as batch size and learning steps for optimal training.
  • 🔄 The model training involves epochs and steps, with clear instructions on how to calculate them.
  • 📋 The script outlines the importance of a clear and understandable prompt for effective model training.
  • ⏱️ The training and testing process is time-efficient, with sessions typically lasting less than 10 minutes.

Q & A

  • What is the size of the original Lora model mentioned in the script?

    -The original Lora model mentioned in the script is 4.8 MB in size.

  • What is KOHYA in the context of the script?

    -KOHYA refers to the developer behind a set of community-made training scripts that are used in the process of training and applying the Lora model.

  • How long does the process of executing the Lora study typically take?

    -The process usually takes around four minutes to execute.

  • What is the purpose of mounting Google Drive in the process?

    -Mounting Google Drive allows the user to access and use the drive for storing and retrieving necessary files for the Lora model application.

  • What type of models can be chosen for stable diffusion in the script?

    -The user can choose between animated models and live-action models for stable diffusion.

  • What is the significance of the prompt when training the model?

    -The prompt is significant as it guides the model on what to learn and focus on, helping to shape the output of the model.

  • How many images are recommended for training the model?

    -The number of images can vary, but the user has found 8 images to work well for their purposes.

  • Why is it important to use images of the subject in different poses, clothes, and moods?

    -Using diverse images helps the model learn and understand the subject matter better, avoiding repetition and ensuring a broader understanding.

  • What happens if the same word is used for learning as is already present in the model?

    -Using a word the model already knows can cause the original concept to be overwritten and stop working properly, so unique words should be used.

  • What is the role of the batch size in the learning process?

    -The batch size determines how many images are processed at the same time in each training step. A larger batch size suits more powerful hardware, while a smaller size suits weaker systems.

  • How is the number of learning steps calculated?

    -The number of learning steps is calculated by multiplying the number of epochs by the number of training images. For example, 20 epochs multiplied by 8 images results in 160 learning steps.
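The step arithmetic above can be sketched as a small helper. The batch-size and repeat parameters are illustrative assumptions; the video's own example is 20 epochs × 8 images = 160 steps:

```python
import math

def total_steps(epochs, num_images, batch_size=1, repeats=1):
    """Estimate total training steps for a LoRA run.

    steps per epoch = ceil(images * repeats / batch size);
    total = epochs * steps per epoch.
    """
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return epochs * steps_per_epoch

print(total_steps(20, 8))  # the video's example: 160
```

With a batch size of 2, the same 8 images take half as many steps per epoch, which is why larger batches finish faster on capable hardware.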



🤖 Introduction to Lora Training and Model Sizes

This paragraph introduces the Lora training process and the model sizes involved. It begins with the original 4.8 MB model and the move to an 8 MB model for the training process. The speaker briefly describes the Lora study and the tool's basic functionality, including the use of a script provided by KOHYA. The speaker also touches on running the tool in Colab and the potential for more extensive use of it in the future, and provides a link to Dreambooth in the video description for further exploration.


📸 Image Tagging and Model Selection for Training

The second paragraph delves into the specifics of image tagging and model selection for the training process. Provided images are tagged automatically, and the speaker stresses including diverse images to avoid repetition in poses, clothing, and mood. The speaker advises against using trigger words the base model already knows, since they can overwrite existing concepts and derail learning. The paragraph also covers the process of selecting models for training, including the optional VAE and the Stable Diffusion model, and the need for a unique prompt for each model to maintain accuracy and avoid confusion.


🚀 Training Execution and Testing

In the final paragraph, the focus shifts to the execution of the training process and subsequent testing. The speaker explains the steps involved in starting the learning process, including the preparation of the model and the setting of various parameters such as batch size and learning steps. The paragraph also discusses the importance of patience, as the learning process can be time-consuming. After training, the speaker describes how to conduct a test to ensure the model has learned effectively, emphasizing the need for a diverse set of prompts and the potential for further refinement through additional training epochs.
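As a rough sketch of what the Colab notebook runs under the hood, the helper below assembles a typical kohya-ss sd-scripts training invocation. The flag names follow the sd-scripts README, but the paths and values here are placeholder assumptions (batch size 1 and 20 epochs, matching the video's example):

```python
def build_train_command(batch_size=1, epochs=20):
    """Assemble a kohya-ss sd-scripts LoRA training command (sketch).

    Paths are placeholders; replace them with your own base model,
    dataset, and output locations.
    """
    return [
        "accelerate", "launch", "train_network.py",
        "--pretrained_model_name_or_path=model.safetensors",
        "--train_data_dir=/content/drive/MyDrive/train_data",
        "--output_dir=/content/drive/MyDrive/lora_output",
        "--network_module=networks.lora",
        f"--train_batch_size={batch_size}",
        f"--max_train_epochs={epochs}",
        "--learning_rate=1e-4",
        "--save_model_as=safetensors",
    ]

print(" ".join(build_train_command()))
```

The notebook hides these details behind form fields, but the batch size and epoch count you enter ultimately flow into flags like these.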




💡Lora

Lora (Low-Rank Adaptation) is a method used in machine learning and artificial intelligence for fine-tuning pre-trained models. In the context of the video, Lora is used to customize a Stable Diffusion model by training it with specific data sets. This process allows the model to learn and generate images that align with the unique characteristics and styles defined by the user's input data. The video presents Lora as a crucial part of the model training process, emphasizing its role in creating a tailored AI model.

💡Google Colab

Google Colab is a cloud-based platform that allows users to write and execute Python code in a collaborative environment. It is particularly popular among data scientists and machine learning practitioners for its ease of use and the ability to run code on GPU and TPU hardware without the need for local setup. In the video, Google Colab is the platform where the user is guided to create and train the Lora model, highlighting its utility for AI and machine learning tasks.
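Because a Colab session's local filesystem is wiped when the runtime ends, the tutorial mounts Google Drive for persistent storage. The mount call below is the standard Colab API; the helper function and the `lora_output` folder name are illustrative assumptions, not part of the video:

```python
import os

# In a Colab notebook, mount Google Drive first (standard Colab API):
#   from google.colab import drive
#   drive.mount("/content/drive")

def lora_output_dir(drive_root="/content/drive",
                    subfolder="MyDrive/lora_output"):
    """Build an output path on the mounted Drive.

    'lora_output' is a hypothetical folder name; use whatever
    layout your own Drive has.
    """
    return os.path.join(drive_root, subfolder)

print(lora_output_dir())  # /content/drive/MyDrive/lora_output
```

Saving trained models under `/content/drive/MyDrive/...` keeps them after the Colab runtime is recycled.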


💡Kohya

Kohya, as mentioned in the video, is the developer behind a set of community scripts for the Stable Diffusion model. This script is a set of instructions that simplifies the process of training and using AI models, making it more accessible to users. The video refers to Kohya's script as a tool that aids in the model training process, indicating its importance in the overall workflow.


💡Dreambooth

Dreambooth is a term used in the context of AI-generated art, referring to a method where an AI model is trained on a specific set of images to produce new images in a similar style. In the video, the user is directed to a link for a Dreambooth setup, which suggests that the video's focus is on teaching viewers how to create personalized AI art models that can generate images based on their preferences and input.

💡Model Size

Model size refers to the amount of data and parameters that a machine learning model contains. A larger model size typically means more complex functionality and potentially better performance, but it also requires more computational resources. In the video, the transition from a 4.8 MB model to an 8 MB model is mentioned, indicating an upgrade in the model's capacity for learning and generating images.


💡Token

In the context of this video, a token is a unique string of characters that is used to authenticate and gain access to certain services or platforms, particularly when using cloud-based tools like Google Colab. The video instructs the user to paste their own token for the Stable Diffusion model, which is essential for utilizing the model's features and functionalities.

💡Stable Diffusion

Stable Diffusion is a type of generative AI model that is capable of producing high-quality images from textual descriptions. It is based on diffusion models, which are a class of generative models that learn to create data by gradually reversing a diffusion process that turns data into noise. In the video, Stable Diffusion is the core technology being customized and trained using Lora, with the goal of generating images that match user-provided prompts more closely.


💡VAE

VAE stands for Variational Autoencoder, which is a type of neural network used for efficient learning and compression of data. In the context of the video, VAE might be used in conjunction with the Stable Diffusion model to enhance the quality of the generated images or to improve the training process. The video mentions the option to include a VAE during the setup process, indicating its role as an additional tool for model optimization.


💡Prompt

A prompt, in the context of AI and machine learning, is a piece of text or data that serves as input to guide the model's output. In the video, the user is encouraged to input unique and descriptive prompts to train the Stable Diffusion model. These prompts help the model understand the desired characteristics and style of the images it should generate, such as 'POCHI' for a dog or 'TAMA' for a cat, enhancing the model's ability to produce specific types of content.
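Kohya's Dreambooth notebooks typically encode the instance prompt and a per-image repeat count in the dataset folder name. The helper below sketches that convention, reusing the video's 'POCHI' example; the exact `<repeats>_<token> <class>` format is an assumption based on the common kohya-ss layout:

```python
def dataset_dirname(repeats, instance_token, class_name):
    """Kohya-style dataset folder name: '<repeats>_<token> <class>'.

    e.g. repeating each image 20 times for a dog tagged 'pochi'
    gives the folder name '20_pochi dog'.
    """
    return f"{repeats}_{instance_token} {class_name}"

print(dataset_dirname(20, "pochi", "dog"))  # 20_pochi dog
```

Choosing an instance token the base model does not already know (as the video advises) keeps the new concept from colliding with an existing one.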

💡Training Data

Training data consists of the sample data sets used to teach a machine learning model how to make predictions or generate outputs. In the video, the user is instructed to prepare a diverse set of images representing the subject they want the AI model to learn, ensuring that the images cover various poses, clothing, and moods. This diverse training data is crucial for the model to learn and generate new images accurately and creatively.


💡Epochs

Epochs in machine learning refer to the number of times the entire training data set is used to train the model. Each epoch represents a full pass over the data, and multiple epochs are often used to improve the model's learning and accuracy. In the video, the user is advised to run a certain number of epochs during the training process, which is essential for the model to fully learn from the provided data and generate the desired outputs.


Highlights

The tutorial explains how to create a Lora model using Google Colab, specifically with the Kohya LoRA Dreambooth.

The Lora model size is 4.8 MB, but it is increased to 8 MB for the process.

Kohya has created a simplified SD script for users to utilize in the Lora study process.

The tutorial provides a link to Dreambooth, which can be copied to Google Drive for use.

Users need to paste their own token for certain steps in the process.

The model selection includes options for animated or live-action models, and the possibility to use Stable Diffusion 2.0.

Users can customize the training data destination and the prompt for their study.

It is recommended to use unique and descriptive names for the study and to avoid common words for better results.

The images for training should be diverse in pose, clothing, and mood to improve learning accuracy.

The tutorial advises against including background elements like the Tokyo Tower to prevent them from being learned as part of the subject.

A warning message may appear during execution, but it is nothing serious to worry about.

The process includes automatically tagging the images provided for better learning.

The tutorial explains how to adjust the settings for batch size and learning steps based on the user's capabilities.

The number of epochs and the extension of the saved model are also customizable.

A test is conducted after the learning process to verify its effectiveness.

The tutorial mentions the possibility of additional training epochs for further refinement of the model.

The final output is compared to the initial model to show the improvements made through the Lora learning process.