Unlock LoRA Mastery: Easy LoRA Model Creation with ComfyUI - Step-by-Step Tutorial!

17 Mar 2024 · 14:41

TLDR: In this video, the creator introduces LoRA, a technique for training large models more efficiently by building on their previous knowledge. The process is detailed step by step: creating a dataset, installing the necessary nodes, associating descriptions with images, and running the training. The video also emphasizes the importance of high-quality datasets and correct tagging for effective model training. Finally, the creator demonstrates testing the trained LoRA model, showing its impact despite limited training data and epochs.


  • Introduction to LoRA (Low-Rank Adaptation) as a training technique for large models to learn new tasks efficiently and with less memory.
  • LoRA builds upon the model's previous knowledge, adding only new information, which helps in better learning and retention of existing data.
  • Importance of managing the model's attention during training, with LoRA focusing on important details and making memory usage more efficient.
  • A new node has been released that allows direct LoRA training from ComfyUI, eliminating the need for alternative interfaces.
  • Creating a dataset is crucial; it should be high quality and varied, effectively communicating what the model needs to learn.
  • Folder structure matters for organizing the dataset, with a specific naming convention for subfolders (e.g., 'number_description').
  • Installation of the necessary nodes for image captioning and LoRA training within ComfyUI, with the option to use custom forks for additional features.
  • Workflow divided into three parts: associating descriptions with images, performing the actual training, and testing the new LoRA model.
  • Use of GPT models for tagging images, with the option to choose different models for better tagging accuracy.
  • Detailed settings for the LoRA training node, including model version, network type, precision, and training parameters.
  • Training involves adjusting parameters like batch size, epochs, learning rate, and regularization to optimize model performance.
  • Successful training and testing of the LoRA model, even with limited data and training steps, showing a significant impact on the model's output.

Q & A

  • What does LoRA stand for and what is its purpose?

    -LoRA stands for Low-Rank Adaptation, and it is a training technique used to teach large models new things faster and with less memory by retaining past learnings and only adding new parts for efficient learning.
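The core idea can be sketched in a few lines. This is an illustrative NumPy toy, not the training node's implementation; the dimensions, rank, and alpha values are made up for the example:

```python
import numpy as np

# Toy sketch of the low-rank update LoRA applies to a frozen weight
# matrix W: instead of learning a full d_out x d_in update, it learns
# two small factors B (d_out x r) and A (r x d_in).
d_out, d_in, rank, alpha = 64, 128, 4, 4.0

W = np.random.randn(d_out, d_in)        # frozen pretrained weights
A = np.random.randn(rank, d_in) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))             # trainable up-projection, starts at zero

# Effective weights: the original knowledge is retained untouched, and
# only the low-rank delta is added, scaled by alpha / rank.
W_eff = W + (alpha / rank) * (B @ A)

# Parameter savings: full update vs. low-rank factors.
full_params = d_out * d_in          # 8192
lora_params = rank * (d_out + d_in)  # 768
print(full_params, lora_params)
```

Because B starts at zero, the model's output is initially unchanged; training only ever adds the small delta on top of the frozen weights, which is why past learnings are retained.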

  • How does LoRA help in managing a model's attention during learning?

    -LoRA intelligently manages the model's attention by focusing it on important details during learning, which helps in more efficient and targeted training.

  • What is the significance of creating a high-quality dataset for LoRA training?

    -A high-quality dataset is crucial for LoRA training as it ensures the model can clearly understand and imitate what it should learn. Poor quality or inconsistent data can compromise the model's training effectiveness.

  • What is the recommended approach for organizing the folder structure for LoRA training?

    -The recommended folder structure involves creating a general folder for the style or character being trained, with one or more subfolders following a specific naming format (number underscore description) for categorizing the data.
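A hypothetical example of that layout (the folder names and the repeat count of 10 are assumptions, not taken from the video; in kohya-style trainers the leading number is commonly read as a per-image repeat count):

```shell
# General folder for the style, with one number_description subfolder.
mkdir -p dataset/manga_style/10_manga_style

# Each image gets a matching .txt caption file alongside it, e.g.:
#   dataset/manga_style/10_manga_style/image001.png
#   dataset/manga_style/10_manga_style/image001.txt
ls dataset/manga_style
```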

  • How does the 'prefix' field in the LoRA caption save node function?

    -The 'prefix' field serves as a keyword that is used to activate the LoRA model, making it easier to utilize the model for specific tasks.
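A minimal sketch of how such a prefix might be prepended to the generated tags before the caption is saved; the trigger word "mangastyle" and the tags are invented for this example:

```python
# Hypothetical illustration of the 'prefix' field: the trigger word is
# placed first, ahead of the auto-generated tags.
prefix = "mangastyle"  # assumed trigger word, not from the video
tags = ["1girl", "monochrome", "greyscale"]

caption = ", ".join([prefix] + tags)
print(caption)  # mangastyle, 1girl, monochrome, greyscale
```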

  • What are some of the key parameters to consider when setting up LoRA training in ComfyUI?

    -Key parameters include the checkpoint name, v2 flag, network module, precision, save precision, network dimension, network alpha, training resolution, data path, batch size, max train epochs, save every n epochs, keep tokens, min SNR gamma, learning rate values, learning rate scheduler, LR restart cycles, optimizer type, output name, and output dir.

  • How does the 'Network Dimension' parameter affect LoRA training?

    -The 'Network Dimension' parameter, or 'rank', influences the model's expressive capacity and memory requirements by determining the number of simultaneous interactions the model can consider during data processing.

  • What is the role of 'Network Alpha' in LoRA training?

    -The 'Network Alpha' parameter sets the alpha value to prevent underflow and ensure stable training, which is crucial for numerical stability during optimization.
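One way to picture the relationship between alpha and rank (this is a sketch of the common kohya-style convention, with illustrative numbers, not the node's exact code): the low-rank delta is multiplied by alpha / rank, so an alpha smaller than the rank shrinks the update, which helps keep small values from underflowing in low-precision training.

```python
# Effective scaling applied to the LoRA delta weights.
def lora_scale(network_alpha: float, network_dim: int) -> float:
    return network_alpha / network_dim

print(lora_scale(16, 32))  # 0.5 -> updates applied at half strength
print(lora_scale(32, 32))  # 1.0 -> updates applied at full strength
```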

  • How does the 'max train epochs' parameter impact the training process?

    -The 'max train epochs' parameter sets the number of epochs for training, balancing training duration against model performance. More epochs typically improve results but require more time and computational resources.
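A back-of-the-envelope sketch of how epochs translate into optimizer steps, assuming a kohya-style setup where each image is repeated a number of times per epoch; all the numbers here are invented for illustration:

```python
import math

def total_steps(num_images: int, repeats: int, batch_size: int, epochs: int) -> int:
    # One epoch sees every image `repeats` times, grouped into batches.
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return steps_per_epoch * epochs

print(total_steps(num_images=20, repeats=10, batch_size=2, epochs=5))  # 500
```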

  • What is the purpose of the 'keep tokens' parameter in LoRA training?

    -The 'keep tokens' parameter controls tag shuffling during training, exempting the first tags from shuffling so the model keeps a stable focus on specific aspects of the training data.
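The mechanism can be sketched in a few lines (an illustrative toy, not the trainer's code; the tag names are invented): the first N tags, typically the trigger word, stay fixed while the rest are reshuffled.

```python
import random

def shuffle_caption(tags, keep_tokens=1, rng=random):
    # The first `keep_tokens` tags are pinned; the remainder is shuffled.
    kept, rest = tags[:keep_tokens], tags[keep_tokens:]
    rng.shuffle(rest)
    return kept + rest

tags = ["mangastyle", "1girl", "monochrome", "greyscale"]
shuffled = shuffle_caption(tags, keep_tokens=1)
print(shuffled[0])  # always "mangastyle"
```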

  • How can the training progress of LoRA be visualized?

    -The training progress can be visualized using TensorBoard, an interface integrated into the LoRA training node that provides a visual representation of the training process.



Introduction to LoRA and its Benefits

The speaker, Nuked, introduces LoRA (Low-Rank Adaptation), a training technique for large models. LoRA allows models to learn new things faster and with less memory by retaining past knowledge and only adding new information. This method improves efficiency, prevents forgetting previously learned information, manages the model's attention effectively, and optimizes memory usage. The speaker expresses a personal interest in understanding model creation and mentions a new node that simplifies the process.


Preparing the Dataset and Folder Structure

The speaker discusses the importance of creating a high-quality dataset for training, using manga-style images as an example. The images are organized in a specific folder structure, with subfolders named in a particular format (number_description). Although the description part of the folder name is ignored during LoRA training, the dataset itself must clearly communicate what the model should learn. The speaker emphasizes the need for a varied, high-quality dataset that immediately conveys the desired learning material to the model.


Installation and Setup of Custom Nodes

The speaker guides viewers through the installation of the nodes needed for image captioning and LoRA training. They mention using their own modified versions of these nodes and provide instructions on how to download and set them up. The speaker also explains the importance of installing dependencies for the nodes to function correctly. They then outline the three parts of the workflow: associating descriptions with images, the actual training, and testing the new LoRA model.

๐Ÿ” Tagging Images and Training Configuration

The speaker describes the process of tagging images using a GPT model for better tagging than traditional methods. They detail the workflow for associating tags with images and emphasize the importance of checking for consistency and accuracy in the tags to avoid compromising model training. The speaker then explains the configuration settings for the training node, including model version, network type, precision, and various parameters that influence training dynamics, such as batch size, training epochs, and learning rate.

Launching the Training and Evaluating Results

The speaker proceeds to launch the training process, detailing the various settings and their impact on the training. They discuss the importance of shuffling tags, learning rate strategies, and regularization to prevent overfitting. After training, the speaker tests the new LoRA model, comparing results with and without it to demonstrate its impact. Despite training on a small dataset and for a short duration, the LoRA model shows a significant improvement. The speaker concludes by thanking their supporters and encourages viewers to like, subscribe, and ask questions for further assistance.



Low Rank Adaptation (LoRA)

Low Rank Adaptation (LoRA) is a training technique that allows large models to learn new things more efficiently by retaining previously learned information and only adding new parts. This concept is central to the video, as it explains how LoRA enables models to learn new material without starting from scratch each time. The video uses LoRA to improve a model's ability to recognize and imitate manga-style images.

Memory Efficiency

Memory efficiency in the context of the video refers to LoRA's ability to make better use of the computer's memory, allowing the model to learn new things with fewer resources. This matters because it reduces computational cost and allows for more sustainable training processes.

Data Set

A data set, as discussed in the video, is a collection of data used for training the model. In this case, the data set consists of a series of manga style images. The quality and clarity of the data set are crucial for the model to learn effectively and imitate the desired style accurately.


Training

Training, in the context of the video, refers to teaching the model via LoRA by exposing it to the data set. This involves adjusting various parameters and settings to optimize the learning process and ensure the model can effectively learn and adapt to new information.

ComfyUI

ComfyUI is the interface used in the video for training the LoRA model. It is a platform that allows users to interact with and control the training process, including the installation of necessary nodes and the adjustment of training parameters.

GPT Tagger

GPT Tagger is a model used in the video for tagging images with descriptions. This process is essential for providing the model with the necessary context and information to understand what each image represents and to learn effectively from the data set.

Model Performance

Model performance refers to how well the model learns and applies the training it has received. In the context of the video, this is gauged by the model's ability to accurately imitate and generate images in the manga style after being trained with LoRA.


TensorBoard

TensorBoard is an interface used for visualizing the training progress of models. It provides insight into how the model is learning and allows adjustments to be made during training based on the visualized data.

Learning Rate

The learning rate is a parameter that determines how fast the model learns during the training process. It is crucial for balancing the speed of learning with the model's ability to converge on the correct solution without taking too long or not learning at all.
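The video's training settings also mention a learning rate scheduler with restart cycles. As one hypothetical illustration of how such a schedule can behave (a cosine schedule with warm restarts, with invented numbers, not the node's exact implementation), the rate decays within each cycle and jumps back to the base value at every restart:

```python
import math

def cosine_with_restarts(step, total_steps, cycles, base_lr, min_lr=0.0):
    cycle_len = total_steps / cycles
    pos = (step % cycle_len) / cycle_len  # progress within the current cycle
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * pos))

print(cosine_with_restarts(0, 1000, 2, 1e-4))    # cycle start: full LR
print(cosine_with_restarts(500, 1000, 2, 1e-4))  # restart: full LR again
```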


Epochs

Epochs refer to the number of times the entire data set is passed through the model during training. The number of epochs affects both training duration and model performance: more epochs typically lead to better learning but also higher computational cost.


Introduction to LoRA, a training technique for teaching large models new things faster and with less memory.

LoRA stands for Low-Rank Adaptation, allowing models to retain past learnings and add new ones efficiently.

The technique helps models not forget previously learned information and manages attention during learning.

LoRA also improves computer memory usage, enabling models to learn with fewer resources.

A new node has been released that allows direct LoRA training from ComfyUI, saving users from installing alternative interfaces.

Creating a high-quality dataset is crucial for LoRA training, and it should clearly convey what the model should imitate.

The folder structure for LoRA training involves a general folder for the style or character, with subfolders following a specific naming convention.

The installation of the necessary nodes for image captioning and LoRA training is detailed, with custom nodes and dependencies explained.

The workflow for LoRA training is divided into three parts: associating descriptions with images, the actual training, and testing the new LoRA.

The use of GPT models for tagging images in the dataset, with a preference for the model known as JoyTag.

A detailed explanation of the settings and parameters for the LoRA training node in ComfyUI.

The importance of checking the text files for consistent tags to ensure effective training.

Executing the training process, with a focus on the settings and options available in the LoRA training node.

Testing the newly trained LoRA model, emphasizing the impact of training even on a small dataset with few epochs.

The video creator expresses gratitude to supporters and encourages viewers to like, subscribe, and ask questions.