(KRITA AI) STEP-BY-STEP Using Stable Diffusion in Krita
TLDR: In this tutorial, the creator demonstrates using the Stable Diffusion plugin in Krita to enhance a sketch. Starting with a 1000x1000 pixel canvas, they utilize the vxp turbo checkpoint and control nets to refine details like hands and hair. The video covers live mode adjustments, batch image generation, and upscaling techniques. Challenges like maintaining style and dealing with image degradation are addressed, culminating in a detailed and expressive final image.
Takeaways
- 🎨 Start with a small canvas size (e.g., 1000x1000 pixels) for faster generation when using Stable Diffusion in Krita.
- 🖌️ Use the 'vxp turbo checkpoint' for style consistency and include descriptive prompts for better image generation.
- 🔍 Add a control net with a default strength of 100% to retain details from the original sketch.
- ⚙️ Adjust control net strength and range to avoid unwanted stylistic effects like neon outlines.
- 🔄 Increase the batch size to generate multiple images and select the most promising one for further refinement.
- 🖱️ Use live mode to manually clean up or adjust areas of the image that need fine-tuning.
- 🔍 Employ a control net with line art to maintain the original sketch's integrity during live mode edits.
- 📈 Upscale the image carefully, using a model that complements the original style to avoid detail loss.
- 🖋️ Experiment with different control nets and strengths to achieve the desired level of detail and style in upscaled images.
- 🎭 Use style transfer with a control net to apply the style of an old piece of art to the current image, enhancing the creative process.
Q & A
What is the recommended starting canvas size when using the Stable Diffusion plugin in Krita?
-The recommended starting canvas size is 1,000 by 1,000 pixels. However, for those with slower hardware, it might be advisable to start with a smaller size like 768 x 768 pixels.
What is the purpose of using a control net in the Stable Diffusion plugin?
-A control net is used to guide the AI in generating an image that is more aligned with a specific style or to retain certain features from an existing image, such as line art or depth.
What is the 'vxp turbo checkpoint' mentioned in the script, and how is it used?
-The 'vxp turbo checkpoint' is an SDXL model selected in the Stable Diffusion plugin for its particular style; together with the user's prompt, it shapes the look of the generated image.
How does the batch size setting affect the image generation process in the Stable Diffusion plugin?
-The batch size setting determines how many images the plugin will generate at once. A higher batch size allows the user to review and select from multiple generated images to find the most suitable one.
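At script level, the same idea (a control net guiding generation, plus a batch of candidates to choose from) can be sketched with the Hugging Face diffusers library. This is an illustrative analogue, not the plugin's internals: the model IDs, prompt, file names, and parameter values are assumptions, and canny edges stand in for the plugin's line-art control layer.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

# SDXL base plus a canny ControlNet as a stand-in for the line-art control layer
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder for the turbo checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

sketch_edges = Image.open("sketch_edges.png")  # preprocessed edges of the sketch

images = pipe(
    prompt="character portrait, space, stars, sci-fi",  # list the elements in the image
    negative_prompt="blurry, low quality",
    image=sketch_edges,
    controlnet_conditioning_scale=0.7,  # control net strength; lower it if outlines turn neon
    num_images_per_prompt=4,            # batch size: generate several candidates, keep the best
    num_inference_steps=30,
).images

for i, img in enumerate(images):
    img.save(f"candidate_{i}.png")
```

Lowering `controlnet_conditioning_scale` corresponds roughly to reducing the control net strength in the plugin when the result starts to look over-constrained.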
What is the 'live mode' in the Stable Diffusion plugin, and how is it used in the script?
-The 'live mode' allows users to make real-time adjustments to the image by painting or drawing directly onto the canvas, which the AI then attempts to refine based on the control net and other settings.
Why might the hands in the generated image not turn out as expected, and how can this be addressed?
-The hands might not turn out as expected due to the complexity of the detail. To address this, the user can adjust the control net strength or use the live mode to manually refine the hands.
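In the video this kind of cleanup happens interactively in live mode, but a rough script-level analogue is masked regeneration (inpainting) over just the hands. The model ID, file names, and values below are assumptions for illustration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLInpaintPipeline

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = Image.open("candidate_2.png").convert("RGB")
mask = Image.open("hands_mask.png").convert("L")  # white marks the region to regenerate

fixed = pipe(
    prompt="detailed hands, sci-fi character",
    image=image,
    mask_image=mask,
    strength=0.6,  # lower values keep more of the original strokes
    num_inference_steps=30,
).images[0]
fixed.save("hands_fixed.png")
```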
What is the significance of the 'strength' setting in the control net, and how does it affect the image?
-The 'strength' setting in the control net determines the influence of the control net on the image generation. A higher strength means the control net has a more significant impact on retaining the features of the reference image, while a lower strength allows for more variation.
How can the upscale model be used effectively in the image generation process?
-The upscale model can be used to increase the resolution of the generated image. However, to maintain detail and avoid a blurry outcome, it's recommended to use a control net with a suitable strength and possibly adjust the denoising strength.
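One way to approximate "upscale, then resample with a control net" outside the plugin is to resize the image and run SDXL img2img with the same edge control. Again, the model IDs, file names, and values are illustrative assumptions.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLControlNetImg2ImgPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

base = Image.open("hands_fixed.png").convert("RGB")
big = base.resize((base.width * 2, base.height * 2), Image.LANCZOS)  # naive 2x upscale
edges = Image.open("sketch_edges.png").resize(big.size)

result = pipe(
    prompt="character portrait, space, stars, sci-fi, detailed",
    image=big,                           # upscaled image to resample
    control_image=edges,                 # keeps the line work from washing out
    strength=0.35,                       # denoising strength: higher adds detail but drifts
    controlnet_conditioning_scale=0.8,
    num_inference_steps=30,
).images[0]
result.save("upscaled_resampled.png")
```

Raising `strength` and `controlnet_conditioning_scale` together is the script-level counterpart of the washed-out-image fix discussed below.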
What is the 'style control net' and how does it influence the generated image?
-The 'style control net' is used to transfer the style of a reference image onto the generated image. It influences the overall look and feel of the image to match the style of the reference, such as making it more anime-like or matching a specific artwork's style.
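A rough analogue of using an older artwork as a style reference is an IP-Adapter on top of SDXL img2img; the adapter repo, weight name, scale, and file names below are assumptions, not the plugin's actual mechanism.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference style is applied

current = Image.open("upscaled_resampled.png").convert("RGB")
style_ref = Image.open("old_artwork.png").convert("RGB")  # the older piece of art

styled = pipe(
    prompt="character portrait, space, stars, sci-fi",
    image=current,
    ip_adapter_image=style_ref,  # style reference image
    strength=0.4,                # keep the composition, let the style bleed in
    num_inference_steps=30,
).images[0]
styled.save("styled.png")
```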
Why might the AI-generated image appear washed out, and how can this be resolved?
-An AI-generated image might appear washed out due to the resampling process, especially with SDXL models. To resolve this, the user can increase the control net strength and adjust the denoising strength to retain more details and avoid a blurry look.
Outlines
🎨 Introduction to Creating Art with Stable Diffusion
The speaker begins by introducing the process of creating art using the Stable Diffusion plugin for Krita. They start with a blank canvas, emphasizing the importance of beginning with a smaller canvas size for faster generation, especially for those with slower hardware. The speaker then discusses different starting points, such as live painting or generating something new. They decide to import a recent sketch to experiment with the plugin, using the vxp turbo checkpoint for style. The prompt is kept simple, listing the elements in the image, and additional elements like 'space' and 'stars' are added to maintain a Sci-Fi theme. The speaker also mentions the settings and negative prompts used to influence the style of the generated image.
🖌️ Refining Art with Control Nets and Live Mode
The speaker experiments with a control net to enhance the sketch, adjusting the strength and range to improve the details, particularly the hands. They encounter an issue with the neon effect when the control net strength is too high and adjust it accordingly. The speaker then generates multiple images to select the best outcome. They discuss the use of live mode to clean up areas like the hands and hair, creating new layers and using control nets to refine details. The speaker also talks about the challenges of upscaling the image while maintaining details and the use of different models for upscaling.
🔍 Advanced Techniques with Control Nets and Style Transfer
The speaker explores advanced techniques by using a control net to resample the image at a higher strength, avoiding blurry results. They experiment with different control net strengths and denoising levels to improve detail. The speaker also attempts to transfer the style from an old piece of art onto the current image, using the style control net. They encounter issues with the style layer referencing the wrong image and resolve it by duplicating and renaming the layer. The speaker refines the image further, focusing on the face, and uses opacity and layer adjustments to blend the details seamlessly.
🖋️ Final Touches and Conclusion
In the final steps, the speaker adds resolution to the face, cleans up details by hand, and considers adding a filter layer for color adjustments and an unsharp mask for sharpening the image. They reflect on the process, noting that the AI was primarily used for cleaning up the arms. The speaker concludes by suggesting the possibility of using the AI-generated image as a reference to complete the original sketch manually. They also invite viewers to check out more videos for further information on the generative AI plugin for Krita.
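The unsharp-mask sharpening mentioned as a final touch can also be approximated outside Krita with Pillow's built-in filter; the radius and percent values are only a starting point, and the file names are placeholders.

```python
from PIL import Image, ImageFilter

img = Image.open("styled.png")
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
sharpened.save("final_sharpened.png")
```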
Keywords
💡Stable Diffusion
💡Krita
💡vxp turbo checkpoint
💡Prompt
💡Control Net
💡Live Mode
💡Batch Size
💡Upscale
💡Denoising
💡Style Transfer
Highlights
Introduction to using the Stable Diffusion plugin in Krita for image creation.
Advising to start with a small canvas size for faster generation and iterations.
Explanation of the image scaling process in Krita to adjust canvas size.
Discussion on starting with a live mode or generating a new image from scratch.
Utilizing a vxp turbo checkpoint for a specific style in the Stable Diffusion plugin.
Crafting an effective prompt with essential elements of the image.
Incorporating style elements into the prompt to influence the generated image.
Adding a control net for line art to refine the image generation process.
Adjusting control net strength and range for better image results.
Experimenting with different control net settings to avoid unwanted neon effects.
Using batch generation to produce multiple images and select the best outcome.
Applying the generated image and addressing areas needing manual cleanup.
Switching to live mode in Krita for detailed manual adjustments.
Demonstrating the use of control net layers for specific parts of the image.
Addressing issues with line art and control net strength in live mode.
Upscaling the image using the Stable Diffusion plugin with considerations on model choice.
Refining upscaled images with control nets to maintain details.
Experimenting with different control nets to avoid blurry results in upscaled images.
Integrating an old piece of art to influence the style of the generated image.
Final touches and cleanup in Krita to achieve the desired look.
Conclusion and summary of the workflow using Stable Diffusion in Krita.