* This blog post is a summary of this video.

Mastering Stable Diffusion Models Inside ComfyUI for High-Quality AI Art

Author: Scott Detweiler
Time: 2024-03-23 06:00:00

Table of Contents

- Setting Up the Basic SDXL Graph in ComfyUI
- Adding Checkpoint and Conditioning Nodes
- Connecting Positive and Negative Prompts
- Using the Refinement Model to Enhance Generated Images
- Conditioning Latent Noise Before Sampling for Unique Results
- Tips for Experimenting and Customizing Your Own Workflow
- Conclusion and Next Steps for Exploring Stable Diffusion in ComfyUI
- FAQ

Setting Up the Basic SDXL Graph in ComfyUI

The core SDXL graph that is used as a starting point and for quality assurance when working in ComfyUI begins with adding a checkpoint loader node, which loads the SDXL model being used. That model drives the core image generation, and the loader's CLIP output is connected to the SDXL CLIP conditioner nodes, which encode the prompts into the conditioning that will be sampled.

The CLIP conditioner nodes take positive and negative prompts as inputs, which are supplied by primitive nodes so the prompts can be reused easily. The prompt nodes are duplicated into two sets, one colored green for positive and one colored red for negative, which makes the graph easier to parse.

After CLIP conditioning, the advanced sampler node samples using the positive and negative conditioning, with an empty latent image providing the base noise. The latent output from the sampler is then decoded to produce the final image. The sampler is configured with 20 steps and a fixed seed for determinism during initial testing; the end step is left at its default value of 10,000.
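
For readers who want to see the wiring as data, here is a minimal sketch of this base graph in ComfyUI's JSON "API prompt" format, queued over the local HTTP API. The checkpoint filename, prompts, and seed are placeholders, and the generic CLIPTextEncode node stands in for the SDXL-specific conditioner covered in the next section; verify node class names against your own installation before relying on this.

```python
import json
import urllib.request

# Each key is a node id; inputs reference other nodes as [node_id, output_index].
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # placeholder filename
    "2": {"class_type": "CLIPTextEncode",  # positive prompt conditioning
          "inputs": {"clip": ["1", 1], "text": "a scenic mountain lake, golden hour"}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt conditioning
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",  # empty latent provides the base noise canvas
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSamplerAdvanced",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0],
                     "add_noise": "enable", "noise_seed": 42,   # fixed seed for determinism
                     "steps": 20, "cfg": 8.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "start_at_step": 0, "end_at_step": 10000,  # end step left at default
                     "return_with_leftover_noise": "disable"}},
    "6": {"class_type": "VAEDecode",  # decode the sampled latent into an image
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "sdxl_base"}},
}

# Queue the graph on a locally running ComfyUI server (default port 8188).
req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": prompt}).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```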

Adding Checkpoint and Conditioning Nodes

The first node added to the graph is a checkpoint loader, which loads the model checkpoint used for generation. The sampler node, which runs the diffusion process, is added next. The checkpoint loader's CLIP output is connected to the SDXL CLIP conditioner node, which prepares the conditioning appropriately before sampling. Conditioning uses separate positive and negative prompt primitive nodes, which allows the prompts to be reused easily across runs.
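
If you are using the stock node set, the SDXL-specific conditioner (CLIPTextEncodeSDXL) also takes size-conditioning inputs alongside the two text prompts. A sketch of that node in the same API format, with illustrative values:

```python
# Sketch of the SDXL CLIP conditioner node in API-prompt form; field names
# follow the stock CLIPTextEncodeSDXL node, values are illustrative only.
sdxl_conditioner = {
    "class_type": "CLIPTextEncodeSDXL",
    "inputs": {
        "clip": ["1", 1],                                 # CLIP output of the checkpoint loader
        "text_g": "a scenic mountain lake, golden hour",  # prompt for the larger text encoder
        "text_l": "a scenic mountain lake, golden hour",  # prompt for the smaller text encoder
        "width": 1024, "height": 1024,                    # size conditioning
        "crop_w": 0, "crop_h": 0,                         # crop conditioning
        "target_width": 1024, "target_height": 1024,
    },
}
```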

Connecting Positive and Negative Prompts

The positive and negative prompt primitive nodes are duplicated, with one set colored green for positive and one set red for negative. This color coding helps visually distinguish the prompts within the graph, and the centralized primitive nodes make it easy to reuse the same prompts across multiple parts of the graph.
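
In API-prompt form, the primitive-node pattern boils down to defining each prompt string once and referencing it from every conditioner that needs it. A small sketch, where node id "1" is the base checkpoint loader and "8" is a hypothetical refiner loader:

```python
# Mirroring the primitive-node pattern: define each prompt once, then feed it
# to every conditioner that needs it (base and refiner alike).
POSITIVE = "a scenic mountain lake, golden hour"
NEGATIVE = "blurry, low quality"

base_positive    = {"class_type": "CLIPTextEncode", "inputs": {"clip": ["1", 1], "text": POSITIVE}}
base_negative    = {"class_type": "CLIPTextEncode", "inputs": {"clip": ["1", 1], "text": NEGATIVE}}
refiner_positive = {"class_type": "CLIPTextEncode", "inputs": {"clip": ["8", 1], "text": POSITIVE}}
refiner_negative = {"class_type": "CLIPTextEncode", "inputs": {"clip": ["8", 1], "text": NEGATIVE}}
```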

Using the Refinement Model to Enhance Generated Images

The refinement model can be added to the graph to help enhance the final images. First, a checkpoint loader node is added for the refinement model, then another SDXL CLIP conditioner specifically for the refiner.

A second sampler node chains off the first sampler and continues the diffusion process starting at step 12. This lets the first sampler create a base image before the second sampler and the refinement model improve the details. The samplers are configured to share leftover noise so the refiner continues the same diffusion trajectory rather than starting fresh.
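
Here is a sketch of the two chained samplers under the step split described above: the base model handles steps 0 through 12, then the refiner finishes the remaining steps. Node ids ("8" through "10" for the refiner loader and its conditioners) and sampler settings are placeholders; the fields that matter are start_at_step, end_at_step, add_noise, and return_with_leftover_noise:

```python
# Two chained advanced samplers: the base hands its leftover noise to the refiner.
base_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
               "latent_image": ["4", 0],
               "add_noise": "enable", "noise_seed": 42,
               "steps": 20, "cfg": 8.0,
               "sampler_name": "euler", "scheduler": "normal",
               "start_at_step": 0, "end_at_step": 12,
               "return_with_leftover_noise": "enable"},   # pass the noise onward
}
refiner_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {"model": ["8", 0], "positive": ["9", 0], "negative": ["10", 0],
               "latent_image": ["5", 0],                  # latent from the base sampler
               "add_noise": "disable",                    # reuse the shared noise
               "noise_seed": 42,
               "steps": 20, "cfg": 8.0,
               "sampler_name": "euler", "scheduler": "normal",
               "start_at_step": 12, "end_at_step": 10000, # run through to the end
               "return_with_leftover_noise": "disable"},
}
```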

Comparing the base sampler output with the refined output shows noticeably clearer details in the refined version, demonstrating the value the refinement model adds.

Conditioning Latent Noise Before Sampling for Unique Results

An additional refinement model sampler can be inserted before the base sampler to 'condition' the latent noise vector.

This mixer sampler performs only 3 initial steps to initialize the noise vector before handing off to the base model sampler. This leads to more unique results than starting each run directly from the empty latent image.
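
Here is how that pre-conditioning stage might look, continuing the same placeholder graph: the mixer runs the refiner model for steps 0 through 3 on the empty latent and returns its leftover noise, and the base sampler then picks up at step 3 with noise injection disabled ("11" is assumed to be the mixer's node id):

```python
# "Mixer" sampler: the refiner model runs only the first 3 steps on the empty
# latent to condition the noise, returning leftover noise for the base model.
mixer_sampler = {
    "class_type": "KSamplerAdvanced",
    "inputs": {"model": ["8", 0], "positive": ["9", 0], "negative": ["10", 0],
               "latent_image": ["4", 0],                  # the empty latent image
               "add_noise": "enable", "noise_seed": 42,
               "steps": 20, "cfg": 8.0,
               "sampler_name": "euler", "scheduler": "normal",
               "start_at_step": 0, "end_at_step": 3,
               "return_with_leftover_noise": "enable"},
}

# The base sampler now starts at step 3 from the mixer's output instead of
# step 0 from the empty latent; the mixer already established the noise.
base_sampler["inputs"].update({
    "latent_image": ["11", 0],
    "add_noise": "disable",
    "start_at_step": 3,
})
```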

Feel free to experiment with different conditioning approaches to find new techniques for getting better variation in your generated images across runs.

Tips for Experimenting and Customizing Your Own Workflow

After setting up the base graph, try tweaking parameters like the start steps, number of steps, and conditioning approaches to see the impact.
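
One low-effort way to run such experiments is to sweep a parameter programmatically. A hypothetical sketch that queues one run per hand-off step, assuming `prompt` holds the combined base-plus-refiner graph from earlier, with "5" as the base sampler's node id, "12" as the refiner sampler's, and "13" as the save node's:

```python
import copy
import json
import urllib.request

def queue_prompt(prompt: dict) -> None:
    """Queue one graph on a locally running ComfyUI server (default port 8188)."""
    req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                                 data=json.dumps({"prompt": prompt}).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Sweep the base/refiner hand-off step to compare its impact on the output.
for handoff in (8, 10, 12, 14, 16):
    variant = copy.deepcopy(prompt)
    variant["5"]["inputs"]["end_at_step"] = handoff      # base stops here
    variant["12"]["inputs"]["start_at_step"] = handoff   # refiner picks up here
    variant["13"]["inputs"]["filename_prefix"] = f"handoff_{handoff}"
    queue_prompt(variant)
```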

The full graph can be reloaded just by dragging and dropping a previously generated image with its metadata back into ComfyUI. This makes it easy to resume work on an existing graph.

Be careful about sharing full-resolution PNGs though, as they can contain the full graph metadata! Convert images to JPEG before sharing them publicly to strip the metadata if desired.
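
To check what a PNG is carrying before posting it, or to strip the metadata in code rather than through an image editor, here is a small Pillow sketch. ComfyUI stores the graph in the PNG's text chunks, typically under "prompt" and "workflow" keys; the filename below is a placeholder:

```python
from PIL import Image  # pip install Pillow

img = Image.open("sdxl_base_00001_.png")

# PNG text chunks are exposed through img.info; ComfyUI typically embeds the
# graph under the "prompt" and "workflow" keys.
for key in ("prompt", "workflow"):
    if key in img.info:
        print(f"{key}: {len(img.info[key])} bytes of graph metadata")

# Re-saving as JPEG drops the PNG text chunks (and the alpha channel).
img.convert("RGB").save("sdxl_base_shareable.jpg", quality=95)
```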

Conclusion and Next Steps for Exploring Stable Diffusion in ComfyUI

This covers the core ComfyUI SDXL workflow used at Stability AI as a starting point. Feel free to branch out and get creative from there based on your own style and needs!

Let us know in the comments if you have any other tips or tricks you've found useful for experimenting with Stable Diffusion in ComfyUI. And be sure to share your unique creations.

FAQ

Q: What is the benefit of using Stable Diffusion inside ComfyUI?
A: ComfyUI allows you to create custom graphs and workflows using SD models for advanced control over image generation and quality.

Q: How do I set up positive and negative prompts in ComfyUI?
A: Use primitive nodes to store your text prompts. Connect them to the clip conditioning nodes for positive and negative prompts.

Q: What is latent noise conditioning?
A: You can condition the latent noise before sampling to influence the output. This leads to more unique, interesting results.

Q: Should I share PNG images with embedded ComfyUI graphs?
A: No, the PNG will contain metadata describing your entire ComfyUI graph. Convert images to JPEG first if you want to share them without exposing your workflow.