Create Consistent, Editable AI Characters & Backgrounds for your Projects! (ComfyUI Tutorial)

29 Apr 2024 · 11:08

TLDR: This tutorial demonstrates how to create AI characters and backgrounds with consistent styles using Stable Diffusion 1.5 and SDXL. It covers generating multi-view character images, integrating them into AI backgrounds, and controlling emotions with prompts. The guide includes a free pose sheet of character bones, a step-by-step setup guide, and tips for refining character generation. The workflow is adaptable to projects ranging from children's books to AI influencers, and a cheese influencer character is built as the running example.


  • 😀 The video tutorial demonstrates creating AI characters and backgrounds for projects using Stable Diffusion 1.5 and SDXL.
  • 🎨 A pose sheet is introduced, which can be downloaded for free and used to generate characters from different angles.
  • 📚 The workflow includes a custom setup in ComfyUI, with a step-by-step guide provided for installation and setup.
  • 🤖 ControlNet is used to generate characters based on the bones depicted in the pose sheet.
  • 👩‍🦱 The video shows how to create an AI influencer, specifically a cheese influencer character with a unique niche and personality.
  • 🔍 Descriptive prompts are essential for generating consistent and well-posed characters.
  • 👨‍🦳 Adjustments such as adding a mustache can be made to improve the character's appearance.
  • 🖼️ The process includes upscaling images, face detailing, and saving different poses as separate images.
  • 😉 The tutorial covers how to generate expressions and integrate them into the character's design, with tips for maintaining consistency.
  • 🌄 The final workflow allows for placing characters into different backgrounds and adjusting expressions and poses.
  • 🧀 The video concludes with a demonstration of integrating the cheese influencer character into various Alpine scenes.

Q & A

  • What is the main purpose of the video tutorial?

    -The main purpose of the video tutorial is to demonstrate how to create consistent AI characters, pose them automatically, integrate them into AI-generated backgrounds, and control their emotions using simple prompts.

  • Which software versions does this workflow support?

    -The workflow supports Stable Diffusion 1.5 and SDXL, allowing for any style of character generation.

  • What is the significance of the pose sheet mentioned in the video?

    -The pose sheet is significant because it depicts a character's bones from different angles in the OpenPose format, which ControlNet uses to generate multiple views of the character in the same image.

  • How can I obtain the pose sheet used in the video?

    -The pose sheet can be downloaded for free on the creator's Patreon page.

  • What is the role of ControlNet in the workflow?

    -ControlNet generates characters based on the bones depicted in the pose sheet, allowing the creation of characters in various poses.

  • What is the character generation process like in the video?

    -The character generation process involves importing the pose sheet, choosing a model, configuring the KSampler settings, and entering a prompt to generate the character in different poses.

  • How can one improve the quality of the generated character's face?

    -The quality of the generated character's face can be improved with the Face Detailer, which automatically detects faces in the image and re-diffuses them for detail and consistency.
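The crop-refine-paste mechanics behind this can be sketched in a few lines. This is a simplified stand-in, not the actual Face Detailer node: the face box is supplied by hand, and the re-diffusion pass is reduced to a placeholder resize.

```python
from PIL import Image


def detail_region(img: Image.Image, box: tuple[int, int, int, int],
                  work_size: int = 512) -> Image.Image:
    """Crop a face region, process it at higher resolution, paste it back.

    In the real Face Detailer the enlarged crop is re-diffused with the same
    prompt before being pasted back; here that step is only a placeholder.
    """
    crop = img.crop(box)
    w, h = crop.size
    # Work at a resolution the diffusion model handles well (e.g. 512 px).
    enlarged = crop.resize((work_size, work_size), Image.LANCZOS)
    # --- a diffusion img2img pass on `enlarged` would go here ---
    refined = enlarged.resize((w, h), Image.LANCZOS)
    out = img.copy()
    out.paste(refined, box[:2])
    return out
```

The point of working on an enlarged crop is that SD 1.5 renders faces far better near its native resolution than as a tiny region of a full character sheet.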

  • What is the purpose of the 'expressions' part of the workflow?

    -The 'expressions' part of the workflow allows for the generation of different facial expressions for the character, enhancing the character's emotive range.

  • How can the character be integrated into different backgrounds?

    -The character can be integrated into different backgrounds by using the controllable character workflow, which includes posing the character, generating a fitting background, and compositing the character onto the background.
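The compositing step itself is plain alpha blending. A minimal sketch with Pillow, assuming the character has already been cut out with an alpha channel (e.g. via background removal); seam fixing and light matching happen in later diffusion passes, not here:

```python
from PIL import Image


def place_character(background: Image.Image, character: Image.Image,
                    position: tuple[int, int]) -> Image.Image:
    """Composite an RGBA character cut-out onto a generated background.

    `character` must carry an alpha channel; fully transparent pixels leave
    the background visible, so only the character itself is pasted in.
    """
    out = background.convert("RGBA")
    out.alpha_composite(character.convert("RGBA"), dest=position)
    return out
```

After this raw paste, the workflow in the video runs a low-denoise img2img pass over the combined image to blend seams and unify lighting.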

  • What are some additional applications of the character sheet and workflow?

    -Additional applications include training your own model on the character images, using the character reference tool in Midjourney to place the character in different locations, and customizing the workflow for projects such as children's books or AI movies.

  • How can I access exclusive example files and additional resources for this workflow?

    -Exclusive example files, additional resources, and access to the Discord Community can be obtained by supporting the creator on Patreon.



🎨 Creating AI Characters with Stable Diffusion 1.5

This paragraph introduces a video tutorial on creating AI-generated characters using Stable Diffusion 1.5 and SDXL. The workflow allows automatic posing, integration into AI backgrounds, and emotion control through simple prompts. The creator shares a free downloadable pose sheet of character bones and explains how ControlNet uses the OpenPose format. The video demonstrates generating a character with a unique niche, adjusting prompts for better results, and using the Face Detailer for consistency. The process includes upscaling, saving different poses, and generating expressions, with tips on customizing the workflow for individual needs.


🖼️ Integrating Characters into Backgrounds with Midjourney

The second paragraph covers placing AI-generated characters in different locations using Midjourney's character reference tool. It explains how to upload character images, prompt for desired actions, and adjust parameters for character consistency. It also outlines a workflow for compositing characters into backgrounds: posing the character, generating a fitting background, and blending the two together. Techniques for fixing seams, adjusting focal planes, and matching lighting are discussed, along with the use of Openpose AI for creating character poses and the importance of IP adapters for maintaining character likeness.


🧀 Customizing Character Presentations and Workflow Enhancements

The final paragraph focuses on customizing character presentations, such as adding cheese to a character's hands, and experimenting with different poses and expressions. It touches on the flexibility of the workflow, allowing for manual pose creation or automatic generation using Stable Diffusion. The paragraph also encourages viewers to explore and personalize the workflow, mentioning the possibility of training a model based on the generated images. The video concludes with an invitation to access exclusive files, join a Discord community, and support the creator on Patreon, highlighting the time investment involved in producing such content.



💡AI Characters

AI Characters refer to artificially intelligent entities created for various media, such as films, books, or digital platforms. In the context of the video, AI Characters are generated using software like Stable Diffusion 1.5 or SDXL, allowing for a wide range of styles and applications. The video demonstrates how to create these characters with consistency in appearance and pose, which is crucial for projects like children's books or AI movies.

💡Pose Sheet

A pose sheet, as used in the video, depicts a character's skeleton from different angles in the OpenPose format. It is essential for generating multiple views of a character in the same image, a key part of the workflow for creating AI characters. The video creator provides a free downloadable pose sheet on their Patreon, which is used in conjunction with ControlNet to generate characters based on these bone structures.

💡ControlNet

ControlNet is a technology used alongside AI image-generation models to control the pose and structure of generated characters. It helps create characters that are consistent and accurately reflect the desired pose, as shown in the video where the character's bones are used to generate the character from multiple angles.

💡Stable Diffusion 1.5

Stable Diffusion 1.5 is a specific version of an AI model that is capable of generating images based on textual prompts. The video mentions that the workflow for creating AI characters and backgrounds is compatible with Stable Diffusion 1.5, indicating its versatility and the ability to produce a wide range of styles.

💡Emotion Control

Emotion Control in the video refers to the ability to manipulate the emotional expression of AI-generated characters through simple textual prompts. This feature is important for creating characters that can convey a range of emotions, enhancing the storytelling aspect of projects like AI movies or children's books.
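In practice this amounts to keeping one fixed base prompt and varying only an expression phrase per generation. A small sketch of that pattern; the base prompt and expression list here are made-up examples, not the ones from the video:

```python
# Hypothetical base prompt; in the workflow this stays identical across runs
# so that only the facial expression changes between generations.
BASE_PROMPT = ("character sheet, pixar style 3d character, "
               "elderly cheese influencer with a mustache")

EXPRESSIONS = ["smiling warmly", "surprised", "angry", "laughing", "sad"]


def expression_prompts(base: str, expressions: list[str]) -> list[str]:
    """Append each expression to the shared base prompt."""
    return [f"{base}, {expr} expression" for expr in expressions]
```

Feeding these prompts through the same seed, pose, and IP adapter settings is what keeps the character recognizable while the face changes.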

💡AI Influencers

AI Influencers are virtual personalities created using AI technology, designed to engage audiences on social media or other platforms. The video discusses the potential of using the workflow to create AI influencers that could generate income, suggesting the commercial applications of AI character generation.

💡Cheese Influencer

In the script, the term 'Cheese Influencer' is used to illustrate a niche market for AI characters. The video creator humorously decides to create an AI character that is an influencer for cheese, showing how the workflow can be tailored to specific themes or products.

💡Face Detailer

The Face Detailer is a tool within the workflow that automatically detects faces in an image and refines them for better detail and consistency. It is used in the video to improve the quality of the generated character's face, making it look more realistic or in line with specific styles like Pixar characters.


💡Upscaling

Upscaling in the context of the video refers to increasing the resolution of an image from 1K to 2K, which enhances overall quality. The video uses upscaling to improve the appearance of the generated AI character's face, particularly in the Face Detailer step.
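The 1K-to-2K step is just a doubling of both dimensions. The workflow uses a model-based upscaler; plain Lanczos resampling stands in for it in this sketch and illustrates only the resolution change:

```python
from PIL import Image


def upscale_2x(img: Image.Image) -> Image.Image:
    """Double an image's resolution (e.g. 1024x1024 -> 2048x2048).

    A diffusion-based or ESRGAN-style upscaler would add detail here;
    Lanczos resampling only interpolates existing pixels.
    """
    w, h = img.size
    return img.resize((w * 2, h * 2), Image.LANCZOS)
```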

💡IP Adapter

An IP Adapter, as discussed in the video, is a component of the workflow that takes the likeness of a character and transfers it into a prompt. This ensures that all generated characters closely resemble the original character, maintaining consistency across different images or scenes.

💡Openpose AI

Openpose AI is a tool used to create and manipulate poses for characters in a digital format. The video shows how to use Openpose AI to adjust a skeleton into a desired pose, which can then be used to guide the AI in generating characters in specific poses, adding a level of control and customization to the character creation process.
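Under the hood, the OpenPose format that pose editors and ControlNet preprocessors exchange is a flat list of x, y, confidence triples per person. A sketch of packing named keypoints into that layout, assuming the 18-point COCO ordering (other OpenPose variants use 25 body points):

```python
# 18-keypoint COCO ordering assumed for the OpenPose layout.
KEYPOINT_NAMES = [
    "nose", "neck", "r_shoulder", "r_elbow", "r_wrist",
    "l_shoulder", "l_elbow", "l_wrist", "r_hip", "r_knee",
    "r_ankle", "l_hip", "l_knee", "l_ankle", "r_eye",
    "l_eye", "r_ear", "l_ear",
]


def to_openpose_json(points: dict[str, tuple[float, float]]) -> dict:
    """Pack named (x, y) points into OpenPose's flat x,y,confidence list.

    Missing keypoints get confidence 0.0, which pose tools treat as
    'not visible' and skip when drawing the skeleton.
    """
    flat: list[float] = []
    for name in KEYPOINT_NAMES:
        if name in points:
            x, y = points[name]
            flat += [x, y, 1.0]
        else:
            flat += [0.0, 0.0, 0.0]
    return {"people": [{"pose_keypoints_2d": flat}]}
```

This is why a pose sheet works at all: the bones are just these skeletons rendered to an image, which the OpenPose ControlNet then follows during generation.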


This video demonstrates how to create consistent AI characters and backgrounds for various projects.

Workflow is compatible with Stable Diffusion 1.5 and SDXL, allowing any style creation.

A free downloadable pose sheet is introduced for generating multiple character views.

ControlNet is used to generate characters from the pose sheet bones.

Custom workflow in ComfyUI is used for automatic character generation.

A step-by-step guide is provided for installing and setting up the workflow.

The video shows the creation of an AI influencer for cheese, a niche market.

Descriptive prompts and sampler adjustments are key for character consistency.

The face detailer tool is used to improve facial features and expressions.

Expressions can be generated with added prompts for a Pixar character style.

Different character poses can be saved as separate images.

The character sheet for the cheese influencer is completed with expressions and upscaling.

Background integration and expression adjustment are part of the workflow.

Openpose AI is used for creating and adjusting character poses.

IP adapters ensure the generated character closely resembles the original.

Background and character integration can be improved with various techniques.

The workflow allows for automatic generation of hundreds of character images in different poses and locations.

Exclusive example files and resources are available for Patreon supporters.