Have TOTAL CONTROL with this AI Animation Workflow in AnimateLCM! // Civitai Vid2Vid Tutorial Stream

14 Mar 2024 · 77:44

TLDR: Tyler from Civitai introduces new AnimateDiff workflows for AI animation and video, focusing on subject and background isolation. He compares quality and generation speed between the AnimateLCM and AnimateDiff V3 workflows, highlighting the benefits of using separate IP adapters for characters and backgrounds. Tyler demonstrates the process live, showcasing the power of AI in creating unique animations, and encourages users to share their creations.


  • 🎥 The stream is a tutorial on new AnimateDiff workflows released on the presenter's Civitai profile.
  • 🌟 Two workflows are discussed: one based on AnimateLCM and the other on AnimateDiff V3.
  • 💻 The choice between workflows depends on the user's VRAM; AnimateLCM is faster and better suited to lower VRAM.
  • 🎨 Quality differences are noted between the two workflows, with V3 offering higher quality if VRAM allows.
  • 📹 The stream demonstrates the use of the workflows by combining different characters and backgrounds.
  • 👾 The presenter emphasizes the importance of using high-quality IP adapter images for better results.
  • 🖼️ Alpha masks are required for the workflow and can be generated using other workflows or tools.
  • 💡 The presenter suggests using the word 'sculpture' in prompts to avoid humanizing non-human subjects.
  • 🚀 The stream highlights the power of AI in creating animations and the potential of separate IP adapters for characters and backgrounds.
  • 🎥 The presenter shows how to upscale videos using the Highres Fix and other post-processing techniques.
  • 📅 Next week's streams will include guest streams with experts from various fields, starting with a prompting magician.

Q & A

  • What is the main focus of Tyler's Civitai Office Hours session in the transcript?

    -The main focus of Tyler's Civitai Office Hours session is a tutorial walkthrough of the new AnimateDiff workflows he released on his Civitai profile, specifically discussing and comparing two different workflows based on AnimateLCM and AnimateDiff V3.

  • What are the two workflows Tyler discusses, and what is the primary difference between them?

    -The two workflows Tyler discusses are based on AnimateLCM and AnimateDiff V3. The primary difference is that the AnimateLCM workflow is better for users with limited VRAM and generates results faster, while the AnimateDiff V3 workflow offers higher-quality output if the user has more VRAM.

  • What is the purpose of the alpha mask in the new workflow?

    -The alpha mask in the new workflow is used to separate the subject (character) from the background in the video, allowing for more precise control over the animation and ensuring that the character and background integrate seamlessly in the final output.
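
    The separation the mask drives comes down to a per-pixel blend. A minimal sketch of that math (the function name and pixel values are illustrative, not actual workflow code — the real workflow does this inside ComfyUI nodes):

```python
def composite(subject: float, background: float, alpha: float) -> float:
    """Standard alpha blend: alpha=1.0 keeps the subject pixel,
    alpha=0.0 keeps the background pixel, values in between mix them."""
    return alpha * subject + (1.0 - alpha) * background

# A fully opaque mask pixel keeps the subject...
print(composite(200.0, 50.0, 1.0))  # 200.0
# ...and a fully transparent one keeps the background.
print(composite(200.0, 50.0, 0.0))  # 50.0
```

    Because the blend is per pixel, a soft-edged mask produces a gradual transition between the two streams, which is why clean mask edges matter for seamless integration.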

  • What are the benefits of using the AnimateLCM workflow over the AnimateDiff V3 workflow?

    -The AnimateLCM workflow is beneficial for users with limited VRAM, as it generates results faster and requires fewer computational resources. It is also better for live demonstrations, since it can quickly produce outputs without significant quality loss.

  • What is Tyler's recommendation for the model to use with the LCM workflow?

    -Tyler recommends using the Photon LCM model for the LCM workflow, as he has found it to produce excellent results. He also suggests using the Stable Diffusion 1.5 LCM LoRA, which allows a low CFG for faster generation times.

  • How does Tyler handle the control nets in the workflow?

    -Tyler organizes the control nets into individual group boxes with fast bypasser nodes, allowing each control net to be quickly toggled on and off. He often uses a combination of depth and OpenPose control nets, along with the ControlGIF control net for smoothing out the animations.

  • What is the significance of the two separate IP adapters used in the workflow?

    -The two separate IP adapters are used for the subject (character) and the background, respectively. This allows for more control over the style and texture of both elements, enabling the creation of more visually appealing and stylistically consistent animations.

  • What is Tyler's approach to handling videos with a high frame count in the AnimateDiff V3 workflow?

    -For high-frame-count videos in the AnimateDiff V3 workflow, Tyler suggests using the bilinear upscaler instead of the NN latent upscaler to avoid CUDA errors. He also recommends reducing the upscale-by factor to 1.5 to get a clean output at a lower resolution before doing the final upscale in a program like Topaz.
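
    As a rough sketch of that sizing step, here is a hypothetical helper that applies a 1.5× factor and snaps each side to a multiple of 8 (the 8-pixel snapping and the function name are assumptions based on Stable Diffusion's latent granularity, not the workflow's exact node math):

```python
def upscale_resolution(width: int, height: int,
                       factor: float = 1.5, multiple: int = 8) -> tuple:
    """Scale a resolution by `factor`, snapping each side down to the
    nearest multiple of 8 so it stays latent-friendly."""
    def snap(side: int) -> int:
        return int(side * factor) // multiple * multiple
    return snap(width), snap(height)

# A 512x768 source at 1.5x lands on a clean 768x1152 intermediate.
print(upscale_resolution(512, 768))  # (768, 1152)
```

    Keeping this intermediate pass modest is what frees VRAM on long clips; the heavy final upscale then happens outside the diffusion pipeline, e.g. in Topaz.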

  • How does Tyler address issues with the Reactor face swapper installation?

    -Tyler explains that Reactor face-swapper installation issues are typically due to missing build dependencies, specifically Visual Studio with its C++ build tools. Users need to install these to resolve the problem and successfully use the Reactor node.

  • What is the purpose of the video combine node in the workflow?

    -The video combine node is used to merge the processed character and background elements into a single video output. It can also be connected to a custom node like Mikey's File Name Prefix to organize outputs into specific folders within the output directory.
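
    The source doesn't show the prefix node's exact behavior, but the folder-routing idea can be sketched as follows (the function name and the slash-separated prefix convention are hypothetical illustrations):

```python
from pathlib import Path

def output_path(output_dir: str, prefix: str, index: int,
                ext: str = "mp4") -> Path:
    """Treat a '/'-separated filename prefix as subfolders under the
    output directory, so runs are grouped into their own folders."""
    p = Path(output_dir) / prefix
    # Last prefix segment becomes the filename stem, zero-padded index appended.
    return p.parent / f"{p.name}_{index:05d}.{ext}"

print(output_path("output", "cat_project/skateboard", 3).as_posix())
# output/cat_project/skateboard_00003.mp4
```

    Routing by prefix like this keeps each experiment's renders together instead of piling everything into one flat output directory.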



📹 Introduction to New Animation Workflows

Tyler from Civitai introduces a special tutorial focused on new animation workflows released on his Civitai profile. He shares links for two workflows, based on AnimateLCM and AnimateDiff V3, and highlights their differences in quality and performance. The AnimateLCM workflow is recommended for those with limited VRAM, as it generates animations quickly. Tyler plans to demonstrate the LCM workflow live, promising a fast and straightforward walkthrough, followed by live examples using images submitted by viewers. The tutorial aims to demystify the workflows, making them accessible even to beginners.


🔧 Setting Up and Understanding the Workflow

Tyler guides users on how to locate and recenter the workflow within ComfyUI after importing it. He emphasizes the organization of the workflow into numbered groups for a sequential approach, starting with video source and resolution settings. Tyler advises on the preferred resolution for source videos and explains model selection for the LCM workflow, recommending the Photon LCM model for its efficiency. The workflow includes various control nets and IP adapters for subject and background isolation, aiming to simplify and speed up the animation process.


🎨 Advanced Techniques for Subject and Background Isolation

Delving deeper into the workflow, Tyler introduces advanced techniques for isolating subjects and backgrounds using IP adapters and alpha masks. He provides tips for creating effective alpha masks and utilizing video combine nodes for precise subject-background separation. Tyler's walkthrough includes practical advice on adjusting the workflow for optimal results, emphasizing the importance of detailed mask creation for achieving high-quality animations.


🛠 Fine-Tuning the Workflow and Additional Tips

Tyler shares additional tips for fine-tuning the animation workflow, including adjusting IP adapter settings and understanding the role of control nets. He explains how to achieve desired animation styles and movements by experimenting with control net combinations. Tyler also addresses common concerns such as workflow adjustments for different VRAM capacities and provides solutions for installing essential components like the reactor face swapper.


🚀 Live Demonstration and Audience Participation

In a live demonstration, Tyler puts the workflow to the test by creating animations with audience-submitted images. He experiments with various character and background combinations, showcasing the workflow's versatility. Tyler emphasizes the importance of descriptive prompts in guiding the animation process and invites the audience to contribute ideas for characters and backgrounds, fostering an interactive and creative environment.


🤖 Experimenting with No Prompts and Advanced Customizations

Tyler accepts a challenge to run the workflow without any descriptive prompts, testing the limits of the IP adapters' capabilities. The experiment yields mixed results, underscoring the importance of precise prompts in achieving specific animations. Tyler also explores advanced customizations, such as modifying control nets, to refine the animations further. The session emphasizes the balance between automation and manual input in crafting high-quality AI animations.


🔍 Concluding Demonstrations and Final Thoughts

In the final segment, Tyler conducts more experimental animations, including a whimsical 'space cat' skateboarder, demonstrating the workflow's flexibility. He compares results from different workflow versions and shares personal preferences based on quality and aesthetic appeal. Tyler concludes the session by encouraging viewers to explore and experiment with the workflows, inviting them to share their creations on social media and the Civitai platform.



💡AnimateDiff

AnimateDiff refers to a technique for generating or altering animations using AI models. In the script, it is mentioned in conjunction with versions (e.g., V3), indicating different iterations of the tool. The method is contrasted with AnimateLCM, a different approach within AI animation offering different trade-offs in quality, speed, and resource requirements. The script discusses the differences in quality and resource usage between AnimateDiff V3 and AnimateLCM, guiding users on which might be more suitable based on their hardware capabilities.


💡Workflow

In the context of the video, 'workflow' refers to a defined series of tasks within a software environment aimed at accomplishing a specific goal, in this case, AI-driven animation. The script outlines specific workflows for creating animations using AI, detailing step-by-step processes involving tools and techniques such as AnimateDiff, AnimateLCM, IP adapters, and control nets. These workflows are designed to streamline the animation process, making it more efficient and accessible to users.


💡VRAM

VRAM (Video Random Access Memory) is mentioned in the script in the context of hardware requirements for running certain AI animation workflows. It is crucial for handling the intensive computational tasks associated with processing AI-generated animations. The script distinguishes between workflows suited to different VRAM capacities, noting that some methods, like AnimateLCM, are more efficient and faster on systems with lower VRAM, thereby making AI animation accessible to users with varying hardware setups.

💡LCM Workflow

LCM Workflow refers to the animation workflow built on AnimateLCM, which uses a Latent Consistency Model (LCM) to cut the number of sampling steps. The script discusses it as an alternative to AnimateDiff V3, particularly highlighting its efficiency and speed in generating animations, making it suitable for live demonstrations or users with lower VRAM. This implies that the LCM workflow is optimized for quicker generation times, possibly at the expense of some quality or detail compared to other methods.

💡IP Adapter

An IP Adapter, in the context of this script, appears to be a tool or component within the AI animation workflows that integrates images or pre-existing visual content to influence or guide the animation output. The script discusses using IP adapters for character and background separation, suggesting a customization aspect where users can input specific images to achieve desired styles or themes in their animations. This tool enables greater control and creativity within the animation process by allowing for the direct incorporation of visual references.

💡Control Nets

Control Nets are mentioned as part of the AI animation workflow, likely serving as a mechanism to influence or direct the animation process. They could be used to manage or modify specific aspects of the animation, such as motion or depth, to achieve more refined results. The script suggests that users can toggle these on and off, indicating a level of customization and control over how the AI interprets and applies motion within the animated output.

💡Alpha Mask

An Alpha Mask in the context of the video is used within the AI animation workflows to isolate subjects from their backgrounds. This technique allows for separate processing of characters and their surroundings, enabling more precise and creative animations. The script discusses generating an alpha mask video, which suggests using a video where the subject is highlighted against a uniform background, facilitating the AI's ability to distinguish and animate the subject independently of the background.

💡Highres Fix

Highres Fix likely refers to a step or tool within the AI animation workflow designed to enhance the resolution or detail of the animated output. Given the context in the script, this process might involve upscaling the animation, improving its quality for high-resolution displays. The mention of 'bilinear' as an alternative setting within this step suggests that users have options regarding how the resolution enhancement is applied, balancing between quality and computational demand.
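
A minimal sketch of the bilinear interpolation such an upscaler performs when sampling between pixels (illustrative pure Python, not the actual node's implementation):

```python
def bilerp(grid, x: float, y: float) -> float:
    """Bilinearly sample a 2D grid (list of rows) at fractional
    coordinates: blend the four neighboring values by distance."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

grid = [[0.0, 10.0],
        [20.0, 30.0]]
# Sampling dead center averages all four neighbors.
print(bilerp(grid, 0.5, 0.5))  # 15.0
```

Bilinear resizing is cheap and memory-light, which is why the script recommends it over a neural latent upscaler when CUDA memory is tight.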

💡Comfy UI

ComfyUI is the software interface used for managing and executing the AI animation workflows mentioned in the script. It is a node-based environment where the various components of the workflow, like IP adapters, control nets, and animation models, are accessible and configurable. The mention of a 'zoom out' feature to view the entire workflow reflects its visual, graph-style canvas, where users manage the components and their connections.

💡Subject and Background Isolation

This concept refers to the technique of separating the main subject of an animation from its background, allowing for independent processing and customization of each. In the script, this is achieved through the use of alpha masks and dual IP adapters, enabling the AI to apply different styles or motions to the subject and background. This separation enhances the creative possibilities within AI animation, allowing for more complex and dynamic scenes where the subject and background can be individually tailored.


Tyler introduces two new AnimateDiff workflows for video creation and provides links in the chat for access.

The workflows cater to different VRAM capacities, with one optimized for lower VRAM and the other for higher VRAM.

The tutorial covers the differences in quality between the two workflows and guides users on which to choose based on their system specifications.

Tyler demonstrates the process of using the workflows, including setting up the video source and resolution.

The importance of using the correct models for the LCM workflow is emphasized for optimal results.

Control nets are introduced as a way to refine the output and tailor the animation to specific styles.

The innovative use of separate IP adapters for subjects and backgrounds allows for greater control over the animation.

Tyler shares tips on using alpha masks for subject-background separation and provides resources for creating these masks.

The stream includes live demonstrations of the workflows, showcasing the animation of various characters in different settings.

The practical application of the workflows is highlighted through the creation of an astronaut cat skateboarding in a vintage park.

Tyler discusses the use of prompts in conjunction with IP adapters to guide the animation process.

The stream addresses common issues such as CUDA errors and provides solutions for troubleshooting.

The importance of aspect ratio and image quality in IP adapters for achieving the desired animation effects is discussed.

Tyler announces the addition of a fifth streaming day featuring guest streams with experts from various fields.

The stream concludes with a call to action for viewers to share their creations using the workflows and to follow for updates.