Civitai AI Video & Animation // Motion Brush Img2Vid Workflow! w/ Tyler

Civitai
18 Apr 2024 · 64:46

TLDR: In this live stream, Tyler from Civitai AI Video & Animation shares a workflow for animating images with a motion brush in ComfyUI using AnimateDiff. The process involves masking specific parts of an image, such as eyes, hair, and clothing, so that only those areas move and come to life. Tyler demonstrates the workflow on images submitted by the Discord community, showing how different motion LoRAs produce different animation effects, and he emphasizes the importance of choosing the right motion LoRA and tuning settings for the best results. The stream also touches on the community's collaborative nature and the benefits of sharing knowledge and resources. Tyler gives a shoutout to VK, the creator of the workflow, and encourages viewers to follow him on Instagram. The session ends with a teaser for the next day's special guest, Noah Miller, who will discuss AI animation and his work on a sci-fi film called 'Zero'.

Takeaways

  • 🎨 Tyler introduces a new workflow for animating images using a motion brush in ComfyUI with AnimateDiff, allowing specific parts of images to come to life.
  • 🖼️ The starting image for the animation was a pixelated, low-resolution picture, which gets cleaned up when upscaled with an AI model.
  • 🌟 Tyler shares a successful example where he animated clouds, hair, and reflective parts of an outfit to make it look like they were blowing in the wind.
  • 🚀 The workflow can be finicky, requiring multiple iterations and the right motion LoRAs to achieve good results.
  • 💾 The workflow was created by VK and shared with permission, showcasing the collaborative nature of the AI art community.
  • 📱 VK can be found on Instagram @v.amv, contributing to the community with hilarious anime edits and AI creations.
  • 🔍 The workflow is low VRAM friendly, making it accessible for users with lower-end graphics cards.
  • 🎥 Tyler explains the process of using the IP-Adapter and CLIP Vision models, emphasizing the importance of selecting the correct model files.
  • 🌈 The 'Grow Mask With Blur' node creates a smooth falloff in motion, avoiding a sharp or fragmented look.
  • 📉 AnimateDiff's motion scale can be adjusted to control the intensity of the motion effects.
  • 📚 Tyler encourages viewers to experiment with different nodes, motion LoRAs, and techniques to enhance their animations.
  • 🔗 The workflow and a link to VK's Instagram will be available on Tyler's Civitai profile after the stream.

Q & A

  • What is the main topic of the video stream?

    -The main topic of the stream is a workflow built around a motion brush in ComfyUI with AnimateDiff, used to animate specific parts of images.

  • Who is the host of the stream?

    -The host of the stream is Tyler.

  • What is the purpose of using the motion brush in the workflow?

    -The purpose of using the motion brush is to bring specific parts of images to life, creating an animated effect on selected areas such as eyes, hair, or other reflective parts.

  • What is the significance of the IP-Adapter and CLIP Vision model in the workflow?

    -The IP-Adapter and CLIP Vision model are standard components of the workflow; loading the correct model files for both ensures the animation stays faithful to the input image, contributing to the overall quality and style of the output.

  • How does the control net feature contribute to the workflow?

    -The ControlNet, specifically the 'control GIF' AnimateDiff ControlNet, helps smooth out the animation and keep saturation in check, leading to more refined, controlled motion in the final output.

  • What is the role of the motion LoRA in the workflow?

    -A motion LoRA defines the type of motion applied to the animated parts of the image. Different motion LoRAs create different effects, from dripping liquid to rushing waterfalls, and significantly influence the final animation.

  • Why is the workflow considered low VRAM friendly?

    -The workflow is considered low VRAM friendly because it allows for the generation of animations at a lower resolution without significant loss in quality, thus reducing the amount of video memory required for the process.
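    As a rough illustration of why generating at lower resolution saves so much memory (back-of-envelope arithmetic only, not the workflow's actual VRAM profile):

    ```python
    def frame_batch_megabytes(width: int, height: int, frames: int,
                              channels: int = 3, bytes_per_value: int = 4) -> float:
        """Approximate memory for a batch of decoded float32 frames, in MB."""
        return width * height * channels * frames * bytes_per_value / (1024 ** 2)

    # Halving each dimension cuts the footprint to a quarter:
    print(frame_batch_megabytes(1024, 1024, 16))  # 192.0
    print(frame_batch_megabytes(512, 512, 16))    # 48.0
    ```

    This is why generating at a lower resolution and upscaling afterwards is the usual low-VRAM strategy.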

  • How does the 'grow mask with blur' node function in the workflow?

    -The 'Grow Mask With Blur' node expands the mask beyond the painted area and applies a blur. This creates a smooth falloff in motion strength, preventing the animation from appearing sharp-edged or fragmented.
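    The idea behind the node can be sketched in a few lines of NumPy (an illustrative approximation, not the node's actual implementation, which uses proper morphological and Gaussian operations):

    ```python
    import numpy as np

    def _dilate_3x3(img: np.ndarray) -> np.ndarray:
        """One step of naive 3x3 dilation: each pixel takes the max of its neighbourhood."""
        h, w = img.shape
        p = np.pad(img, 1, mode="constant")
        out = img.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out = np.maximum(out, p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
        return out

    def _box_blur(img: np.ndarray, iterations: int) -> np.ndarray:
        """Repeated 3x3 box blur; more iterations give a wider, softer falloff."""
        h, w = img.shape
        out = img.astype(float)
        for _ in range(iterations):
            p = np.pad(out, 1, mode="edge")
            acc = np.zeros_like(out)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            out = acc / 9.0
        return out

    def grow_mask_with_blur(mask: np.ndarray, grow: int, blur: int) -> np.ndarray:
        """Expand a painted mask by `grow` pixels, then soften its edge with `blur`
        blur passes, so motion strength fades out instead of cutting off hard."""
        grown = mask.astype(float)
        for _ in range(grow):
            grown = _dilate_3x3(grown)
        return _box_blur(grown, blur)
    ```

    Painting a single pixel and running it through with `grow=2, blur=1` yields a solid core that fades to zero at the edges, which is exactly the soft-edged motion region the node is used for.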

  • What is the importance of the frame count box in the workflow?

    -The frame count box determines the number of frames the animation will generate. It allows users to specify the duration of the animation by setting the desired frame count.
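    The relationship between frame count and duration is simple arithmetic (the 16 FPS figure below is illustrative, not a value stated in the stream):

    ```python
    def clip_duration_seconds(frame_count: int, fps: float) -> float:
        """Seconds of animation a given frame count yields at a given playback FPS."""
        return frame_count / fps

    # 32 frames played back at 16 FPS is a two-second clip;
    # doubling the frame count doubles the duration.
    print(clip_duration_seconds(32, 16))  # 2.0
    print(clip_duration_seconds(64, 16))  # 4.0
    ```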

  • What is the recommended way to share images for animation during the stream?

    -Viewers can share images they would like to see animated by sending them in the chat on Discord.

  • What is the future plan mentioned for the workflow?

    -The host, Tyler, plans to upload the workflow on his Civitai profile after the stream and also share the link on his Twitch and Discord chats for others to use and experiment with.

Outlines

00:00

🎨 Introduction to the AI Video and Animation Stream

Tyler, the host, welcomes viewers to the AI video and animation stream, expressing excitement for the day's content. He introduces a workflow that uses a motion brush in ComfyUI with AnimateDiff to animate specific parts of images. Tyler invites viewers from Discord to send images for animation, mentions a previous guest stream with Spencer, and shares a personal project involving audio reactivity. The current project involves animating a pixelated image to make it appear as if it's dripping.

05:02

📝 Workflow Details and Community Contributions

Tyler explains the workflow's setup, emphasizing the importance of loading the correct CLIP Vision and IP-Adapter models. He discusses the IP-Adapter Advanced node, the LoRA loader, and a ControlNet for smoother animations. Tyler provides a link to the ControlNet and mentions using two different checkpoints for anime-style animations. The frame count for animations and the standard EMA settings are covered. Tyler credits VK, the creator of the workflow, and encourages following VK on Instagram for funny anime edits and AI work.

10:03

🖌️ Painting Key Animation Parts and Masking Techniques

The process involves dragging an image into the 'image to animate' node and using the mask editor to paint the key parts to animate. Tyler discusses painting the eyes, eyelids, and other areas for more pronounced motion. He mentions the option to invert the mask for different effects and the use of the 'Grow Mask With Blur' node to create a smooth falloff in motion. The importance of choosing the right motion LoRA for good results is highlighted.

15:04

🕹️ Adjusting Animation Controls and Testing Different Effects

Tyler shows how to control how tightly the motion area grips the painted mask and discusses the 'Grow Mask With Blur' node's role in expanding and blurring it. He experiments with different motion LoRAs and the Multival Dynamic node to control the scale of motion. The use of a FILM VFI node for smoothing out animations is also covered. Tyler emphasizes the workflow's low VRAM usage and previews the animation at different frame rates.

20:05

🔄 Iterating Animations and Preparing for Upscaling

Tyler iterates on the animation, adjusting the motion and trying different motion LoRAs. He discusses the artifacts that can appear at certain settings and the benefits of upscaling for cleaning up outputs. Tyler also notes the importance of starting with a high-quality image to preserve detail after upscaling.

25:07

🎭 Experimenting with Anime Styles and Motion Descriptors

Tyler experiments with anime-style animations, using different motion descriptors and checkpoints. He paints various elements of the image, such as eyes, hair, and hands, to see how they animate. The conversation includes the potential for creating a Civitai badge and the use of motion descriptors like 'blinking' and 'hand grabbing' to influence the animation.

30:08

🌊 Applying Motion to Images with Dynamic Elements

Tyler works on images with dynamic elements like flames, smoke, and water, using motion descriptors to enhance the animation. He discusses the use of different motion LoRAs, such as 'temporal eyes' and 'wave pulse,' to achieve the desired effects. The importance of a clear image with definable elements for driving motion is emphasized.

35:08

🎨 Final Touches and Preparing the Workflow for Sharing

Tyler makes final adjustments to the animations, including painting details and selecting appropriate motion LoRAs. He covers exporting the workflow, compressing it into a zip file, and tagging it for easy discovery. Tyler also mentions plans to upload the stream to YouTube for future reference.

40:09

📚 Organizing Outputs and Promoting Community Engagement

Tyler organizes the output folder, selects images for the workflow page, and emphasizes the importance of community sharing. He encourages using a specific hashtag on Instagram to increase visibility and engagement. Tyler also teases upcoming guest creator streams and expresses gratitude to the community for their participation.

45:12

🌟 Wrapping Up and Previewing Future Streams

In the final paragraph, Tyler wraps up the stream by summarizing the day's activities and expressing excitement for future streams. He gives a shoutout to VK for sharing the workflow, previews a conversation with Noah Miller about AI evolution, and mentions the return of Phil for more Comfy UI content. Tyler thanks the viewers for joining and looks forward to the next stream.

Keywords

💡Motion Brush

Motion Brush is the technique used in the video for animating specific parts of an image by painting a mask over them. It is a key component of the workflow shared by Tyler, allowing users to bring selected areas of their artwork to life with movement. In the video, Tyler combines the Motion Brush with various motion LoRAs to create animations, such as making a character's eyes blink or hair blow in the wind.

💡AnimateDiff

AnimateDiff is a framework that adds a trained motion module to Stable Diffusion, allowing still-image models to generate short animations. In the video, Tyler uses the Motion Brush together with AnimateDiff to generate the animations, which is central to the theme of creating dynamic, stylized content.

💡ComfyUI

ComfyUI is a node-based graphical interface for Stable Diffusion in which workflows are built by connecting nodes on a canvas. The entire Motion Brush workflow Tyler demonstrates is built in ComfyUI, which makes it easy to share and reproduce, serving the video's instructional purpose.

💡IP Adapter

IP-Adapter (Image Prompt Adapter) is a component that conditions the diffusion model on a reference image, encoded through the CLIP Vision model. Tyler sets the IP-Adapter model and adjusts its weight to keep the animation faithful to the input image before the motion effects are applied.

💡ControlNet

ControlNet is a technique for steering a diffusion model with an auxiliary guidance input. Tyler uses the 'control GIF' AnimateDiff ControlNet, which helps smooth out the animation and keep saturation under control, showing its importance in achieving the desired visual outcome.

💡Checkpoints

In the video, Checkpoints refer to the base models the workflow can switch between to achieve different looks. Tyler discusses using checkpoints like 'Photon LCM' and 'Every Journey LCM' for anime-based animations, highlighting their utility in customizing the animation style.

💡VRAM

VRAM, or Video RAM, is the memory used by the graphics processing unit (GPU) to store image data for manipulation. Tyler mentions the workflow being 'low VRAM friendly,' which means it doesn't require a high-end graphics card to run, making it accessible to users with varying hardware capabilities.

💡Interpolation

Interpolation in the video is a technique used to create smooth transitions between frames in an animation. Tyler talks about using an interpolation node set to 15 FPS, which is then doubled to achieve 30 FPS, emphasizing its role in producing fluid animations.
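A crude way to see what frame interpolation does is to insert a blended in-between frame, which doubles the frame count. This is only a sketch: learned interpolators like the FILM VFI node used in the workflow estimate motion rather than blending pixels.

```python
import numpy as np

def double_fps_by_blending(frames: list) -> list:
    """Insert a 50/50 blend between each consecutive pair of frames.

    N input frames become 2N - 1 output frames, so a 15 FPS clip plays
    back at roughly 30 FPS. Real VFI models produce far better
    in-betweens by estimating motion instead of averaging.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a.astype(float) + b.astype(float)) / 2.0)
    out.append(frames[-1])
    return out
```

With motion-aware interpolation, the inserted frames follow object trajectories instead of ghosting two poses on top of each other, which is why the FILM VFI node gives much smoother results than this naive blend.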

💡Mask Editor

The Mask Editor is a tool that allows users to select and isolate specific parts of an image for targeted animation. Tyler demonstrates using the Mask Editor to paint over the parts of the image, like the eyes and hair, that should be animated, which is a critical step in the Motion Brush workflow.

💡Upscaling

Upscaling is the process of increasing the resolution of an image or video. Tyler refers to upscaling in the context of cleaning up artifacts and blurriness from the animation outputs, indicating its importance in achieving high-quality final animations.

💡Workflow

A workflow in the video refers to a specific sequence of steps or processes used to create animations. Tyler shares a 'workflow' created by VK, which is a set of instructions or a guide that viewers can follow to achieve similar animation results, showcasing its importance in educating and enabling the audience.

Highlights

Tyler shares a new workflow for animating images using a motion brush in ComfyUI with AnimateDiff.

The workflow allows users to bring specific parts of images to life, such as making eyes blink or hair blow in the wind.

Tyler emphasizes the workflow's low VRAM usage, making it accessible for users with lower-end graphics cards.

The process involves painting key parts of an image to animate them more prominently.

Different motion LoRAs can be applied to achieve various animation effects.

The use of a 'grow mask with blur' node helps to smooth out the transitions in the animations.

Tyler demonstrates the workflow using various images submitted by the Discord community.

The 'Every Journey LCM' model is highlighted for its effectiveness in anime-style animations.

The importance of choosing the right motion LoRA for the desired animation effect is discussed.

Tyler shows how to adjust the motion scale to control the intensity of the animations.

The process is demonstrated on an image of a character with fire, using a 'flaming fire' prompt for the animation.

The workflow is credited to VK, who gave permission for Tyler to share it with the community.

Tyler provides a link to VK's Instagram for those interested in following his work.

The workflow will be uploaded to Civitai after the stream for others to use.

Tyler discusses the use of different checkpoints in the workflow, such as Photon and Every Journey LCM.

The potential for creating a Civitai badge using the workflow is mentioned.

The final output of the workflow is shown, including an animated character eating spaghetti.

Tyler provides tips for using the workflow effectively and encourages experimentation.