Civitai AI Video & Animation // Motion Brush Img2Vid Workflow! w/ Tyler
TLDR
In this live stream, Tyler from Civitai AI Video & Animation shares a workflow for animating images using a motion brush in ComfyUI with AnimateDiff. The process involves selecting specific parts of images to animate, such as eyes, hair, and clothing, to bring them to life. Tyler demonstrates the workflow on images submitted by the Discord community, showing how different motion LoRAs produce different animations, and emphasizes the importance of choosing the right motion LoRA and adjusting settings for the best results. The stream also touches on the community's collaborative nature and the benefits of sharing knowledge and resources. Tyler gives a shoutout to VK, the creator of the workflow, and encourages viewers to follow him on Instagram. The session ends with a teaser for the next day's special guest, Noah Miller, who will discuss AI animation and his work on a sci-fi film called 'Zero'.
Takeaways
- 🎨 Tyler introduces a new workflow for animating images using a motion brush in ComfyUI with AnimateDiff, allowing specific parts of images to come to life.
- 🖼️ The starting image for the animation was a pixelated, low-resolution picture, which gets cleaned up when upscaled with an AI model.
- 🌟 Tyler shares a successful example where he animated clouds, hair, and reflective parts of an outfit to make it look like they were blowing in the wind.
- 🚀 The workflow can be finicky, requiring multiple iterations and the right motion LoRAs to achieve good results.
- 💾 The workflow was created by VK and shared with permission, showcasing the collaborative nature of the AI art community.
- 📱 VK can be found on Instagram @v.amv, contributing to the community with hilarious anime edits and AI creations.
- 🔍 The workflow is low VRAM friendly, making it accessible for users with lower-end graphics cards.
- 🎥 Tyler explains the process of using the IP-Adapter and CLIP Vision models, emphasizing the importance of correct model selection.
- 🌈 The 'grow mask with blur' node creates a smooth falloff in motion, avoiding a sharp or fragmented look.
- 📉 AnimateDiff's motion scale can be adjusted to control the intensity of the motion effects.
- 📚 Tyler encourages viewers to experiment with different nodes, motion LoRAs, and techniques to enhance their animations.
- 🔗 The workflow and a link to VK's Instagram will be available on Tyler's Civitai profile after the stream.
Q & A
What is the main topic of the video stream?
-The main topic of the video stream is a workflow that uses a motion brush in ComfyUI with AnimateDiff to animate specific parts of images.
Who is the host of the stream?
-The host of the stream is Tyler.
What is the purpose of using the motion brush in the workflow?
-The purpose of using the motion brush is to bring specific parts of images to life, creating an animated effect on selected areas such as eyes, hair, or other reflective parts.
What is the significance of the IP-Adapter and CLIP Vision models in the workflow?
-The IP-Adapter and CLIP Vision models are standard components of the workflow; loading the correct, matching pair is essential to the quality and style of the generated animations.
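The stream builds this in ComfyUI's node graph, but the same IP-Adapter-plus-CLIP-Vision pairing can be sketched for illustration with the diffusers library; the model IDs below are common public checkpoints, not the exact files used on stream.

```python
# Minimal sketch of pairing an IP-Adapter with a Stable Diffusion pipeline
# via diffusers. Illustrative only; the stream does this with ComfyUI nodes.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The IP-Adapter weights expect a matching CLIP Vision image encoder;
# mismatched encoder/adapter pairs are the failure mode the stream warns about.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference image steers style

reference = load_image("reference.png")  # hypothetical local file
images = pipe(
    prompt="portrait, detailed",
    ip_adapter_image=reference,
    num_inference_steps=25,
).images
```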
How does the ControlNet contribute to the workflow?
-The ControlNet, specifically the controlGIF AnimateDiff ControlNet, helps smooth out animations and temper saturation, leading to more refined and controlled motion effects in the final output.
What is the role of the motion LoRA in the workflow?
-The motion LoRA defines the type of motion applied to the animated parts of the image. Different motion LoRAs create different effects, from dripping liquid to rushing waterfalls, and significantly influence the final animation.
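For readers outside ComfyUI, the motion-LoRA idea can be illustrated with diffusers' AnimateDiff pipeline; the checkpoint and LoRA names below are public examples, not the ones from the stream.

```python
# Sketch of swapping in a motion LoRA with diffusers' AnimateDiff port.
# The stream uses ComfyUI's AnimateDiff nodes; IDs here are public examples.
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# A motion LoRA biases the motion module toward a specific movement
# (pan, zoom, etc.); swapping it changes the character of the animation.
pipe.load_lora_weights(
    "guoyww/animatediff-motion-lora-zoom-in", adapter_name="zoom-in"
)

frames = pipe(prompt="clouds drifting over mountains", num_frames=16).frames[0]
export_to_gif(frames, "motion_lora_test.gif")
```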
Why is the workflow considered low VRAM friendly?
-The workflow is considered low VRAM friendly because it allows for the generation of animations at a lower resolution without significant loss in quality, thus reducing the amount of video memory required for the process.
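A rough back-of-envelope (an assumption-laden sketch, not a measurement) shows why resolution dominates memory use: SD-1.5 works on 4-channel latents at 1/8 the pixel resolution, AnimateDiff holds one latent per frame, and the attention activations that actually dominate VRAM scale with the same spatial size.

```python
# Estimate the raw latent footprint for an AnimateDiff batch (fp16).
# The latents themselves are small; the point is that doubling resolution
# quadruples every spatially-scaled tensor, activations included.
def latent_megabytes(width, height, frames, bytes_per_value=2):
    values = frames * 4 * (height // 8) * (width // 8)
    return values * bytes_per_value / 1e6

print(latent_megabytes(512, 512, 16))    # ~0.5 MB of raw latents
print(latent_megabytes(1024, 1024, 16))  # 4x that at double the resolution
```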
How does the 'grow mask with blur' node function in the workflow?
-The 'grow mask with blur' node expands the mask beyond the painted area and applies a blur. This creates a smooth falloff in the motion, preventing the animation from appearing sharp or fragmented.
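A rough approximation of what such a node does, using OpenCV (file names are hypothetical; ComfyUI's own node is the real implementation):

```python
# Grow a painted mask outward, then blur it so motion strength fades
# smoothly to zero instead of cutting off at a hard edge.
import cv2
import numpy as np

mask = cv2.imread("painted_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

expand_px, blur_sigma = 16, 8.0
kernel = np.ones((3, 3), np.uint8)
grown = cv2.dilate(mask, kernel, iterations=expand_px)  # grow outward
soft = cv2.GaussianBlur(grown, (0, 0), blur_sigma)      # feather the edge

# Inverting the soft mask animates everything *except* the painted area,
# the trick mentioned later in the stream.
inverted = 255 - soft
cv2.imwrite("soft_mask.png", soft)
```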
What is the importance of the frame count box in the workflow?
-The frame count box determines the number of frames the animation will generate. It allows users to specify the duration of the animation by setting the desired frame count.
What is the recommended way to share images for animation during the stream?
-Viewers can share images they would like to see animated by sending them in the chat on Discord.
What is the future plan mentioned for the workflow?
-The host, Tyler, plans to upload the workflow on his Civitai profile after the stream and also share the link on his Twitch and Discord chats for others to use and experiment with.
Outlines
🎨 Introduction to the AI Video and Animation Stream
Tyler, the host, welcomes viewers to the AI video and animation stream, expressing excitement for the day's content. He introduces a workflow that uses a motion brush in ComfyUI with AnimateDiff to animate specific parts of images. Tyler invites viewers on Discord to send images for animation, mentions a previous guest stream with Spencer, and shares a personal project involving audio reactivity. The current project involves animating a pixelated image to make it appear as if it's dripping.
📝 Workflow Details and Community Contributions
Tyler explains the workflow's setup, emphasizing the importance of having the correct CLIP Vision and IP-Adapter models. He discusses the IPAdapter Advanced node, the LoRA loader, and a ControlNet for smoother animations. Tyler also provides a link to the ControlNet and mentions using two different checkpoints for anime-style animations. The frame count for animations and the standard EMA settings are covered. Tyler credits VK, the creator of the workflow, and encourages following VK on Instagram for funny anime edits and AI work.
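For anyone scripting around the workflow rather than clicking Queue in the UI, ComfyUI also accepts a workflow exported in its API format over a local HTTP endpoint; a minimal sketch, assuming the default port 8188 and a hypothetical export file:

```python
# Queue a ComfyUI workflow programmatically via its local HTTP API.
import json
import urllib.request

with open("motion_brush_workflow_api.json") as f:  # hypothetical API export
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # response includes the queued prompt_id
```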
🖌️ Painting Key Animation Parts and Masking Techniques
The process involves dragging an image into the 'image to animate' node and using the mask editor to paint the key parts to animate. Tyler paints the eyes, eyelids, and other features for more pronounced motion, mentions the option to invert the mask for different effects, and uses the 'grow mask with blur' node to create a smooth falloff in motion. The importance of choosing the right motion LoRA for good results is highlighted.
🕹️ Adjusting Animation Controls and Testing Different Effects
Tyler shows how to control how tightly the motion area grips the painted mask and discusses the 'grow mask with blur' node's role in expanding and blurring the mask. He experiments with different motion LoRAs and with the Multival Dynamic node to control the scale of motion. The use of a FILM VFI node for smoothing out animations is also covered. Tyler emphasizes the workflow's low VRAM usage and previews the animation at different frame rates.
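Re-previewing the same frames at a different frame rate costs nothing but a re-encode; a small Pillow sketch (the frame file names are hypothetical):

```python
# Write the same rendered frames out as GIF previews at several frame rates.
from pathlib import Path
from PIL import Image

frames = [Image.open(p) for p in sorted(Path("output").glob("frame_*.png"))]

for fps in (8, 12, 24):
    frames[0].save(
        f"preview_{fps}fps.gif",
        save_all=True,
        append_images=frames[1:],
        duration=int(1000 / fps),  # milliseconds per frame
        loop=0,
    )
```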
🔄 Iterating Animations and Preparing for Upscaling
Tyler iterates on the animation, adjusting the motion and trying different motion LoRAs. He discusses the potential for artifacts at certain settings and the benefits of upscaling for cleaning up outputs. Tyler also stresses the importance of starting with a high-quality image so detail survives the upscale.
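As a stand-in for the AI upscaler used on stream, plain Lanczos resampling with Pillow shows the batch plumbing; a real upscale model recovers detail rather than merely interpolating pixels:

```python
# Upscale every rendered frame 2x with Lanczos resampling (hypothetical paths).
from pathlib import Path
from PIL import Image

scale = 2
out_dir = Path("upscaled")
out_dir.mkdir(exist_ok=True)

for path in sorted(Path("output").glob("frame_*.png")):
    img = Image.open(path)
    big = img.resize((img.width * scale, img.height * scale), Image.LANCZOS)
    big.save(out_dir / path.name)
```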
🎭 Experimenting with Anime Styles and Motion Descriptors
Tyler experiments with anime-style animations, using different motion descriptors and checkpoints. He paints various elements of the image, such as eyes, hair, and hands, to see how they animate. The conversation includes the potential for creating a Civitai badge and the use of motion descriptors like 'blinking' and 'hand grabbing' to influence the animation.
🌊 Applying Motion to Images with Dynamic Elements
Tyler works on images with dynamic elements like flames, smoke, and water, using motion descriptors to enhance the animation. He discusses different motion LoRAs, such as 'temporal eyes' and 'wave pulse,' for achieving the desired effects, and emphasizes the importance of a clear image with definable elements for driving motion.
🎨 Final Touches and Preparing the Workflow for Sharing
Tyler makes final adjustments to the animations, including painting details and selecting appropriate motion LoRAs. He discusses exporting the workflow, compressing it into a zip file, and tagging it for easy discovery. Tyler also mentions planning to upload the stream to YouTube for future reference.
📚 Organizing Outputs and Promoting Community Engagement
Tyler organizes the output folder, selects images for the workflow page, and emphasizes the importance of community sharing. He encourages using a specific hashtag on Instagram to increase visibility and engagement. Tyler also teases upcoming guest creator streams and expresses gratitude to the community for their participation.
🌟 Wrapping Up and Previewing Future Streams
In the final paragraph, Tyler wraps up the stream by summarizing the day's activities and expressing excitement for future streams. He gives a shoutout to VK for sharing the workflow, previews a conversation with Noah Miller about AI evolution, and mentions the return of Phil for more Comfy UI content. Tyler thanks the viewers for joining and looks forward to the next stream.
Keywords
💡Motion Brush
💡AnimateDiff
💡ComfyUI
💡IP Adapter
💡ControlNet
💡Checkpoints
💡VRAM
💡Interpolation
💡Mask Editor
💡Upscaling
💡Workflow
Highlights
Tyler shares a new workflow for animating images using a motion brush in ComfyUI with AnimateDiff.
The workflow allows users to bring specific parts of images to life, such as making eyes blink or hair blow in the wind.
Tyler emphasizes the workflow's low VRAM usage, making it accessible for users with lower-end graphics cards.
The process involves painting key parts of an image to animate them more prominently.
Different motion LoRAs can be applied to achieve various animation effects.
The use of a 'grow mask with blur' node helps to smooth out the transitions in the animations.
Tyler demonstrates the workflow using various images submitted by the Discord community.
The 'Every Journey LCM' model is highlighted for its effectiveness in anime-style animations.
The importance of choosing the right motion LoRA for the desired animation effect is discussed.
Tyler shows how to adjust the motion scale to control the intensity of the animations.
The process is demonstrated on an image of a character with fire, using a 'flaming fire' prompt for the animation.
The workflow is credited to VK, who gave permission for Tyler to share it with the community.
Tyler provides a link to VK's Instagram for those interested in following his work.
The workflow will be uploaded to Civitai after the stream for others to use.
Tyler discusses the use of different checkpoints in the workflow, such as Photon and Every Journey LCM.
The potential for creating a Civitai badge using the workflow is mentioned.
The final output of the workflow is shown, including an animated character eating spaghetti.
Tyler provides tips for using the workflow effectively and encourages experimentation.