Make INSANE AI videos: FULL Workflow

8 Jan 2024 · 38:14

TLDR: In this video, Tyler, also known as JBS or jbooksdocreative, shares his expertise on creating AI-powered animations and videos. A freelance creative director, videographer, and editor, Tyler has been deeply involved with AI video and animation for the past eight months. He introduces his YouTube channel and offers resources for beginner, intermediate, and advanced users. Tyler provides a detailed walkthrough of his vid-to-vid AnimateDiff workflow (version 3), explaining how to download and use it, including setting up the ComfyUI Manager and installing the necessary nodes. He also discusses the importance of prompts, ControlNets, and the IP Adapter for refining animations. The video is a comprehensive guide for anyone looking to dive into AI video creation.


  • 🎥 The speaker, Tyler (JBS), is a freelance creative director, videographer, and editor focused on AI video and animation.
  • 🚀 Tyler has been developing a workflow for creating AI-based animations and shares resources for beginners and intermediate users.
  • 📚 For beginners, Tyler recommends an article by Inner Reflections AI and a tutorial by Enigmatic E on installing and running ComfyUI.
  • 🔗 To access the workflow, users must create an account on the hosting site and download a PNG file, which is then imported into ComfyUI.
  • 📈 The workflow includes various nodes for different aspects of the animation process, such as ControlNets, samplers, and upscaling.
  • 🖼️ The IP Adapter allows users to inject reference images into the animation to influence its style and content.
  • 🎨 The prompt node is crucial for defining the animation's direction and includes a batch prompt scheduler for multiple prompts and keyframes.
  • 🌐 The speaker uses the workflow for video-to-video and text-to-video animations, as well as creating images from QR code images.
  • 💻 Tyler emphasizes the importance of understanding the node-based workflow and offers support through Twitch streams for further guidance.
  • 📹 The workflow is designed to be adaptable, with options to bypass or enable different nodes based on the user's needs.
  • 🎞️ The final output includes a low-resolution preview and an upscaled video, with the option to further refine the animation in post-processing.

Q & A

  • Who is the speaker in the video and what are his professional roles?

    -The speaker in the video is Tyler, also known as JBS or jbooks on Instagram. He is a freelance creative director, videographer, and editor.

  • What is the main focus of the YouTube channel mentioned in the video?

    -The main focus of the YouTube channel is AI video and animation, specifically diving into AnimateDiff and ComfyUI.

  • What resources are recommended for beginners getting started with AnimateDiff and ComfyUI?

    -For beginners, there is an article written by Inner Reflections AI and a YouTube tutorial by Enigmatic E on how to get ComfyUI installed and running on their machine.

  • How does one download the JBS vid-to-vid AnimateDiff workflow?

    -To download the workflow, one must go to the hosting site, create an account, and then click the download button for the most current version of the workflow, which is a PNG file.

  • What is the purpose of the ComfyUI Manager and how is it installed?

    -The ComfyUI Manager is used to install missing custom nodes. It is installed by following the links provided in the video description.

  • What is the significance of the IP adapter in the workflow?

    -The IP adapter allows the use of up to four reference images to influence the animation, which is powerful for getting the animation to match a specific style or character.

  • How does the speaker handle the control nets in his workflow?

    -The speaker typically uses ControlNets like OpenPose and Lineart, setting the ControlNet stacker to run for around 85% of the diffusion process steps for most animations.

  • What are the recommended settings for the upscaler in the workflow?

    -The upscaler settings recommended by the speaker include a bilinear upscale by 1.5x, 30 steps with a CFG of 7, and a denoise strength of 0.7.

  • How does the speaker address issues with the image save node?

    -If the Image Save node does not work after installing all the other node groups, the speaker suggests installing the WAS Node Suite manually and restarting ComfyUI for it to function properly.

  • What additional support does the speaker offer for those using the workflow?

    -The speaker offers live support through Twitch streams every Thursday at 3:00 p.m. Pacific, where he answers questions directly.



🎥 Introduction to AI Video and Animation Workflow

The speaker, Tyler (JBS), introduces himself as a freelance creative director, videographer, and editor. He shares his obsession with AI video and animation over the past 8 months and invites viewers to dive into the world of AnimateDiff and ComfyUI. For beginners, he recommends resources including an article by Inner Reflections AI and a tutorial by Enigmatic E on YouTube for getting started with ComfyUI. Tyler also empathizes with the anxiety of working with node-based workflows and assures viewers that, with the right resources, anyone can master them.


📚 Setting Up the Workflow

Tyler explains the process of setting up the workflow for AI video and animation. He instructs viewers to download the workflow file from its hosting site and load it into ComfyUI. He emphasizes the importance of installing the ComfyUI Manager and the custom nodes needed for a smooth workflow. Tyler also provides guidance on handling initial errors and missing nodes, reassuring beginners that with patience and the right resources they can overcome these challenges.


🌟 Exploring the Workflow's Features

The speaker delves into the features of the workflow, highlighting its capabilities for video-to-video and text-to-video animations. He mentions the use of an alpha mask for image creation, a four-image IP Adapter for influencing animations, a prompt scheduler for multiple prompts, and multiple LoRA nodes for control. Tyler also discusses the importance of ControlNets and an upscaler for maintaining detail and avoiding loss of control in animations.


📸 Uploading and Configuring Video Inputs

Tyler guides viewers on how to upload and configure video inputs for the animation process. He explains the importance of selecting the right resolution and frame rate for the base video, as well as setting the frame load cap to render only a specific number of frames. He also discusses the skip first frames option for testing different parts of the video and the select every nth option for rendering every frame or only frames at intervals.
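How these three options interact can be sketched in a few lines of Python. This is a hedged illustration: the parameter names only loosely mirror the load-video node Tyler describes, and the exact ComfyUI inputs may differ.

```python
# A rough sketch of the video-input options: skip_first_frames drops
# leading frames, select_every_nth thins the remainder, and
# frame_load_cap limits how many frames get rendered. Names are
# illustrative, not guaranteed to match the actual ComfyUI node.

def frames_to_render(total_frames, frame_load_cap=0, skip_first_frames=0,
                     select_every_nth=1):
    """Return the source-frame indices that will actually be diffused."""
    picked = list(range(skip_first_frames, total_frames, select_every_nth))
    if frame_load_cap > 0:  # a cap of 0 means "no cap: render to the end"
        picked = picked[:frame_load_cap]
    return picked

# Skip the first 48 frames of a 240-frame clip, take every 2nd frame,
# and render at most 16 of them:
frames_to_render(240, frame_load_cap=16, skip_first_frames=48, select_every_nth=2)
```

Leaving the cap at 0 renders every remaining frame, which matches the idea of rendering the full video versus testing a short segment first.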


🎨 Customizing Animation with AnimateDiff Nodes

In this section, Tyler discusses the customization of animations using AnimateDiff nodes. He explains the selection of the AnimateDiff motion module, the use of uniform context options, and the importance of Motion LoRA nodes for adding specific types of motion to the animation. He also assures viewers that these nodes can be bypassed for most animations unless specific effects are desired.


๐Ÿ–Œ๏ธ Utilizing Prompts and IP Adapter for Style Injection

Tyler explains the significance of prompts in defining the animation's style and outcome. He details the syntax and structure of prompts, emphasizing the importance of correct punctuation and frame numbers. He also introduces the IP adapter, which allows the use of reference images to inject specific styles into the animation. Tyler provides a step-by-step guide on setting up the IP adapter, including the preparation of images and the selection of weights and strengths for style injection.
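The keyframe idea behind the batch prompt scheduler can be illustrated with a simplified Python sketch. The real scheduler also blends between neighboring prompts; this version just picks whichever keyframe is active at a given frame, and the prompt text and frame numbers are invented for the example.

```python
# Simplified batch-prompt-schedule behaviour: prompts are keyed to frame
# numbers, and each frame falls under the most recent keyframe at or
# before it. (Example prompts and frame numbers are made up.)

schedule = {
    0: "a knight walking through a forest",
    24: "a knight walking through a burning city",
    48: "a skeleton walking through a void",
}

def active_prompt(frame, schedule):
    """Return the prompt of the latest keyframe at or before this frame."""
    keys = [k for k in sorted(schedule) if k <= frame]
    return schedule[keys[-1]]
```

Under this model, frames 24 through 47 all fall under the second prompt, which is why the frame numbers and punctuation in the schedule text have to be exact.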


🎥 Control Nets and Sampler Settings

The speaker discusses the role of control nets in refining animations, providing options for different control net models and their settings. He explains the process of selecting and configuring control nets for various animation elements. Tyler also covers the sampler settings, including steps, CFG, and denoising strength, and shares his personal preferences for achieving high-quality animations.
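The "around 85% of the diffusion steps" rule mentioned in the Q&A can be pictured as cutting the ControlNet off before the final sampler steps, so the last steps diffuse freely. A minimal sketch, with illustrative names rather than actual ComfyUI node parameters:

```python
# ControlNet guidance applies to the early sampler steps and is released
# for the last ones. Names are illustrative, not real ComfyUI fields.

def controlnet_active_steps(total_steps, end_percent=0.85):
    """How many sampler steps the ControlNet stays active for."""
    return int(total_steps * end_percent)
```

With a 30-step sampler, this keeps the ControlNet on for the first 25 steps and lets the final 5 run without it, which is one way the workflow avoids animations that feel too stiff.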


🔄 Upscaling and Finalizing the Animation

Tyler talks about the upscaling process to achieve higher resolution videos. He outlines the settings and considerations for the upscaler, including the method, steps, and scheduler. He also discusses the importance of maintaining the same control nets and settings for consistency in the upscaled video. The speaker mentions his plans for a future video on post-processing and upscaling, providing a glimpse into further refinements of the animation process.
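As a back-of-the-envelope sketch of the upscale pass, here are the settings quoted in the Q&A expressed in Python. The snap-to-8 detail and the field names are assumptions rather than actual ComfyUI inputs, and the quoted denoise of "7" is read here as a fractional 0.7.

```python
# Hypothetical sketch of the second pass: a bilinear 1.5x resize
# followed by a partial re-sample with the same prompts and ControlNets.

def upscaled_size(width, height, factor=1.5):
    def snap(v):
        return int(v * factor) // 8 * 8  # keep dimensions divisible by 8
    return snap(width), snap(height)

upscale_pass = {
    "method": "bilinear",
    "scale_by": 1.5,
    "steps": 30,
    "cfg": 7,
    "denoise": 0.7,  # partial denoise: adds detail, keeps the low-res motion
}
```

A 512x768 first pass would come out at 768x1152, ready to be re-sampled with the same control settings for consistency.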

🚀 Conclusion and Additional Resources

In the conclusion, Tyler recaps the workflow and encourages viewers to engage with him through his Twitch streams for further assistance and guidance. He expresses hope that the video was helpful and invites feedback for improvement. Tyler signs off, reiterating his identity as Jay Boogs and wishing peace to the viewers.



💡AI video and animation

AI video and animation refer to the process of creating moving images or video content using artificial intelligence. In the context of the video, it involves using AI to generate and manipulate video footage, often for creative or artistic purposes. The main theme revolves around the speaker's obsession with AI in video creation and their experience with various AI tools and workflows.

💡Freelance creative director

A freelance creative director is a self-employed professional who oversees the creative aspects of projects, such as advertising campaigns, video productions, or other media content. In the video, the speaker identifies themselves as a freelance creative director, which implies they have expertise in guiding the creative vision for various projects and may utilize AI tools to enhance their work.

💡ComfyUI

ComfyUI is a free, node-based graphical interface for Stable Diffusion image and video generation. It is mentioned as a tool that the speaker uses and initially found challenging due to its node-based workflow. The video provides resources for beginners to learn ComfyUI and for advanced users to navigate it more effectively.

💡AnimateDiff and ComfyUI

AnimateDiff is a technique that adds a motion module to a Stable Diffusion model so it can generate coherent animated sequences rather than single images; ComfyUI is the node-based interface in which the speaker runs it. The speaker mentions these terms in the context of resources for beginners and intermediate users learning to navigate the AI animation workflow.


💡Workflow

In the context of the video, a workflow refers to a specific sequence of steps or processes used to create AI animations. The speaker shares their personal workflow, which includes various nodes and settings in ComfyUI that help in generating the final animated video.

💡Control Nets

Control Nets in AI video editing are tools or algorithms used to influence and control the output of the AI, particularly in terms of maintaining certain features or aspects of the animation. The speaker discusses using Control Nets to achieve desired results in their animations without making them too stiff or restrictive.


💡Upscaling

Upscaling in video editing refers to the process of increasing the resolution of a video, often to improve its quality or to prepare it for different display formats. In the video, the speaker discusses an upscaling process that increases the resolution of their low-resolution animation while maintaining quality.

💡Face Swap

Face swap is a technique that involves replacing the face of a person in a video with another face. The speaker mentions using a face swap node in their workflow to add a specific face onto the subject of their animation, which can be useful for creating personalized or character-based animations.

💡IP Adapter

The IP Adapter in the context of AI video editing is a tool that allows users to inject reference images into their animations to influence the style or appearance of the animation. The speaker discusses using the IP Adapter to inject up to four reference images to achieve a desired look in their animation.


💡Sampler

In AI video editing, a sampler refers to a tool or algorithm that runs the iterations of the diffusion process to create the final output. The speaker discusses using a sampler in their workflow to determine how many iterations of the diffusion process to go through before producing the animation.


The speaker, Tyler (JBS), is a freelance creative director, videographer, and editor who specializes in AI video and animation.

Tyler has been focusing on AI video and animation for the past 8 months and shares his knowledge through his YouTube channel.

The video discusses an in-depth workflow for creating AI animated videos using ComfyUI, a node-based workflow application.

ComfyUI can be intimidating at first, but Tyler recommends resources for beginners to get started and become familiar with the platform.

The workflow involves using a variety of nodes and settings to create animations, including video to video, text to video, and QR code image creations.

Tyler provides a step-by-step guide on downloading and installing the necessary components for the workflow, including the ComfyUI Manager and custom nodes.

The workflow includes features like the IP adapter for using reference images, control nets for detailed animations, and a sampler for the diffusion process.

Tyler emphasizes the importance of the prompt in the workflow, which dictates the style and content of the animation.

The video also covers how to use the face swap feature in the workflow for adding specific faces onto animated subjects.

Tyler shares his personal settings and preferences for various nodes in the workflow, offering insights into achieving high-quality animations.

The workflow allows for upscaling the resolution of the animation while maintaining quality, using specific settings and control nets.

Tyler provides solutions for common issues, such as problems with the image save node, and encourages seeking help through his Twitch streams.

The video is the first in a series, with Tyler planning to create more content on videography, editing, and AI video animation.