Wild AI Video Workflow with Viggle, Kaiber, Leonardo, Midjourney, Gen-2, and MORE!
TLDR: In this video, the creator shares an AI filmmaking workflow that spans pre-production through generating short films. Inspired by the 2016 film 'Rogue One', the process uses AI tools to create a hybrid storyboard/animatic/animation. The creator demonstrates the workflow on the 'Are you not entertained?' scene from 'Gladiator', incorporating elements from 'John Carter of Mars' and 'Warhammer 40K'. The video explores AI platforms like Viggle, Midjourney, and Kaiber, highlighting their strengths and limitations in generating characters and backgrounds and in editing everything into a cohesive cinematic whole. The creator concludes that while the method may not be ready for full-length features, it is promising for short films and pre-production planning.
Takeaways
- 🎬 The speaker shares an AI filmmaking workflow with potential at every stage from pre-production to generating short films.
- 🚀 The inspiration comes from the 2016 film Rogue One, specifically an interview with the editor about creating a feature-length story reel before the script was finished.
- 🧠 The idea is to use AI tools to create a hybrid storyboard, animatic, and animation, taking inspiration from the method used in Rogue One.
- 🎥 The process involves clipping reference footage, using Viggle for the initial video generation, and Midjourney for character design.
- 🌟 The workflow includes using AI to augment footage, with examples like the 'Are you not entertained?' scene from Gladiator and elements from Warhammer 40K.
- 📸 Viggle's 2.0 update is used for its improved capabilities, but it still has limitations, especially with camera movement.
- 🖼️ Leonardo is used to create character images in a 9:16 format, ensuring full-body shots for detailed reference.
- 🎞️ The output from Viggle is refined using Kaiber's Motion 3.0 feature for a more cohesive and stylized result.
- 🌐 Backgrounds are given movement and life using Gen-2, then combined with the Kaiber-generated character for a unified look.
- 🎨 Video editing software like Premiere or DaVinci Resolve is used to composite the character and background, with adjustments for a more cinematic feel.
- 🎙️ Audio elements like crowd chanting are generated using AI, and text-to-speech is utilized for dialogue, although finding the right model can be challenging.
Q & A
What is the main topic of the video?
-The main topic of the video is an AI filmmaking workflow that covers everything from pre-production to generating short films, using a combination of different AI tools.
What film inspired the creation of this workflow?
-The 2016 film Rogue One, directed by Gareth Edwards, inspired the workflow: the speaker points to what they describe as its fully deepfaked character and, in particular, to how its editor assembled a feature-length story reel before the script was finished.
What specific scene from another movie is used as a reference in the video?
-The 'Are you not entertained?' scene from the movie Gladiator is used as a reference, with elements from John Carter of Mars and Warhammer 40K added to create a unique AI-generated scene.
Which AI tool is the clipped reference footage fed into?
-The clipped reference footage is fed into Viggle, which generates the initial AI video from it.
What is the significance of Viggle's 2.0 update?
-The 2.0 update improves the quality of Viggle's generated content, including better handling of dancing and other movements, and provides a more refined output for the user's projects.
How does the video creator overcome Viggle's limitations in handling camera movement?
-To work around Viggle's shaky, inconsistent handling of camera movement, the creator uses Leonardo for image-to-image references and Kaiber for additional stylization and consistency in character appearance.
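Leonardo runs in a web UI, but the image-to-image idea used here, re-rendering a frame at a low image strength so the pose survives while details are cleaned up, can be sketched with the open-source diffusers library. A minimal sketch, assuming Stable Diffusion 1.5 as the model; the filenames, prompt, and strength value are illustrative placeholders, not the creator's settings:

```python
# Minimal image-to-image sketch with the open-source diffusers library.
# Model, filenames, prompt, and strength are illustrative assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("viggle_frame.png").convert("RGB")  # hypothetical exported frame

# Low strength keeps most of the source frame (e.g. the raised arm) while the
# prompt nudges style and detail, mirroring Leonardo's "image strength" slider.
result = pipe(
    prompt="armored gladiator raising his arm in a roman arena, cinematic",
    image=init_image,
    strength=0.3,       # low = stay close to the reference
    guidance_scale=7.5,
).images[0]
result.save("fixed_frame.png")
```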
What AI tool is used to add movement and life to the background?
-Gen-2 is used to add movement and life to the background by applying simple motion prompts, such as moving to the right, which helps create a dynamic and engaging visual effect.
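Gen-2 does this with a learned video model, but for quick prototyping the basic "move to the right" effect can be approximated by sliding a crop window across a still background. A minimal OpenCV sketch, with placeholder filenames and dimensions, assuming the still is larger than the output frame:

```python
# Rough stand-in for a "move to the right" motion prompt: slide a crop window
# across a still background to fake a rightward camera pan.
# Assumes the still is larger than the 1280x720 output; names are placeholders.
import cv2

bg = cv2.imread("arena_background.png")
h, w = bg.shape[:2]
out_w, out_h, fps, seconds = 1280, 720, 24, 4

writer = cv2.VideoWriter(
    "bg_pan.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (out_w, out_h)
)
n_frames = fps * seconds
max_x = w - out_w  # total horizontal travel available in the source
for i in range(n_frames):
    x = int(max_x * i / (n_frames - 1))  # crop origin drifts right each frame
    writer.write(bg[:out_h, x:x + out_w])
writer.release()
```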
How is the final composite of character and background achieved?
-The final composite is achieved by bringing both the Kaiber character and the Gen-2 background into a video editor like Premiere or DaVinci Resolve, applying chroma key removal, and adjusting settings like choke, soften, and contrast to blend the elements seamlessly.
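For anyone who wants the same composite outside an editor, here is a minimal per-frame sketch of the technique in OpenCV/NumPy. The "choke" and "soften" controls map roughly to mask erosion and blurring; the filenames and green threshold are assumptions:

```python
# Minimal per-frame green-screen composite in OpenCV/NumPy. Premiere's
# "choke" and "soften" controls map roughly to erosion and blur on the matte.
# Filenames and the green HSV range are illustrative assumptions.
import cv2
import numpy as np

fg = cv2.imread("kaiber_character_frame.png")  # character on green background
bg = cv2.imread("gen2_background_frame.png")   # animated background frame
bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))

hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))  # select the green screen
mask = cv2.erode(mask, np.ones((3, 3), np.uint8))      # "choke": tighten matte
mask = cv2.GaussianBlur(mask, (5, 5), 0)               # "soften": feather edge

# alpha is 1 where the character is, 0 where the screen was.
alpha = (255 - mask).astype(np.float32)[..., None] / 255.0
comp = (fg * alpha + bg * (1.0 - alpha)).astype(np.uint8)
cv2.imwrite("composite_frame.png", comp)
```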
What tools are used for generating audio in the video?
-AudioGen is used to generate crowd chanting as background sound, and Typecast is used for text-to-speech dialogue, completing the audio-visual side of the AI-generated film.
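The video uses AudioGen through a free website; the same model family is also available in Meta's open-source audiocraft package. A minimal sketch, assuming that package is installed, with a placeholder description:

```python
# Generating crowd ambience with Meta's open-source audiocraft package, which
# ships the AudioGen model. Description text and duration are placeholders.
from audiocraft.models import AudioGen
from audiocraft.data.audio import audio_write

model = AudioGen.get_pretrained("facebook/audiogen-medium")
model.set_generation_params(duration=8)  # seconds of audio to generate

wavs = model.generate(["large crowd chanting and cheering in a stone arena"])
audio_write("crowd_chant", wavs[0].cpu(), model.sample_rate, strategy="loudness")
```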
What is the creator's overall assessment of this AI filmmaking workflow?
-The creator believes that while the workflow is not perfect and may not suit full-length feature films, it is quite effective for short films and pre-production, offering a more productive and innovative approach than traditional methods.
Outlines
🎬 AI Filmmaking Workflow Introduction
The speaker introduces an AI filmmaking workflow with potential from pre-production all the way to generating short films. They describe it as a kitbash of various tools and techniques, but the results are promising, as seen in examples from friends of the channel. The inspiration comes from the 2016 film Rogue One, which the speaker describes as the first major film to feature a fully deepfaked character. The speaker shares their learnings in the hope of saving time for anyone who wants to try the workflow.
🌟 Utilizing AI for Hybrid Storyboard and Animation
The speaker discusses the idea of using AI to create a hybrid storyboard/animatic/animation. They reference a 2017 interview with Rogue One editor Colin Goudie, who talked about creating a feature-length story reel before the script was finished. The speaker then describes their attempt to recreate a scene from Gladiator using AI, incorporating elements from John Carter and Warhammer 40K. They explain the process of clipping reference footage, using Viggle for the initial video generation, and Midjourney for character creation.
📸 Enhancing AI Generated Content with Additional Tools
The speaker details the process of refining the AI-generated content with additional tools: Kaiber for motion enhancement and Leonardo for background creation. They discuss the challenges of camera movement in AI video and how image references can improve the results. The speaker also covers adding character depth and stylization, as well as the importance of background consistency in the final output.
🎥 Post-Production and Audio Integration
In the final paragraph, the speaker covers post-production: compositing the character and background in a video editor like Premiere or DaVinci Resolve, and using chroma key and color correction to integrate them seamlessly. They mention adding cinematic touches like black bars for a letterbox effect. For audio, the speaker uses a free site called AudioGen for crowd chanting and another free tool, Typecast, for dialogue, picking its Frankenstein model to generate a voiceover that fits the scene.
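The letterbox effect mentioned above is simple to reproduce outside an editor: mask a 16:9 frame down to a 2.39:1 picture area with black bars. A minimal OpenCV sketch with a placeholder filename:

```python
# Letterbox matte: mask a 16:9 frame to a 2.39:1 picture area. Each bar is
# (frame_h - frame_w / 2.39) / 2 pixels tall, about 138 px at 1920x1080.
import cv2

frame = cv2.imread("composite_frame.png")  # hypothetical composited frame
h, w = frame.shape[:2]
bar = int(round((h - w / 2.39) / 2))
if bar > 0:
    frame[:bar] = 0       # top bar
    frame[h - bar:] = 0   # bottom bar
cv2.imwrite("letterboxed_frame.png", frame)
```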
Keywords
💡AI filmmaking workflow
💡Pre-production
💡Deepfake
💡Hybrid storyboard animatic
💡Viggle
💡Midjourney
💡Leonardo
💡Kaiber
💡Chroma key
💡Audio generation
💡Text-to-speech
Highlights
The speaker shares an AI filmmaking workflow that covers everything from pre-production to generating short films, showcasing its potential.
The inspiration for this workflow comes from the 2016 film Rogue One, which the speaker describes as the first major film to feature a fully deepfaked character.
Editor Colin Goudie's 2017 interview discussed creating a feature-length story reel for Rogue One before the script was finished, using footage from hundreds of movies.
The speaker's goal is to create a hybrid storyboard/animatic/animation using AI tools, borrowing the concept from Rogue One's pre-production.
The process begins by clipping out reference footage and using Viggle's 2.0 update for the initial generation of the video.
Viggle's 2.0 update includes dance templates, which can be used creatively, as demonstrated by a T-800 vs. T-1000 dance-off.
The character model is created using Midjourney, focusing on a full-body image to ensure consistency.
Viggle's output can be improved by using Leonardo for image-to-image references, especially for fine-tuning character movements.
Kaiber's new Motion 3.0 feature is praised for its unique AI video generation capabilities.
The character's arm raise was fixed by using Leonardo with a low image strength and a specific prompt, resulting in a more coherent output.
Backgrounds are made dynamic and cohesive with the character by using Gen-2 and Kaiber's stylization.
The final compositing is done in a video editor like Premiere, using chroma key and other adjustments for a polished result.
Crowd chanting audio was generated using the free site AudioGen to enhance the short film's atmosphere.
Dialogue was created using Typecast's Frankenstein model, which gave a more suitable result than other text-to-speech options.
The speaker concludes that while the method is not perfect for full-length feature films, it is useful for short films and pre-production work.
The workflow demonstrates the potential of kitbashing AI tools to create engaging content quickly and efficiently.