Midjourney's Amazing New Feature PLUS: Stable Video 1.1 from Stability.AI!
TLDR
The video discusses a Midjourney update on style consistency in AI image generation, introducing a new feature that combines image prompting with style tuning. It explores the use of style references and multiple image URLs to create a new style, demonstrating the process on the Midjourney Alpha website. The video also covers the capabilities and limitations of the feature, including the influence of single and combined style references. Additionally, it looks at early access to Stable Video from Stability AI, highlighting its open-source underpinnings and features like camera motion and zooming, while noting that some features are still in development.
Takeaways
- 🚀 Introduction of a new feature in Midjourney for style consistency, combining image prompting and style tuning.
- 🎨 Use of image URLs alongside prompts to create a new style, accessible via the Midjourney Alpha website.
- 📈 Access to the Midjourney Alpha website requires having generated a certain number of images.
- 🔗 Explanation of the --sref parameter for referencing images and its application in creating styled images.
- 🌟 Demonstration of how the new feature can yield different results by adjusting the influence of reference images.
- 📸 Difference between style referencing and simple image referencing, with examples of the outcomes.
- 🤖 Limitations of the feature, such as not being able to create consistent characters yet.
- 🔄 The ability to combine multiple images for style references, resulting in blended and unique outputs.
- 📚 Availability of a free PDF guide on Gumroad with more information on the new feature.
- 🎥 Stability AI's Stable Video platform, running Stable Video Diffusion 1.1 in beta, with options to start from an image or a text prompt.
- 🌐 The inclusion of camera motion options like lock, shake, tilt, orbit, and pan in Stable Video.
- 🎞️ Showcase of the quality of generated videos and the potential for creative use in various scenarios.
Q & A
What is the main focus of the Midjourney update discussed in the transcript?
-The main focus of the Midjourney update is style consistency, specifically a new feature that combines image prompting with style tuning to create a new style based on one or more provided image URLs.
How does the new style reference feature work in Midjourney?
-The style reference feature works by adding the --sref parameter followed by the URL of the image you are referencing. This tells Midjourney to generate an image in a style influenced by the referenced image.
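As a minimal sketch (the URL is a hypothetical placeholder), a style-referenced prompt on the Alpha website or in Discord looks something like this:
```
/imagine prompt: Lara Croft exploring a jungle temple --sref https://example.com/reference-style.png
```
The generation keeps the prompted subject but adopts the palette, lighting, and rendering style of the referenced image.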
What is the current access status for the new Midjourney Alpha website?
-Access to the new Midjourney Alpha website has been opened to users who have generated more than 5,000 images, and users who have generated 1,000 images are expected to gain access soon.
How can users control the influence of each image URL in the style reference feature?
-Users can control the influence of each image URL by applying weights to each reference, and the overall intensity of the style reference can be adjusted with values ranging from 1 to 1,000.
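A hedged sketch of the weighting syntax (URLs hypothetical): per-image weights are appended with a :: suffix after each reference URL, and the overall style weight is set with --sw, which accepts values up to 1,000 (default 100):
```
/imagine prompt: a cyberpunk woman in a neon alley --sref https://example.com/styleA.png::2 https://example.com/styleB.png::1 --sw 500
```
Here styleA.png influences the result twice as strongly as styleB.png, and --sw 500 amplifies the combined style reference overall.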
What is the difference between style referencing and simple image referencing?
-Style referencing differs from simple image referencing in where the image's influence lands: an image prompt (a URL placed at the start of the prompt) pulls in the content of the reference, while a style reference draws only on its aesthetic, blending that style with the prompted subject for a more stylistically consistent output.
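A side-by-side sketch of the two approaches (URL hypothetical; the # lines are annotations, not part of the prompts):
```
# Image prompt: the reference influences the content of the result
/imagine prompt: https://example.com/reference.png Lara Croft in a jungle

# Style reference: the reference influences only the style of the result
/imagine prompt: Lara Croft in a jungle --sref https://example.com/reference.png
```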
What are some limitations of the new style reference feature?
-The style reference feature does not currently support consistent characters and can become temperamental when pushed too far, especially when using three style references that do not have a thematic connection.
What is the current status of Stable Video from Stability AI?
-Stable Video from Stability AI is currently in beta and free to use during this period. The platform runs Stable Video Diffusion 1.1, which is open source and may be the underlying technology behind other platforms.
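Because Stable Video Diffusion is open source, it can also be run locally. Below is a minimal sketch using Hugging Face's diffusers library, assuming the publicly released img2vid-xt checkpoint (the hosted platform runs the newer 1.1 weights; the file names and parameter values here are illustrative):
```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the open-source SVD image-to-video checkpoint in fp16 to fit consumer GPUs
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Start from a single conditioning image at SVD's expected resolution
image = load_image("starting_frame.png").resize((1024, 576))

generator = torch.manual_seed(42)
frames = pipe(
    image,
    decode_chunk_size=8,   # decode latents in chunks to reduce VRAM use
    motion_bucket_id=127,  # higher values produce more motion in the clip
    generator=generator,
).frames[0]

export_to_video(frames, "generated.mp4", fps=7)
```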
What are some of the camera motion options available in Stable Video?
-In Stable Video, users can lock the camera, shake it, tilt down, orbit, pan, and zoom in and out. There are also experimental camera motion options to explore.
How does the voting system work in Stable Video?
-After generating a video, users can vote on which of the generations from other users they think looks good. This interactive feature allows for community involvement in the creative process and can be a way to pass time while waiting for generations to complete.
What are the text-to-video options in Stable Video?
-For text-to-video, users can choose between three aspect ratios and a number of different styles. They input a text prompt and select from four generated options to find the one that best suits their needs.
What is the overall impression of the creative AI space based on the transcript?
-The overall impression is that the creative AI space is rapidly advancing, with new features and platforms being developed and improved. The speaker is excited about the potential of these tools and looks forward to seeing the progress in the near future.
Outlines
🎨 Introducing Midjourney's Style Consistency Feature
The paragraph discusses the introduction of a new style consistency feature in Midjourney, an AI platform for image generation. It explains that users can supply one or more image URLs along with a prompt to create a new style, and likens the feature to a blend of image prompting and style tuning. The video walks through the Midjourney Alpha website, which is accessible to certain users, and shows how to issue commands to generate images in specific styles, such as referencing a Lara Croft image. The paragraph highlights the differences between this feature and simple image referencing, and explores combining multiple images as style references. It also notes the feature's limitations, particularly in maintaining consistent characters, and that it is still in the alpha phase of development.
🌐 Exploring Style References and Stable Video
This paragraph delves deeper into the intricacies of using style references in Midjourney. It describes how the influence of each image URL can be controlled and how combining different images can lead to unique and interesting results. It also touches on the challenges of using three unrelated style references and suggests that thematic connections improve outcomes. The focus then shifts to Stable Video, Stability AI's platform for Stable Video Diffusion. It discusses the open beta access and the features available for generating videos from images or text prompts, with examples of generated videos, including character animations and establishing shots, along with comments on quality and potential improvements. The paragraph concludes by emphasizing the rapid advancement of the creative AI space and the excitement for future developments.
🚀 Exciting Updates in AI Image and Video Generation
The final paragraph summarizes the updates and features in AI image and video generation. It mentions the new capabilities of Midjourney's style consistency feature and the potential for powerful combinations with other commands, and points to the free PDF available on Gumroad for further information. Turning to Stable Video, it discusses the platform's early access and the open-source nature of its underlying technology, with insights into the types of videos that can be generated, including character animations and text-to-video options. The paragraph concludes with a reflection on the field's progress and anticipation for future advancements, leaving the audience excited about the potential of AI in creative content creation.
Keywords
💡Midjourney Update
💡Style References
💡Stable Diffusion
💡Image Prompting
💡Style Tuning
💡Community Feed
💡Discord
💡Alpha Version
💡Style Influence
💡Gumroad
💡Beta Period
Highlights
Introduction of a Midjourney update focusing on style consistency.
Exploration of a new feature that combines image prompting with style tuning.
Use of image URLs with prompts to create a new style, demonstrated on the Midjourney Alpha website.
Access to the Midjourney Alpha website is currently limited, but will soon be available to more users.
The ability to drag and drop an image for immediate style referencing.
Influence of reference images on the generated content, such as changing Lara Croft's appearance to resemble the reference.
Combining two different images as style references to create a blended style, like a cyberpunk woman and a dog Samurai.
Control over the influence of each image URL through the use of weights.
The provision of a free PDF on Gumroad detailing the information, with donations appreciated.
Challenges with using three style references, leading to unusual results.
The potential of style referencing to inspire new creative directions, such as generating an astronaut in a coffee shop.
The limitation of the feature in maintaining consistent characters, to be addressed by the upcoming --cref parameter.
The ability to increase the overall strength of style reference images with the --sw parameter.
Introduction to Stability AI's platform for Stable Video Diffusion 1.1, which is open source.
Options to start with an image or text prompt for video generation.
Features like camera lock, shake, tilt, orbit, pan, and zoom available for video generation.
The experimental camera motion feature and its potential for interesting results.
The community voting system for generations from other users.
Examples of generated videos, including a pirate ship made of Swiss cheese and a crime film character.
Text video options with different aspect ratios and styles, demonstrated with a digital art style.
The current free access to Stable Video during its beta period.
Anticipation for the progress in the creative AI space and its potential impact by the end of the year.