FREE Stable Diffusion Based AI Art Editor - Playground AI's "Canvas"

MattVidPro AI
23 Mar 2023 · 15:58

TLDR: The video discusses advancements in AI art image generation, highlighting the evolution from simple prompt-based image creation to the current state, where tools like Playground AI's 'Canvas' offer sophisticated editing features. The Canvas editor is compared to DALL·E 2, noting its strengths in outpainting and inpainting while also mentioning its limitations and areas for improvement. The host explores the Canvas editor's capabilities, such as generating high-resolution images, blending separate images, and fine-tuning details. The video also touches on the need for a new open-source model to advance AI art technology and concludes by encouraging viewers to experiment with the Canvas editor, which is currently free to use.


  • 🎨 **AI Art Evolution**: The AI art space has evolved significantly with the introduction of open-source models like Stable Diffusion, leading to numerous tweaks, methods, and fine-tuned models.
  • 🖼️ **DALL·E 2 Editor**: DALL·E 2 is recognized for its outpainting capabilities, allowing users to expand images by regenerating parts based on prompts.
  • 📈 **Playground AI's Canvas**: Playground AI has introduced its own canvas editor, similar to DALL·E 2's, offering fine-tuned Stable Diffusion models and its own outpainting and inpainting systems.
  • 💡 **Text-Based Image Editing**: Playground AI features a chat-like interface for text-based image editing, enabling specific requests like adding a top hat to a cat in the image.
  • 🔍 **Canvas Beta Features**: The Canvas beta by Playground AI allows users to adjust the size of the generation frame, enabling more precise editing and expansion of images.
  • 🚀 **High-Resolution Generations**: The system supports high-resolution image generation, allowing for detailed and expansive AI art creations.
  • 🔄 **Blending Images**: Users can blend separate images effectively using the inpainting feature, even without specific filters designed for blending.
  • 🔍 **Fine-Tuned Models**: Playground AI's Canvas editor provides access to fine-tuned Stable Diffusion models, enhancing the customization and quality of generated images.
  • ⏱️ **Generation Time**: Generations in the Canvas beta take longer than typical Stable Diffusion generations, reflecting the added complexity of its editing features.
  • 🆓 **Free to Use**: Playground AI offers the Canvas editor for free, allowing up to 1,000 generations per day, though quality and detail are limited after the first 50 generations.
  • ✂️ **Creative Exploration**: The Canvas beta encourages users to experiment with AI art, offering an 'infinite canvas' for continuous generation and exploration of creative ideas.

Q & A

  • What was the initial state of AI art image generation technology?

    -The initial state of AI art image generation technology was quite basic: a prompt was all that was required to generate an image.

  • How has the AI Art Space evolved since the introduction of Stable Diffusion?

    -The AI Art Space has significantly expanded with the introduction of Stable Diffusion, which is open source and has led to the addition of tweaks, new methods, and fine-tuned models by the community.

  • What is the main function of the DALL·E 2 image editor?

    -The DALL·E 2 image editor is primarily used for outpainting, which involves expanding an image by generating new content that matches the existing image's style and content.

  • What are some of the features offered by Playground AI's Canvas editor?

    -Playground AI's Canvas editor offers features such as fine-tuned Stable Diffusion models, a good prompt system with exclusionary prompts, text-based image editing, and the ability to change the size of the generation box for more precise editing.

  • How does Playground AI's Canvas editor differ from DALL·E 2 in terms of outpainting and inpainting?

    -Playground AI's Canvas editor uses its own system for outpainting and inpainting, which allows for more precise control and potentially better results than DALL·E 2, although DALL·E 2 has been a long-standing favorite for image generation.

  • What is the significance of the expandable and changeable box feature in the Canvas editor?

    -The expandable and changeable box feature allows users to select specific areas of the image for generation, enabling more targeted edits and the ability to maintain the original image's integrity outside the selected area.

  • How does the Canvas editor handle blending separate images?

    -The Canvas editor can blend separate images by using an inpainting generation frame over both images, allowing for seamless blending even when specific filters are not supported for blending.

  • What is the limitation when using filters on previously generated images in the Canvas editor?

    -When the generation frame is hovering over another previously generated image, the user is limited in terms of the filters that can be used, as some filters are not yet supported for use on existing images.

  • How does the Canvas editor compare to other AI art generation systems in terms of resolution and detail?

    -The Canvas editor, utilizing Stable Diffusion, is capable of generating high-resolution images with a lot of detail, which is one of the benefits of using an outpainting system like this for AI art generation.

  • What are some of the challenges faced by the Canvas editor during the outpainting process?

    -The Canvas editor struggles with rendering detailed and complex scenes, especially with close-up images at high resolutions. It also has limitations in blending unrelated images or generating multiple generations from a single prompt.

  • What are some of the future improvements desired for the Canvas editor?

    -Some desired future improvements for the Canvas editor include the ability to do multiple generations per prompt, faster generation times, an undo button similar to DALL·E 2's, and possibly an auto-prompter feature for more creative freedom.



🎨 Evolution and Tools of AI Art Generation

The paragraph discusses the evolution of AI art, from simple prompt-based image generation to the current state with various tools and methods. It highlights the DALL·E 2 image editor, known for outpainting, and the recent announcement of Playground AI's canvas editor. The summary also covers the capabilities of these tools, such as fine-tuned Stable Diffusion models, text-based image editing, and the unique expandable generation box feature of Playground AI's canvas editor.


🧩 Blending and Editing AI-Generated Images

This section explores the ability to blend and edit AI-generated images using the canvas editor. It discusses the process of inpainting to blend separate images and the limitations when hovering over previously generated images. The paragraph also touches on the experimental aspect of blending unrelated images without prompts and the potential for an auto-prompter feature in the future. The user expresses a desire for additional features like multiple generations per prompt, faster generation times, and an undo button.


🏝️ High-Resolution AI Image Generation and Editing

The speaker experiments with generating high-resolution images using the canvas editor, specifically focusing on outpainting a close-up image of a pirate ship. They note the challenges with close-up images and the surprising results when generating images without any prompts. The paragraph emphasizes the fun and creative aspect of using the canvas editor, the potential for tweaking images for specific results, and the excitement for future developments in AI art tools.


🚀 The Future of AI Art and the Need for New Models

In the final paragraph, the speaker reflects on the current state of AI art technology and the reliance on base models like Stable Diffusion. They express a need for an updated open-source model to further advance the field. The speaker also shares their enthusiasm for the technology and invites viewers to share their thoughts and try the canvas editor for themselves.



💡AI Art Generation

AI Art Generation refers to the process of creating visual art through artificial intelligence. It involves using algorithms and machine learning models to generate images based on prompts or existing images. In the video, AI Art Generation is the central theme, showcasing how technology has evolved to allow for the creation of intricate and detailed images, often with the help of platforms like Playground AI.

💡Stable Diffusion

Stable Diffusion is an open-source AI model used for image synthesis. It's a part of the broader AI Art Generation landscape and is known for its ability to generate high-quality images. The video discusses how Stable Diffusion is utilized within the Playground AI's Canvas editor to create and edit images, highlighting its importance in the current state of AI art tools.


💡Outpainting

Outpainting is a technique used in AI Art Generation where the AI extends an existing image beyond its original borders. The video demonstrates outpainting with the DALL·E 2 image editor and Playground AI's Canvas, showing how the AI can creatively continue the image contextually and aesthetically.
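To make the mechanics concrete, outpainting can be sketched as placing the original image on a larger blank canvas and handing the model a mask that marks which pixels to fill in. This is a conceptual illustration only; the `outpaint_canvas` helper below is hypothetical, not Playground AI's or DALL·E 2's actual API, and a real system would feed the canvas and mask to a diffusion inpainting model.

```python
import numpy as np

def outpaint_canvas(image: np.ndarray, pad: int):
    """Extend an image's canvas by `pad` pixels on every side.

    Returns the enlarged canvas plus a boolean mask marking which
    pixels the model should fill in (True = generate, False = keep).
    Hypothetical helper, for illustration only.
    """
    h, w, c = image.shape
    canvas = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=image.dtype)
    canvas[pad:pad + h, pad:pad + w] = image   # original pixels stay as context
    mask = np.ones(canvas.shape[:2], dtype=bool)
    mask[pad:pad + h, pad:pad + w] = False     # protect the source image
    return canvas, mask

# A 64x64 RGB image padded by 32 pixels per side becomes a 128x128
# canvas; only the new border region is flagged for generation.
img = np.full((64, 64, 3), 255, dtype=np.uint8)
canvas, mask = outpaint_canvas(img, 32)
```

Only the border region is regenerated, which is why outpainting preserves the original image while extending it in a matching style.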


💡Inpainting

Inpainting is the process of editing an image by filling in or erasing parts of it to create a new composition. The video script describes how DALL·E 2 and Playground AI's Canvas can perform inpainting, allowing users to replace or remove sections of an image and have the AI generate a coherent result.


💡Prompt

In the context of AI Art Generation, a prompt is a text description or request given to the AI to guide the generation of an image. The video emphasizes the use of prompts in directing the AI to create specific imagery, such as 'a little village living up in this little cloud,' which the AI then attempts to generate.

💡Canvas Editor

The Canvas Editor is a tool within Playground AI that allows users to edit and generate images using Stable Diffusion. It is highlighted in the video as a new and improved feature that offers capabilities like outpainting, inpainting, and text-based image editing, setting it apart from other AI art generation tools.

💡Text-Based Image Editing

Text-Based Image Editing is a feature that enables users to describe changes they want to make to an image using natural language, which the AI then interprets to make the corresponding edits. The video script describes this feature as a way to 'talk to' the AI, giving it instructions like 'give this cat a top hat'.


💡Midjourney

Midjourney refers to a specific AI art generation model and platform mentioned in the video. It is used as a source for importing high-quality image generations into Playground AI's Canvas editor, indicating that it is recognized for its strong performance in generating images.

💡Fine-Tuned Models

Fine-Tuned Models are AI models that have been trained on specific data sets to improve their performance for particular tasks. In the video, Playground AI's Canvas editor is noted to use fine-tuned Stable Diffusion models, which enhances the quality and specificity of the image generation process.

💡Generation Frame

The Generation Frame is the designated area within the Canvas Editor where the AI generates or edits images based on user prompts. The video script discusses how the size of this frame can be adjusted, allowing for precise control over which parts of the image the AI will affect.
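The geometry of an adjustable generation frame can be sketched with plain rectangle math: the frame's overlap with the existing image is the context that is preserved, and the remainder is what the model generates. The `frame_regions` helper below is hypothetical, purely to illustrate the idea:

```python
def frame_regions(frame, image_rect):
    """Split a generation frame into preserved context and new area.

    Both arguments are rectangles as (x, y, width, height). Returns the
    overlap rectangle (context taken from the existing image) and the
    fraction of the frame that must be newly generated. Hypothetical
    helper, for illustration only.
    """
    fx, fy, fw, fh = frame
    ix, iy, iw, ih = image_rect
    # Intersection of the two rectangles.
    ox, oy = max(fx, ix), max(fy, iy)
    ow = max(0, min(fx + fw, ix + iw) - ox)
    oh = max(0, min(fy + fh, iy + ih) - oy)
    new_fraction = 1 - (ow * oh) / (fw * fh)
    return (ox, oy, ow, oh), new_fraction

# A 512x512 frame shifted 256 px right over a 512x512 image: half the
# frame overlaps the image (context) and half must be generated.
context, new_frac = frame_regions((256, 0, 512, 512), (0, 0, 512, 512))
```

Sliding the frame further off the image raises the newly generated fraction; keeping some overlap is what lets the model continue the picture coherently.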


💡Beta

Beta refers to a testing phase of a software product or tool that is not yet fully released to the public. The video script mentions that the Canvas editor by Playground AI is in beta, indicating that it is still being developed and improved upon, and users can expect new features and changes in the future.


AI art image generation has evolved significantly since last year with the introduction of various components and methods.

Stable Diffusion has become an open-source technology that has been enhanced with tweaks, new methods, and fine-tuned models.

The DALL·E 2 image editor is a staple for editing AI art images, offering outpainting capabilities.

Playground AI has announced its own canvas editor, similar to DALL·E 2's outpainting editor, offering unique features.

Playground AI's pricing and user interface are considered favorable, offering a good user experience.

The Canvas editor by Playground AI uses fine-tuned Stable Diffusion models and has its own system for outpainting and inpainting.

Users can change the size of the generation box in the Canvas editor, allowing for more precise image editing.

The Canvas editor allows importing high-quality generations from Midjourney and offers the ability to zoom in for detailed editing.

The editor can generate images based on prompts, and users can continuously generate to refine results.

Playground AI's Canvas editor has the ability to blend separate images effectively, creating seamless outpainting photos.

The editor can generate images without prompts, exploring the latent space of AI generation models.

The Canvas editor is capable of high-resolution image generation, allowing for detailed and expansive creations.

The editor is currently in beta and offers a significant leap forward compared to other Stable Diffusion inpainting and outpainting models.

Playground AI provides the Canvas editor for free, allowing users to generate up to a thousand images per day.

After 50 generations, the quality and details may be limited, but the editor remains a powerful tool for AI art creation.

The Canvas editor is expected to improve with updates, including features like multiple generations per prompt and faster generation times.

The editor's ability to make minor edits and fine-tune images is seen as its main selling point for users seeking specific final results.

The Canvas editor represents a step forward in the AI art space, offering an infinite canvas for continuous generation and creative exploration.