AI Image Generation Algorithms - Breaking The Rules, Gently

Atomic Shrimp
25 Feb 2023 · 09:37

TLDR: The video explores AI image generators, focusing on DALL-E from OpenAI and Stable Diffusion from Stability AI. It compares their outputs to previous algorithms, noting improvements and occasional misunderstandings. The creator discusses the algorithms' ability to generate realistic images and their emergent properties, highlighting their limitations in text understanding and output. The video also humorously experiments with generating text-like images and shares an intriguing discussion on the potential archetypal nature of English language as represented by AI.


  • 🎥 The video discusses the creator's informal exploration of AI image generators, focusing on the phenomenon rather than the technology.
  • 📹 The creator gained access to more advanced algorithms, DALL-E from OpenAI and Stable Diffusion from Stability AI, and shares their experiences with these tools.
  • 🔍 The video compares the results from these advanced algorithms to previous ones, noting both improvements and disappointments in the generated images.
  • 📝 The importance of verbose text prompts is highlighted to achieve desired outputs, as seen with the 'boy with apple' and 'oil painting' examples.
  • 💡 AI algorithms are capable of creating realistic images based on their training, such as understanding refraction and shadows in the context of 'sunlit glass of flowers on a pine table'.
  • 🧠 The algorithms are not sentient but have been trained to perform tasks that mimic human perception and creativity, like 'knowing' or 'imagining'.
  • 🚫 The video mentions that asking for text output is discouraged as the algorithms are not trained for written output, but the creator finds the results interesting and amusing nonetheless.
  • 🖼️ The 'outpainting' feature of DALL-E is demonstrated, showing how it can extend an image by filling in plausible details.
  • 🎭 The creator collaborates with Simon Roper, a language expert, to read AI-generated outputs in an Old English style, exploring the potential archetypal language aspects.
  • 🌟 The video concludes with the idea that sometimes not following guidelines can lead to interesting discoveries and experiences.

Q & A

  • What was the main focus of the creator's previous videos on AI image generators?

    -The creator's previous videos focused on exploring various artificial intelligence image generators from a more phenomenological perspective rather than a technical one.

  • Which AI algorithms did the creator gain access to after making the initial videos?

    -The creator gained access to DALL-E from OpenAI and Stable Diffusion from Stability AI.

  • How did the creator approach testing the capabilities of the new AI algorithms?

    -The creator decided to use the same text prompts with these new algorithms as those used in the previous videos to see how the results would compare.

  • What was the general outcome of using the same text prompts with the new AI algorithms?

    -The results were mixed, with some improvements and triumphs, but also some disappointments, depending on the prompt used.

  • How did the creator describe the difference between previous algorithms and DALL-E or Stable Diffusion in terms of output?

    -Previous algorithms were more focused on creating art-like images, while DALL-E and Stable Diffusion aimed to provide more literal responses to the text prompts.

  • What does the creator mean when saying the AI algorithms 'know' or 'imagine' things?

    -The creator means that the algorithms have been sufficiently trained and configured to perform tasks that, if done by humans, would be described as knowing or imagining. It does not imply sentience or self-awareness.

  • How did the AI algorithms demonstrate their ability to create realistic images?

    -The algorithms could generate images like a sunlit glass of flowers on a pine table with plausible shadows and light effects, indicating an understanding of refraction, shadows, and how sunlight interacts with objects.

  • What was the creator's experience when asking the AI for text output?

    -The creator found it interesting and amusing, as the AI produced outputs that visually resembled text but did not actually form coherent words or sentences, showing that the AI knows what writing looks like but not how to write.

  • What did the creator do with the 'outpainting' feature of DALL-E and Stable Diffusion?

    -The creator used the 'outpainting' feature to extend an existing image into a larger view by filling in plausible details, such as extending a sign or creating more of a scene from a poem.

  • How did the creator explore the idea of AI-generated text representing an archetypal version of English?

    -The creator had the AI generate text outputs and then asked Simon Roper, a YouTuber specializing in language, to read them in an Old English style to see if there was any archetypal essence to the generated words.

  • What was the overall message the creator wanted to convey with their exploration of AI image generation?

    -The creator wanted to show that sometimes not following guidelines can lead to interesting discoveries and fun experiences, without advocating breaking laws or safety protocols.



🎨 AI Image Generators: Exploration and Experimentation

The paragraph discusses the creator's informal exploration of various artificial intelligence image generators, focusing on studying them as a phenomenon rather than just as technology. The creator has recently gained access to more advanced algorithms, DALL-E from OpenAI and Stable Diffusion from Stability AI, and shares the outcomes of using them with the same text prompts as in previous videos. The results were a mix of triumphs and disappointments. The creator compares the new outputs with previous ones, noting improvements and areas where the algorithms did not perform as expected. The paragraph highlights the need for more verbose text prompts with these advanced algorithms to achieve desired outputs, such as generating an oil painting of a boy with an apple in the style of Johannes van Hoytl.


🤖 AI's Image Generation Process and Text Output Curiosities

This paragraph delves into the process of how AI algorithms generate images, emphasizing that they are not sentient but have been trained to perform tasks that mimic human understanding of concepts like refraction and shadows. The creator challenges the skepticism about the uniqueness of generated images by changing prompts and receiving plausible results. The discussion then shifts to the limitations of AI in text generation, explaining that while AI can produce images of text, it has not been trained to write or produce written output. The creator finds it interesting and amusing to request text output despite the advice against it, resulting in outputs that visually resemble text but are not actual written content. The paragraph concludes with a creative experiment involving the outpainting feature of DALL-E and Stable Diffusion, and a collaboration with a YouTuber, Simon Roper, who reads AI-generated text in Old English style, adding an extra layer of curiosity to the exploration of AI's capabilities.



💡artificial intelligence image generators

Artificial intelligence image generators refer to AI systems capable of creating visual content based on given input or prompts. In the context of the video, the creator explores these systems not as technology per se but as a cultural and creative phenomenon. The video showcases the evolution from earlier algorithms to more advanced ones like DALL-E from OpenAI and Stable Diffusion from Stability AI, highlighting improvements in image generation quality and the ability to follow complex prompts more accurately.


💡DALL-E

DALL-E is an advanced AI algorithm developed by OpenAI known for its ability to generate images from textual descriptions. It represents a significant leap in AI image generation capabilities, as it can understand and execute complex prompts that earlier algorithms struggled with. The video script describes how DALL-E responded to prompts with more accurate and detailed images compared to previous systems.

💡Stable Diffusion

Stable Diffusion is another sophisticated AI image-generating algorithm developed by Stability AI. It is designed to produce high-quality images from textual descriptions, aiming to provide precise visual outputs that closely match the user's request. The script highlights how Stable Diffusion, like DALL-E, is a step forward in AI's ability to comprehend and execute detailed and nuanced prompts.
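The "diffusion" in the name refers to a denoising-diffusion process: training corrupts images with noise step by step, and the model learns to reverse that corruption. As a rough numeric sketch of the forward (noising) half of that process, with a toy noise schedule and no real model involved (all values here are assumptions for illustration, not Stable Diffusion's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                    # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.2, T)         # toy noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)           # cumulative product used in closed form

x0 = np.ones(8)                           # a trivial "image": 8 pixels, all 1.0
noise = rng.standard_normal(8)

def q_sample(x0, t, noise):
    """Closed-form forward process: x_t = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*eps."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

x_early = q_sample(x0, 0, noise)      # almost unchanged: signal dominates
x_late = q_sample(x0, T - 1, noise)   # almost pure noise: signal nearly gone

print(alpha_bars[0], alpha_bars[-1])
```

Early steps preserve most of the signal while late steps are dominated by noise; the trained model's job is to run this in reverse, turning noise back into an image guided by the text prompt.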

💡text prompts

Text prompts are textual descriptions or requests given to AI image generators to produce specific images. These prompts can range from simple to complex and are a crucial aspect of how AI systems interpret and generate visual content. The video emphasizes the importance of crafting detailed and descriptive text prompts to guide AI algorithms in creating the desired output.

💡realistic images

Realistic images refer to visual outputs generated by AI that closely mimic real-world appearances. The ability to create realistic images is a significant milestone in AI image generation, as it demonstrates the system's understanding of various visual elements such as lighting, shadows, textures, and object shapes. The video script discusses how advanced AI algorithms can generate realistic images that are almost indistinguishable from photographs.

💡emergent properties

Emergent properties are characteristics or behaviors that arise from complex systems as a result of interactions among their parts. In the context of AI learning, these properties are not explicitly programmed but result from the training process. The video script mentions the understanding of refraction as an emergent property, where the AI learns to generate images with accurate depictions of light and glass without being directly taught these concepts.

💡verbose text prompt

A verbose text prompt is a detailed and lengthy textual description provided to an AI system to guide the generation of a specific image. These prompts help the AI understand the nuances and complexities of the desired output, leading to more accurate and relevant visual content. The video emphasizes the need for verbose prompts when using advanced AI algorithms to achieve the desired results.
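In practice, verbose prompts are often assembled from a subject clause plus style and detail clauses. A tiny sketch of that composition (the structure and wording below are assumptions for illustration, not conventions stated in the video):

```python
# Compose a verbose prompt from parts; each clause narrows the output.
subject = "a boy holding an apple"
style = "oil painting in the style of a Flemish Renaissance portrait"
details = "dramatic lighting, fine brushwork, dark background"

prompt = f"{subject}, {style}, {details}"
print(prompt)
```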


💡outpainting

Outpainting is a feature of some AI image-generating algorithms that allows them to extend an existing image by creating additional, plausible sections that blend seamlessly with the original content. This capability showcases the AI's ability to predict and generate visual elements based on its understanding of the image's context and composition. The video script describes how outpainting was used to extend an image of a sign, resulting in a larger, coherent visual output.
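Mechanically, outpainting amounts to placing the original image on a larger canvas and masking the new border region for the model to fill. A toy sketch of that setup, with plain edge replication standing in for the generative fill (an assumption for illustration, not how DALL-E actually generates the new pixels):

```python
import numpy as np

original = np.arange(16, dtype=float).reshape(4, 4)   # stand-in 4x4 "image"

pad = 2
canvas = np.pad(original, pad, mode="edge")           # 8x8 canvas, edges extended

# The mask marks which pixels a real model would be asked to invent (True = new).
mask = np.ones_like(canvas, dtype=bool)
mask[pad:-pad, pad:-pad] = False

print(canvas.shape, int(mask.sum()))
```

The original pixels stay untouched in the centre; only the masked border is handed to the model, which is why outpainted extensions blend seamlessly with the source image.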

💡text output

Text output refers to the generation of written or typographic content by AI systems. While AI image generators are primarily designed for visual content creation, they can also produce text-like outputs based on their exposure to images containing text during training. The video script explores the interesting and sometimes amusing results of requesting text output from these algorithms, despite it not being their primary function.

💡archetypal version of English

An archetypal version of English refers to a fundamental or primal representation of the language, which may encompass the most basic visual or structural elements of words and phrases. In the video, the creator speculates that AI-generated text outputs might represent an archetypal version of English, as the algorithms have learned to draw pictures of words rather than understanding their linguistic meaning.

💡not following guidelines

Not following guidelines in this context refers to the creator's decision to experiment with AI image generators beyond the recommended or expected use cases. The video highlights that while there are certain guidelines for using AI, such as avoiding requests for text output, deliberately not adhering to these guidelines can lead to interesting and unexpected discoveries. This approach encourages creativity and exploration of the AI's capabilities.


The speaker discusses their informal exploration of artificial intelligence image generators, focusing on the phenomenon rather than the technology.

Access to more advanced algorithms, DALL-E from OpenAI and Stable Diffusion from Stability AI, has been gained for further exploration.

The results from using the same text prompts as in previous videos show a mix of triumphs and disappointments.

A comparison between previous and current outputs reveals an improvement in the quality of generated images.

The speaker notes that some algorithms are specifically trying to return something that looks like a work of art, while others aim to return exactly what was asked for.

To achieve a desired output, more verbose text prompts are often required with the newer algorithms.

The algorithms have been trained to create realistic images, such as a sunlit glass of flowers on a pine table, based on their understanding of refraction and shadows.

The speaker changes the prompt to a sunlit glass sculpture of a lobster and a Citroën 2CV, receiving plausible images that demonstrate the algorithm's ability to generate novel combinations.

The algorithms sometimes misunderstand sentence attributes, leading to images that don't perfectly match the prompt.

The speaker discusses the limitations of the algorithms in understanding and producing written text, despite their knowledge of what writing looks like.

Interesting and amusing results are found when asking for text output, even though it's advised against.

The speaker's experiments with text output evoke a sense of archetypal English, suggesting the algorithms have learned to make primitive word shapes abstracted from their meaning.

The speaker collaborates with Simon Roper, a YouTuber specializing in language, to read some of the AI-generated outputs in an Old English style.

The video concludes with the speaker reflecting on the fun of not always following guidelines and encourages viewers to explore the edges of AI image generation.