CPU Deepfake Tutorial (No Graphics Card Required!)

Deepfakery
14 Sept 2020 · 08:29

TLDR: This tutorial guides viewers through creating deepfake videos using only a CPU, without the need for a graphics card. Using DeepFaceLab 2.0, the video covers downloading and setting up the software, extracting images from videos, creating face sets, training the deepfake model, and merging the final video. It emphasizes settings optimized for CPU-only training and provides tips for improving the quality of deepfake videos.

Takeaways

  • 😀 This tutorial teaches how to create deepfake videos using a CPU without a graphics card.
  • 💻 The software used is DeepFaceLab 2.0, build 08_02_2020, and it's run on a Windows PC.
  • 📂 It's recommended to close other applications to free up CPU resources for the deepfake process.
  • 🔧 The tutorial uses the 'Quick 96' preset trainer with settings optimized for CPU-only training.
  • 📥 Download DeepFaceLab from GitHub, using the provided torrent magnet link or direct download.
  • 📁 After extracting the files, the software is ready to use with no installation required.
  • 🖼️ The 'workspace' folder organizes images and trained model files for the deepfake process.
  • 🎥 Video clips are processed to extract frames, which are then used to create face sets for the deepfake.
  • 🤖 The face extraction process can be customized for face size, number of faces per image, and image quality.
  • 🔍 After extraction, face sets can be viewed and edited to remove unwanted or unusable faces.
  • 🤖 Training the deepfake model involves loading image files and running iterations to improve accuracy.
  • 🎞️ The final step is merging the trained faces onto the destination video, which can be customized for quality.
  • 📹 The resulting deepfake video can be viewed and further training can be done to enhance the quality.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is creating deepfake videos using only a CPU, without the need for a graphics card.

  • Which software is used in the video for creating deepfakes?

    -DeepFaceLab 2.0, build 08_02_2020, is used in the video for creating deepfake videos.

  • What are the system requirements for running DeepFaceLab as mentioned in the video?

    -The system requirement mentioned is a Windows PC, and it's recommended to close all other applications that use CPU resources.

  • How can one obtain DeepFaceLab according to the video?

    -DeepFaceLab can be downloaded from the GitHub repository of iperov, either through a torrent magnet link or from mega.nz.

  • What is the purpose of the 'workspace' folder in DeepFaceLab?

    -The 'workspace' folder in DeepFaceLab holds the images and trained model files for the deepfake process.

  • What is the recommended FPS for extracting images from a video in the tutorial?

    -The tutorial suggests using a lower FPS for long source videos, such as entering 15 for a 30 fps video or 10 fps for a video with a higher frame rate.

  • How does one select the face size for the deepfake training in the video?

    -In the video, it is mentioned to type the letter 'f' for the standard face size when prompted for face size selection during the face extraction process.

  • What is the significance of the 'data_src' and 'data_dst' folders in the workspace?

    -The 'data_src' folder contains the source video's images, while 'data_dst' contains the destination video's images, which are used to produce the face sets for the deepfake.

  • What is the purpose of viewing face sets after extraction in the tutorial?

    -Viewing face sets allows the user to remove unwanted or unsuitable faces from the project, such as distorted, blurry, or obstructed images, to improve the quality of the final deepfake.

  • How does one begin the training process for the deepfake model in DeepFaceLab?

    -The training process is started by double-clicking the file labeled '6 train quick 96', selecting 'CPU' if not automatically chosen, and then allowing DeepFaceLab to load the image files and begin the first iteration of training.

  • What is the final step to create the deepfake video as described in the video?

    -The final step is to merge the new deepfake frames into a video file with the destination audio by double-clicking the file labeled '8 merged to mp4'.

Outlines

00:00

🖥️ Deepfake Video Creation with a CPU

This segment of the video tutorial shows viewers how to create deepfake videos using only a CPU, without a dedicated graphics card. The tutorial uses DeepFaceLab 2.0, build 08_02_2020, and requires a Windows PC with other applications closed to free up CPU resources. The process starts with downloading DeepFaceLab from GitHub and extracting it (no installation is needed), followed by setting up the workspace and preparing video clips. The tutorial then guides users through extracting images from the videos, selecting appropriate frame rates, and choosing output file types. It continues with extracting face sets, adjusting face sizes, dimensions, and quality, and finally viewing and refining the face sets by removing unwanted or problematic images.
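
As a rough picture of what the image-extraction step does under the hood, here is a minimal Python sketch that calls ffmpeg to pull frames from the source clip at a reduced frame rate. It assumes ffmpeg is available on the PATH, and the paths and the 15 fps value are illustrative; DeepFaceLab's own batch file wraps its bundled ffmpeg and prompts for these values instead.

```python
# Illustrative sketch only, not DeepFaceLab's actual script: extract frames
# from the source clip at a reduced frame rate for CPU-friendly processing.
# Assumes ffmpeg is on the PATH; paths and the 15 fps value are examples.
import subprocess
from pathlib import Path

workspace = Path("workspace")
frames_dir = workspace / "data_src"
frames_dir.mkdir(parents=True, exist_ok=True)

subprocess.run(
    [
        "ffmpeg",
        "-i", str(workspace / "data_src.mp4"),  # source video clip
        "-vf", "fps=15",                        # sample 15 frames per second
        str(frames_dir / "%05d.png"),           # numbered PNG frames
    ],
    check=True,
)
```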

05:02

🤖 Training and Merging Faces for Deepfakes

The second part of the video details the training and merging processes for creating deepfake videos. Training begins by running the '6 train quick 96' command, where users can enter a model name and select the CPU for processing. Training accuracy is monitored through a preview window, and users are advised on how to interpret the loss values and training progress. Once training is satisfactory, the model is saved and the trainer is exited. Merging involves running the '7 merge quick 96' command to place the trained faces onto the destination frames. Interactive merging settings are adjusted with keyboard commands to refine the facial mask erosion and blur, with the goal of seamlessly blending the deepfake face into the video frames. The tutorial concludes with instructions on merging the deepfake frames with the destination audio to create the final video file, which is then viewed to assess the deepfake's quality.
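
To make the final step concrete, the sketch below shows roughly what '8 merged to mp4' amounts to: encoding the merged frames into an H.264 video and copying the audio track from the destination clip. ffmpeg on the PATH, the folder names, and the 30 fps value are assumptions for illustration; DeepFaceLab's batch file handles this automatically.

```python
# Illustrative sketch only: mux the merged deepfake frames with the
# destination clip's audio. Paths and the frame rate are example values.
import subprocess
from pathlib import Path

workspace = Path("workspace")

subprocess.run(
    [
        "ffmpeg",
        "-framerate", "30",                                        # match the destination video's fps
        "-i", str(workspace / "data_dst" / "merged" / "%05d.png"), # merged deepfake frames
        "-i", str(workspace / "data_dst.mp4"),                     # destination clip, used for its audio
        "-map", "0:v", "-map", "1:a?",                             # video from frames, audio (if any) from clip
        "-c:v", "libx264", "-pix_fmt", "yuv420p",                  # widely playable H.264 output
        "-shortest",
        str(workspace / "result.mp4"),
    ],
    check=True,
)
```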

Keywords

💡Deepfake

A deepfake is synthetic media in which one person's likeness is superimposed onto another person's body or face in a video or image. In the context of the video, deepfakes are created using a CPU without the need for a graphics card, showcasing how accessible the technology for creating convincing fake videos has become. The video demonstrates the process of creating a deepfake video using DeepFaceLab software, which is a testament to the evolving nature of media manipulation.

💡CPU

CPU stands for Central Processing Unit, which is the primary component of a computer that performs most of the processing inside the computer. The video emphasizes the capability of creating deepfake videos using only a CPU, indicating that high-end graphics cards, traditionally required for such tasks, are not necessary. This is significant as it lowers the barrier to entry for individuals interested in experimenting with deepfake technology.

💡DeepFaceLab

DeepFaceLab is a software application used for creating deepfake videos. In the video, DeepFaceLab 2.0, build 08_02_2020, is specifically mentioned as the tool used to demonstrate making deepfake videos on a CPU. The software is highlighted for its ability to perform the complex image processing tasks that deepfake creation requires.

💡Preset Trainer

A preset trainer in the context of the video refers to a pre-configured set of parameters within DeepFaceLab that is tuned for a specific type of deepfake training. The 'quick 96' preset mentioned is used here for CPU-only training, with settings adjusted to accommodate the computational limitations of a CPU compared to a GPU.

💡Batch Files

Batch files are Windows scripts containing a series of commands that the command interpreter executes in sequence. In the video, batch files automate the individual steps of the deepfake process, such as extracting images from the videos, training, and merging the final deepfake frames. This automation streamlines the workflow and reduces its complexity for users.
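
For instance, because each step is just a batch file, the whole workflow could in principle be driven from a short Python script instead of double-clicking each file. The file names below follow the numbering used in this tutorial but are approximate and can differ between DeepFaceLab builds; each script still prompts interactively for its own settings.

```python
# Illustrative sketch only: run the numbered DeepFaceLab batch files in order.
# The .bat names are approximate and may differ between builds; each step
# still asks for its own settings (fps, face type, device, etc.) on the console.
import subprocess
from pathlib import Path

dfl_root = Path(r"C:\DeepFaceLab")  # wherever the downloaded build was extracted (example path)

steps = [
    "2) extract images from video data_src.bat",
    "3) extract images from video data_dst FULL FPS.bat",
    "4) data_src faceset extract.bat",
    "5) data_dst faceset extract.bat",
    "6) train Quick96.bat",
    "7) merge Quick96.bat",
    "8) merged to mp4.bat",
]

for step in steps:
    subprocess.run(["cmd", "/c", str(dfl_root / step)], check=True)
```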

💡FPS (Frames Per Second)

Frames Per Second (FPS) is a measure of how many individual frames are displayed in one second of video. The script mentions adjusting FPS during the image extraction process, which affects the number of images generated from the video. A lower FPS results in fewer images, which can be beneficial for managing computational resources when creating deepfakes on a CPU.
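
A quick back-of-the-envelope calculation (with illustrative numbers) shows why the extraction FPS matters for CPU-only work:

```python
# Illustrative arithmetic: a lower extraction FPS means fewer frames to
# detect faces in and train on. Values below are example numbers only.
clip_seconds = 60        # a one-minute source clip
native_fps = 30          # the clip's own frame rate
extraction_fps = 15      # the reduced rate entered at the extraction prompt

print(clip_seconds * native_fps)      # 1800 frames at the full rate
print(clip_seconds * extraction_fps)  # 900 frames at the reduced rate
```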

💡Face Sets

In the context of the video, face sets refer to collections of facial images extracted from video frames that are used to train the deepfake model. The script describes the process of extracting face sets from both the source and destination videos, which is a critical step in creating a convincing deepfake, as it provides the raw material for the model to learn and replicate facial features.
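
DeepFaceLab's extractor uses its own neural face detector, but the idea behind building a face set can be sketched with OpenCV's bundled Haar cascade: find a face in each extracted frame, crop it, resize it to the trainer's working resolution (96 px for Quick 96), and skip crops that are too blurry to help, much as the tutorial later removes bad faces by hand. Everything below (detector, threshold, folder names) is an illustrative stand-in, not DeepFaceLab's actual code.

```python
# Illustrative sketch only: a crude stand-in for the faceset-extraction step
# using OpenCV's Haar cascade rather than DeepFaceLab's real detector.
import cv2
from pathlib import Path

frames_dir = Path("workspace/data_src")
faces_dir = frames_dir / "aligned"
faces_dir.mkdir(parents=True, exist_ok=True)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

for frame_path in sorted(frames_dir.glob("*.png")):
    img = cv2.imread(str(frame_path))
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for i, (x, y, w, h) in enumerate(detector.detectMultiScale(gray, 1.1, 5)):
        face = cv2.resize(img[y:y + h, x:x + w], (96, 96))  # Quick 96 trains at 96x96
        # Drop crops that are too blurry to be useful (low Laplacian variance).
        if cv2.Laplacian(face, cv2.CV_64F).var() < 50:
            continue
        cv2.imwrite(str(faces_dir / f"{frame_path.stem}_{i}.jpg"), face)
```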

💡Training

Training, in the context of the video, refers to the process of teaching the deepfake model to accurately map and replicate facial features from the source video onto the destination video. The script details the use of DeepFaceLab's 'quick 96' preset for training, which involves loading the image files and running iterations to improve the model's accuracy. This is a critical phase in deepfake creation, as the quality of the training directly impacts the final output.
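
As a small illustration of reading training progress (not DeepFaceLab code), one way to judge when iterations stop paying off is to smooth the reported loss values and watch for the curve to flatten; the numbers below are made-up example values:

```python
# Illustrative sketch only: smooth a noisy sequence of reported loss values
# with an exponential moving average; when the smoothed loss stops falling,
# further iterations bring diminishing returns. Example numbers only.
def smoothed(losses, alpha=0.1):
    avg = losses[0]
    out = []
    for x in losses:
        avg = alpha * x + (1 - alpha) * avg  # exponential moving average
        out.append(round(avg, 3))
    return out

example_loss = [1.20, 0.95, 0.78, 0.61, 0.50, 0.44, 0.41, 0.40, 0.40]
print(smoothed(example_loss))  # flattens toward the end: mostly converged
```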

💡Merge

Merging, as described in the video, is the final step in creating a deepfake video where the trained model's output is combined with the destination video's frames to produce the final video. The script mentions using 'quick 96' for merging, which involves interactive adjustments to ensure a seamless integration of the deepfake face onto the destination video.

💡Interactive Merger

The interactive merger is a feature within DeepFaceLab that lets users manually adjust and fine-tune the deepfake face's appearance in the final video. As mentioned in the script, users can change the erode mask and blur mask values to refine the deepfake's realism. This interactive approach gives users greater control over the final output, enabling them to reach the desired level of quality in their deepfake videos.
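
The effect of the erode and blur mask settings can be pictured with a small compositing sketch: eroding shrinks the mask so the swap boundary moves inward, and blurring feathers the edge so the pasted face fades into the destination frame. This is a simplified NumPy/OpenCV illustration under assumed inputs, not the merger's actual code.

```python
# Illustrative sketch only: how erode and blur mask values shape the composite.
import cv2
import numpy as np

def composite(dst_frame, fake_face, mask, erode_px=10, blur_px=20):
    """Blend fake_face onto dst_frame using a single-channel 0-255 mask."""
    if erode_px > 0:
        kernel = np.ones((erode_px, erode_px), np.uint8)
        mask = cv2.erode(mask, kernel)            # pull the mask boundary inward
    if blur_px > 0:
        k = blur_px | 1                           # Gaussian kernel size must be odd
        mask = cv2.GaussianBlur(mask, (k, k), 0)  # feather the mask edge
    alpha = (mask.astype(np.float32) / 255.0)[..., None]
    blended = fake_face.astype(np.float32) * alpha + dst_frame.astype(np.float32) * (1.0 - alpha)
    return blended.astype(np.uint8)
```

Raising the erode value hides rough mask edges, while raising the blur value softens the transition at the cost of slightly smearing the area around the boundary, which mirrors how the keyboard adjustments change the interactive merger's preview.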

Highlights

Learn to create deepfake videos using only a CPU, no graphics card required.

Tutorial uses DeepFaceLab 2.0 for creating deepfakes.

Ensure a Windows PC is available and close unnecessary applications to free up CPU resources.

DeepFaceLab's Quick 96 preset trainer is used with settings optimized for CPU-only training.

Download DeepFaceLab from GitHub, selecting either a torrent magnet link or a direct download.

No setup is required for DeepFaceLab; simply extract the files to start using the software.

The 'workspace' folder will hold images and trained model files for the deepfake process.

Two video files, 'data_src' and 'data_dst', are used to produce face sets for deepfake creation.

Custom deepfake videos can be made by moving video clips into the 'workspace' folder.

Extract images from video using a specified frames per second to optimize processing time and file size.

Face extraction runs on the CPU by default; if a GPU is installed, manually select 'CPU' at the device prompt.

Choose face sizes and image dimensions that balance quality and file size for the deepfake model.

View and edit face sets to remove unwanted or low-quality faces before training the model.

Begin training the deepfake model with the '6 train quick 96' file, using CPU for processing.

Use the training preview to monitor accuracy and adjust training as needed for best results.

Merge the trained faces using the '7 merge quick 96' file to create the final deepfake video.

Adjust erode and blur mask values interactively to fine-tune the deepfake video's appearance.

Merge the new deepfake frames with destination audio to complete the video creation process.

View the final deepfake video and consider retraining to improve quality if necessary.

Experiment with different merger settings to achieve the desired deepfake video outcome.