Tech CEO Shows Shocking Deepfake Of Kari Lake At Hearing On AI Impact On Elections

Forbes Breaking News
18 Apr 2024 · 08:40

TLDR: Tech CEO Rijul Gupta, founder of Deep Media, addresses the impact of deep fakes on society and elections during a hearing. He explains deep fakes as AI-manipulated content that can mislead or harm, emphasizing the rapid improvement and low cost of producing such content. Gupta outlines the importance of understanding generative AI technologies like Transformers, GANs, and diffusion models. He highlights the threats posed by deep fakes to political integrity and public trust, with examples like fake videos of political figures. Gupta advocates for a collaborative approach involving government, AI companies, platforms, journalists, and detection companies to combat the issue. He showcases Deep Media's efforts in assisting media outlets and being part of initiatives to detect and label real and fake content, aiming to set a gold standard for the industry.

Takeaways

  • 🚀 **Innovative Start**: Rijul Gupta, a tech entrepreneur and hacker, founded Deep Media in 2017 to address the impending issue of deep fakes.
  • 🧠 **Understanding Deep Fakes**: Deep fakes are AI-manipulated images, audio, or video (not text) with the potential to mislead or harm.
  • 🤖 **Core Technologies**: Three fundamental technologies behind generative AI are the Transformer, Generative Adversarial Network (GAN), and Diffusion Model.
  • 💰 **Economic Aspect**: The cost to produce deep fake videos is decreasing rapidly, posing a significant societal threat due to their affordability and potential for misuse.
  • 📈 **Rapid Advancement**: The quality of deep fakes is improving swiftly, nearing perfection, which could lead to a future where it's hard to distinguish real from fake content.
  • 🗳️ **Impact on Elections**: Deep fakes have been used to influence political opinions and elections, exemplified by fake videos of political figures.
  • 🌐 **Online Content Concern**: There's a risk that by 2030, up to 90% of online content could be deep fakes, which could erode trust in all media.
  • 🤝 **Collaborative Solution**: A multi-stakeholder approach involving government, AI companies, platforms, journalists, and detection companies is necessary to combat deep fakes.
  • 🛡️ **Detection and Prevention**: Deep Media works with news organizations and is part of initiatives to detect deep fakes and promote the labeling of real and fake content.
  • 📊 **AI's Perspective**: Gupta demonstrated how AI analyzes media, focusing on the technical aspects of detection to minimize false positives and negatives.
  • 🌟 **High-Quality Deep Fakes**: Even the highest quality deep fakes, like the one of Kari Lake, can be detected by advanced systems, showcasing the ongoing battle between deep fake creation and detection.

Q & A

  • What is the primary concern expressed by Rijul Gupta regarding deep fakes?

    -Rijul Gupta is concerned that deep fakes can mislead, harm, and even dismantle society, and that their rapidly improving quality and falling cost could lead to widespread misuse, particularly in political elections.

  • What are the three fundamental technologies that Rijul Gupta asks legislators to keep in mind?

    -The three fundamental technologies Gupta mentions are the Transformer, a neural-network architecture; the Generative Adversarial Network (GAN); and the Diffusion Model. These technologies are the basis of generative AI.

  • What is the current cost of producing deep fake videos and how is it expected to decrease?

    -Currently, the cost of producing deep fake videos is about 10 cents per minute, and it is expected to decrease to 1 cent per minute very quickly.

  • How does Gupta describe the potential societal impact of deep fakes?

    -Gupta suggests that deep fakes have the potential to create a society akin to George Orwell's '1984', where the distinction between real and fake becomes blurred, leading to plausible deniability and a loss of trust in media content.

  • What solution approach does Gupta propose to combat the deep fake problem?

    -Gupta proposes a collaborative solution involving government stakeholders, generative AI companies, platforms, investigative journalists, and deep fake detection companies. He emphasizes the need for these groups to work together to solve the problem.

  • What role does Gupta's company, Deep Media, play in addressing the deep fake issue?

    -Deep Media is actively involved in developing technology to detect deep fakes. They work with journalists and media outlets, are part of DARPA's SemaFor and AI FORCE programs, and are members of the Content Authenticity Initiative to label and detect real and fake content.

  • What are the challenges in detecting deep fakes while avoiding false positives?

    -The challenge is to accurately detect deep fakes without mistakenly identifying real content as fake. This requires both a low false positive rate and a low false negative rate, which Deep Media's platform aims to achieve (see the sketch after this Q&A list for how those two rates are defined).

  • How does the AI technology perceive and process audio for deep fake detection?

    -AI technology perceives audio through visual representations, such as graphs, that depict the voice's characteristics. Deep Media's detectors analyze these representations to distinguish between real and synthetic audio.

  • What is the significance of the deep fake example featuring Kari Lake?

    -The Kari Lake deep fake example demonstrates the high quality of current deep fake technology, using proprietary generative models to create a convincing fake. It underscores the urgency and importance of implementing effective detection methods.

  • What is the 'tragedy of the commons' in the context of deep fakes?

    -The 'tragedy of the commons' refers to the difficulty of managing shared resources without degradation. In the context of deep fakes, it signifies the collective harm caused by the misuse of generative AI technology, which can lead to widespread misinformation and fraud.

  • How does Gupta envision the role of legislation in addressing the deep fake problem?

    -Gupta believes that proper legislation can help internalize the negative externalities of deep fakes, such as fraud and misinformation, and promote a flourishing AI ecosystem that uses AI for good.

  • What is the role of the free market according to Gupta's perspective on AI?

    -Gupta is a believer in the free market and thinks that AI can be used for good. He sees the market as a mechanism to address the deep fake issue by developing and adopting technologies that can detect and mitigate the spread of deep fakes.
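
The answers above refer to keeping false positive and false negative rates low without spelling out what those rates measure. As a minimal, hypothetical sketch (plain Python with made-up labels and predictions, not Deep Media's actual evaluation code), the snippet below shows how the two rates are computed for a binary real-vs-fake detector:

```python
# Minimal, hypothetical sketch: computing the false positive and false negative
# rates of a binary real-vs-fake detector. Convention: label 1 = "fake", 0 = "real".

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # real flagged as fake
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # fake passed as real
    n_real = sum(1 for t in y_true if t == 0)
    n_fake = sum(1 for t in y_true if t == 1)
    return (fp / n_real if n_real else 0.0, fn / n_fake if n_fake else 0.0)

# Hypothetical evaluation set: 6 real clips and 4 fake clips.
labels      = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
predictions = [0, 0, 0, 1, 0, 0, 1, 1, 0, 1]

fpr, fnr = error_rates(labels, predictions)
print(f"false positive rate: {fpr:.1%}")  # real content wrongly labeled fake
print(f"false negative rate: {fnr:.1%}")  # fakes that slipped through
```

Keeping both rates low at once is the balance the answer above describes: false positives erode trust in genuine content, while false negatives let deep fakes circulate as real.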

Outlines

00:00

🚀 Introduction to Deep Fakes and Their Impact

Rijul Gupta introduces himself as a tech entrepreneur and hacker who has been building applications and websites since the age of 10. He discusses his academic background in machine learning from Yale and his subsequent focus on generative AI. Gupta founded Deep Media in response to the impending issue of deep fakes. He emphasizes the need to understand what deep fakes are: AI-manipulated images, audio, or video that can mislead or harm. He also highlights the importance of three key technologies behind generative AI: the Transformer, the Generative Adversarial Network (GAN), and the diffusion model. Gupta stresses the rapid improvement and decreasing cost of deep fakes and their potential to disrupt society. He provides examples of deep fakes used for political purposes and the dangers of plausible deniability. He concludes by stating that a collaborative effort involving various stakeholders is necessary to solve the deep fake problem.

05:01

🛠️ Solutions to the Deep Fake Problem

Gupta presents a solution-oriented approach to the deep fake issue, emphasizing the need for cooperation among government, generative AI companies, platforms, investigative journalists, and deep fake detection companies. He shares that Deep Media has assisted journalists from CNN, the Washington Post, and Forbes in detecting and reporting on deep fakes. The company also works with the WITNESS organization and participates in DARPA's SemaFor program and the Content Authenticity Initiative, all aimed at addressing the deep fake challenge. Gupta explains that his company uses AI to understand and detect deep fakes, maintaining a low false positive and false negative rate. He demonstrates how AI perceives audio and video to identify deep fakes, using examples to illustrate the process. Gupta concludes by reiterating the importance of staying ahead in the ongoing battle against deep fakes and offers to answer any questions, positioning himself as a resource for policymakers seeking technical insights on the matter.
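
The demonstration summarized above describes the detector "perceiving" audio as visual representations. The sketch below illustrates that idea under stated assumptions: a synthetic 440 Hz tone stands in for recorded speech, and SciPy's generic spectrogram is used rather than whatever features Deep Media's detectors actually compute.

```python
# Rough illustration (assumptions stated above, not Deep Media's detector): turning an
# audio signal into a spectrogram, i.e. a 2-D time-frequency image a classifier can inspect.
import numpy as np
from scipy.signal import spectrogram

sample_rate = 16_000                                   # 16 kHz, common for speech
t = np.linspace(0, 2.0, 2 * sample_rate, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)              # synthetic tone standing in for a voice

# Short-time Fourier analysis: how the frequency content evolves over time.
freqs, frames, power = spectrogram(audio, fs=sample_rate, nperseg=512)
log_power = 10 * np.log10(power + 1e-10)               # decibel scale, as usually plotted

# This 2-D array is the "picture of the voice" a detector could classify as
# real vs. synthetic, e.g. with a model trained on labeled clips.
print(log_power.shape)                                 # (frequency bins, time frames)
```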

Keywords

💡Deepfake

A deepfake refers to a synthetically manipulated image, audio, or video created with AI that can be used to deceive or cause harm. In the video, the speaker emphasizes the growing prevalence and potential dangers of deepfakes, particularly in the context of political misinformation and manipulation, which can disrupt society and elections.

💡Generative AI

Generative AI is a type of artificial intelligence that can generate new content, such as images, audio, or text, that is similar to the content it was trained on. The speaker discusses generative AI as the underlying technology behind deepfakes, noting that it requires significant computational resources and data to function.

💡Transformer

A Transformer is a type of AI architecture that is particularly effective for handling sequential data like text or audio. It is one of the three fundamental technologies mentioned by the speaker that are critical to generative AI, and it plays a key role in the creation and detection of deepfakes.
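
For a concrete picture of the Transformer's core operation, here is a minimal self-attention sketch in NumPy; the sequence length, dimensions, and random weights are illustrative assumptions only, and real systems stack many such layers with learned parameters.

```python
# Minimal NumPy sketch of scaled dot-product self-attention, the core Transformer step.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Self-attention over a sequence x of shape (tokens, dim)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])  # how strongly each token attends to the others
    return softmax(scores) @ v               # weighted mix of the value vectors

rng = np.random.default_rng(0)
seq = rng.normal(size=(5, 8))                # 5 tokens, 8-dimensional embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(seq, w_q, w_k, w_v).shape)  # (5, 8): one updated vector per token
```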

💡Generative Adversarial Network (GAN)

A GAN is a type of AI system consisting of two parts: a generator that creates content and a discriminator that evaluates it. GANs are used to create high-quality deepfakes by iteratively improving the generated content. The speaker includes GANs as one of the core technologies behind generative AI.
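
To make the generator/discriminator interplay concrete, the toy loop below trains a tiny GAN on synthetic 2-D points; it is an illustrative PyTorch sketch under assumed sizes and learning rates, not how production deepfake generators are built.

```python
# Toy sketch of the generator-vs-discriminator loop that defines a GAN (illustrative
# 2-D data and tiny networks; not a production deepfake pipeline).
import torch
from torch import nn

gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> "content"
disc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # content -> real/fake logit
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for real data: points on a circle (images or audio in practice).
    angle = torch.rand(n, 1) * 2 * torch.pi
    return torch.cat([angle.cos(), angle.sin()], dim=1)

for step in range(200):
    # 1) The discriminator learns to tell real samples from generated ones.
    real = real_batch()
    fake = gen(torch.randn(64, 8)).detach()
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) The generator learns to fool the discriminator.
    fake = gen(torch.randn(64, 8))
    g_loss = bce(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"final d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```

Even at this scale the iterative improvement described above is visible: the discriminator's verdict is the only training signal the generator receives, so each side improves in response to the other.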

💡Diffusion Model

A diffusion model is a technique used in AI to generate high-quality images or videos by gradually refining an initial, noisy version. The speaker mentions diffusion models as part of the technology stack that enables the creation of increasingly convincing deepfakes.
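
The numerical sketch below shows the two halves of that idea, forward noising and step-by-step denoising; the noise schedule and shapes are illustrative assumptions, and an untrained placeholder stands in for the learned denoising network.

```python
# Illustrative sketch of the diffusion idea: data is gradually destroyed with noise
# (forward process), and generation runs the process in reverse with a denoiser.
import numpy as np

rng = np.random.default_rng(0)
T = 50                                      # number of diffusion steps
betas = np.linspace(1e-4, 0.05, T)          # noise schedule (assumed, for illustration)
alphas_bar = np.cumprod(1.0 - betas)

x0 = rng.normal(loc=3.0, scale=0.2, size=16)    # stand-in for clean data (pixels, audio, ...)

# Forward process: x_t is a progressively noisier version of x0.
t = T - 1
x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * rng.normal(size=x0.shape)

def predict_noise(x, step):
    # In a real diffusion model this is a trained neural network; here it is a placeholder.
    return np.zeros_like(x)

# Reverse process (sampling): start from pure noise and denoise step by step.
x = rng.normal(size=x0.shape)
for step in reversed(range(T)):
    eps = predict_noise(x, step)
    x = (x - betas[step] / np.sqrt(1 - alphas_bar[step]) * eps) / np.sqrt(1 - betas[step])
    if step > 0:
        x = x + np.sqrt(betas[step]) * rng.normal(size=x.shape)  # small noise re-injected

print(x_t.std(), x.std())   # heavily noised data vs. the (untrained) sample
```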

💡Political Assassination

In the context of the video, political assassination refers to the use of deepfakes to damage the reputation or credibility of political figures. The speaker provides examples of deepfakes involving President Biden, President Trump, and Hillary Clinton, which were used to mislead the public for political purposes.

💡Plausible Deniability

Plausible deniability is a situation where someone can claim that they are not responsible for an action, especially because they can provide a believable excuse. The speaker warns that deepfakes can lead to a scenario where politicians or other individuals can falsely claim that real content is a deepfake to avoid accountability.

💡Free Market

The free market refers to an economic system where prices are determined by supply and demand with little to no government intervention. The speaker expresses belief in the free market's ability to harness AI for good and suggests that proper legislation can address the negative externalities, such as misinformation, caused by deepfakes.

💡Negative Externality

A negative externality is an unintended negative consequence that affects a third party who is not involved in an economic transaction. In the video, the speaker describes the harm caused by deepfakes, such as misinformation, as a negative externality that needs to be addressed through legislation.

💡Content Authenticity Initiative

The Content Authenticity Initiative is a collaborative effort involving companies like Adobe to label and distinguish between real and fake content. The speaker mentions their participation in this initiative as part of the solution to the deepfake problem, aiming to authenticate content and prevent misinformation.

💡DARPA SemaFor and AI FORCE Programs

DARPA's SemaFor (Semantic Forensics) and AI FORCE programs are collaborative efforts involving researchers, corporations, and government resources to address the challenges posed by deepfakes. The speaker highlights their involvement in these programs as a way to bring together different stakeholders to find solutions to the deepfake issue.

Highlights

Rijul Gupta, the founder of Deep Media, testified before a hearing on the impact of AI on elections, discussing the rise of deep fakes and their potential to disrupt society.

Gupta emphasized the importance of defining deep fakes as synthetically manipulated AI images, audio, or video that can mislead or harm.

He outlined the need for legislators to understand the technology behind deep fakes, including the Transformer architecture, Generative Adversarial Networks (GANs), and diffusion models.

Gupta highlighted the rapid improvement and decreasing cost of deep fake technology, with video production costs dropping to as low as 10 cents per minute.

He warned that by 2030, up to 90% of online content could be deep fakes, which could lead to a crisis of trust in digital media.

Deep fakes have already affected elections, with examples including fake videos of political figures like President Biden, President Trump, and Hillary Clinton.

Gupta pointed out the dual threat of deep fakes: political assassination and creating false narratives to make politicians seem more relatable.

He expressed concern that the larger threat lies in the impact of fake content on real content, leading to plausible deniability and potential misuse by politicians and businesses.

Gupta called for a collaborative effort between government stakeholders, generative AI companies, platforms, investigative journalists, and deep fake detection companies to address the issue.

Deep Media has been involved in detecting and reporting on deep fakes, working with major news outlets and being part of initiatives like DARPA's SemaFor and AI FORCE programs.

The company is part of the Content Authenticity Initiative with Adobe, aiming to label real and fake content to combat misinformation.

Gupta is a proponent of the free market and believes AI can be a force for good, but acknowledges that deep fakes represent a market failure.

He presented a vision where proper legislation can internalize the negative externalities of deep fakes and foster a flourishing AI ecosystem.

Deep Media uses advanced AI to both create and detect deep fakes, setting a gold standard in the industry by keeping their generative AI technology internal for detector training.

Gupta demonstrated how their system can differentiate between real and fake content, maintaining a low false positive and false negative rate.

An example of a high-quality deep fake featuring Kari Lake was shown, illustrating the capabilities of current deep fake technology and the importance of detection systems.

Gupta concluded by offering to answer questions and provide information on tech solutions to the deep fake problem from a technical perspective.