Defending the Technology Behind Deepfakes

  • Reference: IVEY-9B20TA03-E

  • Number of pages: 4

  • Publication Date: Jan 1, 2020

  • Source: Ivey Business School (Canada)

  • Type of Document: Article

  • Format: PDF

  • Language: English

  • Price: as low as €8.20

Description

People can no longer always trust their eyes or ears, thanks to generative adversarial networks (GANs), a specialized class of artificial intelligence models. The technology makes it relatively easy to create deepfakes: convincing clips of people doing or saying things that they never did or said. Deepfakes present a serious threat to Western civilization because they create an opportunity for bad actors to supercharge disinformation. National security experts worry about foreign actors deploying deepfakes to manipulate public opinion, and the potential for an assault on reality has put a serious regulatory target on GANs. But the technology behind deepfakes might just be what democratic nations need to keep up in the AI race. The objective of this article is to highlight how GAN technology enables AI developers to augment or complete data sets where data is unavailable, incomplete, or otherwise lacking. GAN technology has plenty of other positive applications to offer as well, including in medical research, automotive safety, and security and military contexts. Instead of having a conversation based on fear and misunderstanding, Western nations need to explore how laws, policies, and regulations grounded in age-old legal concepts such as defamation and slander can counter the threat posed by deepfakes. After all, knee-jerk calls for regulatory action risk putting unnecessary restrictions on the development of a technology that could hold the key to democracy's survival.
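As a rough illustration of the data-augmentation use the article highlights, the sketch below trains a toy GAN and then draws synthetic samples that could pad out a sparse data set. It is only a minimal sketch under assumed tooling: PyTorch, the tiny network sizes, and the stand-in sample_real distribution are illustrative choices of the editor, not anything specified by the article.

# Minimal GAN sketch: learn to generate synthetic samples that mimic a small
# real data set, so the synthetic rows can augment or complete it.
# Assumes PyTorch; the 2-D "real" distribution below is purely illustrative.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 8, 2

# Generator: maps random noise to fake data points.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)

# Discriminator: scores how "real" a data point looks (logit output).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def sample_real(batch_size):
    # Stand-in for scarce real data: points clustered near (2, -1).
    return torch.randn(batch_size, DATA_DIM) * 0.1 + torch.tensor([2.0, -1.0])

for step in range(2000):
    real = sample_real(64)
    noise = torch.randn(64, LATENT_DIM)
    fake = generator(noise)

    # Train the discriminator: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator score fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Synthetic rows that could be used to augment the original, incomplete data set.
augmented_rows = generator(torch.randn(100, LATENT_DIM)).detach()

The same adversarial training loop, scaled up, is what underlies both deepfake generation and the benign data-completion uses the article describes; the only difference is the data the generator is asked to imitate.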