Peter DiSilvio

What is a Deepfake?

Deepfakes are computer-generated media in which a person in an existing image or video is replaced with someone else's likeness and, in the case of video, with that person's voice as well. "While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive" [1].

How Do They Work?

While there have been many ways to edit and alter photos and videos since their invention, none has been as convincing or insidious as deepfakes. Where earlier editing technology required human input and ingenuity, deepfakes use artificial intelligence to synthesize faces and speech. These AI programs study hundreds of thousands of hours of video to generate more realistic-looking content. More startling, the technology and the algorithms that power it learn and improve over time, meaning deepfakes will only become harder to identify in the years ahead.
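To make the idea concrete, here is a minimal sketch of one common face-swap approach often described in connection with deepfakes: a shared encoder paired with one decoder per identity. This is an illustrative assumption rather than the method of any specific tool; the PyTorch layer sizes are arbitrary and random tensors stand in for the face images a real system would train on.

```python
# Illustrative sketch of a shared-encoder / two-decoder face-swap autoencoder.
# Random tensors are placeholders for aligned face crops of persons A and B.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
params = (list(encoder.parameters()) +
          list(decoder_a.parameters()) +
          list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # placeholder: face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder: face crops of person B

for step in range(100):  # real systems train far longer on far more footage
    optimizer.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
            loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The "swap": encode person A's face, then decode with person B's decoder,
# producing B's likeness driven by A's expression and pose.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

Because both identities pass through the same encoder, the latent code captures expression and pose while each decoder supplies one person's appearance, which is why more training footage yields more convincing swaps.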


Why is it Dangerous for the Public?

To appreciate how dangerous this technology can be, consider this video of Barack Obama that is actually comedian and filmmaker Jordan Peele, or this clip of not-Tom Cruise speaking on the importance of exfoliants. There is also the video of Richard Nixon giving a speech following the deaths of the moon landing astronauts, which, you may recall, is incredible because the President never filmed such a speech and the first men to walk on the moon returned to Earth unharmed.


The public is already skeptical of traditional news outlets, with 72% of U.S. adults saying they do not think the media does a good job of disclosing biases [2]. One's first thought on seeing these numbers might be that they have little to do with deepfakes, since the media, one would think, has the tools to discern what is real and what isn't. However, with 26% of the American public, and growing, getting their news from YouTube [3], one can see how dangerous a well-timed, well-placed, and well-promoted video might be.


Why is it Dangerous for Public Figures?

Recently, Rana Ayyub, a journalist in India, was forwarded a video that appeared to show her engaging in sex acts [4]. The video was a fake.


Ask anyone in the public eye how damaging a video like that could be, especially in the weeks or even days ahead of an election. One can imagine a scenario in which one party releases a deepfake of the other making comments that run against public opinion. Worse still, given the foreign interference in past U.S. elections, imagine a deepfake, posted by a foreign power, of a President engaged in sexual acts with someone who is not their partner, or even with a child.


With these scenarios growing more likely as the technology becomes more readily available, it is no wonder Congress was looking into cracking down on deepfakes in February 2022.


If the public cannot trust what they see online, at least to some degree, it will have a detrimental effect on future elections. As of this writing, there is very little one can do to protect oneself from a deepfake video.





