Deepfakes: A Digital Transformation Leads to Misinformation
Formal Metadata
Title: Deepfakes: A Digital Transformation Leads to Misinformation
Number of Parts: 17
License: CC Attribution 3.0 Germany. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/55726 (DOI)
Transcript: English (auto-generated)
00:01
Hi everyone, and thank you so much for the invitation to present at the 23rd International Conference on Grey Literature. My name is Mika Noor, and I am a third-year PhD student at the University of California, Irvine, where I specialize in researching misinformation
00:21
online, specifically deepfakes. I am really proud to present this paper, which I co-authored with Julia Gelfand, the Applied Sciences and Engineering Librarian at the University of California, Irvine. We want to say a special thank you to the conference
00:40
organizers. We really wish that we could all gather and talk more about this in person, but with COVID-19 in the way, we are sad to miss it, and hopefully we'll get to see folks next year. So, getting into it: our paper is on deepfakes, a digital transformation that leads to misinformation, and this presentation serves as
01:08
a highlight that parses through the main points of our paper. We really look forward to your responses, thoughts, and feedback as we build on the relationship between gray literature and deepfakes.
01:24
So, when it comes to the coronavirus COVID-19 pandemic, people were faced with trying to convey the medical consequences of the pandemic and to direct citizens to act responsibly and rationally in
01:53
the form of social distancing guidelines, recommendations, etc. This had the public questioning mandates, whether or not to honor social distancing and hygiene like washing your hands, and the effort to focus on scientific evidence became convoluted through misinformation,
02:09
conspiracy theories, and opinions that lacked medical data. That has been challenging, and it frames our presentation today: an example of how misinformation has flourished across different media establishments and outlets over the past two years, really testing their credibility
02:28
and legitimacy at every turn. While COVID-19 is one of many large examples of how misinformation is compiled and examined, there are lots of other
02:41
troubling mediums out there, and a lot of other areas that we need to look at, so that we can be more preventative and more proactive, and identify research gaps to see what this new frontier is, especially when it comes to gray literature. And when it comes to gray literature, we're talking about the fact that we've got open-access
03:03
digital publishing that's not peer-reviewed, foreign elements and artifacts, and multiple formats that exist out there. So when we look at the spectrum of gray, we look at the spectrum of information that's provided to us. How do we determine and build on the parallels with emerging technologies and new frontiers?
03:22
What does it mean to be born digital, especially in 2021? That's when we look at how information is produced at all levels of government, academia, business, and industry, in electronic and print formats, not necessarily controlled by commercial publishing, where publishing is not even the primary
03:45
activity of how we produce a body of work, how we produce traditional scholarly literature that is often peer-reviewed. And when we look at these, we see parallels with some deepfake strategies and products, because they might not be the primary output of their sources, and let's just say deepfakes are definitely not peer-reviewed at this point in time.
04:06
So what are deepfakes? Deepfakes are various combinations of artificial intelligence, neural networks, and algorithms that typically replace a person's likeness, or their audio, so that it appears authentic, even though it's not real,
04:28
it's not true. How do these alterations work? Let's dig into this. There are lots of ways to make deepfakes: videos of people that appear to be true, or sound true, but actually aren't true at all and have been manipulated in some way.
04:43
You can do that through body movements; you can change artifacts and elements on a face, especially when it comes to facial recognition software; you can alter language; you can use text-to-speech to build a catalog of voices to replicate or duplicate something that sounds like what the person is
05:04
saying, but that they have never actually uttered in real life. And so, to help prove that point, we want to demonstrate a couple of deepfakes that have been made, so that for the purposes of this presentation, you have good knowledge of what a deepfake is and what the examples might be.
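For the technically curious, the most common face-swap approach uses a shared encoder with a separate decoder per identity; at inference time, person A's face is encoded and then decoded with person B's decoder. Below is a minimal, illustrative PyTorch sketch of that architecture; the layer sizes, resolutions, and training stub are our own simplifications, not any particular tool's implementation.

```python
# Sketch of the classic deepfake face-swap idea: one shared encoder learns a
# common latent representation of faces, one decoder per person learns to
# reconstruct that person. The swap routes A's encoding through B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        z = self.fc(z).view(-1, 64, 16, 16)
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketch): each decoder learns to reconstruct its own person's faces.
faces_a = torch.rand(8, 3, 64, 64)           # stand-in for aligned face crops of A
recon_a = decoder_a(encoder(faces_a))
loss = nn.MSELoss()(recon_a, faces_a)        # ...backprop, and likewise for B

# The "swap": encode person A's face, decode with person B's decoder.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))     # A's pose and expression, B's identity
```

The design point is that the shared encoder is pushed to learn identity-agnostic structure (pose, lighting, expression), while each decoder supplies the identity, which is why thousands of images per face are needed, as the next clip describes.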
05:23
And I'm going to skip back and try to play this video again. Let's see if this works. [Video clip plays:] ...to help families refinance their homes, to invest in things like high-tech manufacturing, clean energy, and the infrastructure that creates new jobs.
05:52
See, I would never say these things, at least not in a public address, but someone else would. Someone like Jordan Peele.
06:12
And sort of learning to recreate that person's face by looking at the thousands of images over and over and over.
06:24
Like, a lot more research than you would think would go into making a goofy video or something like that. Truly surprising for me. Yeah, I was just really surprised. I didn't do any retouching on that video; that was just using the technology that was available from the machine learning side.
06:47
So you have a sense of what it means to create a deepfake. It's pretty accessible. You saw him discuss how he barely retouched the face swap in that video. So let's talk a little bit more about the literature that we reviewed in our paper. We cover everything from the rise and ethics of deepfakes.
07:06
What does it mean if we don't develop detection strategies? Can deepfakes invoke personal and societal harm while threatening the foundations of society? We also looked at algorithmic detection. What does it mean? How can we use artificial intelligence, computers, and
07:24
neural networks to figure out whether or not content is correct, whether or not it's original, whether it's been manipulated? That's really another part of the arms race when it comes to misinformation and the sheer variety of false content that's available.
07:43
Can computers identify: is this real or is this fake? And even if it passes muster with computer engineering, can people detect it with the naked eye, with our own sight, our own volition, and our own ability to judge the accuracy or the trustworthiness of what we're seeing?
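As a rough sketch of what that algorithmic detection looks like in practice, here is a minimal binary classifier over face crops. Real detectors are far deeper and often exploit temporal or frequency-domain artifacts, so treat the model size, labels, and threshold below as illustrative assumptions only.

```python
# Sketch of algorithmic deepfake detection as binary classification:
# a small CNN scores 64x64 face crops as real (0) or fake (1). This only
# shows the overall shape of the approach, not a production detector.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit: probability of "fake" after sigmoid
)

frames = torch.rand(4, 3, 64, 64)                 # stand-in for video face crops
labels = torch.tensor([[0.], [1.], [1.], [0.]])   # hypothetical ground truth

# One training step (sketch): binary cross-entropy on the raw logits.
loss = nn.BCEWithLogitsLoss()(detector(frames), labels)

with torch.no_grad():
    p_fake = torch.sigmoid(detector(frames))      # per-frame fake probability
    verdict = (p_fake > 0.5).squeeze(1)           # flag frames scored as fake
```

The catch, as the talk notes, is the arms race: as detectors improve, generation methods are retrained to evade exactly these kinds of classifiers.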
08:06
How do human beings verify whether what they're watching or reading is true or false? And that gets to what we're really here to talk about, which is this area of information disorder: the
08:20
definitions that we are adopting, and the sentiments that distinguish true and false messages, when it comes to misinformation, disinformation, and malinformation. What are the differences between the three? Broadly: misinformation is false information shared without intent to harm, disinformation is false information deliberately created to deceive, and malinformation is genuine information used to inflict harm. And as we still adapt and adopt these definitions, there is an element of the spectrums of gray when it comes to misinformation.
08:46
We still haven't even figured out how to identify it, because we have a variety of categories for what content we consider misinformation, and which content is disinformation, especially when it comes to the intent of that content.
09:05
And we know that there is a variety of forms of deepfakes out there that are equally disturbing. But for the purposes of this paper, as we draw parallels between gray literature and spectrums of gray, we chose deepfake videos. There are really three main types of deepfakes that currently exist and
09:25
are more widely disseminated, whether they're created by good actors, bad actors, or neutral actors. The first is entertainment: content of individuals singing or dancing, even though that's not who they are. Traditionally, you see these in apps of people trying to put Nicolas
09:42
Cage's face into Willy Wonka, even though he was not the actor who played Willy Wonka. The second is humor and political satire: putting an SNL actor's face on Hillary Clinton, say, and making comedic relief that's been marked as parody, which is attached to a different form of speech and a different form of regulation.
10:03
When you put content out under the auspices of parody, there are different laws and regulations at play. And lastly, there's content meant to warn viewers: what are the implications of deepfake videos? How can this be a potential for bad? How is this possibly threatening the trust and safety of our foundations?
10:26
And what does it mean to be a warning for society? What are the potential impacts of this going wrong? A great example is former President Barack Obama, who in this clip you'll see is being deepfaked.
10:44
You'll see in this video that he is saying things that sound real, but that he never said. [Video clip plays.]
11:01
Now with former President Obama taking on fake news, except it turns out it is not really President Obama in this PSA. This is a clear example of technology that could become more widely used. ABC's David Wright is here with more. David, this video is proof that we can't believe everything that we see online. That is right, Paula. Good morning. They say that the camera never lies, but technology is advancing so fast that it can lie with greater and greater effectiveness.
11:27
And that's the point of this new video from Jordan Peele. Our enemies can make it look like anyone is saying anything at any point in time. Former President Barack Obama, right? So, for instance, they could have me say things like, I don't know, Killmonger was right.
11:46
Wrong. You see, I would never say these things, but someone else would. Someone like Jordan Peele. Comedian Jordan Peele actually produced this video to warn about the future of fake news.
12:01
So, again, that was a political official, a former President of the United States. Sounded real? It was Jordan Peele doing a voiceover while somebody else used artificial intelligence to adjust Barack Obama: manipulating his eyebrows and his eyes, twitching his nose, and making sure the movements were in sync.
12:21
So let's talk about it: what is the good, the bad, and the future of both gray literature and deepfakes? You'll see that there are so many combinations and available apps out there. It's not that deepfakes are this piece of content that can be used to deceive and are hard to make,
12:41
and that you have to be in a basement with a high-powered computer and lots and lots of coding knowledge. No. Because of the way we even look at technology, the way we implement technical expertise, we need our gray. What does it even mean to be a technical expert? These days, anybody with a smartphone, a mobile device, et cetera, is a technical expert.
13:03
I know that's hard to believe; sometimes you're just trying to get something to print and can't. But we have people out there that can use After Effects-based tools. There are apps made not just in the United States but in other countries such as China, Brazil, and India, where anyone can take a picture of you,
13:22
a picture of your loved one, or a picture of that guy across the street, and swap it onto an existing video, making it seem true, with zero technical expertise. All you need is a high-powered cell phone. And that gets back to the spectrum of gray: there is a spectrum of false content that can be made in a video format.
13:45
There's fabricated content, where you're deliberately designing to deceive; you're trying to make something completely false. There's manipulated content, where the actual content has been manipulated: some of the original artifacts are there, but they've been transposed or
14:02
manipulated a little bit to perhaps carry a different intention, whether good, bad, or neutral. Then you have imposter content, where genuine sources are impersonated to seem as if they are true, but they're not. You've also got false context, where you're mixing the genuine with false contextual information.
14:24
Then you have misleading content, which uses information to frame an issue or an individual that might not have been in that position to begin with. The last two: you have false connection, which is when headlines or visual captions don't actually support or relate to the content but are meant to lead you in a different direction.
14:45
And lastly, you have satire and parody: fake content that's intended for social commentary. I know that was a lot of different types of content, but the gist of it is: how are you manipulating, doctoring, and creating that level of gray, that level of,
15:01
you know, that space that isn't quite black or white when it comes to intent? That's really, at the end of the day, what we're asking about. You have people utilizing these techniques and technologies, whether accessible to them with just a little bit of knowledge or with a lot of knowledge. And the question is: how are they manipulating or presenting this information, and what is the intent they're trying to emote and evoke?
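Since that was a lot of categories in quick succession, here is the same spectrum of false content restated as a simple lookup structure; the short labels and one-line summaries are our own paraphrases of the descriptions above.

```python
# The spectrum of false video content described above, restated as data.
# Labels and summaries paraphrase the talk, ordered roughly from most to
# least deceptive intent.
FALSE_CONTENT_SPECTRUM = {
    "fabricated":       "wholly invented content, deliberately designed to deceive",
    "manipulated":      "genuine artifacts altered or transposed to change intent",
    "imposter":         "genuine sources impersonated to appear authentic",
    "false_context":    "genuine content mixed with false contextual information",
    "misleading":       "information framed to misrepresent an issue or person",
    "false_connection": "headlines or captions that don't support the content",
    "satire_parody":    "fake content intended as social commentary",
}

def describe(category: str) -> str:
    """Look up a category's summary, e.g. describe('imposter')."""
    return FALSE_CONTENT_SPECTRUM[category]
```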
15:28
And that's where it gets a little disturbing, because sometimes we don't know: is it satire? Is it a false connection? Or is someone deliberately deciding to deceive under the idea, or the auspices, that it's parody?
15:45
And let me show you this next video, which went viral earlier this year, in 2021. It is a video of Tom Cruise, but it's not Tom Cruise. Spoiler alert: we're not showing you real content. What was very disturbing about this one is that it wasn't necessarily just made for parody.
16:04
It was uploaded to a popular platform called TikTok, which reaches many different demographics in a very specific setting. It's an actor pretending to be Tom Cruise, but the level of work and artistry, and the way the video was disseminated, really had people fooled until the deepfake creators came out themselves.
16:25
So I'll let you be the judge of how realistic this is, or maybe isn't, to you. [Video clip plays:] That's why it's a little embarrassing.
16:42
You know, my team was once in Russia, ran into... he said, you know, "Mr. Movie Star..." So, following this, a lot of news articles came out talking about how creepy Tom Cruise was, uploading these creepy videos.
17:05
And why does this matter? It matters because deepfakes can pose national security implications and put individuals, governments, and companies at risk. They can undermine trust, instill fear, or entertain.
17:21
But we need to look at how other related technologies, for example drones that follow and track individuals, give the same sense of insecurity to the public when they haven't been authorized or given permission. So imagine if you, the individual, have now had your face doctored onto a video that isn't real.
17:40
It isn't one you consented to. How are you going to fact-check it, or figure out who that is? And that's what really gets scary: what is the leap from creating a realistic-looking video of Tom Cruise to affecting presidential candidates, elections, or campaign rallies? What is the content, and the reaction, that we're invoking? Are deepfakes, you know,
18:06
can we get to a black-and-white place where they're dangerous or they're not? With the rise of these hyper-realistic simulations, the media have far surpassed the simple edits that used to be made to photos and videos.
18:21
And that brings us to media literacy, a new form of ICT, where we need to figure out what standards and best practices we're going to use to educate the public so they don't fall prey. What does it mean to look at media and society? And how do the demographics of this range? We're going to have a whole generation of citizens that have never known life without the internet.
18:45
Are they more or less susceptible to falling prey to deepfakes? Do those demographics matter? Does age matter? What do technical and media literacy mean for the future? And who is going to be accountable? As fake content becomes more prolific, we've got to
19:02
understand the potential societal impacts of deepfakes, but there are few regulatory standards available. The potential societal impact is unknown, and that's what we need to scale. Those are our limitations. We're currently in the Wild West: there are not a lot of policies out there, and the implications are quite unclear. And again, why should we be concerned? Well, there are infringements on individual rights. We're
19:26
looking at copyright infringement, but it doesn't matter what regulations or policies you have; it's whether or not you can even track down the bad actors in this game of whack-a-mole, when you're in this arms race trying to fight misinformation and disinformation.
19:44
And that's really what's key here. As we look at building upon this relationship, we want to understand what it means when it comes to deepfakes and gray literature.