
Caught in the propaganda crossfire? Bots on social media


Formal Metadata

Title
Caught in the propaganda crossfire? Bots on social media
Title of Series
Number of Parts
234
Author
License
CC Attribution - ShareAlike 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Computational propaganda – the use of information technologies for political manipulation – is on the rise. Social bots are crucial instruments in digital attacks: During the US elections 20% of all Twitter traffic was generated by them, and Trump bots outnumbered Clinton bots 5:1. During Brexit, 1% of accounts drove nearly 1/3 of all traffic. Both state and non-state political actors have used bots to manipulate conversations, demobilize opposition, and generate false support on Twitter, Facebook & Instagram. Are bots weapons in a (cold) cyberwar? How are they used in the Bundestagswahl 2017?
Transcript: English (auto-generated)
Welcome to my talk, Caught in the Propaganda Crossfire? Bots on Social Media. I'm Lisa-Maria Neudert, I'm a researcher at the Oxford Internet Institute, and I'm very happy to see that some people came and thought this topic here was interesting. So, I will actually start my talk with a little confession: I was very afraid that people wouldn't want to show up to my talk.
So, I thought, nobody really knows me here, I'm quite new to re:publica. Also, it's day three already, people have probably already been to the afterparty, and it's still in the morning, so we'll see how it goes. But then I thought, I deserve a little confidence boost.
And I went on Twitter and came up with a little PR advertising strategy for myself. And I started this account here, and I actually tweeted a little bit about my research, and I also talked to people that were tweeting with the re:publica hashtag and invited them to my talk.
And some of you also talked to me and said it sounds interesting, so I was very excited about that. But there is a little catch to that. This account here, that actually wasn't me, and it actually wasn't a person either. This was a social media account that was automated, that was tweeting automatically and pretended to be a human.
And this is what we call a social bot. And social bots, apparently right now, are all the rage. For example, The Guardian thinks that bots are taking over the world. Wired thinks governments don't set the political agenda anymore, but bots do.
And Angela Merkel is afraid of them. So in my talk today, I want to get to the bottom of those apocalyptical headlines and talk a little bit about what bots actually are and if we're really trapped in a propaganda crossfire. And to do that, first I want to talk about propaganda itself.
With propaganda in the digital sphere, there's one problem. It's incredibly different from what it has been before, and I think it's potentially more powerful than the propaganda that we are used to and that we know. But let's first start with propaganda, the way it started. This here is a picture of propaganda in its very making.
Propaganda is as old as politics itself. This here is Alexander the Great. And Alexander the Great, well, he's the Great. That's what we know him for. He has won every battle. He's a war hero. And I think it's no coincidence that we know him as Alexander the Great, the one that is so great.
It's because he actually insisted on traveling with a legion of chroniclers who wrote about the history that he had just made. And of course he also dictated the history and told them what to write, what would be interesting, and also what would portray him in a good light. And that way, Alexander the Great was actually one of the first to employ large-scale propaganda.
Propaganda has always had a history of going with the state of the media. For example, when the letterpress was first invented, propaganda for the first time became a much more large-scale phenomenon, because with the letterpress, it could now reach more people.
The problem at the time was that people still couldn't read very much. So most propaganda, like here in the French Revolution, was actually pictures, was caricatures. And obviously also because pictures work really well and appeal to emotions. Which also brings us to our next point: what is it that propaganda wants?
So propaganda is something that is very emotional. It wants to appeal to people's feelings. It wants to speak to people's hearts. It's not something that is rational. It's not something that wants to speak to people's minds. And that is exactly where the power of propaganda is. It's manipulative in a way that appeals to emotions and not to reason.
And also propaganda wants to talk to the masses. It wants to polarize the masses and it wants to win the masses for its causes. It's not something that targets an individual. It's rather something that wants to establish one big common sense in a mass.
So it's not very surprising that when it targets the mass, it would also target mass media. And so propaganda has had a long history of working with mass media. For example, in World War I, we had propaganda that targeted the opponents in the war, who were really demonized.
In World War II, I think one of the most prominent examples of propaganda worldwide, Hitler and Goebbels, had propaganda that was essential to the totalitarian state. Propaganda at that time really was everywhere. It was in the newspapers. It was on the radio.
It was in movies. It was on the TV. It was also in theaters. It was art. Propaganda was omnipresent really. Also, in the GDR, propaganda was very prominent as well. It was mainly targeted towards capitalism versus socialism.
But there are also many newer examples of propaganda that sometimes, I think, we seem to forget about. For example, propaganda during the Gulf War. There was also propaganda in Afghanistan, propaganda in Iraq. And propaganda thereby works with the mass media. And it also really works with the kind of media that right now is interesting to people.
The media that people like to spend their time on, where they get their information, but also where they get entertainment. So, when propaganda started, it was mostly word of mouth. It was sermons, it was songs, it was chants. Then, with the letterpress, it evolved. It was prints, it was caricatures.
And then it followed the radio, it followed broadcasting, it followed movies. And when I was looking for pictures to present for this talk, I actually found lots of Nazi Donald Duck versions. And I was quite surprised. And I was like, oh god, the internet is terrible again. I didn't know that this was a thing. But then I actually did some research and it turns out that Walt Disney himself was also very much in the propaganda system.
And they produced lots of Walt Disney Donald Duck propaganda movies during World War II. And so I think this only underlines my point that propaganda is where the media is that excites people and that attracts people. So, where is that media right now that people like to spend their time on?
It's online. It's social media. It's Facebook, it's Twitter, it's Instagram, it's WhatsApp. It's also Sina Weibo. Social media right now, I think, is the media that engages people the most. It's incredibly widespread. There are estimates that more than 80% of US adults right now have one or more social media accounts.
Pew Research estimates that more than 60% of adults get their news and their information on social media every day. And there's also research that says that between two and four hours a day are spent on social media.
So social media is the media right now. It is where we go to to get our entertainment. It's where we go to socialize. But it's so much more than that too. It's also where we go to generate content. It is where we have discussions. It is where we talk to people. It is where we form our will. It is where we learn about politics.
Propaganda, we think, is also a key element of this. And it works in a public sphere, the public sphere that has emerged as social media. So, social media as a new public sphere. We at Oxford, we think there are many bright sides to social media as a public sphere.
With democracy, with a new forum for discussions, with a new forum for forming a pluralistic will. But we also think that there is a downside to it. And we think this downside is that social media might be much more prone to propaganda than other media has been before. And we describe this as computational propaganda.
Propaganda that works on social media. And computational propaganda really is many things. And also many of the things that are heatedly debated right now. It's fake news. It's social bots. It's also micro-targeting. That all seems like those are hot issues right now after the U.S. presidential elections. But it's actually research that we have been doing for two to three years already.
So the question, of course, is what is computational propaganda? And also why is it different than the propaganda that we know? So at the very essence, we still have the manipulation of opinion. That is the very key element of propaganda.
But then the methods are different. Computational propaganda is data driven. So its agents actually know something about the users they are targeting with propaganda. They know this from social media. They know this from the likes that are generated, from the content that people engage with, and also from the content that people just like oneself engage with.
So propaganda on the internet has the ability to use big data to actually approach people more directly and more targeted. The second big thing is automation. And automation actually means that it's not a single agent or a person that is behind that propaganda.
But it means that a person is targeting and tasking some sort of machine with automation. And so propaganda is appearing, is happening, automatically. It's still a person that is tasking a machine, but then the scale that is provided by a machine goes far beyond what we have known with propaganda as it was before.
And where is that happening? It's happening on social media. And social media is not just a new medium. It's not just a popular medium that people go to and look at right now.
But it's actually much more than that. It's very different because it's also where people generate content. It's this revolution of the internet that has really been embodied by social media: this user-generated content, where everybody, each and every one of us, can easily produce content on social media.
We blog, we share pictures, we share texts. But the thing is, now we can not only share user-generated content but also user-generated propaganda. So now let's actually come back to social bots, what this talk is supposed to be about. So social bots, to remind you, are those automated social media accounts that are pretending to be humans on social media.
So this here is a picture of an account that we found active during the German presidential election. And it also has some of the key characteristics that social bots usually have. This account here was very, very active.
It had 48,000 tweets since January 2016. And it also has some of the other telltale signs. For example, the picture is not an actual human. It's oftentimes either Twitter eggs or pictures of cartoons of animals.
And this account here also clearly states that it's a critical observer and a realist. It says it's an opposition party. That is also something that we have found to be quite typical and common. And then, if we were to scroll down a little bit, we would also see right-wing content, which is very typical for bots on social media in Germany, but also in the US, during Brexit and in France.
Most of the time it's right wing content that is being propagated. Content that is very skeptical of the EU. For Germany it's for example very skeptical in terms of refugees, in terms of Merkel. But who of you, when looking at this account, would have actually thought that this is a human?
Would anyone have thought this is a human? I actually see very few hands going up. I think we have a total of maybe 10 people. A couple more. Okay, so, but this is what's interesting.
So if only some people think that this is a human, and many people here were already convinced that this is a bot, then social bots don't seem to be such a big problem, if we can just educate people to distinguish them. But this is the biggest misconception. Social bots are not this.
Social bots are not machines that are engaging with people directly and that are talking to people, not robots that are typing away on the internet. So this is really the single biggest misconception about social bots. It's not that they interact directly with people on social media.
They work way more subtly. So the usual user that encounters a bot will actually not speak to it directly, and will not get to the point where they even think about whether this is a bot or not. Bots work way more subtly. They don't shape the direct interaction, but they shape the environment that we're working in on social media and that we're engaging with.
So let's have a look into what it is that bots actually do. So bots usually don't work alone. Bots usually work with other bot friends. And when they join together, we call that a botnet or a bot network.
And they work together, and the thing is, they are really interested in creating mass. So that again is very similar to what we were saying about propaganda and the mass media. And there are really three key mechanisms that we found for bots.
The first of them is the amplification of issues. That occurs whenever a bot makes a topic seem bigger than it actually is. That for example happens when a bot retweets, when a bot likes, when a bot comments. It's really all those small things that are driving user interaction and that are bringing up scale.
It's also, for example, a Twitter hashtag that is trending, that has been brought into the trends by bot amplification. But why is that so bad, or why does that seem bad? Yeah, because just liking a couple of things on Facebook is not essentially propaganda, is it? But the problem is that we actually are interested in what other people are doing on social media.
It's called social information. So we're looking into what are the opinions that are dominant? What are the opinions that are in the minority? And for example if I perceive my opinion to be in the minority then I'm less likely to speak out than when I think I'm in the majority. So it matters a lot. We for example found amplification through social bots very active during the US election campaigning and also during the US debates.
For example, there were bots that were tweeting after the debates, each trying to say that either Hillary won or that Trump won. And it's really also where those kinds of battles are being decided. And why is that where those battles are being decided?
Because we then have multipliers that are looking at social media and seeing, oh, "Trump won" is the dominant hashtag, so people seem to think that this is actually a polling system and that this actually shows that Trump has won. But this is really a big misconception.
That is one of the dangers of amplification: the multipliers that carry that agenda not only into the digital sphere but also into the media and onto the political agenda. Then the second big thing is distribution. And bots can distribute just about anything. They can distribute very positive things.
For example, I have subscribed to a couple of bots that provide me with access to academic papers that are being published. There are also a couple of fun bots that just tell you funny things and jokes, for example. So bots can be perfectly fine, and they can even be fun and can make life on social media more cool and more engaging.
But bots can also distribute whatever messages they would want to distribute. So it can also be conspiracy theories. It can be some very graphic, horrible images. And for example, we also found bots active during Pizzagate and, most recently, during the Macron leaks.
So bots are pivotal in what we think is distributing fake news. Then the third big component is flooding. And flooding occurs when bots are targeting conversations, are targeting hashtags and are just flooding them with content or with spam.
And why is that so bad? One example that is very prominent occurred during the Arab Spring. During the Arab Spring, activists used hashtags to actually coordinate with each other on the ground. But then the Syrian government found out about that, and they used bots to flood the hashtags and to prevent people from coordinating via social media.
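To make these three mechanisms a bit more concrete, here is a small, purely hypothetical Python simulation, not the speaker's code and not calling any real platform API, of how a handful of automated accounts can amplify and flood a topic until they dominate its traffic:

```python
# Hypothetical sketch: how a small, coordinated botnet can dominate a topic's
# traffic through amplification and flooding. Pure simulation, no real API calls.
import random
from collections import Counter

random.seed(42)

HUMANS = [f"user_{i}" for i in range(1000)]  # 1,000 ordinary accounts
BOTS = [f"bot_{i}" for i in range(10)]       # roughly 1% of accounts are automated

def human_activity():
    # Each human posts zero to two original tweets about the topic.
    return [(u, "original") for u in HUMANS for _ in range(random.randint(0, 2))]

def bot_activity():
    # Each bot amplifies (retweets) and floods (spams) at machine scale.
    posts = []
    for b in BOTS:
        posts += [(b, "retweet") for _ in range(60)]  # amplification
        posts += [(b, "spam") for _ in range(40)]     # flooding
    return posts

traffic = human_activity() + bot_activity()
by_kind = Counter(kind for _, kind in traffic)
bot_share = sum(1 for account, _ in traffic if account.startswith("bot_")) / len(traffic)

print(by_kind)
print(f"Bots are {len(BOTS) / (len(BOTS) + len(HUMANS)):.1%} of accounts "
      f"but generate {bot_share:.1%} of the traffic")
```

The numbers are made up, but the shape of the result echoes the Brexit figure from the abstract: a tiny fraction of accounts can generate a large share of the apparent conversation.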
So I know what some of you are thinking about right now. And it's this: Is Putin already hacking the German elections? He seems to have had some say in the US elections. There is also some talk that Russia has been involved in the French elections right now.
And that obviously is something that is of very big interest to us as citizens but also as researchers. To see if there is some hacking that is occurring during the elections in Germany. So we decided to look into computational propaganda in Germany with regards to the elections in September.
And for that we first started looking into the presidential election this year. And the presidential election is not really the most interesting election there is. It's fairly representative and even before people actually vote, people more or less know who is going to be the president of Germany.
But it's still the closest thing that we have to actually look into what these strategies are like in Germany and what is happening in Germany. And we can use it as a benchmark for the state elections and also for the elections in September, obviously. So we went on social media.
We collected a data set of about 120,000 tweets that were in relation to the presidential elections. And here you can actually see the elections and the results, what was happening. So for those of you who are not that well versed in German politics, those were the four white male candidates.
Frank-Walter Steinmeier had roughly 75% of the votes. Then second came the candidate of the Linke party, Christoph Butterwegge, with about 10.2% of the votes. Then one very interesting candidate, and maybe you've seen the talk before.
Albrecht Glaser from the AfD, the Alternative für Deutschland. And that is a party that is really heatedly debated in Germany right now. Because it's a right-wing party, because it's racist, because it's nationalist, because it is afraid of refugees, because it is really a party that uses propaganda, and it's a party that also holds views that most of us would say are not democratic.
And he gained 3.6% of the vote. And then number 4 is Alexander Hold, a TV judge, for the Freie Wähler, and he had 2% of the vote. So we collected 120,000 tweets that were in relation to the candidates and to the presidential election.
And we then wanted to see how big were the different candidates on social media. And the next slide shows this. So some surprises here, some things not super surprising.
Frank-Walter Steinmeier had the largest amount of tweets that were generated about him. He obviously was the candidate that won. And roughly 45, excuse me, 54% of the tweets were generated about him. Then, Christoph Butterwegge and Alexander Hold, who both had a little less than 5% of the vote,
also had less than 5% of, not the vote, I'm sorry, but of the conversation on Twitter. But then Albrecht Glaser, the AfD candidate that only had 3.6% of the vote, he was huge on Twitter too.
Actually, 40% of all the conversation was about Albrecht Glaser. So with that in mind, we thought, okay, either the AfD is really controversially discussed on Twitter, which it is, or the AfD is very active on Twitter, has a lot of conversation, which it also appears to be,
or, the third possibility, maybe social bots are behind that. So we were looking into the bot factor and wanted to see how much social bots have actually contributed to these numbers. And the answer is: not very much. So for Germany, you can see, the highlighted part of the picture actually shows how much of the tweets were driven by automation.
For each of the candidates, it's somewhere between 5 and 15%, which is not very much. And also, those tweets were generated by 22 accounts only, when we were looking into roughly 30,000 accounts.
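As an illustration of the kind of "bot factor" computation described here, the sketch below flags accounts that tweet about the election at machine-like frequency and measures their share of each candidate's conversation. The 50-tweets-per-day threshold, the data layout, and the sample data are assumptions for illustration, not the study's actual methodology:

```python
from collections import Counter, defaultdict

# Assumed heuristic: accounts averaging more than 50 election-related tweets
# per day are treated as highly automated (a placeholder threshold).
HIGH_FREQUENCY_PER_DAY = 50

def automation_share(tweets, days_observed):
    """tweets: iterable of dicts like {"user": ..., "candidate": ...}."""
    per_user = Counter(t["user"] for t in tweets)
    flagged = {u for u, n in per_user.items()
               if n / days_observed > HIGH_FREQUENCY_PER_DAY}

    totals = defaultdict(int)
    automated = defaultdict(int)
    for t in tweets:
        totals[t["candidate"]] += 1
        if t["user"] in flagged:
            automated[t["candidate"]] += 1

    return {c: automated[c] / totals[c] for c in totals}, flagged

# Tiny made-up example: one hyperactive account plus 400 ordinary ones.
sample = ([{"user": "heavy_account", "candidate": "A"}] * 600 +
          [{"user": f"u{i}", "candidate": "A"} for i in range(400)])
shares, flagged = automation_share(sample, days_observed=7)
print(shares, len(flagged))  # {'A': 0.6}, driven by a single flagged account
```

The point of the toy example is the same as the finding above: a handful of flagged accounts can account for a visible slice of a candidate's conversation, or, as in the German case, for very little of it.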
So it's incredibly insubstantial. The question now is, does that mean that social bots are not an issue in Germany at all? Well, it either means we didn't find them or they weren't active during the election in Germany. But something to bear in mind is that social bots can become active almost instantly.
So they can tweet in real time and become active. That is something to bear in mind. But in general, we can say, Germany on social bots during elections, maybe not so much right now. And we were also looking into the state elections in our most recent research and then we also saw the same results.
Social bots had not been very active in Germany in elections so far. Then we didn't stop there. We also took a further step and wanted to look into one of the other hot and sexy issues right now, which is fake news. And so we were looking into the URLs that were being shared during the presidential elections.
And we found about 17,000 URLs. And we categorized each of the URLs and actually looked into their content. So we had categories like shopping. We also had categories like personal food blogging, and also news, political content and misinformation.
And then what we saw there was not quite as calming. We actually found a ratio of professional news to junk news on social media of four to one. So for every four pieces of political news content, there was one fake news story.
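A rough sketch of that URL-categorization step might look like the following; the domain lists are illustrative placeholders, not the coding scheme the researchers actually used:

```python
# Sketch: map each shared link to a category via its domain and compute the
# ratio of professional news to junk news. Domain lists are placeholders.
from collections import Counter
from urllib.parse import urlparse

PROFESSIONAL_NEWS = {"spiegel.de", "zeit.de", "faz.net", "tagesschau.de"}
JUNK_NEWS = {"example-junk-news.invalid"}  # placeholder domain list

def categorize(url):
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in PROFESSIONAL_NEWS:
        return "professional news"
    if domain in JUNK_NEWS:
        return "junk news"
    return "other"  # shopping, personal blogs, etc.

def news_to_junk_ratio(urls):
    counts = Counter(categorize(u) for u in urls)
    return counts["professional news"] / max(counts["junk news"], 1), counts

urls = ["https://www.spiegel.de/politik/x", "https://www.zeit.de/y",
        "https://faz.net/z", "https://tagesschau.de/a",
        "https://example-junk-news.invalid/b"]
ratio, counts = news_to_junk_ratio(urls)
print(ratio, counts)  # 4.0 -> the "four to one" ratio reported for Germany
```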
Compared to what we found in Michigan in the US, where we found a ratio of one to one, that is relatively small, but I still think it's substantial. And it also shows that fake news is very much an issue in Germany, too, and not just in the US. And what is more, we were also looking into the pages that were generating that content.
And we could see that they really had an agenda. About, I would say, 95% of what we were seeing was right-wing content. It was content, again, that was anti-immigration, that was racist, that was nationalist. And it wasn't Breitbart. It also wasn't any kind of American news publications that were just being posted in Germany, too.
It was original content that originated in Germany, which I think is an incredible revelation, too. And then, so, the question is, now that we know, computational propaganda is an issue. It's an issue internationally. It's an issue in Germany where fake news, where junk news seems to be an issue.
Where maybe social bots are not as much an issue right now, but technically could become active any time. So, I personally see myself as a researcher. And I don't think that I'm in a qualified position to give any kind of policy recommendations of what we should do
and what should be done in terms of this phenomenon. But I think I'm in a pretty good position to say what we cannot do and what we shouldn't do. And the first thing that we cannot do is attribution. And attribution actually means that whenever a propaganda attack in the digital sphere is occurring,
may that be a social bot, may that be fake news, it's very hard to tell where that campaign actually originated from. We can't tell whether it was Russia, we can't tell whether it was just a Macedonian teen who wanted to make some money on the side. And that's one of the issues, because you can't find a culprit behind those phenomena.
You can't say, like, you are responsible, you need to be thrown into jail or whatever. And that is now actually where our politics are reacting a little bit anxiously. Because they think, if we cannot find the people that are responsible, then maybe we should look at all the people on the internet.
And that is one of the big trends right now. It's regulation. It's things like the Netzwerkdurchsetzungsgesetz in Germany. It's laws, it's content filtering. It is all that regulation that is really aiming at the freedom of expression online. Which is ironic, because that is exactly what it tries to protect.
And then the third variable, the third sort of trend that has emerged, is putting platforms in charge. Saying that Facebook, that Twitter should be responsible for what content is propagated on their platforms and what should be allowed there. But if you actually let Facebook and Twitter decide what content is true, then they do just that.
They decide what is true and they can dictate their truths. So I think that is also highly problematic. So then, thinking about it, if we can't do all that, then what can we do? I think it's within the spirit of this conference if we say we can love out loud a little bit.
And I do think there is some contribution there. But I think right now what is more pivotal and what is really important is just creating media literacy. It's obviously something that is long term and nothing that we can approach from a short-term perspective. But I think it's very important that people on the internet understand computational propaganda as a phenomenon.
And it's even more important that the multipliers, the journalists, the politicians, the opinion setters, understand that computational propaganda is happening on social media, and that they cannot take for granted what they see on social media and just use that, bring it out, and transport it into the media and into the broad political public sphere.
Thank you very much. Okay, we still have just a few minutes, but if there are some urgent questions you would like to ask,
we can come to you and you can ask them right now. So can I see some hands? Any questions at this point for Lisa Maria? No questions. Alright then, thanks again for the amazing talk and see you later.