
Predicting War - Minority Report Meets World Politics


Formal Metadata

Title
Predicting War - Minority Report Meets World Politics
Title of Series
Part Number
115
Number of Parts
177
Author
License
CC Attribution - ShareAlike 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language
Production Place
Berlin

Content Metadata

Subject Area
Genre
Abstract
"Making predictions is very difficult, especially about the future". An introduction to ethical dilemmas and philosophical assumptions of algorithmic political forecasting.
Transcript: English (auto-generated)
Thank you very much for the introduction.
Predictions are very, very difficult, especially about the future. When I was about 10 years old, I discovered one of my favorite books in my grandparents' basement: an illustrated book about the future from the 1960s. What is interesting about these pictures, and what fascinated me back then already,
is that they're so wrong, and so hilariously wrong. Later I discovered that this fascination is called retrofuturism, how yesterday viewed tomorrow. This one is particularly interesting, it's called Leaving the Opera in the Year 2000,
and it's from 1882. And what I like about it is that it has two elements, it has flying cars, something we don't have yet, and going to the opera as a mainstream activity, which sadly we no longer really have. So, in conclusion, it's fair to say that we are extremely bad at making predictions.
And this, of course, also applies to political science and international relations. This is an image from the election in Crimea. According to the Stiftung Wissenschaft und Politik, a famous think tank in Germany,
almost no security expert could have foreseen that Crimea is now part of Russia. If you would like another recent example, in June 2014, the Islamic State captured major parts of northern Iraq, even though this region is covered by a lot of political experts, hardly anybody anticipated that this would take place.
This failure, the failure of political experts, is also documented by science. There's a famous book by Philip Tetlock called Expert Political Judgment, in which, over the course of 20 years, he tested the predictions of 284 experts.
And his conclusion is that, much of the time, chance makes better predictions than political experts. Even a basic algorithm would make better predictions than political experts. This, of course, raises the question: what if we have more data and better algorithms? Will we also be able to make better predictions?
So this is a patent which Amazon filed in 2012, and which was granted by the end of 2013: a patent for anticipatory shipping. What this algorithm does is anticipate what you're going to buy before you have bought it.
And the data they use for these predictions is previous orders, product searches, wish lists, the shopping cart, and even how long your cursor stays on the item. So the method they use, of course, is use data of the past to make predictions about the future.
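The pattern described here, scoring likely future purchases from past records, can be sketched in a few lines. The scoring rule, the dwell-time weighting, and the data below are all invented for illustration; they are not Amazon's patented method:

```python
from collections import Counter

def predict_next_purchase(order_history, cursor_dwell):
    """Toy 'anticipatory' predictor: score items by how often they were
    ordered before, plus a small bonus for long cursor dwell times.
    Purely illustrative, not Amazon's actual algorithm."""
    scores = Counter(order_history)          # repeat purchases weigh most
    for item, seconds in cursor_dwell:
        scores[item] += seconds / 10         # hypothetical dwell weighting
    return scores.most_common(1)[0][0] if scores else None

# Invented user data: past orders plus items the cursor lingered on
orders = ["coffee", "coffee", "filters"]
dwell = [("grinder", 30), ("coffee", 5)]
print(predict_next_purchase(orders, dwell))  # prints: grinder
```

The only thing the sketch really shows is the speaker's point: the entire prediction is a weighted replay of the past.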
And this raises the question, of course: will big data, I'm using the term now, with more data and better predictive analytics methods, make predictions easier and better in political science as well? An example of this euphoria,
this recent interest in predictions, is a book by Patrick Tucker called The Naked Future: A World That Anticipates Your Every Move. I think everybody in the room is familiar with the big data debate and also the criticism that has occurred over the last years. But what we tend to forget, or what I would like to stress,
is that this is also a debate which changes our understanding of, our relationship to, the future. Before I talk about and give you some examples of using predictive analytics in political science, I would like to take a step back and argue that this is actually not a new thing.
So the desire to predict the future is actually an ancient wish, and the image you see here is the Oracle of Delphi. The Oracle spoke on behalf of the gods and was not only consulted for personal matters, but also to predict the outcomes of war, for example.
And I think what this example shows is that even though the idea of the future as progress and the future as inevitably improving is a child of the enlightenment, thinking about the future is a very human thing to do. Very basic human activities, making plans, investing, building,
require some working image or vision of the future. The second thing which isn't new about the dream to use predictive analytics to make predictions is the fact that there has been a long tradition in social science
to use quantitative methods and statistics in order to make inferences about the future. This example here, one of the earliest examples of the use of computers for this, is the 1972 prediction by the Club of Rome called The Limits to Growth. And what you see here is a machine built in 1949
by the economist Bill Phillips: an analog computer used to predict economic outcomes. If you change one parameter of the economy, what effects will it have on other aspects of the economy? And the third part, which is not new about the fantasy
to use big data to make predictions, is the fact that it actually stems from science fiction, and this is a book I would like to talk about by Isaac Asimov. It's the Foundation series, a science fiction series in which the author invented a sci-fi science called psychohistory.
And that was as early as the 1950s. In the Foundation novels, Isaac Asimov portrays the scientist Hari Seldon. He develops a science called psychohistory, which uses mathematics, history and sociology
to collect all the data of the Galactic Empire and predict its future. And the goal of this prediction is of course ultimately to change the course in its favor. So if we go back to the original question, we are pretty bad at making predictions,
and the idea of using data and mathematics to make predictions is a very ancient desire. And of course this also has a tradition in political science: there are a lot of important institutes which, for example, compile the Correlates of War or try to predict which factors influence genocides.
But what I would like to introduce to you now are three examples which work with the narrative that I have tried to explain at the beginning of this talk. So what you see here, this is a school in Pakistan. And in the past seven years, over 500 terrorist attacks or suicide attacks have been carried out in the country
and a favorite target of these attacks are schools. So the company PredictifyMe, one of whose co-founders is from Pakistan, teamed up in December 2014 with the United Nations to make predictions about when and where terrorist attacks on schools will occur.
And they do two things. The first is to run simulations to test the risk preparedness of schools. The second is to predict, they claim with 94% accuracy and three days' notice, when and where a terrorist attack is going to occur. I tried to get in touch with them, but I couldn't reach them to find out which data exactly they're using;
what they claim to be using is primarily geospatial data. The second example is GDELT, the Global Database of Events, Language, and Tone. This is also an open source project
and it monitors the world's broadcast and news media in over 100 languages in real time. It identifies the people, locations, things and events in the reporting, and the tone of the reporting of these events. And what you can see here is a visualization from 2013 to 2014,
which shows where on earth events happened. This data set was used in 2013 by the then PhD student James Yonamine to forecast political violence in Afghanistan. And again, he did not use traditional forecasting indicators like the poverty rate or income of the country,
but instead he used entirely open source intelligence: information available on the World Wide Web. And the third example I'd like to give is a company called Recorded Future, which I think is a great name, and it's funded by the investment arms of the CIA and Google.
If you read Hacker News recently, there was a claim that Recorded Future monitors private Facebook messages, which turned out to be wrong. Nonetheless, what Recorded Future does is also use open source information from the web, that is, broadcast news but also social media, in order to make predictions.
The main customers of Recorded Future are private companies, so you could say that what Recorded Future does is in fact business intelligence. They try to find out where so-called cyber attacks on a company will happen. Are data breaches going to happen?
But another activity the company is engaged in is the forecasting of political protest. What you can see here is the work they did last year on the upcoming presidential elections in Egypt. If you're a customer of Recorded Future, this is one of the screens you receive: a forecast of future uprisings in Egypt.
And, well, one of the most common questions I have been asked is: are these predictions actually true? Can we actually predict the future? I think it's very important to work out what the precise assumptions of each of these predictive methods are.
And this is what my fellow speaker Kavi is going to do now.
So, thank you very much for welcoming me so warmly. And thank you to the conference for inviting me, and to Frederic for organizing everything. What I'm going to discuss is the fundamental issues that exist in this prediction game.
My position is that of a professor of computer science: somebody who is supposed to do objective work and to be as objective as possible.
And what I'm going to present is a kind of critical view from a technical, scientific perspective. I am going to ask the fundamental question: is it possible to predict?
And if it is possible to predict, will it be useful? And if it's useful, will it be worthwhile? These three questions will be the main topic of what I'm going to say in the coming minutes.
So, first of all, we have to figure out what the basic underlying assumptions of this kind of predictive technology are. The first underlying assumption is that, all in all, there are some kind of global rules that govern humanity:
that human history, sociology, economics, or anything human-related is governed by a kind of mechanical law that pushes humans to behave in certain ways.
It's quite interesting that this question is not a new one. We have had something like 200 years of discussion around this idea in the context of sociology:
the debate between positivists and post-positivists, with some people claiming that yes, there exist overall rules and laws that justify calling certain topics sciences.
For example, that politics is political science, or that sociology is a science in the same way that physics or chemistry is. Indeed, whether these topics are sciences, or something like philosophy, which is not seen as a science
and as a different thing, is in itself a question of interest and a long-running discussion. If you go even further, you will see that for a topic like history, some people are proponents of the position that history is a science and that you can predict history.
Let me give you a very classical example of this. If you go to historical materialism and the theory of Marx, he said that all of history is predictable: there is a kind of process that will push you toward communism.
And he gave, as evidence of this predictability, a set of events that happened in the past. Sometimes we might get lost in this kind of predictivity. For example, some people said that the French Revolution is of the same nature as the English Revolution of Cromwell because both revolutions cut off the head of their king,
which is indeed the subject of a long ongoing discussion among historians. The second assumption behind this is the assumption of rationality: that humans are rational. And if humans are rational, how can we define rationality?
And indeed, when we end up defining rationality, we just uncover another problem, which is that there are different types of rationality. And if I construct a model that helps me predict human behavior based on rationality, then what rationality is becomes a question of politics.
And we begin to see ideology coming into the story, meaning that this kind of model and this kind of approach cannot be separated from ideology. In fact, what you are modeling, what you are simulating, is the result of the ideology of the person who built the model.
And this is a kind of longer-term discussion. You cannot say that when you are using an algorithm you are completely objective, because any algorithm is based on some assumptions, and the assumptions are based on the ideology of the person. And when we want to do a simulation-based analysis,
we have to go back to the ideology and figure out what ideology was behind it. This is a very important point; we are also supposed to take part in another session just after this, where we will discuss this ethics-of-algorithms topic a little more deeply.
The third underlying assumption that is important to take into account, so we had first the issue of being a science, then rationality, is the assumption that we are able to predict this behavior
and that we are able to build models that enable us to make this prediction. And what is more important is that, with the model we are using,
we are able to separate the act of observing or simulating from the event itself, meaning that we are not producing a self-fulfilling prophecy. And this is a very, very strong assumption that is very, very difficult to analyze.
And based on this, we get to Asimov's books. Asimov defined this psychohistory in the 40s, when he was writing the books, and we see that what Asimov said is completely relevant to our discussion.
He says that Hari Seldon has developed a new mathematical method that is able to predict the future of the empire in the story, and that this psychohistory has three principles. In fact, he defines two main principles,
and in the last sentence of the last book of the series, he defines the third. The first principle is that psychohistory is only applicable to a large group of people, on the order of billions of humans. And this is a very important thing.
All the data mining techniques that we are using, all the prediction techniques that we are using, are not applicable to a single person. When you say that you are going to use an algorithm to detect a terrorist, that is simply a subjective assumption; it can never be objective. The second principle is that, in order for psychohistory to be valid,
you have to make sure that people do not know about psychohistory, meaning that they are not able to react and adapt their behavior to the fact that their behavior is being predicted. And the third rule, which Asimov defined in the last sentence of his last book,
is that psychohistory is only valid if we have only humans in the loop, meaning that we can predict their rationality. If there is some other element, like, in the books, robots, it is no longer possible to apply psychohistory.
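Asimov's first principle, that predictions hold only for very large groups, mirrors the law of large numbers from statistics. A minimal sketch, in which a coin flip stands in for one unpredictable individual decision:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
individual = lambda: random.random() < 0.5   # one person's 50/50 decision

# A single decision is maximally uncertain, but the aggregate share of a
# large population concentrates tightly around the expected value of 0.5.
for n in (10, 1_000, 100_000):
    share = sum(individual() for _ in range(n)) / n
    print(f"n={n:>7}: share = {share:.3f}")
```

The spread of the aggregate shrinks roughly as 1/sqrt(n), which is why a technique that says nothing about one person can still be statistically informative about millions.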
And you see that Asimov, back in the 40s, already went through criticizing and defining the three bases I gave you at the start. Is it possible to find global laws? Yes, if we are looking at a large enough group over a long enough time span. Second, following Asimov's second rule, we need to assume
that we are not interacting with the process that we are looking at. And the third assumption Asimov was making is that we are able to capture rationality, meaning that we are only dealing with
one species, one type, of human rationality. And indeed, we can ask ourselves: have the humans of today evolved from the humans of past times? Are we not evolving all the time? Can we assume that our rationality today is the same as the rationality of our parents,
or of generations before them? Meaning that capturing this rationality is in itself difficult. Do not say this to a teenager of 14 or 15 years, because his answer will be that his rationality is completely different from the rationality of his parents. So, now let's move on to the risks and benefits of this analysis.
As a scientist, as an engineer, I have been educated to always look at benefits and risks and to evaluate the benefits and risks of whatever I do. The most important thing an engineer does is optimization.
Now, there are some cases where I can evaluate the benefit, I can evaluate the risk, and I can trade off between them. And there are some cases where I can evaluate the benefit but I cannot evaluate the risk. In such cases, the fallacy is frequently to say:
I cannot evaluate the risk, so the risk doesn't exist; let's forget about that risk, let's get it out of the story, because I cannot evaluate it. But indeed, in a lot of cases, the risk of using a technique or a technology is so big that we cannot evaluate its maximal value.
And because of this, we should not use this technology, because we are not able to evaluate the risk. Let me give you a very concrete example: why are we against the death penalty? The fact that we are against the death penalty can be based on some kind of objective argument,
like the fact that the death penalty does not reduce the number of crimes. But the main reason we are against the death penalty is that its ethical risk, which is to kill one innocent person, is much larger than any benefit we can find in putting a criminal out of society.
Meaning that we agree the death penalty is not acceptable simply because the ethical risk involved is too large to afford. Something we have to take into account is: are we capturing, are we measuring, the risk
of using these kinds of predictive tools? Indeed, when you use a predictive tool to sell one device on Amazon to somebody, the risk is just whether the person will buy it or not. But when I am making an analysis predicting a war, the risk is that I am beginning to create a self-fulfilling prophecy,
because once I know that there is going to be a war, every action I take will either drive me toward the war or push me further away from it. And if I am able to avoid a war by predicting it,
I am also able to make a war by predicting it. Meaning that there are ethical risks, and some more fundamental risks, in this kind of thing. Meaning that even if I were able to do it, and I do not think it is possible, because the three previous issues remain unclear,
it's not something that I would like to do, and not knowledge that I would like to have. A very classical example of this is that all of these methods are based on the hidden assumption that the past will tell us something about the future. And in risk theory, there is one term that is used for this fallacy,
which is the black swan. Black swans are animals that Europeans had never seen before going to Australia. So up to the 19th century, the definition of a swan was the white animal that we were used to seeing.
And then you get to Australia and you see a black swan. There is another, much more fundamental thing, which is the risk of uniformity. The fact that I build a prediction defines what is typical and what is atypical.
What is typical is what follows my prediction; what is atypical is what does not follow my prediction. Meaning that by the act of prediction, I define a way of separating what is normal from what is abnormal. And what is abnormal I am going to put aside:
I will see it as an outlier, I will see it as somebody who should be removed from the path of my theory. If we do this, what will be the society that we are going to build? Because giving the power of deciding what is normal and what is abnormal
to a statistical model, or any type of model, will also change the nature of humanity, and that is a questionable thing. As a last sentence, I am saying, as a computer scientist and a bit of a mathematician,
I am not against predictive techniques, I am not against this kind of method. I am just saying that there are risks involved, and the risks involved are not only the risks that I can assess using my theoretical tools of mathematics and statistics. Meaning that if I am going to do this, I have to understand
that I have a responsibility for the unknown effects of what I am doing. And maybe, if we have time, I will discuss the unknown effects a little more; in the next session that I will have,
there will be more concrete cases. So, are we moving from pre-crime to pre-war? This is one of the questions that can be asked
based on the provocative title of our talk. And I think, to answer this question, two sub-questions are relevant. The first: is it possible to predict war? A very good example, or analogy, to explain the current state of predicting politics is the weather versus the climate.
So in the past, we were not able to forecast the weather even for the next few days. And even now, we are still not able to predict the weather for the next 15 days, let alone for the next 15 years. And when you talk to scientists, one of the explanations is that it's not because we don't have enough data about the weather,
there are sensors everywhere around the world, but because the problem lies in the predictability of the weather itself. The system is simply too complex. A counterargument to this would be that what I'm saying right now is a prediction in itself, and that in the future we might well have a technology that enables us to predict the weather. But I think it's still a good point that more data
does not always improve the ability to predict. At the same time, however, we are able to model and predict the climate, but at a very different level of abstraction. The weather is a different thing compared to the climate. And the second question is, of course:
even if we are able to predict or forecast long-term trends in society, if we can say that in a certain neighborhood there is a certain probability that a crime will occur, or that in a certain region of the world there is a certain probability that civil unrest or a revolution will occur, then the question is: should we forecast and predict the future if we can?
And I think, again, in the next session you and a couple of other people from our institute will talk more about this, but what has emerged in the criticism, the critical debate on big data over the last years,
is that, first of all, predictions are never objective, and especially if you use open source intelligence data from the web, the data is biased in a structural way. For example, if you go back to the Global Database of Events, Language, and Tone, there are certain regions of the world which are simply not as well covered as other regions.
Last year at re:publica, there was a fascinating talk, I Predict a Riot, by a mathematician who predicts complex systems and also worked on the London riots. What I found really interesting, and also asked about after the talk, was that she used police arrest records in order to predict riots. And police arrest records are one of the prime examples
of systematically biased data: certain people simply get arrested more than others, and certain areas are policed more than others. If you use these data to build a model and make a prediction, you perpetuate imbalances which existed before.
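That perpetuation effect can be shown in a deterministic toy model; all numbers here are invented. Two areas have identical true crime rates, arrests scale with police presence, and each day's patrols are allocated in proportion to the accumulated arrest record:

```python
# Invented toy model: equal true crime, historically unequal patrols.
true_rate = {"A": 0.10, "B": 0.10}        # identical underlying crime rates
patrol    = {"A": 2 / 3, "B": 1 / 3}      # area A starts out over-policed
arrests   = {"A": 0.0, "B": 0.0}

for day in range(100):
    for area in arrests:
        # expected arrests: crime that occurs AND is observed by a patrol
        arrests[area] += true_rate[area] * patrol[area]
    total = arrests["A"] + arrests["B"]
    # 'predictive policing' step: allocate patrols by the arrest record
    patrol = {a: arrests[a] / total for a in arrests}

print(round(arrests["A"] / arrests["B"], 2))   # prints: 2.0
```

Even with perfectly equal crime, the arrest record permanently encodes the historical 2:1 patrol bias: the model can only ever see what it is already looking at.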
And then finally, yes, of course, the question is: should we predict? And I think it's important: it sounds very scary, but at the same time, predicting conflict also means predicting genocide, predicting wars, predicting terrorist attacks.
And even the company Recorded Future also predicts the likelihood of data breaches. This falls under the category of cyber crime, and under this category a lot of legitimate political dissent is filed as well. But at the same time, I think a lot of people in the room are against data breaches
and would like their data to be protected in some way. And I would conclude by saying that predictions can be good and at the same time extremely problematic. Maybe the most concerning danger of predictions is the blind belief in their truth and their reliability.
And there is of course the famous quote by John Lennon: war is over, if you want it. So war and peace is not an informational problem. There are enough wars happening right now in the world which we simply don't care enough about to prevent. So I think if you do a risk-benefit analysis of prediction, one might come to the conclusion
that in certain cases, it's probably better not to predict.
Okay, again, thanks very much for this panel. I would now say we're open for questions from the audience. First come, first served. And before we start the Q&A, I would politely ask you to briefly introduce yourself: say your name and where you come from. So, who's first up?
Hi, my name is Matthias. I'm a journalist based here in Berlin. My question is about the example you used at the end, from last year's talk about predicting riots. You said those predictions were based on a very biased data set, the arrest records, and that if you were to act on them, you would perpetuate that bias. Acting on a prediction, for example by increasing police surveillance and monitoring, could itself be one reason for a riot to start. But if you were not to act on them, would that at least be a way to establish the correctness of the prediction in that example?
I would say, when it comes to predictive policing, or in the case of the riots: she works at UCL and does great work there, but she is also working with the London police in order to help prevent riots in the future. So these models are actually used to make police work more efficient. And I think the issue is not whether these models work; I think they do work, otherwise they would not be called science and would not receive that much publicity. But at the same time, and you can see this with Wikipedia, for example, data on the web and participation are not equal. Certain people are much more likely to write a Wikipedia article; certain people are much more likely to tweet. Any prediction you make on the basis of Twitter data, even if it is a good prediction in the sense that it works, is based on biased data, I would say.
The question is a very good one, and it is not only related to the case of riot prediction; it touches a more fundamental issue in statistics. Many of us have taken a course in statistics, and statistics is generally seen as a tool that can answer a question. But in fact statistics can never answer a question; it can only reject the opposite of the question. This is a mental twist we have to take into account. When statistics predicts something, it is saying that it is unable to refute the inverse. A statistician will not be able to predict the existence of a riot; he will only be able to say that the probability that there will be no riot is less than some value. This twist should make us cautious in how we use such results. For example, you correctly point out that predicting a high probability of a riot means you will deploy more police, and deploying more police will eventually increase that likelihood.

But I do believe a lot of predictions are not being published. When the US government now publishes that the climate or terrorist risk is at five, this could be, as you said, a strategic move. But at the same time there will be predictions that simply have different effects depending on who has access to them. And this creates new dynamics of power and imbalances of information.

Okay, the Q&A is closed for further question requests.
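The earlier point that statistics never proves a prediction but only rejects its opposite can be made concrete with a toy one-sided binomial test. The baseline rate and the observed counts below are invented purely for illustration; only Python's standard library is used:

```python
from math import comb

def binom_pvalue(k: int, n: int, p0: float) -> float:
    """P(X >= k) when X ~ Binomial(n, p0), i.e. under the null hypothesis."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# H0 (the "inverse" of the prediction): unrest incidents occur at a
# baseline rate of p0 = 0.05 per period, i.e. there is no elevated risk.
# Suppose we observed 6 incidents in 30 comparable periods.
p = binom_pvalue(6, 30, 0.05)

# Note what we can and cannot say: we never conclude "a riot will happen";
# we only conclude that the no-elevated-risk hypothesis is implausible.
if p < 0.05:
    print(f"Reject H0 at the 5% level (p = {p:.4f})")
else:
    print(f"Cannot reject H0 (p = {p:.4f})")
```

Even in the "reject" branch, the statement is exactly the one the panelist describes: not "a riot will occur", but "the hypothesis that risk is at baseline is very unlikely to have produced these observations".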
Last question.

Thank you. I'm Christoph, I study psychology in Berlin. My question is: in Philip K. Dick's Minority Report, the whole point is that the creation of the pre-crime system is very dangerous because it can easily be abused. Aren't you afraid that the tools created to predict war could be manipulated to create war in the first place?

As I said: if you are able to predict wars in order to avoid them, you are also able to predict wars in order to make them happen. And now the basic question is: where is our free will? The political question is that it is possible to build all sorts of things.
But are we going to let that happen? A pessimistic view of humanity would be to say that humans are like sheep: they are told to do something and they do it. A more optimistic view would be to say that this totalitarian viewpoint is not new. Hannah Arendt, in her book on totalitarianism, said that the most dangerous totalitarianism is not the one that comes saying "I am a totalitarian government by divine right," but the one that comes saying "I am totalitarian because science says so." And indeed, Hannah Arendt predicted this. It means that we as citizens should be aware of this risk and behave accordingly. For me as a computer scientist, I have to say that nowadays I think it is very important to pass on to my students the kind of ethical questions raised in discussions like this one.
By the way, in computer science we have no place for such a course: none of the existing curricula contains a course on ethics. If you go to a department of civil engineering, you will have a course on social responsibility and ethics; other engineering domains do the same. Computer science has not done so yet. We are only just waking up to the risks that exist. I would say that my situation as a computer scientist is relatively similar to that of the nuclear physicists in the 1950s, when they saw that there were risks. And by the way, one of my inspirations when I talk about the ethics of this kind of work is to go and look at what they did and the positions they took in the 1950s.

Thank you very much. Thank you very much, Frederike Kaltheuner and Professor Kaveh. Salaamat, shalom.