Hot Topic: Mathematics of Disease – The Science of Epidemic Modeling
Formal Metadata
Number of Parts: 10
License: No Open Access License: German copyright law applies. This film may be used for your own use but it may not be distributed via the internet or passed on to external parties.
DOI: 10.5446/55004
Transcript: English(auto-generated)
00:19
Hello, everyone. Good morning, good afternoon, or good evening, depending on where you are in the world.
00:25
And welcome to the hot topic of the Heidelberg Laureate Forum. My name is Martin Enserink. I'm a science journalist. I'm the international news editor at Science Magazine, a global research journal. And I specialize in infectious diseases and global health.
00:42
And I'm based in Amsterdam, which is about a five-hour train ride from Heidelberg. The topic of today's session really is hot, because we're going to talk about epidemic modeling. And I think it's fair to say that there's never been a time when modeling has been so much in the media and in discussion.
01:07
And models have been hugely important during the COVID pandemic. Entire societies have been shut down because of the forecasts from modelers. Models predicted surging cases and overburdened hospitals, and governments had to act on those.
01:24
But models can be hard for the public to understand. And often the results from the models, the outcomes, the predictions have differed, and sometimes they've been spectacularly wrong. If you're not a mathematician, how these models work can be a bit of a black box.
01:45
And when I wrote a story last year for Science about COVID-19 and about modeling, I came across what is apparently an old saying: all models are wrong, but some models are useful. So why are they sometimes wrong? And how can they be improved?
02:02
And how do modelers talk to the public and to the decision makers about the uncertainty in their models? And what do the people who use the models, what do they think? Are models useful? How can they be improved? So those are the issues we're going to discuss today. And we have four great speakers lined up for you, two of whom are here in Heidelberg.
02:24
The other two will join online, and I'll introduce them one by one. They will each speak for about 10 to 12 minutes. I've asked them to keep it short so that we have time for discussion at the end. So please send us your questions through the conference system, and I hope you enjoy their presentations.
02:47
Our first speaker is joining us from London. And his name is Sebastian Funk. He is a professor of infectious disease dynamics at the London School of Hygiene and Tropical Medicine. He's also a Wellcome Trust Senior Research Fellow.
03:03
And Sebastian is one of those people who builds models. He has worked with many organizations, including the World Health Organization, the European Center for Disease Prevention and Control, Public Health England. And he also leads the EpiForecast Group. And that's a group of scientists that produce real-time modeling of diseases
03:23
in collaboration with public health decision-makers. Sebastian, the floor is yours. Thank you very much, Martin. And thank you for having me. So can you see my slides? Do I need to do anything to get these slides?
03:48
Here they are. Okay. What I will try to give you in the next 10 minutes is a subjective overview of what, to me, modeling is, what it's useful for, and what its limitations are.
04:03
And I will argue that it's largely a tool for structured thinking through knowns, unknowns, and the consequences of assumptions that one makes. Now, another way of putting this is that models are a tool to combine data on what we know,
04:25
that is, what we know, with theory, that is, what we think, to learn about what we don't know. And the kind of questions that we often get asked during infectious disease outbreaks are: how transmissible is this? How severe is it? So those were the kind of questions that were particularly important early on in the COVID-19 pandemic.
04:47
But also things like: which are the populations at risk? Where is it going to go next? And all these kinds of questions are ideally answered with data. And models really are an excuse that you can use when there are things that you don't have data on.
05:02
And so you do that by using the bits of data that you have and combine them with things that you think are correct, and you get outcomes, and they are the things that you want to learn. Now, as an example, I mean, one in biology is germ theory. So we know that infections spread via pathogens and direct contact.
05:21
And so if we illustrate this, so if you have a population and someone is infected, starts infecting others, and they infect others, and so on and so forth. And this is where you get the now famous exponential behavior of epidemics and outbreaks that we've seen so many times with COVID-19, for example, as well as other diseases before.
05:41
The other thing that we know is that people recover from disease. So they become colored in blue. So in some way, in most people, there's a successful immune reaction to an infection. And once that has happened, there's some level of immunity, some memory, and that person, at least initially, can no longer be infected.
06:04
And I'm just trying to get the next slide, which is not happening. I'm just trying to reload this. Okay, here we go. Yeah, sorry, one back. And this is encoded in the now relatively famous SIR model.
06:22
So it's simply putting these relatively simple relationships and insights, or you could arguably call it theory, together in something that you can then do calculations on, that you can simulate in a computer, and you can draw conclusions from.
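The SIR model described here really can be written down and simulated in a few lines. Here is a minimal discrete-time sketch; the parameter values are purely illustrative and do not come from the talk:

```python
# Minimal discrete-time SIR sketch. beta (transmission rate) and gamma
# (recovery rate) are illustrative values, not calibrated to any disease.

def simulate_sir(beta=0.3, gamma=0.1, days=160, n=1_000_000, i0=10):
    s, i, r = n - i0, i0, 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / n  # susceptibles meeting infecteds
        new_recoveries = gamma * i         # infecteds recovering with immunity
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"infections peak around day {peak_day}, "
      f"with roughly {history[peak_day][1]:,.0f} people infected at once")
```

With these values the basic reproduction number is beta/gamma = 3, which is enough to produce the exponential take-off and the eventual turnover, once immunity accumulates, that the talk describes.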
06:42
And just to give you an example of this kind of approach, a question early on in the COVID-19 pandemic was how transmissible it is. And this was before we started working on this, before it became widespread around the world, when it was largely confined to China, with a few cases that had been found among passengers on flights.
07:02
And so then with that, the question was, how can we use what we know? Having those relatively noisy cases from different places, how can we put them together to learn something about how transmissible it is, and particularly to find out what its reproduction number was? So this, again, is something that has become a fairly well-known concept now,
07:24
the number of cases, secondary cases, or the number of follow-on cases that someone who becomes infected generates. So we put together a model that was fairly similar to this SIR model that I just showed, with a bit more detail, you can see that on the left there, where we included the population of Wuhan, where the original outbreak was reported from in China,
07:44
as well as the cases in international travelers. And you can see that on the right, the black dots there were the data points, and we had this model, we fitted this model to the data. So this is a line you can see through there. And by combining these international and nationally reported cases,
08:01
we could get some picture of what the reproduction number was at the time, that is, how infectious it was. And that gave us some indication of what the risk was, and it became very clear that there was a genuine risk of seeing sustained spread elsewhere once the infection was introduced.
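The actual estimate described here came from fitting a mechanistic model to national and traveller case data. As a much cruder sketch of the underlying idea, one can back out a reproduction number from an observed exponential growth rate and an assumed generation time; all numbers below are invented for illustration:

```python
import math

# Hypothetical early daily case counts and an assumed generation time (days).
cases = [10, 12, 14, 16, 19, 23, 27]
generation_time = 5.0

# Fit log(cases) ~ r * day by least squares to get the growth rate r.
n = len(cases)
xs = list(range(n))
ys = [math.log(c) for c in cases]
x_mean = sum(xs) / n
y_mean = sum(ys) / n
r = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
     / sum((x - x_mean) ** 2 for x in xs))

# Crude rule of thumb: R ~ exp(r * generation_time).
R = math.exp(r * generation_time)
print(f"growth rate r = {r:.3f}/day, implied R = {R:.2f}")
```

The real analysis is considerably more involved (noisy case detection, reporting delays, travel volumes), but the principle of combining observed data with assumed natural history is the same.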
08:23
Next slide, yeah. Okay, another key unknown, and one that we're asked about particularly often by policymakers, is the future. And Martin already mentioned forecasts, and I'm glad you mentioned forecasts, because there are many different ways in which models can describe the future. One is forecasts.
08:41
So by looking at the past trajectory of cases and having a model that is fitted to that, one can simulate that model forward, make a forecast. And so again, this is the kind of idea that we have a theory, so we know something about spread, we put this together, we've seen things so far, so in principle, if that theory is correct, we should be able to run that forward and predict the future.
09:01
In practice, this isn't so easy. And here, for example, this is a project we run, a so-called forecast hub, the European forecast hub, where we collate forecasts from over 20 modelling teams from across Europe that all submit, on a weekly basis, their latest predictions for what the number of cases will look like across Europe.
09:23
And I give you here an example from Germany, given that this event is hosted in Germany. And so you see these green lines, which are the predictions, and the black line, which is the data, and the green lines come out of the data, and in a perfect world, if these predictions were always correct, they would align with the data.
09:40
And what you can see instead is that quite often, they go quite a long way away from the data, and they also come with quite a lot of uncertainty, which is shown here as the green bars. And the reason for this is that there are many things happening in an epidemic that are really difficult to predict, that we don't have good theory for, and that are difficult to encode in a model, such as changes in behaviour due to spread, but also changes in policy.
10:06
So in order to make a good prediction, a good forecast of COVID-19, you have to predict politicians' behaviour as well as individual behaviour, and we generally don't include that in the model, and really, we don't know either how we would do that in the best way.
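One way to see why short-horizon forecasts are more defensible than long-horizon ones is that any error in the estimated growth rate compounds over time. A toy illustration, using invented case counts and an assumed 5% error in the growth factor:

```python
# Invented daily case counts; project forward by continuing the average
# recent growth factor, and watch a small error in that factor compound.
cases = [120, 135, 150, 170, 190, 215, 240]

growth = (cases[-1] / cases[0]) ** (1 / (len(cases) - 1))

def forecast(days_ahead, factor=None):
    factor = growth if factor is None else factor
    return cases[-1] * factor ** days_ahead

for horizon in (7, 28):
    central = forecast(horizon)
    low = forecast(horizon, growth * 0.95)   # growth estimate 5% too high
    high = forecast(horizon, growth * 1.05)  # growth estimate 5% too low
    print(f"{horizon} days ahead: ~{central:.0f} cases "
          f"(between {low:.0f} and {high:.0f} under a 5% growth error)")
```

This illustrates only the arithmetic of compounding; real forecast uncertainty also includes changes in behaviour and policy, which, as the talk notes, are not encoded in the model at all.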
10:21
So it's hard to make forecasts, at least in the long term. It's easier in the short term, one or two weeks, but it's hard to do in the long term. But what you can do is you can still make statements about the future, but statements that are conditioned on a specific assumption that you make. And this is one example that was made by a committee of modellers.
10:42
So in the UK, there is a body of modellers called SPI-M, or SPI-M-O, that on a weekly basis provides evidence to policymakers, which politicians then use to base their decisions on. And this was one from the end of October where there was a projection. You see there's something that says "projection on the current trend",
11:01
and by projection, we mean a forecast or a simulation, a prediction about the future under the specific assumption that nothing is to change. So we take out all this, what I just said about individuals changing behaviour and other things changing that are not encoded in the model, and we say, well, we can't really make a statement.
11:22
We can't make a good forecast. But what we can tell you is that if things stay as they are, then this is what we would see. And that's what you can see here as a red line. We call that a projection. And in that case, it became or it was clear that at the trajectory at the time there, this is the dashed vertical line, within four to six weeks we would reach hospital capacity within the UK,
11:46
and that was part of the evidence that went into the four-week lockdown that started in early November. And in fact, the credibility of that statement was based on the preceding four to six weeks
12:13
where in all these cases there wasn't much evidence that anything was changing. The dots were perfectly in line with the predictions made all this time before.
12:23
And whilst we weren't sure that there wasn't going to be a change or a turnaround, there wasn't any particular reason to believe that cases and hospitalisations wouldn't continue as they were. And so this was whilst not a prediction that we would stand up to and say, this is definitely going to happen, it was still a useful tool for policy in order to assess the relationship
12:41
between the current level of admissions, the trajectory, and the relationship of both of these to the capacity in the health system. And then lastly, instead of just assuming that everything is going to be the same, something that you can do with models is you can make explicit assumptions. You can test ranges of assumptions. And so again, this is something from SPI-M.
13:04
And in this case, we looked at different scenarios, particularly the research led by Matt Keeling and others, they looked at different scenarios of how fast the vaccine rollout was to continue. We didn't know and nobody knew at the time how fast the vaccine rollout would continue.
13:24
So it wasn't possible to make a prediction, because that was an unknown and something that we didn't have any information on. But we could test, or modellers could test, different assumptions. And so lay out a range of scenarios, none of which was probably going to become reality, because these weren't forecasts. These weren't created with the idea that one of these would happen.
13:44
But at least they could lay out kind of a range of possibilities that, again, policymakers could use to base decisions on. And so to summarize, I hope I've made it clear that models are a tool.
14:01
They're not a crystal ball. They can combine things that we know with assumptions and theory. It's important to know what the assumptions and the theory are that go into a model. If they are wrong, then the model will be wrong. But they allow us, they are an excuse for data and they can replace data where we don't have data. It's really important to always be aware of the limitations
14:20
and the assumptions that go into a model, but often they are the only thing that we have available. The main purposes of this kind of real-time modeling in outbreaks are to understand what's going on, to get better situational awareness, but then also, as I've shown at the end, to explore plausible or possible scenarios. And it's important to bear in mind the distinction between forecasts,
14:40
projections, and scenarios: models can be used for forecasts, but they're usually not very good beyond a short time horizon. Thank you. Thank you very much for that nice talk, Sebastian. I remember well that paper in The Lancet Infectious Diseases that you mentioned at the beginning; I was struck by how you could use some data from China
15:01
and also from all these other cities where people with COVID had landed and then to use that as a way to gauge the reproductive number. I think that's something that many people don't realize, that you can do that with modeling as well. It's not just about forecasts and projections. So I thought it was interesting.
15:21
We're going to our next speaker, who's standing right next to me. It's Sheetal Silal. She's the director of the Modelling and Simulation Hub, Africa, also called MASHA. She's also an associate professor in the Department of Statistical Sciences at the University of Cape Town. Sheetal's models have helped governments in many lower- and middle-income countries
15:44
make important decisions on diseases such as malaria, syphilis, and pertussis. She's also a part of a modeling consortium that advises the South African government on COVID. I'm very grateful that she's come all the way from Cape Town to Heidelberg for this session.
16:04
She and I had a beer last night on one of the lovely squares here in town, and Sheetal told me that outbreak modelers need empathy, and I thought that's interesting, because empathy is not usually the first thing you think about when you're talking about mathematics.
16:23
So perhaps she will explain to you what she means by that. Thank you. Thank you very much. And today, following my colleague's presentation on what the purpose of modeling is, I'd like to focus on the importance of taking into account context, diversity, and culture in our modeling.
16:44
So when we are developing a disease transmission model, what are the primary steps? And we've listened to a little bit of this already, but to go through it briefly, we first need to review the existing knowledge base. We need to read up on global policy, understand the biology of the disease, the various treatment options that are available.
17:03
We then collect whatever data is available to us, and we then use this data to develop and build our mathematical models, choosing a methodology that is commensurate with the amount of data available to train our models. We then need to test our models to assess that they are, in fact, robust
17:22
and doing what we believe they should be doing, and sensible, and so we conduct sensitivity analyses, not just on the parameters, but also on the model structure itself, and we test out the assumptions that we're including in our models. And then we're in the position to run model scenarios, such as demonstrated by the previous speaker,
17:42
these stochastic scenarios that allow us to come up with model findings with uncertainty ranges and confidence bands, and the last step is to communicate our model results and model findings to those who are requiring them. So, taking into account the strengths, the limitations, the uncertainties,
18:02
all the good, all the bad, and all the uglies, to communicate in an honest manner. But when you take into account, or when you wish to take into account context, culture, and diversity, it requires you to do quite a bit more than your standard disease modeling that perhaps one learns in a degree
18:23
or in a textbook. So, what you have to do is to look at the population of interest. Perhaps you're focusing on a country. You need to read the local country policy documents. You need to read the monitoring and evaluation reports that tell you not just the policies that countries decided to implement
18:42
or chose to implement, but also how well they implemented them into the past. You've got to take into account local epidemiology and particularly the health system characteristics, understanding how the population access the health system, and very importantly, understanding how data are collected
19:01
and how representative the data are of the true underlying situation. So, I'll give you a few examples from malaria. In the many malaria models that I've built in LMIC, there have been a few characteristics that have always stood out for me in terms of being important to take local context, culture, and diversity into account.
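The sensitivity-analysis step described a moment ago, varying parameters and re-running the model, can be sketched with a toy example: sweep a simple SIR model across a range of transmission rates and watch how the epidemic peak responds. All parameter values are illustrative only:

```python
# One-parameter sensitivity sweep on a toy SIR model: vary the
# transmission rate beta and record the epidemic peak. All values
# are illustrative, not calibrated to any real disease or country.

def sir_peak(beta, gamma=0.1, n=100_000, i0=10, days=365):
    s, i, r = n - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)  # track the largest number infected at once
    return peak

for beta in (0.15, 0.20, 0.25, 0.30):
    print(f"beta = {beta:.2f}: peak infected ~ {sir_peak(beta):,.0f}")
```

In practice the same sweep would be run over many parameters at once, and over the model structure and assumptions, not just a single rate.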
19:23
One is access to treatment. We start asking questions. Who accesses treatment? How do they access treatment? Do the sick access treatment through the public health sector, through private care, through traditional medicine, or perhaps self-treatment, often by accessing drugs available on the counterfeit markets
19:42
if you're not able to get them yourself through legitimate means? What about the questions around when people access care? Do you access care as soon as you fall sick, or do you present late to treatment? In particular, we focus on the barriers to access to treatment.
20:01
So, are there groups in the population who simply do not have access to the care that is perhaps even freely available to them? In malaria, one such group are our high-risk groups of migrant and mobile populations. It is often the case that migrant and mobile populations are afraid to access the healthcare system,
20:21
often because they are in the country through illegitimate means or because the service that is provided is not acceptable. So, in some cases, it is through xenophobia or through the judgment they face that people do not feel comfortable accessing the healthcare system. Now, we need to know all of this. You may ask yourself: disease modeling is about maths.
20:42
Why does this matter? It matters because it helps us understand better how effective these policies are, how effective those drugs are, if they're able to actually get to the populations who need them, and it's able to help us understand who's missed. Likewise, in malaria, we have what we call insecticide-treated nets.
21:02
You sleep under these nets, and they prevent mosquito bites, so they help to prevent transmission in that way. And it is not enough that countries distribute millions of nets. It matters whether they are used. In some communities, nets may be used for fishing instead, because that helps to bring money into the household.
21:22
But in many, many high-prevalence settings, these nets are simply not suitable for much of the population, or there's no appetite to really use them. So, brand-new nets are kept in the cupboard or given away or discarded, and so these nets are not achieving the maximum impact that they should.
21:42
And there are many examples like the social behavior, environmental considerations, and I could go on forever talking about each of them. But let's move on to the next topic. What do we do when there's no data available? Now, if we consider the health surveillance systems in many LMIC and many countries around the world, they are often weak.
22:03
And this can be for historical reasons or, even in more recent times, due to limited budgets. You'd rather be spending the limited pot of money you have for health on buying drugs, on taking care of hospitals, or making sure there's enough staff rather than
22:21
establishing a much better data collection system. In other cases, surveillance systems or data simply don't exist. Using COVID-19 as an example, at the very start of the epidemic early last year, we had to make not just the short-term forecasts that we could perhaps do comfortably using existing data,
22:42
but we had to make long-term projections, six-month worth of projections at the very beginning when no data was available. So, what did we do? We established expert panel groups. The one thing I hope that comes across quite clearly during this presentation is that mathematical modeling is not the domain of just mathematics
23:00
or just computer science. It is multidisciplinary by definition. So, when we establish these expert panel groups, they're comprised of doctors, of epidemiologists, biologists, virologists, those working in the public health system, economists, those in government, all coming together to take international data
23:20
and bring some context to the situation in which you are modeling. And that is where this question of empathy comes in, because you need to be able to empathize with the population who is being served ultimately by these models. We may develop models for government, but government is using these models to develop policy to help people on the ground, and they are the ultimate end user,
23:44
and that is exactly where empathy is required. So, these expert panels, what they do is they help to contextualize information. One quick example that I could cite is when it came to critical care and understanding in my country, South Africa, the likelihood of people accessing critical care
24:01
and how long they might spend in a critical care bed. We could not just take the parameter values or these estimates from European countries, because in South Africa, there are far stricter criteria to enter hospital in the first place and then to remain in ICU. And so, this is exactly where the expert panel helps to bring everything together
24:24
in order to take this international information and make it relevant for the population you're trying to serve. And lastly, all of this, at the end of the day, only really matters if we can communicate our model findings in a clear way.
24:41
So, what we have found is that if we, as scientists, convey our model findings objectively and honestly and with humility, and therefore, citing what the models can do, their strengths, citing the limitations, as well as all the very vast uncertainty
25:00
in developing these models, then we are able to establish a good relationship with the stakeholders who are in charge of making decisions. And we've also found that using appropriate tools helps to present all these equations and numbers in a meaningful way
25:20
for these stakeholders to understand. So, for example, using interactive dashboards, maps, clear plots, using non-technical language, as well as developing focused output for each stakeholder group that you are serving, we have found all of these to be useful measures in better communicating our model findings.
25:40
And so, in conclusion, what I'd like to leave you with is that I have found in my experience that among all the models that I've developed across a variety of diseases and for many countries around the world, the models that have been the most useful are those that have paid the due respect and attention
26:01
to the local context of the population in which they serve. Thank you. Thank you, Sheetal. That was really interesting. I guess you really made clear why you need empathy and also why modeling is not something that you do just behind your desk. You really need a lot of information on the ground, which in many countries is very hard to come by, right?
26:22
So you also need really good relationships with the people that can provide you those data. Right, yeah. Okay, thank you. We go to our third speaker, Julia Fitzner. You've now heard from two modelers. We're now going to the other side, if you will, to people who use models.
26:42
The client, as Sheetal called them, or maybe the consumer, however you want to call it. Julia is not a modeler, but she works at the World Health Organization headquarters in Geneva, Switzerland, where she's the team lead for data and analysis in the Global Influenza Program.
27:02
Welcome, Julia. Julia has been involved in the response to many outbreaks and epidemics around the world at WHO, including yellow fever, SARS, the influenza pandemic of 2009, and currently, of course, the coronavirus pandemic. Welcome, Julia, again, and the floor is yours.
27:22
Well, thanks a lot for this nice introduction and welcome. And I'm very happy that I am actually talking right after Sheetal, who brought a lot of this contextual thinking to it. I will focus a little bit more on the data that is actually used with the models, and then how we actually use them and what we need to do,
27:43
and maybe also on where we go with the next steps on what we need to understand and going further with all these kind of data and models that are already available around the world. So just, and this is a little bit going back to the basics again as well,
28:04
what are we asking at the beginning and during any pandemic? And it's really sort of, do we know how transmissible the disease is or stays? I mean, now with the new variants coming up, we are still asking also, how does the transmissibility of the disease change?
28:25
How severe is the disease, and who is most affected? So who are the ones that we need to care most about and protect? But also, what is the impact on our health system and on society?
28:43
And finally, and this is also where scenario modeling comes in: how do the interventions that we plan or have at hand change this natural progression of the disease?
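These questions are what compartmental models are built to explore. As a rough illustration (an editor's sketch with made-up parameters, not a model used by WHO), a minimal discrete-time SIR model shows how an intervention that reduces transmission changes the natural progression:

```python
# A minimal SIR sketch (illustration only): the same epidemic run with and
# without an intervention that reduces the transmission rate, to show how
# interventions change the "natural progression" of an outbreak.
def sir_peak_infected(beta: float, gamma: float, days: int = 365) -> float:
    """Run a discrete-time (Euler step) SIR model; return peak infected fraction."""
    s, i, r = 0.999, 0.001, 0.0   # susceptible, infected, recovered fractions
    peak = i
    for _ in range(days):
        new_inf = beta * s * i    # new infections this day
        new_rec = gamma * i       # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# Hypothetical parameters: R0 = beta/gamma = 3 without measures,
# halved transmission (e.g. fewer contacts) with measures.
print(sir_peak_infected(0.6, 0.2))  # large peak
print(sir_peak_infected(0.3, 0.2))  # much smaller peak
```

Even this toy version reproduces the qualitative point made throughout the session: the same disease with reduced transmission produces a far smaller epidemic peak.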
29:07
So going back now to what we have sort of currently at the global level, we have a lot of data and we are collecting a lot of data. So we have the data of the current COVID pandemic at the global level.
29:20
We now have reports of nearly 230 million cases and nearly 5 million deaths. And we have them by region, by country. We also have the cases and the deaths by age group. So we can see trajectories, and these data are important in order to make models and further projections.
29:46
We also have, next slide please, the vaccine coverage. So we do know where people have been vaccinated. We see here the big gap in vaccination on the African continent
30:01
compared to other countries. Next slide please. We also have data, and this is actually collected now from people around the world who give it to us voluntarily. So it's individuals in the countries who give us information
30:23
on what public health measures are used, and then we calculate an index out of it. This is from last week, but we can see over time how public health measures have been used. And, as Sebastian was saying, these things are important if we want to then judge
30:43
and look at the data and the models that we see and that were used. We also have, and this is now just a snapshot of the GISA data, which is the data on the actual virus genomics.
31:01
So here you see a snapshot from three days ago. We have more than three million virus genomes that have been analyzed in detail, and we can then distinguish these kinds of differences. Most people know about these things now.
31:23
This is very new as well. People know and talk about the alpha, beta, gamma and delta variants, and you see where they come from, and we have a lot of detailed information from this deep genomics as well, which again is important to consider when making models.
31:43
Next slide. We also, I mean, while we have heard that we have the data on the cases, the case reporting, and this was also stressed by Sheetal, it doesn't mean that we actually have all these, that the cases that are reported is referring to the right number of cases.
32:04
So we are also looking at the serology: who has either been vaccinated by now or had the disease in the different countries. And so we have here the different studies that have looked into
32:22
what is the serology of COVID in the different countries. And again, this is important to understand the trajectory that is then done with the different models. Next slide. And so this was just a snapshot of the different data to be used
32:41
and then we use models with these different data. We apply the different algorithms and, as Sebastian was saying, we then look to fill gaps in the different data, because the data that we have is not the same wherever we get it from.
33:04
So we really have a lack of completeness, and we also have a lack of timeliness and of quality, which, again, modern algorithms and models or artificial intelligence can help to really close those gaps,
33:20
but they need to be considered and thought carefully through. And then we get these different results from the different modellers and they are very different or they have a very wide variety of results, which is then really difficult to grasp from the people that...
33:51
Technical problem.
34:02
It looks like we have a technical problem. We seem to have lost Julia. I hope that can be restored. Right. She was in the middle of an interesting talk, but we've lost her. I'm looking at the technical people.
34:22
I hope we can get her back. Or should we move on to the next talk for now? Yeah, I think we're going to the next talk. Yeah, and hopefully Julia can finish her talk after that. All right. That's going to happen at hybrid meetings. It's a technical challenge.
34:42
So we're continuing. Our last speaker today... Oh, Julia is back. We lost you momentarily. Please finish your talk, Julia. Okay. So next slide. So there is a lot of comparing
35:02
and, as Sheetal as well as Sebastian have been saying, there are these groups of experts that actually look into the different models, and this is a very, very important thing, so that the different experts can have a look into each other's models and can help each other. So these are the experts.
35:22
Oops. I think we lost her again. I think we lost her again. Yes. We may have to go to our next speaker. Okay. Sorry about that. Our next speaker is Amrish Baidjoe,
35:40
who has now also joined me here on the stage. Amrish is also not a modeler, but he is a field epidemiologist and microbiologist. He has worked in many humanitarian crises around the world, places like refugee camps, war zones, I believe, and he recently took a new job with Médecins Sans Frontières,
36:04
also known as Doctors Without Borders. He is director of their LuxOR operational research unit, which is based in Luxembourg. Amrish is from the Netherlands, just like me, and I've seen him on television a lot the last 18 months
36:20
in all kinds of talk shows, where he is often very critical of the Dutch coronavirus approach. I found that really interesting. He's also been critical sometimes of epidemic models and the way they are used. So I'm curious what Amrish has to say today. Hello, everybody.
36:40
So thanks for having me here. Indeed, I'm trying to play a little bit of the devil's advocate, not to say that something is good or that it's bad, but to better get to know each other's worlds: on one side the academic world, and on the other side the world that we at Doctors Without Borders work in, which is often the emergencies that are most difficult,
37:03
where data is never readily available, where surveillance systems are lacking. Maybe just a little bit of a background. Our line of work, medical aid, where it's needed most, we do it independent, neutral and impartial.
37:22
We work in over 70 countries and we have over 65,000 staff members, and we do our work as close to the ground as possible by running health and medical facilities, which gives us also a unique eye into many humanitarian emergencies
37:42
where not many actors actually work. These are some of the subjects that we work in, right? So it's everything related to infectious disease but not limited to infectious disease, noncommunicable disease, cardiovascular disease, diabetes in communities like refugee populations
38:00
that lack access to healthcare; big diseases that have been there since the start of humanity, like malaria, that are very dependent for their control on national budgets. And this COVID crisis reignited them and set back much of the progress that has been gained over many years.
38:20
So we're not just looking at COVID-19. Let me just kind of zoom in on the outbreaks, humanitarian health emergencies. Increasingly, we are becoming aware that we are codependent on each other, that we need our best in terms of work, our brightest to work together.
38:41
And I was surprised during the pandemic, I had hoped we would have done a little bit of a better job both in the countries that we come from as well as a global village to say so. This is an image that maybe many of you are now familiar with. It's a classic epidemiological curve, right? Typically, you have a patient zero and then at one point when you have enough patients in a hospital
39:04
and there's an alert, we start interventions. And that is hopefully something that comes onto the screen now. And then your intervention begins. And if you do your job in the right way, you will come to areas like capacity building, lessons learned,
39:22
policy formation, academic research. At least that's the classical dogma. In reality, we're constantly learning, constantly adapting, constantly improving. There's also an added dimension of complexity, right? So more data sources are available, not least because we have more advanced diagnostics
39:41
here and there available. Unfortunately, not in many of our settings. And a couple of years ago, we wrote a paper on that which talks really about, yeah, data science in outbreaks. How can we improve that and how can we work better together? So we also use a lot of open data sources, right, to get data from maps, GIS types of data.
40:05
And we have many more possibilities when it comes to analysis. We also have more advanced clinical data, mostly here in the West, not so much in the areas that we work in. So if you look at what is really interesting for us, much of it actually comes down to what we call basic descriptive epidemiology.
40:24
It is not rocket science, but it's incredibly pivotal to the work that we do. All the other types of discussions related to mathematical modeling, unfortunately for us, are not directly relevant, but we need to keep in constant dialogue to see how we can improve workflows
40:40
and how we can make the data that we have more useful so we get better output from the academic world as well. Let me walk you through an example. For example, when this crisis started, of course, like the first question was how hard is it going to hit refugee camps. And one of the biggest refugee camps that we know is the refugee camp in Koksmasar
41:01
which grew during the exodus of Rohingya refugees from Myanmar back in 2017. And a modeling study hit the headlines saying that the impact was going to be really bad. And we kind of knew that, because basically none of the basic measures that we have become so accustomed to are applicable
41:21
or are implementable in many of these refugee camps. So March 8, first case in Bangladesh where the camp is located, the preprint of this paper was published on the 26th of March. And we wanted to know, of course, what the potential impact was going to be, how much did we need to scale up.
41:41
The study was published in one of the more prestigious medical journals on June 16. But there were two points that I want to highlight here. One is that if you look at the main findings, one of them was that, yeah, it might get bad, and the refugee camps didn't have structures to support things if they went wrong.
42:03
And we need to come up with novel approaches. None of this was actually new to us or actually contributed anything to the work we were doing. On top of that, and this is not a criticism of any individuals, zero authors on this paper were actually from the country itself, let alone actively working in that specific emergency.
42:23
So there was a lot of contextual information missing. Another example, cholera is a disease that we are very familiar with. It's a disease that is spread through water and we know that it can be endemic, for example, during periods of floods. And one of these articles that we came across, for example,
42:41
claimed that cholera outbreaks can be predicted using climate data and AI. And I just highlighted one of the main findings: that human factors, such as access to water resources, are important for cholera incidence. Again, nothing new to us; it doesn't really help us in operational work. Why? We will never be able to pinpoint
43:00
when exactly a cholera outbreak is going to happen because we never have that type of surveillance capacity. And even with novel tools, if we could pinpoint it, we often can't reach those areas or don't have the means to kind of work in these type of areas. So I guess the key question is, what does it add and how does it help us?
43:22
So we deliver data, right, in this theoretical exercise. And for us, magic happens when we give it to mathematical modelers and people that are way smarter than we are. And then ideally what we hope to get out is this eureka solution. And then if you look at reality: the data we deliver, not high quality.
43:44
And what you get out, yeah, unfortunately also not high quality. And I specifically use these two emoticons that everybody knows nowadays because it's a famous saying in applied epidemiology, shit in, shit out. Too often I feel that the link with public health professionals
44:02
that are working on the ground is lacking. And that makes mathematical modeling sometimes a bit of a classic academically driven study, or a limited study whose operational utility remains questionable for us working in difficult contexts. There's limited awareness of the operational context, and on our side there is limited awareness of the academic support mechanisms,
44:23
which are often so complex and fast moving that we don't even see what the proper picture is. And an Ebola funeral, for example, where infection prevention measures are not properly used can completely change the course of an Ebola outbreak. Often we're not consulted on parameter estimations
44:42
and the parameters are kind of like the known unknowns that you want to feed into your model. But also we deal with different incentive models. My work is to deliver the best medical care possible to populations in need. And yes, that sometimes does conflict with the academic model of publishing.
45:01
But it's important. The global trends in humanitarian response look grim. We see more fragile contexts, more people being displaced by conflicts, not least amplified by the COVID-19 crisis and climate change. The duration of our responses is on average longer and much more complex, as we've all experienced during this pandemic. And financial support is not growing accordingly.
45:22
So our capacity to respond to emergencies is drastically and radically tested. And then there's a few elements on the complexity of the data that we deliver. Much of the data we still collect is by necessity sometimes collected on paper. So 90% of our time in the field settings,
45:41
we're busy with merging different types of data sets, getting data from a very badly maintained Excel file and merging it all together, data cleaning, re-cleaning, problem-solving, getting clearance to actually share data, which is a really important subject. And we have narrow timeframes for analysis, right, because the time pressure is immense during these health emergencies.
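To give a flavor of that cleaning work, here is a toy sketch in Python with hypothetical field names and records invented by the editor (not MSF code): standardizing spellings, dropping duplicates and rows without an identifier, and keeping missing values visible rather than silently filling them.

```python
# Illustration only: the kind of line-list cleaning that eats field time.
def clean_linelist(records: list[dict]) -> list[dict]:
    """Standardize district names, drop duplicates and rows without an ID."""
    seen = set()
    cleaned = []
    for rec in records:
        district = rec.get("district", "").strip().lower()
        key = (rec.get("case_id"), district)
        if rec.get("case_id") is None or key in seen:
            continue  # drop rows without an ID, and exact duplicates
        seen.add(key)
        cleaned.append({
            "case_id": rec["case_id"],
            "district": district,
            "age": rec.get("age"),  # left as None if missing, to be flagged
        })
    return cleaned

# Hypothetical messy input, as it might come out of two merged Excel sheets:
raw = [
    {"case_id": 1, "district": " Kutupalong ", "age": 34},
    {"case_id": 1, "district": "kutupalong", "age": 34},   # duplicate entry
    {"case_id": 2, "district": "Balukhali"},               # missing age
    {"district": "Balukhali", "age": 50},                  # no case ID
]
print(clean_linelist(raw))
```

Real line lists are far messier, but even this sketch shows why consistent, automated cleaning frees time for the actual response work the speaker describes.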
46:02
So the question is, how much time do we actually have for actual work? And that fueled the kind of need for us to focus on the development of tools. And if you look at the cycle from data to action, for us there's a few elements that are very important. Data cleaning, having graphics and data visualizations that are insightful,
46:23
a good descriptive tool, and consistent reporting, because we need to report on a daily basis and we want to do it consistently so that decision makers can take the best-informed decisions possible. What do we need for that? Well, a lot of our work lies in good descriptive epidemiology. A lot is inherent in the ways that we collect data,
46:42
and surveillance capacity is always a limited aspect in our working area. Having good tools that help us, as I said, with the things like around data cleaning, visualization, parameter estimations, predictions, but especially consistent reporting. But actually the role of advanced analytics in a humanitarian sphere of field epidemiology
47:02
has been limited so far. So why is it relevant? I'm just going to speed through this slide. It will generate more time to actually work on our real job, which is actually solving an emergency. It helps us identify the right bottlenecks, treat patients better, and will make data more comparable across time and across different places,
47:22
and that by itself, of course, fortifies the way we can actually do science and improve practices, but it also helps us communicating with stakeholders, increase awareness of what actually is happening, interact with affected communities, a very important point, and advocate based on solid evidence. There's two examples of the improvement of tools
47:42
that I want to highlight to you and that you can go and have a look at. The R4Epis project, which was a collaboration between many, let's say, more theoretical epidemiologists, people that have a lot of experience with R, and many of the field epidemiologists that do the day-to-day work, who identified the specific areas where we need help, for example,
48:02
to make these so-called situational reports where we have ready-made R scripts, you just put your data in, it's automatically cleaned, inconsistencies are removed, and you get a nice Word document with tables that you can edit, basically visualizations that you can partially edit or export in Excel,
48:20
depending on whatever you're using, and yes, Excel is still the most used tool. Keep that in mind. Another resource, developed by good friends of mine, is the Epidemiologist R Handbook, because what we need are materials to train with, materials to build capacity, and that's often neglected, right? And for me, this has always been a really important subject,
48:42
because this is, for me, pivotal: building capacity in countries, building capacity on site, and actually making countries less dependent on academic institutes in our parts of the world. And as you can see, this was a project that was actually instigated and supported by, and I say that explicitly because you see the organizations here,
49:02
but these were actually individuals within the organizations that shared a mindset and actually wanted to do this in their free time, yes, in their free time. And they made this huge resource. Why? Because not many people are interested in funding these types of really important pieces of work. And just to show you how popular it is, the website, since its launch in May 2021,
49:25
has been visited 250,000 times by 80,000 unique visitors in 203 countries, with over 600 users per day, and one in four users have returned and are still using it as a resource. Okay, just to conclude, for a new generation of scientists as well,
49:43
I wanted to discuss one thing. It always starts with a useful question. Are you fishing in a data set, or do you have real questions that you want to answer? Do you know where your data is coming from? Simple things like time, place, person, the key epidemiological questions, are really important to us.
50:01
Also know what the limitations of your data are. What are the assumptions you'll have to make, and how solid will these assumptions be? Know your audience, and know how they read figures. What is their understanding of statistics? How often we had to explain things like exponential growth during this crisis to even fellow scientists or medical professionals, for example.
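As a small aid for that kind of explanation, here is a worked example of why exponential growth surprises people (the editor's illustration, not material from the talk):

```python
# Illustration only: modest-sounding daily growth compounds alarmingly fast.
def days_to_double(daily_growth_pct: float) -> int:
    """Days until case counts double at a constant daily growth rate."""
    cases, day = 1.0, 0
    while cases < 2.0:
        cases *= 1 + daily_growth_pct / 100
        day += 1
    return day

# "Only" 5% growth per day still doubles case counts in about two weeks:
print(days_to_double(5))   # 15
print(days_to_double(10))  # 8
```

Framing growth as a doubling time, rather than a percentage, is one of the simpler ways to make the dynamics tangible for a non-technical audience.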
50:23
Always put your limitations at the top of your document. Is your output actionable? This requires interactions with public health professionals from start to finish. Talk with each other. Have you involved local actors from start to finish? A really important subject that's often forgotten within academic institutes.
50:40
Do you pay attention to global goods? And what I mean is, are you contributing to national capacity on site? And one of the things that is really important that we always forget, we talk about the technicalities, but one of the things we neglect is to talk about new ways, standards, and methods to actually collaborate with each other. Because, as I said, we do need each other.
51:02
So, just to end on a quote from Carl Sagan, and it comes from Pale Blue Dot, if you haven't seen it: in our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. I think it by itself underlines the necessity for better interdisciplinary collaboration,
51:20
but also honest collaborations. Thank you. Thank you, Amrish, for that very helpful perspective from the humanitarian world. I guess you made the point that sometimes modeling really is a distraction, although I assume for modelers it must be odd to see their work represented by the poop emoji in your presentation.
51:44
I want to ask them in the discussion after this how they feel about that. But first, Julia is back, and I'd like to invite her to finish her talk before we move on to the discussion. Julia? Thanks a lot.
52:01
So, actually, I think Amrish was saying quite a lot of the things that I was going to put forward as well. So, I mean, there actually is a lot of data, and there are a lot of different models, but we often do not have the local knowledge, or it's not taken into account, and the context of the data is not used.
52:22
The data is collected, and good analysis is done, but that analysis is not shared, especially in low- and middle-income countries, because they just don't have the time to also publish and share it. Other aspects that might influence the dynamics of the progression are not taken into account, and the results of models are sometimes contradictory
52:45
and not so transparent. And as the others, I think it's really the open discussion among the different experts, and especially multidisciplinary experts is really, really needed, and it is not currently happening enough.
53:03
And the last slide that would come now is the idea that we are starting to think a little bit more out of the box with the opening of this Pandemic and Epidemic Intelligence Hub. Next slide, please. Does she still have that? Where we hope that we can bridge maybe some of those things,
53:23
and I think it's really a plea to all the new scientists to help us really get these things a little bit better, to move forward a lot of the good things that are already there. And it's also about trying to get the data better, because a lot of the data could be better used if we don't do it anymore
53:44
with the classical surveillance collection, but actually use federated data access and get it more semantically linked so that we can use more of the context and that we show them this kind of data more in real time,
54:02
that we have more collaborative exploration of the data and more actionable real-time insights, and then hopefully we can also move towards better decisions locally and globally, so that the different people can see the results of these analyses,
54:23
and, as was nicely said also by Sheetal, we need to show the results. And all this only works if we do it collaboratively and if we learn together. Thanks a lot, and sorry for the technical problems. All right. Thank you very much, Julia.
54:43
I think, thanks to all the presenters for their talks, I think some common themes are emerging, although there's also some differences, but clearly you all think that the models very much depend on the context. You need to have good local information.
55:02
You need good data to come in for good models to come out. There needs to be more collaboration. I think that's a common theme in gathering the data and building the models, and the communication is very important. How do you share the outcomes of the models with the people,
55:21
the clients that need them, and with the public? And one thing I heard is the models need to be more useful sometimes. But I kind of want to start with the poop emoji that Amrish shared, because I think we all have seen in the past year and a half
55:42
that sometimes the models are wrong, even in countries that do have good data. I mean, Sebastian, maybe I can start with you. In the United Kingdom, for instance, some models predicted that numbers would go up, case numbers would go up rapidly after the reopening in July,
56:03
and some people were very alarmist about that. And that doesn't seem to have happened. But there have been many other instances, I think, where the models were actually off. That clearly influences people's trust in the models, right?
56:20
How do you deal with that huge uncertainty and the fact that you're sometimes going to be wrong? Yeah, I think it ultimately comes down to communication. And I think the way these kinds of model scenarios have been used in the UK was really as a policy tool, not as a prediction. I think there's a very clear communication channel
56:44
between the modelers and politicians in the UK, and there's a lot of translation work and interpretation work that goes on there in order to make sure that really both relevant questions are being answered by the modelers, but then also the answers are kind of interpreted correctly.
57:01
I don't think any of the modelers, or any model, would have been able in July to predict what's going on. And often, in fact, the scenarios are set by politicians. So it might be a scenario where, for example, with further opening such as what happened in July, if we assume that people have twice as many contacts as they had before,
57:21
what would happen? And then you get the output from the models. But no model in the world is able to make a reliable prediction for something like a COVID-19 epidemic more than a few weeks ahead. So if you want really firm, reliable predictions, then, well, there's nothing you can do. A model can't answer that for you.
57:41
And, in fact, it's often a plausibility check you need to do with models. And I liked how Amrish put it: you provide the information and then some magic happens, and then you get the output. And I think often, really, the insights from models
58:02
are often not particularly controversial. Once you look at the assumptions that go into the models, they often seem obvious. It's just that they provide a certain clarity in putting these assumptions together and then getting an outcome. But in terms of what's going to happen in the next months or so with COVID,
58:23
then there's just such huge uncertainty in the assumptions and whether the assumptions are going to be correct, that no model in the world, whatever it does, will be able to answer that correctly. So I think to answer your question, I think it ultimately comes down to communication and to ensuring that there's no expectation
58:41
that these models are scenarios with a very clear set of assumptions, where every modeler knows that none of the assumptions will exactly come to pass, and that they are planning tools. If that is clearly communicated and the expectation that it's a genuine prediction is avoided, I think that is how we avoid the kind of critique
59:03
that might then be levied at the modelers, that predictions were wrong if they weren't genuine predictions. Sheetal, do you want to weigh in on that? I completely agree with what Sebastian has said. So I think what we've learned in this pandemic is that as much as you may have had a group of modelers doing modeling,
59:23
the modelers themselves almost needed a communications team behind them to be able to do this kind of explanation to the public. And I think it really would have helped our cause if we had been able to dedicate more resources towards almost teaching the public the basics behind disease modeling.
59:41
And as Sebastian so eloquently put it, what the differences are between predictions and projections and scenarios. But we were operating in an emergency setting, and perhaps other than writing a couple of articles in the media and maybe having a Twitter account, it was very hard to establish a public communication
01:00:00
platform for your clients or your stakeholders as well as the public. But do you understand that this undermines the public's confidence in the models, that at some point they're like: I don't know all these models, I don't know what to believe anymore? Certainly. I think that has been an issue, and one that we hope to rectify as we model into the future.
01:00:21
Okay. And do modelers get training for that, for communication, or is that something you learn on the job? I think for most people, you learn it on the job, particularly, as we're seeing, in emergency settings now. But at least in some of the networks in which I work, we do try to offer media training for up-and-coming modelers: how do you conduct yourself in the media,
01:00:43
what are ways in which to communicate? But that costs money, and money isn't always available for communications when you're trying to cover salaries as a modeler. Okay. Sebastian, how have you learned to communicate the outcomes of your models and the uncertainty that comes with them?
01:01:01
The hard way: by being misunderstood often enough that eventually you learn to attach enough notes of caution, and enough uncertainty, to the results that you present. And I don't want it to come across
01:01:22
as purely being a misunderstanding on the part of the recipients of the model; obviously it's also on us to learn to become better at communicating, and to not communicate modeling results without also communicating the limitations and the assumptions that go into them. The other issue is that in the UK, for example, and this is quite often the case
01:01:43
when we work with policy makers, the models really are a tool, and they arise from a specific conversation with policy makers, with decision makers, or with politicians, often via intermediaries or health departments or ministries of health or charitable organizations like MSF. And there are
01:02:06
specific questions, and then we as modelers try to find specific answers. Now, this is not done in public, and I think it's important that the public is aware of what informs policy, but the communication that led to generating these results
01:02:21
often gets lost when results just get presented, so all the general public then ends up seeing is a plot of future cases that maybe looks like a prediction, but was never designed to be one. And so I think it's clear communication that's required at all levels of
01:02:40
interaction there. Right. Julia, do you want to weigh in on that question about communication of the uncertainty in models? Well, I mean, I said it in my presentation as well. I think, as the other speakers have said, the uncertainty is an important thing, and risk communication is an area on its own, which we've started engaging more with, and I think
01:03:11
it's another thing that we need to take forward even more, and translating technical results into more usable language that everybody can understand has, I think, been shown in the
01:03:27
pandemic to be very, very important, and there are a lot of good examples of how that can be done, and I think we need to do that even more. And the uncertainty doesn't only start from the model, that's what I tried to bring across as well, and I think that
01:03:42
others have been saying as well: it starts with the data, and we probably need to get better at not just always presenting the data as-is, but using more semantics with the data. Right, okay. We're getting some questions from the audience, and I would encourage you to send
01:04:01
in more if you have them for our speakers. I'll take the first one. It's pretty specific, but it's interesting. The question is, how do mobile phones change some negative aspects of data collection from your point of view? Mobile phones are omnipresent, have many sensors, data transmission, et cetera, and I think that's interesting because, theoretically,
01:04:22
with the data from mobile phones, you can collect a vast amount of information on people's movements, how many other people they see, whether they go to bars, restaurants, et cetera. So, I can imagine that that's almost a dream for modelers, but clearly, there are also huge privacy aspects related to that. Sheetal, can you?
01:04:45
Sure. So, during the COVID pandemic and modeling the epidemic in South Africa, we did make use of mobile phone records, in particular event-type data, recording how many pings were in the area in which you woke up and then pings in other districts or wards throughout
01:05:03
the day. That was useful for us, particularly with respect to understanding the spatial connectivity between districts, because you could, perhaps as your simplest assumption, use an inverse distance-weighting method where you consider that people will most likely travel
01:05:22
to areas around them with a greater probability than areas further away. But especially when you bring economic travel into account, and you take restrictions on certain kinds of travel, these inverse distance-weighting models don't really work. So, the mobility data from mobile phone records helped us get a much better idea, between the different restriction levels,
01:05:43
how movement was actually occurring during the pandemic. The difficulty, of course, was, number one, securing the contract to actually get the mobile data. So, we only got the data maybe three, four months into the pandemic, because these things take time. We were also
01:06:01
not necessarily able to publish the data, because it's not our data to publish. There are severe restrictions on that. So, as much as mobile data is extremely useful in looking at spatial connectivity and spatial movement for individuals, we do have to take the privacy concerns into account as well. Because Sebastian mentioned that human behavior
01:06:25
is one of the most difficult things in the end to model, but this gets you pretty far along with collecting data on human behavior, right, Sebastian? In principle, yes. In practice, in my experience, it's often more difficult. And I think I really like the examples that
01:06:46
Sheetal gave and how they've used the data in South Africa. And I think there's definitely huge potential in there to unlock. But at the same time, it's fairly complicated data. And from my perspective, the real data gaps that we have had in the pandemic weren't
01:07:05
in more complex data on people's behavior; it was really the simple stuff. Early on in the pandemic in the UK, and I think the same elsewhere, we were really flying blind, because we didn't know how many cases there were across the country and because there wasn't much testing going on.
01:07:24
We had to try to paint a picture from disparate sources of data on hospitalizations, as there wasn't any testing in the community. We tried to set things up and get things off the ground. And now in the UK, I think there's an exemplary data collection and data provision system, with
01:07:43
a lot of data being publicly available and a lot of useful analysis being done on that, purely on the clinical data. Then there's all the kind of contextual stuff like, okay, how people move about, and mobile phones. And it's much more difficult to do something with that on the rapid timescales that you often have to respond in during a pandemic.
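As an aside on the inverse distance-weighting approach Sheetal described: a minimal sketch of such a trip-probability matrix might look like the following, where the district coordinates are purely hypothetical (not from the South African data) and the decay exponent is an assumption of the sketch, not a value from the discussion.

```python
import numpy as np

# Hypothetical district coordinates in km; illustrative only, not real data.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 25.0], [40.0, 30.0]])

def idw_trip_matrix(coords, power=2.0):
    """Row-stochastic matrix P where P[i, j] is the assumed probability that a
    resident of district i travels to district j, decaying as distance**-power."""
    n = len(coords)
    # Pairwise Euclidean distances between all districts.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.zeros((n, n))
    off_diag = ~np.eye(n, dtype=bool)
    w[off_diag] = dist[off_diag] ** -power   # no self-trips in this sketch
    return w / w.sum(axis=1, keepdims=True)  # normalize each row to sum to 1

P = idw_trip_matrix(coords)
# Nearby districts receive more travel weight than distant ones,
# e.g. P[0, 1] > P[0, 2] here, since district 1 is closer to district 0.
```

As Sheetal notes, exactly this distance-only assumption breaks down under travel restrictions and economically driven travel, which is where observed mobility data replaces the weighting.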
01:08:02
That said, I mean, as Sheetal very nicely laid out, there's huge potential in using that kind of data, and it's certainly something that will play a role in years to come. Okay. I have another question from the public. Several of you stressed that models are policy tools. Could you elaborate on how they are used in practice? So let's give another set of
01:08:22
concrete examples briefly, please. Maybe I can start with you, Amrish. How are they policy tools? Let's make this a bit more concrete. Yes. I think in many ways they can inform us better in terms of the different scenarios and how they could play out. And I think that's the
01:08:41
most interesting element for policymakers as well. And I'm now speaking again about the context that we as Doctors Without Borders work in: in gathering that quality data, we're not there yet. Can you give an example of a disease that you have worked on where the model was an important policy tool, where it led to a concrete change? Not by itself. At least, I can come up with
01:09:02
examples where we've used data in such a way that it led to concrete changes. Good epidemiological studies. Yes. Right. Descriptive studies. Yes. Okay. But modeling not directly. And maybe that's because of the limits of the context that we work in. Right. Sheetal, can you give an example? So, I'll give a non-COVID example. Back in 2017, 2018,
01:09:25
we were conducting at the time a malaria elimination investment case in South Africa. So, the purpose of this modeling exercise which was initiated by government for the purpose of policy change was to develop a model for South Africa, make projections, and develop a set of
01:09:45
scenarios as to where, with the current policies, the country might be in achieving malaria elimination in the next 10 years. And if we made certain changes to our policy, so added a few more interventions of different kinds, how could we in fact get to elimination? We had to cost
01:10:02
it and then we were to work out the funding gap. So, if we were to implement this policy, what would the outstanding funding be? So, we did that. We developed a mathematical model of malaria transmission in South Africa, ran a variety of scenarios, some that were in line with current government policies, some completely out of the box. And it turned out that the
01:10:23
one scenario or the general scenario that led to elimination within the timeframe that we were happy with was one that required investing domestic South African resources, so money from South Africa's taxpayers, investing this money into reducing malaria in Mozambique, our neighboring
01:10:43
country. And that's quite an out of the box policy in the sense that you're using domestic funding to help a neighboring country reduce their malaria. That came out purely from the model. The model was then used as evidence to seek funding from the national treasury in the country to implement the costed amount, costed by the model, and then it was subsequently granted.
01:11:05
And is now underway. Okay. So, it's an example where you really tell the government where to put its money in a way or what it can expect for a certain investment. May I just add to that, to complete the example, that it was only possible because it wasn't just, oh, the
01:11:20
modelers are telling the government. The government and modelers together came up with the project and were partners throughout the project and in the communication right from beginning to end. So, it was a true partnership and I think that is why it's successful. Julia, do you have an example of the models as an important policy tool for the World Health
01:11:40
Organization? Well, there are lots of them, honestly. And I mean, sticking with COVID, even in the very early beginning there was this question: does it make sense to close borders? Does it make sense to do this rigid closing of economies? And a lot was done through these
01:12:05
different scenarios, and they were done in different ways. They were done at a global level and in discussions with the different experts, and they were done in each of the different countries as well. And it was a very, very hotly discussed thing. The problem at the very
01:12:22
beginning, obviously, was that there wasn't a lot of data in there. So, there was a lot of uncertainty about a lot of things, but the models absolutely helped and were used and are used all along. So, I mean, now also: how is vaccination playing out? How is the
01:12:40
Delta variant or another new variant potentially influencing the effects? All this is done with models, and there are different models. Models are not necessarily just nowcasts and forecasts; models are also used to actually see effects. And going beyond COVID, even the global burden studies are all models, and
01:13:09
they're helping to compare burdens to each other, to see what is the most important part to take action on now. So, yes, there are lots of uses of modeling for decision
01:13:23
making. Thank you for that. I want to go back to Amrish's talk, because his basic point was: yes, sometimes the models are a distraction, and apparently in these really dire situations in refugee camps, they're not all that useful. And you showed us a paper
01:13:42
about COVID in a camp in Bangladesh, and you said, you know, these people have never been on the ground, they don't really know the situation. So, it's kind of sad if the models don't work for you. How can that change? And has the pandemic changed that? I mean, are people working together a little bit better than
01:14:02
they used to? Well, I think what all of us agree on is that it's all about the interdisciplinary concept, right? So, we need strong academics to help us, because we're busy with our day-to-day work and we can't do it by ourselves, because we lack that type of specific knowledge. At the same time, to feed models, you need high-quality data. And I think that's kind of the whole
01:14:25
perspective on models during a pandemic like this: they're one of the tools, right? One part of a larger set of tools. And I think Mike Ryan at WHO said it best, right? If you wait too long before you act, you will always be too late. If you need to wait for models to get fed with data,
01:14:40
as a policymaker, you still need to make some form of decision, and those decisions at that point in time maybe need to rely more on the operational experience that you have, on simple descriptive, you know, epidemiology. But then if, you know, your health professionals don't understand what exponential growth is, your policymakers don't understand what exponential growth is, and your national public health agencies have very limited capacity to actually
01:15:04
do descriptive analytics, because at the top layer they have to deal with all the questions that policymakers keep on asking them, and they know what questions to ask but they don't have the capacity to actually answer those simple questions, that's when you get stuck. And I think this pandemic has made that very visible. So I think, you know, it's time
01:15:22
for acknowledging that there is a healthy codependency between all of us, that we are going into an era where we will see more difficult crises, and that we also need to be very honest in the attributions and the bits of the puzzle we bring together to work towards solutions. But, as I said, it does require some honest conversations. I think sometimes we are very distant
01:15:43
from each other's worlds, and, you know, that's sometimes neglected, because we immediately jump to the technical aspects of the discussion. We don't talk about the context and everything around it. Right. Okay, the next question from the public, a technical question, maybe for our modelers: which areas of computer science do you think are the most helpful
01:16:01
for epidemiologists? Sebastian or Sheetal, can you talk about that? Well, if you're speaking about computer science specifically, for me I would imagine two aspects. One is if we have complex sets of data that require a quick analysis of perhaps large data sets like contact
01:16:23
data or mobile data. Having skills in computer science is, I think, very vital to be able to process that data really quickly. Two would be optimizing our modeling code, in order to make the models run more efficiently, but in a sensible way, so that they can be altered with
01:16:41
relative ease. Often the modelers have learned and become computer scientists themselves, but having those who come from a computer science background working with the modelers just serves, I think, to make the process easier. And actually, the third area where computer science can be really useful as a skill is in
01:17:03
developing dashboards and tools and writing R packages and all of that, in order to make these tools available quickly for those who need them. Okay, Sebastian, do you want to weigh in on that? Yeah, I very much agree. I was going to say tools, and I really liked that Amrish mentioned that,
01:17:22
because I think it's an equity issue. Modeling capacity and analytical capacity are hugely concentrated in rich countries, and there's a kind of monopolization of
01:17:41
the ability to do analytics. And I think there's a huge amount of scope for developing more general tools, as well as documentation of those tools and instructional material, which could really be a huge boost to capacity around the world
01:18:04
in doing outbreak analytics and doing useful things in emergencies. I don't know if that's exactly computer science, but it's certainly a contribution that computation could make. All right, thank you. You are modelers, but you're also scientists, and we all know that
01:18:23
in science it's publish or perish. You need to publish papers to advance your career, but at the same time you also spend a lot of time on making these models, making forecasts for decision makers, publishing these fancy dashboards where people can do their own
01:18:43
interactive modeling. So what to you is more important: publishing those papers, or doing all of these other things? And can you get credit for all of these other things as an outbreak modeler? Sebastian, maybe you first? Yeah, you're really pushing at an open door there with how I
01:19:05
think about these things. I think the academic model is an obstacle to useful outbreak response, and it really needs to change. There is some change, and there needs to be a lot more of it. I think the contributions that people in science and academia make to responding to
01:19:24
outbreaks, humanitarian crises, crises in general (it doesn't even have to be crises), need to be acknowledged beyond academic papers; things like dashboards that are created, or the tools we talked about. It's very difficult to get credit for those, and there's no real
01:19:41
incentive to make a generalizable tool. The incentive is much more to publish your one high impact paper and then move on to the next thing and it's really not sustainable and I think what the COVID crisis or COVID pandemic has brought out is that there are crises that happen at a scale that is beyond any government's capacity and there's
01:20:03
real benefit in drawing in expertise and capacity from academia but there then needs to be some kind of system that rewards it and I don't think that's in place and I think in my case I've had the luxury of a fairly long-term
01:20:23
contract and job, so I could de-prioritize writing papers and focus more on trying to provide useful information to policy. But many don't have that luxury, and it's particularly on the backs of early-career researchers that a lot of the work that is relevant in a moment like this is being done, and
01:20:40
I think there's some change happening, but a lot more needs to come. And really, we've done this on the goodwill of a lot of young researchers, I think, who have set their careers back a little bit in order to provide a useful contribution. I'm hoping that that will be recognized going forward in job applications and promotion procedures, but I'd like to see more
01:21:01
of it. Okay, so there needs to be a change in the rewards system. I assume you agree, Sheetal? Have you published much in the last couple of years? No, no, no. I've got many, many papers on the back burner, because we have an incentive to work, and having established these relationships with government, in order to keep them you have to produce: you have to make projections and help to answer policy questions as they arise. I'm lucky at my
01:21:25
university that these kinds of social engagement, or social entrepreneurship, these kinds of ideas, are credited, but that's not the same everywhere. So I think there's a real need for a change in the academic structure of credit. Okay, I have another interesting
01:21:42
question from the audience. What lessons, if any, do you think climate change modelers can learn from epidemiological modeling, for instance about dealing with different stakeholders? Julia, can you say something about that? Well, I actually think that, again, the multidisciplinary
01:22:03
is key, and I think we need to work together in both ways, because climate change and disease are actually closely linked, and so I think we should not even think in these kinds of barriers of one or the other. And making
01:22:23
results more available and being able to communicate the outcomes is an important part for both of them. And actually, let me go back to the question before; I'm sorry, but I have strong feelings about publications as well. I actually even think that
01:22:42
the publications make things so hard, because we hold back information because it needs to be published, and it is difficult to actually read through all the publications, because there's a long text and we don't actually get to the point. So I really think that it is
01:23:01
time to rethink this rewarding through publications. Okay, we're nearing the end, so I have one more question, maybe for each of you, quickly if you can, because some people in the audience might be interested in modeling themselves and might consider a career.
01:23:23
What do you think you need as a mathematician or computer scientist to become a modeler? What does it take beyond empathy which Sheetal already mentioned? Sebastian I'm going to start with you.
01:23:40
I think really not much more than an open mind, a willingness to listen and to learn, and to understand what the key concepts and issues are in other disciplines. I think, you know, in principle, in terms of technical capability, we have a lot of mathematicians and computer scientists in modeling, and so I think people are really well set up from those
01:24:05
disciplines to make a contribution, but it does require a really humble and open mind, and being able to learn what the key concepts are in another field of science, which is also
01:24:20
fascinating and interesting to learn. Yeah, I guess that's true for almost any real scientist, right? Julia, you know lots of modelers; what do you think they really need? Just a few words. Well, I think one is obviously a feel for data and for math
01:24:41
and for computing as well. And, I mean, for a good modeler, it really is about thinking about the context, taking critique, and being open and transparent. I think that's really helpful; it's this trust that they need to build for their models to be
01:25:00
accepted. Okay, Amrish. Well, I think the proper understanding that whatever data set you are working on only represents a small subset of reality, right, and that you need to think in terms of limitations for whatever output you generate. And, yeah, I mean, humility is definitely a big aspect of that as well. Yeah, I think humility is always good. I certainly agree with all that my colleagues
01:25:24
have put forward. To add to the key ideas of humility and technical skill, I'd like to add integrity, and this speaks for all disciplines, sure, but I think with respect to modeling, as scientists, when you're operating in the face of the media, in the face of powerful
01:25:41
members of government all you have is your integrity and I think that that should really be at the forefront of any kind of communication and modeling endeavor. Okay I think that's a terrific point to end this discussion on. In a few minutes you will be able to join the fishbowl sessions where you can ask our panelists even more questions about what you've heard but maybe
01:26:05
also other things maybe you want to know more about their careers or their organization or maybe you want career advice so please join them and you can wander from room to room. We will not go back to this plenary session afterwards so I would at this point like to
01:26:22
thank our speakers Sebastian, Julia, Sheetal and Amrish very much for their interesting presentations and thoughtful comments and I'd also like to thank you the audience for your interest and the excellent questions. I hope the rest of the forum is as interesting as this session and that's it for us. Thank you and goodbye from Heidelberg.