
Building Catastrophe Models With Open Data And Open Software


Formal Metadata

Title
Building Catastrophe Models With Open Data And Open Software
Title of Series
Number of Parts
95
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language
Production Place
Nottingham

Content Metadata

Subject Area
Genre
Abstract
A catastrophe model is a tool/technique which estimates the potential loss of property and life following a major catastrophic event. Different types of events or perils are modelled, including windstorm, earthquake, flood, and storm surge. ELEMENTS is the in-house catastrophe modelling software developed by Impact Forecasting, part of Aon Benfield Analytics. Behind the software are models for a wide range of event and peril types across many countries and regions of the world. To develop the different components of the catastrophe model, Impact Forecasting uses a variety of proprietary and open solutions. Open Data sources such as OpenStreetMap, SRTM, and the CORINE land cover dataset are used, amongst others. The open-source programming language Python is also used extensively to create hazard footprints and the files needed for the catastrophe model. The use of Open Source software and Open Data, supplemented with other available proprietary data sources, allows Impact Forecasting to build more flexible and transparent catastrophe models.
Transcript: English (auto-generated)
Thanks, Ken. Thanks, everyone, for attending. Can you hear me at the back? Cool. That's good. One thumb. OK, so today I'm just going to tell you a bit about how we're building catastrophe models using open data and open source. So I work for, I'll tell you in the next slides,
I work for a reinsurance broker. So I'll tell you a little bit about reinsurance. Who knows about reinsurance? Put your hand up. I recognize a face. OK, yeah, a few other people too. Great. And who's ever used a catastrophe model before? OK, a few. OK, great. So I'll talk about how we build catastrophe models, how we use open source and open data for the purposes
of developing the model and also for visualization. And in the previous days, just before FOSS4G or on Tuesday and Wednesday, there was the AGI conference in the UK. And their theme this year was making things open for business. So I thought I'd bring that into the equation too.
OK, so reinsurance is really just insurance for insurance companies. So you have insurance, and then a big disaster can occur and the insurance company can't afford to pay everybody. So then they get reinsurance. We're Aon Benfield. We are brokers, essentially:
we broker reinsurance between insurers and reinsurers. You might have seen us before. I don't know if anyone, maybe not the guys from the US, I don't know if you follow soccer, football. But yeah, we're basically part of the Aon group. So we sponsor Manchester United, and the part that we do
is this reinsurance broking. So within Aon Benfield, the analytics team do catastrophe management, and one of those functions is Impact Forecasting, this little logo here. I've actually got some socks that are branded like this. I can show you later on. We develop catastrophe models.
So the team, there's about 60 of us worldwide. We're catastrophe model developers, and we're building the platform called Elements. This is where we run our catastrophe models for different perils, earthquake, flood, windstorm. And we've been developing that for about four years.
So what this lets the user do is put in insurance portfolios, so information on where the sums insured are for different houses and different buildings, and get an estimated loss for any given probability of an event. So the components, I'm going to get onto the open bit
soon, don't worry, it's coming. But with the catastrophe modeling, really, we've got three key components. We've got the hazard component, the vulnerability, and the exposure. So within the hazard, we might be looking at things like, for earthquake, the shaking intensity. So if an earthquake happens in a certain place, what's the intensity of shaking at a location close to that?
For windstorm, we're talking about wind speed, wind strength. And then for flood, it might be the flood depth or the flow velocity of the flood. The next bit about the vulnerability, that's how a building or a structure would respond to a catastrophe, so the damage that
might occur. So different buildings will respond very differently. For windstorm, we've got a greenhouse that would get damaged very quickly, compared to something a bit more robust, maybe. Then exposure, this is the bit that really defines the risk, the portfolio. So things like the buildings, the sums insured,
how much is insured at each location. Maybe in some cases, if it's life insurance, the number of people that are living at a certain location. And then these three components come together in a catastrophe model. And the output is the loss calculation,
so giving a monetary figure for the potential losses for an event. So I guess why people are using catastrophe modeling is to help insurers and reinsurers price their catastrophe cover. So to say, this is how much money we need to keep if an event happens to pay people out.
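To make that concrete, here is a minimal sketch in Python (not the actual Elements implementation) of how hazard, vulnerability, and exposure might combine into a loss for a single event; the damage curve and all the numbers are invented for illustration.

```python
# Minimal sketch, not the Elements implementation: combining hazard,
# vulnerability and exposure into a ground-up loss for one event.
# All values and the damage curve are illustrative assumptions.
import numpy as np

# Exposure: sum insured at each location (e.g. EUR)
sums_insured = np.array([250_000, 1_200_000, 80_000])

# Hazard: modelled intensity at each location for this event
# (e.g. flood depth in metres, or shaking intensity)
hazard = np.array([0.4, 1.8, 0.0])

def vulnerability(intensity):
    """Toy damage curve: fraction of the sum insured that is damaged
    as a function of hazard intensity (hypothetical)."""
    return np.clip(0.25 * intensity, 0.0, 1.0)

# Loss calculation: damage ratio times sum insured, summed over locations
event_loss = float(np.sum(vulnerability(hazard) * sums_insured))
print(f"Estimated loss for this event: {event_loss:,.0f}")
```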
They were developed over the last 25 years, really. They started in the US following Hurricane Andrew, but also after the European windstorms of the late 80s and early 90s. Since then we've seen mostly natural perils,
things like earthquake, flood, and windstorm, but then also some man-made perils: following the World Trade Center attacks, things like terrorism models have started to be developed too. So the people that use catastrophe models
are insurers and reinsurers; I'll move on later to some other people that might or could use catastrophe modeling. Has anyone heard of the top three, AIR, EQECAT, and RMS? Put your hand up if you have. OK, great, thanks. So these are the big three commercial modeling companies, and their models are out of the box.
Anyone can use that software. We're developing this one in Impact Forecasting, Elements. OK, so here we go. On to the open. So we've got open standards, obviously, open data, and open source software. And I'll show you in the next few slides how we're using a combination of all of these in the things
that we're doing in the model development and in the visualization of the models also. OK, so the first slide on model development: when we're creating the model components, each of these (hazard, vulnerability, and exposure), we're really developing them using open source a lot of the time. And we're often finding it's faster, it's more efficient,
and it's obviously, in some cases, cheaper. So some examples. We've got a footprint map here for an individual event. Now, I'm not sure if you can see, but these are individual dots. So this is a kind of calculated footprint on a one kilometer grid for Turkey.
And the earthquake team were quite proud of that because we can match it. These zones, the polygons there, that's from the USGS 'Did You Feel It?' data. So an earthquake occurred, people go in and say 'we felt this intensity in this area', and the dots are the modeled output. So we're closely matching what people
felt for that earthquake. In this visualization, we used a bit of Python with matplotlib and Basemap as well.
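A sketch of that kind of footprint plot with matplotlib and Basemap might look like the following; the grid and intensity values are random placeholders rather than a real Turkish earthquake footprint, and the bounding box is only roughly Turkey.

```python
# Illustrative footprint plot with matplotlib and Basemap (placeholder data).
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap

# Regular grid of modelled intensities over (roughly) Turkey
lons, lats = np.meshgrid(np.arange(26.0, 45.0, 0.05),
                         np.arange(36.0, 42.0, 0.05))
intensity = np.random.uniform(2.0, 9.0, size=lons.shape)  # e.g. MMI

m = Basemap(projection='merc', llcrnrlon=26, llcrnrlat=36,
            urcrnrlon=45, urcrnrlat=42, resolution='l')
m.drawcoastlines()
m.drawcountries()

x, y = m(lons, lats)  # project lon/lat to map coordinates
sc = m.scatter(x.ravel(), y.ravel(), c=intensity.ravel(),
               s=1, cmap='YlOrRd', edgecolors='none')
plt.colorbar(sc, label='Modelled intensity')
plt.title('Event footprint (illustrative)')
plt.show()
```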
The next thing we want to do is not just look at a single event, but at hundreds of thousands of events. And that's where we build up this probabilistic event set, a stochastic event set. So when we're creating hundreds of thousands of footprints, you can imagine that's pretty processor intensive. And we're using Python to do that.
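As a very rough illustration of that step (not Impact Forecasting's actual method), one could loop a simple intensity-versus-distance relation over a stochastic event set; the prediction equation and its coefficients below are invented for the sketch.

```python
# Sketch of generating many event footprints on a grid. The intensity
# prediction equation and its coefficients are made up for illustration;
# real models use published attenuation relations.
import numpy as np

def footprint(epi_lon, epi_lat, magnitude, lons, lats):
    """Modelled intensity on a grid for one event, decaying with distance."""
    # crude planar distance in km, good enough for a sketch
    dx = (lons - epi_lon) * 111.0 * np.cos(np.radians(epi_lat))
    dy = (lats - epi_lat) * 111.0
    dist_km = np.sqrt(dx**2 + dy**2) + 1.0
    return np.clip(1.5 * magnitude - 3.0 * np.log10(dist_km), 0.0, None)

# Stochastic event set: thousands of (lon, lat, magnitude) triples
rng = np.random.default_rng(42)
events = zip(rng.uniform(26, 45, 10_000),    # epicentre longitude
             rng.uniform(36, 42, 10_000),    # epicentre latitude
             rng.uniform(5.0, 7.9, 10_000))  # magnitude

lons, lats = np.meshgrid(np.arange(26, 45, 0.05), np.arange(36, 42, 0.05))
footprints = (footprint(lo, la, mag, lons, lats) for lo, la, mag in events)
# each footprint would then be intersected with admin zones (the next step)
```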
The next thing we want to do with all those footprints is to say, well, these are the regions or zones, these are the admin zones that the earthquake affected. And that's where we bring it all together into a thing called a master table, which I'll show you in the coming slides. And we're using R to do that, with a weighted quantile function, which I'll show you in a second. A bit of theory just behind that. So this is used in all the models
that we develop in earthquake, flood, and windstorm. This example is for earthquakes. So you can imagine we've got an administrative zone there. We've got a commercial building, and we know it sits somewhere in that admin zone; we don't know quite where. So using population density (we're using LandScan, actually, a one-kilometer
population grid), 90% of the time we think we're going to place the commercial building in the city, 9% of the time in a larger town, and 1% of the time the commercial building might be in the woods. So in this example, an earthquake occurs. We've got a magnitude 7.6.
And then we've got different intensities felt at different locations, so different amounts of shaking, effectively. So this kind of concept, I'll just show you another example. So another earthquake occurs. Different intensities are felt again. So this is where we basically weight the hazard. So we're saying it's more likely to affect places
within the administrative zone where there are more people, effectively. This is all down to data. If we could get data at a very accurate level, that would be great. But often we get aggregated data, so aggregated to a region, aggregated to a large administrative area. So I was mentioning the R weighted quantile method.
That's used to generate the output of that kind of theory. This gives you, for each event, the administrative zone it affects and then the probability of a certain intensity. So you can see you're getting a lower probability of higher hazards, moving up to a higher probability of a lower hazard,
going down to no hazard most of the time. So that's one example. This is a bit more graphical, just for floods, just showing you how we bring it together. So we're showing a flood extent here and some postal codes, and then we're looking at just one event, in this case,
and the administrative zones that event affects. So in these models, these tables are made of millions and millions of rows, because we've got hundreds of thousands of events, and each one affects a certain number of zones. It gets pretty big pretty quickly.
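A small Python stand-in for that R weighted-quantile step might look like the sketch below; the event, zone, intensities, and population weights are all illustrative placeholders.

```python
# Sketch of one master-table entry: for a single event and admin zone,
# turn population-weighted grid intensities into a small probability table.
# A Python stand-in for the R weighted-quantile step, illustrative only.
import numpy as np

def weighted_quantile(values, weights, quantiles):
    """Weighted quantiles of `values` given sample `weights`."""
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum_w = (np.cumsum(weights) - 0.5 * weights) / np.sum(weights)
    return np.interp(quantiles, cum_w, values)

# Grid cells falling inside the admin zone for this event (placeholder data)
intensity = np.array([7.4, 6.9, 6.1, 5.0, 4.2])        # modelled shaking
population = np.array([90_000, 9_000, 500, 300, 200])  # LandScan-style weights

quantiles = [0.10, 0.50, 0.90, 0.99]
for q, i in zip(quantiles, weighted_quantile(intensity, population, quantiles)):
    print(f"event 123, zone TR-34, P{round(q * 100):02d} intensity = {i:.1f}")
```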
So, moving away from the open source and onto open data: I guess we're making use of open data in a variety of ways for the models we're creating. I made a new word up there, freeness. But I guess it really depends on different levels of freeness. Whether it's for personal or educational or commercial use, can we use it?
Can we use the data? Is it suitable for what we're trying to do? If we want to do an in-depth scenario model, maybe we can't use open data. It might not be detailed enough for what we want to do. There was a link that I think has been going around, done by a guy in the UK.
So it's got a list of about 300 or 400 open data sources. So you could visit that link. I guess the slides will be made available later. So some examples, I guess, we're currently developing a tsunami model for Chile.
And that makes use of SRTM data. So that's an open worldwide data source of terrain. 90 meter, basically a grid with elevation values. And really, we went with that because we were creating a country-wide model.
And it was basically the only option. There were a lot of other options which were much higher cost and would probably have had implications for processing and so on. Some of the other, sorry, proprietary data sources we're using are from people like Ordnance Survey here in the UK, using some of their data, TomTom, GfK, and LandScan, which
I mentioned a bit earlier. And then some of the open sources: SRTM; USGS produce VS30, which is very useful for soil modifiers for earthquake models; we're using CORINE, which is a land use data set for Europe; Ordnance Survey also offer open data sources,
so we're making use of those. And we're starting to get into the use of OpenStreetMap and some of the points of interest data there as well. OK, so in terms of visualization, we're using a variety of open source tools, a mixture really. We've just created this tool called
Elements Explorer. And what that does is let you map the outputs from the model. So I'll show you some examples in the next slides. And this is built using GeoServer and OpenLayers. We've got an OpenStreetMap background mapped to the solution. And we're also serving the data out as web map services
and creating pretty heavy style files, so one or two megabyte SLD files, to allow users to change classifications of the data they see and things.
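As an illustration of pulling one of those WMS layers back out with Python and OWSLib; the GeoServer URL, layer name, and bounding box below are placeholders, not the actual Elements Explorer endpoints.

```python
# Sketch of requesting a WMS layer served by GeoServer using OWSLib.
# URL, layer name and bounding box are hypothetical placeholders.
from owslib.wms import WebMapService

wms = WebMapService('http://example.org/geoserver/wms', version='1.1.1')
print(list(wms.contents))  # layers the server publishes

img = wms.getmap(layers=['if:flood_depth_postcode'],  # hypothetical layer
                 styles=[''],                         # default SLD on the server
                 srs='EPSG:4326',
                 bbox=(9.5, 46.3, 17.2, 49.1),        # roughly Austria
                 size=(800, 600),
                 format='image/png',
                 transparent=True)
with open('flood_depth.png', 'wb') as f:
    f.write(img.read())
```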
So the other thing that we do, I guess, with open source: it's allowing us to get into the nuts and bolts underneath the hood, really. And we're also much more easily able to extend the software, too. Some examples; three of them are from the Czech Republic, my boss is from the Czech Republic. But this is some of the output of the tool. So on the top left, we can see we've got the average
inundation depth per postal code for one event from the Austrian flood model. So you can see it's built on OpenLayers here. And all that's given you is just a flood depth per post code for that event. On the right here, top right, we've got the average annual loss.
So in insurance, they refer to that as the burning cost. So I guess what it lets the insurance company do is try and work out which areas are more at risk than others. I mean, you couldn't say the absolute values mean something, but at least it's relative. Higher numbers mean there's a bigger risk there.
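The burning cost, or average annual loss, idea can be sketched in a few lines of Python; the event rates and losses below are made-up numbers for one postal code.

```python
# Sketch: average annual loss (burning cost) from an event loss table,
# with made-up annual rates and modelled losses for one postal code.
import numpy as np

rates = np.array([0.01, 0.002, 0.0005])           # events per year
losses = np.array([150_000, 900_000, 4_000_000])  # modelled loss per event

aal = float(np.sum(rates * losses))
print(f"Average annual loss (burning cost): {aal:,.0f}")
```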
Bottom left, we've got exposure, so we can take a portfolio, just do a simple map of where the insured locations are on a map. And then the bottom right here, we've got similar to the top left, but this is looking at a specific scenario event and showing what the total loss is per region.
So the tool actually looks at multiple geographies: post codes, districts, municipalities, and regions as well. Also, just to mention, we're using not just open source but proprietary tools too, so a bit of ArcGIS.
Global Mapper, that's a really great low-cost GIS tool. Does anyone use Global Mapper? OK, maybe more, but you don't want to admit it. Google Earth as well, and Pitney Bowes we're using for some of the geocoding solutions too. So we're kind of using a mixture of open source and
proprietary tools. So the second point, which is really just another point I wanted to make in the presentation, is trying to make our models open for business. I guess in the past, and still to this day, some of the catastrophe modeling software is kind of black box.
You can't really see into it and see what's happening. I hope this isn't too much of a sales pitch, but the thing we like to say with Elements is that you can really see inside the model and see what's happening, so we can change components if we want to. And that's why this slide comes in really, just to show the software as a bit like a shape sorter.
So you can change the hazard, the exposure, the vulnerability, and the portfolio information, and see the effect that has. For example, with vulnerability: if you didn't know the type of buildings that you were insuring, which happens quite a lot, you can make an assumption. Maybe they're not made of stone.
Maybe they're made of reinforced concrete. You could make that assumption, run the model again, and get some new results out of that. So I guess this is also true for the other catastrophe models as well. I mean, they're used throughout the insurance and reinsurance industry, but there are many other
potential uses. So really, government agencies, emergency planners, looking at what if scenarios, if this event were to happen, a catastrophe model could give you an output, could help with evacuation planning and so on. Humanitarian agencies, also commercial businesses and academic institutions. They might want to take a model and use it for a slightly different purpose, all those sorts of things.
I guess the other point is that we're making use of both open data, proprietary data, open source, and proprietary software with what we're doing. So I guess it's a mixture of the knowledge that people have got within the team that we have, and also the best
tools for the job. So if we can find something from the open source community that helps us, then we will use that and obviously contribute back also. So I guess just in summary, catastrophe models allow you to better understand your risk to catastrophes.
We can build the components using open data and open source. The open data sources we're using, they're especially useful for some of the country-wide models we're doing, where we need a general view of catastrophic risk, rather than a scenario model that might be much more detailed.
The open source software really can help us speed up big data tasks. We saw how we're using Python to compute hundreds of thousands of earthquake footprints; Python's really helped us speed up that task. With visualization, we're using GeoServer and OpenLayers, and some of the open standards as well, WMS
and SLD also. And also, just this final point, the catastrophe model is really used in insurance, but maybe we could be using them in many other sectors and industries. So I'm saying we're open for business. So thank you for listening. If you have any questions, then that would be great.
Thanks. A bunch of questions. I raced through that. You've left yourself wide open. Oh, no. So who would like to kick off the questioning? Probably two quick questions. First, that was a great
presentation; I've never seen that before. So two quick questions. One is, how do you balance being open with your models and at the same time protecting your IP to some extent? And secondly, as you're mixing and matching open source,
proprietary, what is your strategy to deal with the different licensing minefield that's out there? I guess with the first point, we're using open source to develop the components. So the final product, well, the web mapping component
is still open, so we're kind of putting that back into the community, but the actual software, Elements, is still built on C# and SQL Server, so that's the proprietary bit. So the actual tool isn't something someone can just go away and extend; that's something that's done by us. But I guess the components, the things that we're doing
to build the inputs to the model, and that's where the open source comes in. Second point with licensing, yeah, I guess it's, sorry, repeat that question again. You know, you're mixing and you're bringing together proprietary and open source, how are you guys keeping track of or dealing with the licensing issue of what
you can and can't use, what you can mix and match, what's the strategy, how do you deal with that? Yeah, I guess it's a communication thing. I mean, within the group we have different peril leaders, so within the 60 of us we've got an earthquake leader, a windstorm one, flood, you know, terror and so on. And then I guess it's really just communication, so
between, we're in London, Prague, and Chicago, just talking between us and making sure if we're purchasing licenses, are we doing it for all of us at once? And is that a good idea, do we want to do it on a country level or a worldwide level? But it's just communication, so making sure we've got a handle on what we're licensing, what we're not, and getting rid of stuff that we might not use
anymore, that kind of thing. I hope that's kind of answered it. All right, yeah. OK. Well, your model looks a lot at sort of dollars, right, to be capital of the financing?
Yeah, so that's something that, really, the software is kind of agnostic. It could be dollars, potatoes, people, anything. So that's something that we're also looking at. So that's something that maybe a use could be in academia is to take the model, which is very much used for monetary values, and say, OK, let's develop it for
looking at loss of life, or something like that. Yeah, so absolutely, the components are the same, really; the hazard is obviously the same. But then, instead of looking at damage to buildings, it would be to people and populations instead. So I think the framework's there, it just needs
adapting, really. The models, are they based on published papers, or developed yourselves? And are the models actually open, the algorithms that you use
and the implementation of them? So it's a mixture, and it depends on the team. So we're kind of working, I guess I could use an example. We just developed a European windstorm model, and that was done with the University of Cologne. So some of the science behind the model, it comes
from academia, and we're working in conjunction with them, and then other models we might be developing in-house. I'm in the earthquake team, so some of the attenuation functions to describe the level of shaking and so on, we'd use those from academic papers, and obviously reference those, and so on.
So a lot of the time we'll be publishing papers about that, and then if not, we will have used already-published sources for some of the science behind it. Yeah. There isn't, no, at the minute.
So I guess you mean from the same, these are the algorithms? There isn't for Elements. Now, Aon Benfield are also involved in a thing called OASIS. And that's where the community's getting together, so catastrophe modeling, insurance, and reinsurance
are getting together and making some of the science behind catastrophe modeling, so some of the theory, like you're talking about the algorithms behind some of the components, more open. So if you look up OASIS, I think it's oasislmf.org, and Aon Benfield are part of that too, and that's about trying to really document some of the processes behind the catastrophe modeling.
And that's covering hazard, vulnerability, and loss as well. Yeah.
Yeah. Oh, it's open, ah. Yeah, I think we're quite lucky in the team that we're
in, I mean, we're a bit like an R&D department in a way, so the guy that makes the decisions is my boss, and it doesn't go much further than that, I think. So I mean, if we need, if we can find the best tool for the job, and the best tool is open source, then there's no issue with, it's really just finding
the best tools for the job. So in a lot of cases that is open source rather than proprietary, but it'll depend on the situation. So I think we can just run with it. I mean, a lot of these projects we're doing are on a very short time span, and we need to kind of get something out very quickly.
Things are fast paced, so we can just go with the flow, or something. You're using open data collected from, I mean, sources, I suppose, governments, and so on. Yeah. But some open data is good, some open data is bad. Yeah. Do you have a way to evaluate the quality of the data?
Yeah. Yeah, yeah, absolutely. So I guess I could use the example of the, you know, SRTM is quite a coarse terrain data set, and we were using that for some of the tsunami modeling that we're doing in Chile, and yeah, I mean,
really what we were trying to do is, we had some field survey data, so that was a great way of being able to verify how good the data was. So that helped us, in some cases, correct the terrain data as well. So that's one example. I guess it's just that if you've got
open data sources and you've got other sources of information, you kind of cross reference and so on, so we do that as much as we can, really, yeah. Okay, so let's take, I was gonna say a final question, but there's two hands.
Let's take the one in the blue shirt, and then we'll come to you. Chris, I was gonna ask, surely, by using coarse data, as you just put it, there is a risk that your models are gonna be slightly incorrect. Surely the cost of that inaccuracy
is far more devastating to you as a business than paying for better data. I think it's really down to the fact that different situations will dictate what we use, and with every model we create, we're obviously documenting the whole process
and saying, well, these are the assumptions. I mean, like any model, you make a series of assumptions, and those can be assumptions about the data: this is the data we're using, and this is all we've got. At the end of the day, if the only other option is to put your finger in the air and say, well, we think we should be purchasing this much insurance, then I guess
even a coarse model is better than no model. But I think, yeah, like you say, if there is better data there and we've got the time and the money to use it, then we will. But again, there's also other implications for using, you know,
less coarse data, you know, more refined data, there's processing overheads and so on, so especially with terrain data and flooding, I mean, you can go down to a one meter DTM, but then you've gotta spend two and a half years processing it or something, so, yeah. Question, how, so you're selling reinsurance
to the direct insurance market. Yeah. Are these tools closely coupled with the pricing models, is it built into these models, or is that totally separate? So the outputs of our model would go into something like a dynamic financial analysis tool. So from this tool,
I showed that on a slide, you can get a loss per event from the model, and that information would feed into a pricing tool. So it would say, you know, for 100,000 events, these are the financial losses, with a standard deviation for each event and for each loss at each location.
Are those two systems loosely coupled or tightly coupled? They're loosely coupled; you take an output from this and put it as an input into the next tool. So the outputs of this are ready for the pricing tool, essentially, yeah. Okay, thank you very much. Thank you.