
Spatial-Temporal Prediction of Climate Change Impacts using pyimpute, scikit-learn and GDAL

Speech transcript
Today I'm going to talk about the work that my colleagues and I are doing at Ecotrust around climate change. We've been working in the Pacific Northwest on natural resources, forestry, and agriculture, and I'll talk about how we're applying this to a new agriculture project. We're not all the way through it, but I think there are some really interesting methods that might be helpful to you all.

This report came out this summer, called Risky Business. It's an economics report about the potential economic impacts of climate change. It looked at the usual things like sea level rise and heat exhaustion, and it also looked at commodity agriculture. One of its conclusions was that the agriculture industry is actually best prepared to adapt to climate change, because farmers can plan and select crops every year, and so on. And this is the quote I love: the report suggests that the right information and science can help farmers mitigate some of these impacts. So immediately we ask: what information do they need? We don't really have an answer to that question, but we have developed a toolkit to try to answer it. I'll go over the conceptual framework we use, called bioclimatic envelope modeling, show some preliminary results and what we can do with this framework, and then look at the actual implementation, hopefully one you can apply to your own problems.

This slide is really the whole talk: the conceptual basis of bioclimatic envelope modeling, sometimes called species distribution modeling or climatic niche modeling. Effectively, you take observations of a species (in this case a green circle where the species is present, a white circle where it's absent) and you want to draw a boundary in this climate space that differentiates suitable from unsuitable points. This is a really naive version; real models will be much better, but you can think of it as drawing a line in this scatter space. This is two dimensions; most of the data we work on is nine and up to thirty dimensions, so you can imagine the boundary gets more complex, but effectively it's the same thing. We take a new observation of temperature, rainfall, and other climatic variables, plot it on these axes, and say: if it falls inside the boundary it's suitable, and if it falls outside it's not.

It's a simple model, and some people say it's really simplistic. It doesn't take into account biological adaptation, it doesn't take into account interactions with other species, it doesn't take into account migration and seed sourcing, and so on. It's a flawed model, and it's wrong. There's a quote people use about models like this: it is wrong in all those ways, very simplistic, but it's also very useful. It provides a first approximation of vulnerability to climate change. We usually describe it this way: if you're growing winter wheat on your farm today, and your future climate is projected to be unlike anywhere that currently grows wheat, that's a red flag, an indication of vulnerability.

When we talk about climatic suitability, draw that line on the axes, and say this side is suitable and that side is not, it's not black and white (or in this case, purple and green). There's a range of variability, and that range can be interpreted as a degree of certainty. In this case the species is Douglas fir, one of the iconic tree species of the Pacific Northwest. The dark green is where we're absolutely certain it's viable, the purple is where we're absolutely certain it's not, and in between, depending on the conditions, it may or may not work.

So how do we draw that line on those climatic axes? We use a technique called supervised classification, which comes out of the machine learning literature. There are a lot of different techniques to do this; this is a general overview. You take training data where you know both your X and your Y. The X variables are your climatic explanatory variables, the ones you think drive the distribution of the species you're interested in, and the Y variable is the observation of that species: it exists or it doesn't. Then you draw a relationship between the Xs and the Ys. I label that step "the black box" only somewhat ironically; what goes on inside the black box differs depending on which analytical technique you use. You might have heard of decision trees, logistic regression, or neural networks. These are all different mechanisms for drawing the relationship between X and Y, but when it comes down to it they all do effectively the same thing: given a novel set of explanatory variables, predict the response. In this case we're predicting, say, a 90 percent chance that a location will be suitable for this species. And we always work in a raster data model, so you can think of each observation as a single pixel; for every pixel, we're trying to make a prediction.

To take a step back, a brief background on climate models. I'm not a climate scientist; I just use the data they produce. They start with an emission scenario, our future contribution of greenhouse gases as humans. That gets fed into what's called a general circulation model; there are dozens of these, four-dimensional models of really complex atmospheric processes, which give you a predicted future climate, usually at daily time steps well into the future, at a very coarse spatial resolution. Those coarse grids then get calibrated to local weather stations, which gives you downscaled models. Finally, in order to get meaningful metrics, you do some sort of temporal aggregation: you don't really care whether it's going to rain on June 22nd, 2078; you want to know the average rainfall for June in that decade.

Our goal is to take this technique of bioclimatic envelope modeling and apply it to food production in the Pacific Northwest, actually a broader region that includes California, and to develop the whole workflow using open-source tools, well documented. We've also developed a couple of utility tools around it to smooth out the workflow.

Getting into the specifics of the project: these are the nine explanatory variables that explain roughly 96 percent of the variance in the agricultural production zones, the contiguous areas that have similar characteristics for agriculture. These variables, of which the ones in italics are the climatic ones, are the ones that actually drive the majority of the zone definitions. The nice thing about the climatic variables is that we have current, present-day climatic rasters everywhere, and we can also swap in future climates as predicted by the climate models. So those are our X variables, the explanatory ones. And these are the Ys, what we're trying to predict. This is a map of California, Idaho, Washington, and Oregon; we cut it off just above the national border, partially because data wasn't available in Canada and also because not a lot of agriculture happens that far north. So it's mostly just those states, and the zones here had been defined by another process. What we're trying to do is predict where these zones may shift in the future: we plug in the future climate variables and, for each pixel, try to predict the most likely zone.

So this is our first cut at what the zones might look like in 2070. A couple of things to note. The coastal areas actually don't change all that much, if at all, and that's consistent with a lot of the climate models, in which inland areas experience much greater temperature change. Then you look at an area like this, the bread basket of Washington where a lot of the apples are grown, a very productive region, and you see that shifting a lot; it's going to see a lot of novel conditions over the next 60 years. So the take-home headline for this map is that vulnerability is geographically variable. Just because the climate is changing globally doesn't mean everywhere will experience the same change; some areas shift more dramatically than others.

That was the predicted most likely zone. Getting down to a finer scale, you can also look at a given zone, see the probability of future climates being similar to it, and animate that over time. This is an animation of the Willamette Valley, the wine-growing area, under the low emission scenario, basically modeled as if humanity got its act together and started reducing emissions. The striking thing is that Willamette Valley conditions are shifting northward: look up at Bellingham, and by mid-century Bellingham should be fairly similar, climatically and in terms of agricultural productivity, to today's Willamette Valley. The Willamette Valley itself, aside from that little western edge, remains under roughly the same conditions. If you look at a high emissions scenario, which is basically business as usual, continuing to emit greenhouse gases at the rate we currently are, you see the same northward shift, but we also see that the valley is transitioning, with some probability, into a hotter and drier sort of agricultural climate. It's not necessarily only a vulnerability; you can also see the glass half full and see an opportunity. I like to think of pinot noir, the wine grapes that grow best in the Willamette Valley right now. Maybe Bellingham is the next place for them; that's the opportunity side.

We can apply the same approach to individual crops. I literally did this two days ago, so this is just the edge of where we're going, but we're looking at the productivity, the per-acre yields, of different crops and how those shift in the future. This is winter wheat, grown a lot in Washington and Idaho: here is where those yields occur now, and, running it through a similar bioclimatic envelope modeling approach, here is where those yields might occur in the future. There still is a lot of viable area for winter wheat; it's just not all in the same places. So this is getting toward information that farmers might actually use to adapt to climate change. As a rough reading, the purple areas are losing productivity and the green areas are gaining it: vulnerabilities and opportunities.
OK, so how do we do this? We use Python for scripting the whole process, with all open-source tools. Fiona and rasterio (Sean has given a talk on those already, so you've seen them) provide great data access for vector and raster data; they're beautifully simple, and they're built on GDAL under the hood. GeoPandas is a project I'm contributing to: if you've used pandas for working with two-dimensional, tabular data structures (in my mind it replaces Excel), GeoPandas adds geo capabilities on top. numpy handles the array data structures. rasterstats is a module for doing vector-on-raster operations. scikit-learn is the machine learning package; that's the supervised classification. And pyimpute is the software we're developing to tie it all together; it's mostly utility functions that clean up a lot of tedious work you'd otherwise have to do.

This is a canonical rasterio example: you open the raster, you read the data and the metadata. Simple. I do a lot of work in the IPython Notebook interactive environment, and I don't want to take the in-memory representation of the data and dump it to disk just to look at it; it helps to be able to visualize it right there in the session as you work. Here we're plotting the rainfall data. It's not quite as nice graphically as a desktop package, but it's really good for inspecting your data live.
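A minimal sketch of that pattern, assuming a hypothetical single-band raster named rainfall.tif:

```python
# Minimal sketch of the rasterio pattern described above.
# "rainfall.tif" is a hypothetical single-band raster.
import rasterio
import matplotlib.pyplot as plt

with rasterio.open("rainfall.tif") as src:
    data = src.read(1)   # first band as a numpy array
    meta = src.meta      # driver, dtype, CRS, transform, etc.

print(meta)

# Visualize in the session (e.g. an IPython notebook)
# rather than dumping to disk and opening a desktop GIS.
plt.imshow(data, cmap="Blues")
plt.colorbar(label="rainfall")
plt.show()
```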
So, the training data. This is really the crux of it: your model will only be as good as your training data. Your explanatory climatic variables are given as a list of paths to rasters, and your response variable, your Y variable, is just a path as well. pyimpute provides a little load_training_rasters function that takes your lists of rasters and pulls them into the data structure that scikit-learn requires. A little detail on that: this is a really small example, a 200 by 140 pixel image, and notice that the training data flattens it out, so we have 28,000 pixels. It effectively takes the two-dimensional spatial component away and treats each pixel as an individual observation; you're removing any spatial dependence and treating each pixel as a separate observation.

A lot of the time you'll have your observation data not as rasters but as vectors, for instance point observations where we've gone out to a plot and taken a vegetation sample. If you have your data as vectors, typically points, pyimpute supports the same sort of workflow: you define your list of rasters for the explanatory variables, but the response variable can be a vector dataset, and you name which field contains the response.

All the nitty-gritty of how the relationship between the Xs and the Ys gets built is in the details of scikit-learn. It's a brilliant machine learning package that saves a ton of work for doing anything with predictive algorithms; definitely try it.

scikit-learn involves a couple of steps. First you instantiate the classifier and fit your data to it; that fitting is the drawing of the relationship between the Xs and the Ys. One thing to notice: I have a bunch of different classifiers commented out here, and there are dozens of them in scikit-learn. This lets you pick a different mathematical model for how to draw that relationship while using the same exact API. You can literally comment out one of those lines, uncomment another, and rerun the analysis. When we started doing this work, switching techniques meant switching libraries and completely reconfiguring data formats and data structures; this lets you just comment, uncomment, and try, which is really great for experimentation.

Once you've fit that model, that relationship, you can evaluate it using a few different metrics. I'll breeze through these: the confusion matrix, for a sense of where you get false positives and false negatives; the overall accuracy score, 83 percent agreement here; feature importances, in other words how much each explanatory variable contributed to the predictive power of the model (in this case feature 10, and I don't recall exactly which one that was, was the most important; it explained the most variability); and cross-validation, which is basically leaving out part of the dataset and using the rest to predict the part that was left out.
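A rough sketch of those steps. The pyimpute function names follow its README, but treat the exact signatures as approximate; the raster paths and classifier parameters here are hypothetical:

```python
# Sketch of the training and evaluation steps described above.
# pyimpute names per its README; paths and parameters hypothetical.
from pyimpute import load_training_rasters
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import cross_val_score

explanatory_rasters = [
    "gdd.tif",            # growing degree days
    "summer_precip.tif",  # mean monthly summer precipitation
    "tmin.tif",           # ... and the rest of the nine variables
]
response_raster = "agricultural_zones.tif"

# Flattens the 2D grids so each pixel becomes one observation
# (n_pixels x n_variables), discarding the spatial component.
train_xs, train_y = load_training_rasters(response_raster,
                                          explanatory_rasters)

# If the observations are vector points (e.g. vegetation plots):
# from pyimpute import load_training_vector
# train_xs, train_y = load_training_vector("plots.shp",
#     explanatory_rasters, response_field="zone")

# Swap classifiers by commenting/uncommenting; same API for all.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
# from sklearn.ensemble import ExtraTreesClassifier
# clf = ExtraTreesClassifier(n_estimators=100)
# from sklearn.linear_model import LogisticRegression
# clf = LogisticRegression()
clf.fit(train_xs, train_y)

# Evaluation: confusion matrix, overall accuracy, feature
# importances, and cross-validation on held-out folds.
preds = clf.predict(train_xs)
print(confusion_matrix(train_y, preds))
print(accuracy_score(train_y, preds))
print(clf.feature_importances_)
print(cross_val_score(clf, train_xs, train_y, cv=3))
```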
That process is very iterative and hands-on; it's very much about finding a model that really works, that really explains the data. Once you have a model that you feel has a good amount of predictive power, you can plug future data into it: unseen, predicted future climates. In this case we use the 2070 data. We load the target data and use the impute function from pyimpute to run the predictions pixel by pixel across that geographic space and write the whole result out to rasters.

So what does that look like? You get a response raster dataset, which is the most likely zones. This reduced-resolution version makes the pattern kind of hard to see, but those are the predicted zones. Again, it just predicts the most likely one. You can also look at the probability of any given outcome: in our classification example we had about 180 agricultural zones, and you can ask for the probability of any one of those zones occurring. So it's not just a binary yes or no; we can ask what the probability is of the conditions that define zone 127 occurring, and you get a probabilistic surface. When you start making decisions this becomes really important: it gives you a sense of the uncertainty in the estimates.
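A sketch of that prediction step. Again the pyimpute names follow its README (load_targets and impute), with approximate argument details and hypothetical future-climate paths; clf is the classifier fit above:

```python
# Sketch of the prediction step; pyimpute names per its README,
# argument details approximate, future-climate paths hypothetical.
from pyimpute import load_targets, impute

future_rasters = [
    "gdd_2070.tif",
    "summer_precip_2070.tif",
    "tmin_2070.tif",      # ... same nine variables, 2070 values
]

# Flatten the future grids the same way as the training data,
# keeping the raster metadata needed to write results back out.
target_xs, raster_info = load_targets(future_rasters)

# Run the fitted classifier pixel by pixel and write rasters:
# the most likely zone, plus per-zone probability surfaces.
impute(target_xs, clf, raster_info, outdir="out_2070",
       class_prob=True, certainty=True)
```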
And because all of this data is loaded up as numpy arrays, any further analysis of the results is right there. For instance, if you want to see the difference between the future zones and the current zones, you can literally subtract them with a one-line numpy command and get a map of where the zones changed and where they didn't. There's a lot of further analysis you can do just by working with the numpy arrays directly.
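For instance, something along these lines (the file names are hypothetical):

```python
# Hypothetical file names; the comparison itself is one numpy line.
import numpy as np
import rasterio

with rasterio.open("current_zones.tif") as src:
    current = src.read(1)
with rasterio.open("out_2070/responses.tif") as src:
    future = src.read(1)

changed = (future.astype(int) - current.astype(int)) != 0
print("fraction of pixels whose zone changed:", changed.mean())
```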
So that's the last of the slides. We're about a third of the way through this project, and I think the first third was really devoted to gathering a lot of data, building these techniques, and becoming comfortable with them. The next two-thirds of the project is where we actually dig into how to apply this and get real, meaningful results for farmers. If you have any ideas about directions this could take or things to explore, come talk to me. In the meantime, I hope some of the tools we developed might be applicable to your own problems. Thanks.

We have time for questions.

Q: I'm curious about how time figures into this, particularly in your training dataset. Do you have a series of snapshots in time of crops at different locations, or are you just taking one slice in time of where crops are now?

A: That's how we've trained the data so far, on the current time slice. But it would actually be interesting to incorporate the past; you could hindcast as well as forecast.

Q: Have you looked at predicting how things will evolve over time, or was this just one future point?

A: Yes, we're looking at that. We're somewhat limited in that the data we're using was aggregated to 20-year periods, so we're looking at 2030 and 2050. We can get access to the daily timestep data, and we could theoretically predict on an annual basis where the zones might be. Whether that's useful or not, we're not sure; I think a roughly decadal time step for rerunning the models is the recommendation. We're definitely getting there, but it points to the challenge of dealing with all this data: the raw climate data is 4 terabytes, and any derivative work starts exceeding that.

Q: I was wondering whether your scripts have any interaction with the GCM services themselves, or are you assuming, when you start this process, that you have the scenario you want and the data already locally?

A: In terms of the scripts we're using, we assume you've already found a source of downscaled climate data that you're comfortable with, or multiple sources, and that you've already processed it into a consistent spatial extent. So yes, there's a lot of preprocessing assumed before you get into something like this. But that's a good point about the different GCMs; I've shown results from a single one here. There's an emerging theme in the literature that there's no sense in picking one climate model and declaring it right; all models are wrong, right? So there's a growing notion that you want to run your model against multiple GCMs and look at where they agree and where they disagree, and use that as a measure of uncertainty. Some work the Forest Service has done is really interesting: you see core areas where eleven different climate models all agree, and then fringe areas where they don't. When you're making decisions, those sorts of uncertainties come into play.

Q: You mentioned that you have a lot of data, terabytes of data, and you're running these models against multiple GCMs. Are you running into processing challenges, and how are you overcoming them?

A: To the first part, yes, we are running into processing challenges. My abstract actually promised a discussion of the performance implications, but I couldn't fit all the slides into 20 minutes. There's a lot of tuning of the algorithms you can do in scikit-learn to get reasonable performance; some of these algorithms are really memory-hungry, and so far that's been the limitation. As for the 4 terabytes, which is the daily timestep data for all the different GCMs, what we really do is preprocess it using the netCDF utilities; we do a lot of preprocessing to get the data down to a reasonable temporal aggregation.
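The talk names the netCDF utilities for this step; purely as an illustration, the same kind of aggregation in Python with xarray might look like the following, with hypothetical file and variable names:

```python
# Illustrative temporal aggregation with xarray (the talk used the
# netCDF utilities). File and variable names are hypothetical.
import xarray as xr

ds = xr.open_dataset("precip_daily_2070s.nc")  # daily GCM output

# Collapse a decade of daily values into a mean-monthly climatology:
# "average rainfall for June in that decade," not
# "will it rain on June 22nd, 2078."
monthly = ds["precip"].groupby("time.month").mean("time")
monthly.to_netcdf("precip_monthly_climatology.nc")
```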
So the data we're actually working with here isn't all that big; it's just the time it takes to run through some of these algorithms, and the RAM they require.

Q: Have you looked into distributed processing to handle some of the memory issues?

A: Yes. There's not really a good way to parallelize a lot of the scikit-learn problems themselves, so we do the cheap parallelization: if you want to do the 2050 and the 2070 predictions, you just run them on different nodes. That's the sort of parallelization we're after at this point. Parallelizing the algorithms themselves we haven't done; that's a deeper problem with scikit-learn. There are some really cool machine learning algorithms coming out of people working at larger scales, on distributed file systems, but we haven't tried them.

Q: I'm wondering how you narrowed it down to those nine variables as the ones that matter for suitability.

A: We did a sort of pseudo-stepwise approach. We considered probably 30 variables, and that analysis served to bring them down to the ones that explain the most variance; in other words, the ones we left out didn't have much explanatory power.
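The talk doesn't specify the exact selection procedure; as one hedged illustration, scikit-learn's recursive feature elimination can approximate this kind of stepwise reduction:

```python
# One way to approximate a stepwise variable reduction; the talk
# does not specify its exact method. Uses scikit-learn's RFE.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# train_xs: (n_pixels x ~30 candidate variables), train_y: zones
selector = RFE(RandomForestClassifier(n_estimators=50),
               n_features_to_select=9)
selector.fit(train_xs, train_y)
print(selector.support_)  # mask of the nine retained variables
```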
Q: Some of the variables you leave out could still have a large effect in reality. For example, you could have the same mean precipitation but a bigger range, coming less frequently. That seems like it would still have a dramatic effect on suitability.

A: Absolutely. I think that's one of the emerging questions in climate data: how do we do temporal aggregation, taking daily and sub-daily timestep data and aggregating it into biologically meaningful variables? A lot of the tree species we work with on the forestry side require a certain number of days of frost, or conversely can't handle days of frost, or require a certain number of very cold days to kill off their primary pest, that sort of thing. There are a lot of very species-specific biological indicators that can be pulled out of this data, but the space of possible variables in the daily timestep data is really large, and there's not a lot of research on which variables are best. Certainly the ones here are predictive; mean monthly summer precipitation, for instance, is a good indication of how much it rains during the growing season, so that's getting at a biological response.
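As an illustration of such a derived indicator, counting frost days from daily minima might look like this with xarray (hypothetical names again):

```python
# Illustrative "biologically meaningful" indicator: frost days per
# year from daily minimum temperature. Names are hypothetical.
import xarray as xr

ds = xr.open_dataset("tmin_daily.nc")
frost_days = (ds["tmin"] < 0).groupby("time.year").sum("time")
frost_days.to_netcdf("frost_days_per_year.nc")
```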
Q: Thank you. I wanted to ask about some of the challenges around the scale of your data, spatially and temporally. Have you looked at using or implementing database technologies? You've got lots of rasters, which I assume are sitting on disk as individual files; have you looked at building a database to handle that instead?

A: We use HDF5, which is the file format for the climate data processing, and in my mind that is a kind of database. I've never really gotten into storing rasters in a database personally, and I haven't seen the advantage of it.

Q: If it doesn't need to stay a raster, you can essentially work in vector space; analysis in PostGIS is significantly faster, I think, for that sort of thing.

A: Yes, we could definitely do that. For a lot of the work we do in forestry we're dealing with point observations, vegetation plots and that sort of thing, and we are using PostGIS for those.

Q: How many variables did you start with?

A: Honestly, I don't recall exactly, but we had over two dozen variables, probably close to 30.

Q: Do you expect to publish this in a peer-reviewed journal?

A: Hopefully, yes. We have a lot of work to do on the data side and on the interpretation of the results. These are preliminary results, heavy underscore on preliminary; so far we've been really focused on getting the analytical technique down. The next step is to start producing meaningful results, and once we have those we'll start thinking about publishing.

Q: Is it open source, on GitHub?

A: The pyimpute code is on GitHub, but honestly it doesn't do all that much; it's just a bunch of functions that do a little data manipulation and make this kind of thing a bit easier. The real work is in scikit-learn and the raster tools.

Q: Taking a coarse look at some of the results, it seems like areas that were clustered became segmented and fragmented. One question is whether that degree of fragmentation impacts the scale of agriculture or not. The other is that these areas seem to spread into places where it's not feasible to do agriculture, like the national forests.

A: You've identified the two biggest problems with the current approach. We're working on masking out non-agricultural land, places that aren't feasible for agriculture for climatic or other reasons; that's the first part. Second, I mentioned in passing that each pixel is treated as an independent observation. In reality we know there's spatial autocorrelation, so you see speckle across the landscape. There are ways to incorporate spatial statistics into this, and ideally we'd bring in some measure of clumpiness to prevent that, but we still have to figure out how. So that fragmentation may or may not represent real change. It's also worth keeping in mind that a lot of the shift from zone A to zone B may not actually be all that dramatic: it could be that the same crops and the same agricultural practices still work, just under slightly different conditions. So there's a lot of interpretation left to do, and I think we're shifting away from using the zones themselves and more toward predicting the productivity of individual crops. That's what we're thinking about.

Are there any further questions? Thank you.

Metadata

Formal metadata

Title Spatial-Temporal Prediction of Climate Change Impacts using pyimpute, scikit-learn and GDAL
Series title FOSS4G 2014 Portland
Author Perry, Matthew
License CC Attribution 3.0 Germany:
You may use and modify the work or content for any legal purpose, and reproduce, distribute, and make it publicly available in unchanged or modified form, provided you credit the author/rights holder in the manner they specify.
DOI 10.5446/31675
Publisher FOSS4G, Open Source Geospatial Foundation (OSGeo)
Publication year 2014
Language English
Producer Foss4G
Open Source Geospatial Foundation (OSGeo)
Production year 2014
Production location Portland, Oregon, United States of America

Content metadata

Subject area Computer science
Abstract As the field of climate modeling continues to mature, we must anticipate the practical implications of the climatic shifts predicted by these models. In this talk, I'll show how we apply the results of climate change models to predict shifts in agricultural zones across the western US. I will outline the use of the Geospatial Data Abstraction Library (GDAL) and Scikit-Learn (sklearn) to perform supervised classification, training the model using current climatic conditions and predicting the zones as spatially-explicit raster surfaces across a range of future climate scenarios. Finally, I'll present a python module (pyimpute) which provides an API to optimize and streamline the process of spatial classification and regression problems.
Keywords machine learning
regression
climate
raster
