Open Source Geospatial Production of United States Forest Disturbance Maps from Landsat Time Series

Speech Transcript
So, my name is Chris Toney. I work for the US Forest Service at the Rocky Mountain Research Station, and I'll be talking about forest disturbance mapping using Landsat time series. I'd like to acknowledge my coauthors, Gretchen Moisen, Karen Schleeweis, and Todd Schroeder, who are responsible for a lot of the work I'm talking about, and also to point out that this work is part of a larger project with other research teams contributing, so I have citations throughout the presentation to acknowledge their work as well.

This work is part of a larger project called North American Forest Dynamics. It's funded by NASA and is a collaboration among NASA, the University of Maryland, and the research branch of the US Forest Service. It's designed to characterize disturbance patterns and recovery rates of forests across the continent, with the goal of determining the role of forest dynamics in the North American carbon balance. One part of North American Forest Dynamics was to map the recent disturbance history for the conterminous United States using Landsat time series since 1984, and that was done with a change detection algorithm called the vegetation change tracker (VCT), developed at the University of Maryland.

I'll start with a brief overview of the VCT products, which indicate the presence of disturbance in 30-meter pixels for each year of the time series. VCT does not provide information on the causes of disturbance, so the focus of the team I've been working with is to determine causal agents of disturbance, things like harvest, wildfire, and insect and disease outbreaks, using the VCT products as a starting point. This is done with predictive modeling at the pixel level. I'll describe the production of raster products for CONUS using open source software, and I'll describe the software implementation in NASA's high-performance computing environment.
I'll start with the products from VCT, and on the next slide I'll show the algorithm. The products include a land/water mask, so every pixel is classified as forest, non-forest, or water. Then there are annual raster layers where each pixel indicates the presence or absence of disturbance within the forest mask for each year, and there are some disturbance magnitude metrics associated with the disturbed pixels. The map on the screen is a composite of all the years overlaid, so the green color indicates areas where no change was detected during the period, and the other colors indicate the year of disturbance. This has been produced by the team at the University of Maryland for CONUS.

The algorithm and the products have been described in detail in the literature. It's an automated process: the input is a Landsat time series stack, and there's a step for image selection to get images from the growing season with minimal cloud cover. There are also steps for image compositing from multiple dates in cases where cloud contamination is excessive, and the selected images for each year undergo cloud and shadow masking. Then four different spectral indices are calculated from the original Landsat bands, including a custom index that was developed specifically for the algorithm. The spectral indices are used in a time series analysis to detect shifts in the spectral trajectory that indicate a forest disturbance.

The time series analysis looks across the spectral trajectory, trying to find some kind of shift that's consistent with a disturbance event. The example on the right is a harvest followed by rapid regeneration: there's a sudden shift in the spectral signature when the trees are removed, followed by rapid recovery over just a few years back to something that looks a lot like the original forest cover. With different disturbance types and regeneration dynamics, the shape of this trajectory can take several different forms, and I'll come back to that idea in a bit. The point is that VCT does not characterize the shape of the trajectory; it's just looking across the trajectory and identifying shifts that are consistent with forest disturbance.
This is an overview of the current work to assign causal agents of disturbance to the pixels based on empirical modeling. It starts with a set of training data: training samples from locations with known disturbance types. The training samples are associated with a set of predictor variables, things like spectral values and other predictors, and the predictive modeling is done with random forests, a machine learning technique based on classification and regression trees. Once the random forest models are developed, they can be used to predict for 30-meter pixels across the country, as long as we have raster datasets for all of the predictor variables in the model. I'll give a few more details shortly, but the next couple of slides give a little overview of our software and computing environment.
Open source software was used for all of the processing. We used the GDAL utility programs extensively; in particular we did some work with the polygon enumeration algorithm in the gdal_polygonize.py command-line script, and I'll talk more about that. The combination of Python with GDAL and NumPy was a low-cost approach to a variety of different processing tasks, and some things were done in C++. Our research organization is very R-centric for statistical modeling, so we relied on R quite a bit, and with the rgdal package we were able to do raster processing directly within R, which is really convenient for modeling work or computation that depends on code already implemented in R packages. The snow and snowfall packages allow us to do parallel computation in R, and we also used QGIS quite a bit for visualization and other data checking.
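As a rough illustration of that style of work, here is a minimal sketch of raster processing directly in R through the GDAL-backed packages. The file name, band numbers, and the simple NDVI calculation are placeholders, not the project's actual processing.

```r
# Minimal sketch: read Landsat bands into R via GDAL-backed packages,
# compute a spectral index, and write the result back out.
# "landsat_stack.tif" and the band numbers are hypothetical.
library(rgdal)
library(raster)

nir <- raster("landsat_stack.tif", band = 4)   # near-infrared band (assumed)
red <- raster("landsat_stack.tif", band = 3)   # red band (assumed)

ndvi <- (nir - red) / (nir + red)              # standard NDVI

writeRaster(ndvi, "ndvi.tif", format = "GTiff", overwrite = TRUE)
```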
Most processing was done by Landsat path/row. There are 434 path/rows that cover CONUS, and each path/row has approximately 46 million total pixels. Path/rows in heavily forested areas can have on the order of 30 million 30-meter forest pixels, while some have far fewer. Across all path/rows, about 4.3 billion pixels have been classified as forest. That's a rough areal estimate because the path/rows overlap, but it gives a sense of the scale from a data processing perspective.

All the processing was done in NASA's high-performance computing environment using a cluster called Pleiades. Pleiades has 11,176 compute nodes of three different node types, with either 12, 16, or 20 CPU cores per node, for a total of just under 185,000 CPU cores. It has lots of disk storage, and all the open source software that we needed is well supported there.
Now I'll step through some details of the workflow, starting with the generation of the predictor data. We knew we were going to use the spectral data as predictors, obviously, but we also wanted to derive information about the spectral trajectories through time and make use of that, and I'll talk a little about how. We also looked at the geometric attributes of disturbance patches, and then there are some other ancillary data for predictors that already exist, such as a forest type classification and a burn severity product for wildland fires.

VCT tells us which pixels have been disturbed, but it doesn't give us a cause of disturbance. We do know that different types of disturbance look different on the landscape. For example, harvests tend to be confined to a certain range of sizes, they don't get really large, and they can be fairly simple shapes that are often roughly square. Wildland fire, on the other hand, may be small but can get very large, and it often has complex geometric patterns. So the geometry of disturbance patches may be a helpful predictor of causal agent.
The VCT products are raster based, so we went through each of the annual disturbance layers and delineated polygons as regions of connected pixels, generating vector data for each year. For this we used the gdal_polygonize utility in 8-connectedness mode, 8-connectedness just meaning that pixels can be connected along the edges or at the corners. Then a simple program goes through the vector data and calculates the geometric attributes of each polygon: area, perimeter, the shape index, and the fractal dimension index. These two indices are similar; they describe the complexity of the shape based on area-to-perimeter ratios, and they are basically normalized to a square as the simplest possible shape. We then used gdal_rasterize to generate raster versions of all the polygon attributes to use in modeling. Across all the path/rows for CONUS we delineated a little more than 210 million polygons.
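A sketch of that patch-geometry step is below, under assumed file and layer names (the project's actual scripts aren't shown in the talk). It calls gdal_polygonize.py through the shell with the 8-connectedness flag and computes the two standard area/perimeter shape metrics.

```r
# 1) Delineate 8-connected disturbance patches as polygons
#    (file and layer names are placeholders).
system2("gdal_polygonize.py",
        args = c("-8", "vct_disturbance_1995.tif",
                 "-f", shQuote("ESRI Shapefile"),
                 "patches_1995.shp", "patches_1995"))

# 2) Shape metrics from patch area A (m^2) and perimeter P (m),
#    both normalized so that a square takes the minimum value.
shape_index <- function(A, P) 0.25 * P / sqrt(A)           # >= 1; 1 for a square
fractal_dim <- function(A, P) 2 * log(0.25 * P) / log(A)   # ~1 simple, ~2 convoluted

# Example: a 3 x 3 block of 30 m pixels
A <- 9 * 30 * 30   # 8100 m^2
P <- 4 * 3 * 30    # 360 m
shape_index(A, P)  # 1
fractal_dim(A, P)  # ~1
```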
For another class of predictor variables, we wanted to derive some information about the spectral trajectories through time. This is based on work by Mary Meyer on shape-restricted regression, a nonparametric curve-fitting technique where the curve is constrained to fit a certain shape. In conjunction with this work on Landsat time series, an R package called coneproj was developed to implement the computations.

We're currently working with seven predefined shapes that are believed to indicate some kind of underlying forest dynamics. The algorithm fits each of the shapes to the temporal trajectory at each pixel, which involves iterative curve fitting for the nonparametric shapes, and then chooses the best-fit shape based on an information criterion that includes a penalty for model complexity, so it handles overfitting by the more complex shapes.
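To make the shape-selection idea concrete, here is a deliberately simplified stand-in. It does not use coneproj or the project's seven shapes; it fits just three illustrative shapes with ordinary least squares, but it shows the same fit-each-shape-then-pick-by-penalized-criterion pattern described above.

```r
# Simplified illustration of per-pixel shape selection (not the project's method).
fit_best_shape <- function(year, index) {
  # candidate break points for the piecewise shape (exclude the endpoints)
  breaks <- year[2:(length(year) - 1)]
  vee_fits <- lapply(breaks, function(b)
    lm(index ~ pmax(0, year - b) + pmin(0, year - b)))
  candidates <- list(
    flat   = lm(index ~ 1),        # no disturbance
    linear = lm(index ~ year),     # gradual trend
    vee    = vee_fits[[which.min(sapply(vee_fits, BIC))]]  # shift then recovery
  )
  scores <- sapply(candidates, BIC)  # information criterion penalizing complexity
  list(shape = names(which.min(scores)), fit = candidates[[which.min(scores)]])
}

# Toy trajectory: stable forest, abrupt drop, then steady recovery
yr  <- 1984:2012
idx <- c(rep(0.8, 12), 0.2 + 0.05 * (0:16)) + rnorm(29, sd = 0.02)
fit_best_shape(yr, idx)$shape   # expected to select "vee"
```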
The outputs from this are written to raster format. We get the shape classification itself, the selected shape, and from the fitted curve we can derive associated parameters such as the year of the shift, the magnitude and duration of shifts, and change rates before and after disturbance. We can also write out the fitted values from the curve, which we're doing for future use. The shape classification and the parameters derived from it are then used as predictor variables in the disturbance agent modeling.

It's very CPU-intensive. There has been some work to optimize the shape-fitting algorithms in R, which are implemented in C++, and there was a good bit of improvement there, but it's inherently CPU-intensive because it involves iterative curve fitting for the nonparametric shapes, and we have to fit each of the shapes to the trajectory at each pixel. Because we have to fit 4.3 billion trajectories, our approach has been parallel computation. In these graphs I'm showing the time in hours on the y-axis versus the number of CPUs on the x-axis. This example is for one path/row, which has 30 million forest pixels, and it takes a week to run this on one path/row sequentially on one CPU. Again, the cluster has three different node types with 12 to 20 CPUs each. We can get that down to just under 10 hours on the 20-CPU nodes, or about 16 hours on the 12-CPU nodes. So it scales really well to multiple CPUs, and in this range of 10 to 20 CPUs we're still getting good efficiency in processing.
We implemented this in R. We're running four different versions of this for each path/row, for the four different spectral indices, and we assign a path/row for a given index to one compute node, so for all 434 path/rows that's 1,736 total nodes. Then within a node we split the path/row's data and do the computations in parallel, using snow and snowfall for that, and the curve fitting relies on coneproj. If we use the 12-CPU nodes on Pleiades, then processing all 4.3 billion forest pixels uses a total of just under 21,000 CPUs, and it would have about a 16-hour run time if we were able to set it up as a single job. We've done this twice now for all path/rows. There's some wait time in the job queue because the system is busy, but we've been able to turn it around in under a week, maybe four or five days if it's not too busy.
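The within-node pattern with snow/snowfall looks roughly like the sketch below. The chunk sizes, worker count, and the toy fit_pixel() function are placeholders; in the real workflow the per-pixel fit would call the coneproj routines.

```r
library(snowfall)

# toy stand-ins so the sketch runs on its own
fit_pixel <- function(y) mean(y)   # placeholder for the shape-restricted fit
chunks <- replicate(24, matrix(rnorm(29 * 1000), nrow = 1000), simplify = FALSE)

sfInit(parallel = TRUE, cpus = 12)  # one worker per core on a 12-core node
sfExport("fit_pixel")               # make the fitting function visible to workers

# each chunk holds 29-year trajectories for a block of forest pixels;
# rows are pixels, so apply the fit across rows, in parallel over chunks
results <- sfLapply(chunks, function(m) apply(m, 1, fit_pixel))

sfStop()
```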
Again, this is an intermediate product for us, but it's kind of a new product and we're really optimistic about its potential. So it works, we can do it, but I'm not sure how we could do this without this kind of computing resource. There are also some other predictors.
I won't say too much about the model development, other than that it uses training data collected in the project using a photo interpretation technique. Training data is limited, and we'd like to have a lot more. To supplement it, there was some data from LANDFIRE, which is another national remote sensing program, and from FIA, the national forest inventory program. The modeling relies on the randomForest package in R, and we also make use of the ModelMap package developed by Elizabeth Freeman, also at the Rocky Mountain Research Station. ModelMap has some nice features for working with random forests in a geospatial context, and it has extensive model diagnostics associated with it.
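A minimal sketch of that modeling step follows. The training file and the predictor column names are hypothetical, and ModelMap would wrap a model like this for geospatial prediction and diagnostics.

```r
library(randomForest)

train <- read.csv("training_samples.csv")   # one row per interpreted training pixel

# causal agent as a categorical response; predictors follow the classes of
# variables described above (names are illustrative only)
rf <- randomForest(
  factor(agent) ~ shape_class + shift_magnitude + patch_area +
                  shape_index + forest_type + burn_severity,
  data = train, ntree = 500, importance = TRUE)

print(rf)        # out-of-bag error rate and confusion matrix
varImpPlot(rf)   # relative importance of the predictors
```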
The prediction code is very similar to the shape-fitting code I just described; it's based on R and GDAL. The output is masked to the VCT forest mask, and predictions are done in R with the randomForest package. Certain predictor variables are generated at run time. There's an option for parallel processing within path/rows, but it's not really needed, because the processing time has been a little under an hour for a path/row running on one CPU, and we can run all 434 path/rows in parallel.
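The per-path/row prediction can be sketched with the raster package, which applies a fitted model block by block so a full path/row never has to sit in memory at once. The file names, layer names, and path/row identifier below are placeholders, and rf is the model from the previous sketch.

```r
library(raster)
library(randomForest)

preds <- stack("predictors_p034r028.tif")   # predictor layers for one path/row
names(preds) <- c("shape_class", "shift_magnitude", "patch_area",
                  "shape_index", "forest_type", "burn_severity")

# categorical predictors would need raster::predict's 'factors' argument;
# omitted here for brevity
agent_map <- predict(preds, rf, filename = "agent_p034r028.tif",
                     format = "GTiff", datatype = "INT1U", overwrite = TRUE)
```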
The map product has predicted values of disturbance agent in 30-meter pixels, with a mask for VCT non-forest and water. So far the assessment of the product has been based strictly on the random forest model diagnostics and visual assessment; there's just not enough training data to be able to set aside any of it for an independent validation set, so there's some work ongoing to generate additional ground truth data.
As far as data distribution, it's being done through the distributed archive center at Oak Ridge National Laboratory. The vegetation change tracker disturbance year and magnitude products are being prepared for distribution now, and they will be available in the near future on that site. The predicted causal agents and associated products are still being refined and undergoing quality assessment. Some of those products will be available in some form, but exactly what we distribute and the timing of that are still being determined. That's all I have, and I'm happy to take any questions.

Question: Interesting talk. I have a lot of questions, but I'll limit myself to two. One is, did you try to do any disaggregation by age class? And the second is, did you in any way attempt to identify disturbances that are irreversible, such as conversion of forest into a developed class?
Answer: The answer to the second question is yes. Conversion is a causal agent class in the national legend, so we are trying to get at that. I didn't have an example to show, but there's training data for it, it's being included in the model, and the shapes of the spectral trajectories are probably pretty helpful in getting at that. As far as age class, no, we have not been incorporating age class. That's a good idea. We would need national data for it, which may or may not be possible, but I hadn't considered it up to this point.

Question: How deep is the image stack, and second, how much does it cost to run this on the cluster?

Answer: The image stack is 1984 through 2012, one image per year, so annual stacks. Earlier versions of VCT were biennial stacks, but we're currently working with the University of Maryland team's latest products, which are based on annual stacks for those years, so that's 29 years. As for the cost, I have no idea; we don't get charged for it right now. It's a collaboration with NASA, and there's some interest in this as a demonstration project for doing this type of processing with Landsat data, so so far it's strictly a collaboration and there's no charge for it, at least in this case. Normally there's a whole job scheduling, billing, and accounting system for it, but luckily we haven't had to deal with that so far. Thanks.

Question: With the predicted map of causal agents, do you have any sense of, or any products that can quantify, the uncertainty around those predictions? I imagine there are cases where you would want to look for places where the model is possibly wrong.

Answer: The ModelMap package that I mentioned does some of that. If you're familiar with random forests, it's an ensemble of trees; it takes a majority vote from a large number of trees, like 500 decision trees. ModelMap has some functions that look at the variation in the predictions from the individual trees to get at the uncertainty of a prediction. We're doing that a little bit: we have a set of test scenes that we selected to test things on, where we're producing the uncertainty measures. We're not doing it nationally for all the path/rows yet, but we are looking at it on those test scenes and trying to include it as we develop the product.

Question: Are you aggregating this data back to those shapes? In other words, are you attaching attributes to those polygons, so the output would be a vector product that represents disturbance history?

Answer: We have not yet, but the idea is to get there. Once we have a raster disturbance agent product at a point where we feel ready to move forward, that's definitely a next step, to make use of those polygons we enumerated. One question, since we're predicting at the pixel level, is how much speckle there is going to be. So far the raster output looks like polygons, but the polygons we enumerated are going to be helpful in assessing the variability of the predictions within polygons and then, hopefully, in labeling polygons. That's the next step, and that's the point. Thank you very much.

Metadata

Formal Metadata

Title: Open Source Geospatial Production of United States Forest Disturbance Maps from Landsat Time Series
Series Title: FOSS4G 2014 Portland
Author: Toney, Chris
License: CC Attribution 3.0 Germany:
You may use, modify, and reproduce the work or its content in unmodified or modified form for any legal purpose, distribute it, and make it publicly available, provided you credit the author/rights holder in the manner specified by them.
DOI: 10.5446/31699
Publisher: FOSS4G, Open Source Geospatial Foundation (OSGeo)
Publication Year: 2014
Language: English
Producer: FOSS4G, Open Source Geospatial Foundation (OSGeo)
Production Year: 2014
Production Location: Portland, Oregon, United States of America

Content Metadata

Subject Area: Computer Science
Abstract: The North American Forest Dynamics (NAFD) project is completing nationwide processing of historic Landsat data to provide annual, wall-to-wall analysis of US disturbance history over nearly the last three decades. Because understanding the causes of disturbance (e.g., harvest, fire, stress) is important to quantifying carbon dynamics, work was conducted to attribute causal agents to the nationwide change maps. This case study describes the production of disturbance agent maps at 30-m resolution across 434 Landsat path/rows covering the conterminous US. Geoprocessing was based entirely on open source software implemented at the NASA Advanced Supercomputing facility. Several classes of predictor variables were developed and tested for their contribution to classification models. Predictors included the geometric attributes of disturbance patches, spectral indices, topographic metrics, and vegetation types. New techniques based on shape-restricted splines were developed to classify patterns of spectral signature across Landsat time series, comprising another class of predictor variables. Geospatial Data Abstraction Library (GDAL) and the R statistical software were used extensively in all phases of data preparation, model development, prediction, and post-processing. Parallel processing on the Pleiades supercomputer accommodated CPU-intensive tasks on large data volumes. Here we present our methods and resultant 30-m resolution maps of forest disturbance and causes for the conterminous US, 1985–2011. We also discuss the computing approach and performance, along with some enhancements and additions to open source geospatial packages that have resulted.
Keywords: remote sensing
parallel processing
GDAL
R
forest disturbance
