
Step-by-step tutorial to optimization of geocomputing (tiling and parallelization) with R

Formal Metadata

Title
Step-by-step tutorial to optimization of geocomputing (tiling and parallelization) with R
Series title
Number of parts
27
Author
Contributors
License
CC Attribution 3.0 Germany:
You may use, modify, reproduce, distribute and make the work or its content publicly available, in unchanged or modified form, for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Year of publication
Language
Producer
Production year: 2020
Production place: Wicc, Wageningen International Congress Centre B.V.

Content Metadata

Subject area
Genre
Abstract
Large datasets, e.g. large rasters or vectors, cannot be easily processed in R, mainly because of RAM and storage limitations. In addition, within a High Performance Computing environment we are interested in running operations efficiently in parallel (e.g. using snowfall, future or similar packages), so that the total computing time is minimized. Within the landmap package, several functions are now available to tile large rasters and then run processing by applying a custom function per tile in parallel. This is an order of magnitude more complex than running functions on the complete object, hence in this step-by-step tutorial I will explain how to write your own custom functions and combine R, GDAL, SAGA GIS and QGIS to run the processing, and eventually also produce Cloud-Optimized GeoTIFFs to share large datasets.
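The tile-and-parallelize idea described in the abstract can be illustrated with a minimal sketch in R. The snippet below is not the tutorial's code: the file name large_input.tif, the 2000-pixel block size, the process_tile helper and the placeholder per-tile operation are assumptions for illustration only. It splits a large GeoTIFF into blocks by bounding box and processes each block in parallel with the snowfall package, using the raster package for reading, cropping and writing; landmap, GDAL, SAGA GIS and QGIS enter the full workflow in the tutorial itself.

library(raster)
library(snowfall)

## open the raster header only (the full grid is not loaded into RAM)
r <- raster("large_input.tif")

## build a simple tiling scheme of roughly 2000 x 2000 pixel blocks
blk <- 2000
xs <- xmin(r) + seq(0, ncol(r) - 1, by = blk) * res(r)[1]
ys <- ymin(r) + seq(0, nrow(r) - 1, by = blk) * res(r)[2]
tiles <- expand.grid(x = xs, y = ys)

## per-tile worker (hypothetical): crop the block, apply an operation, write it out
process_tile <- function(i, tiles, blk, in_tif) {
  r <- raster::raster(in_tif)
  e <- raster::extent(tiles$x[i], tiles$x[i] + blk * raster::res(r)[1],
                      tiles$y[i], tiles$y[i] + blk * raster::res(r)[2])
  x <- raster::crop(r, e)
  out <- raster::calc(x, fun = function(v) v * 1.0)  # placeholder: replace with the real operation
  fn <- paste0("tile_", i, ".tif")
  raster::writeRaster(out, fn, overwrite = TRUE)
  fn
}

## run all tiles in parallel on 4 cores with snowfall
sfInit(parallel = TRUE, cpus = 4)
sfLibrary(raster)
out_files <- sfLapply(1:nrow(tiles), process_tile,
                      tiles = tiles, blk = blk, in_tif = "large_input.tif")
sfStop()

The per-tile GeoTIFFs produced this way can afterwards be mosaicked and converted to a Cloud-Optimized GeoTIFF with GDAL, which is the sharing format mentioned at the end of the abstract.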