
Step-by-step tutorial to optimization of geocomputing (tiling and parallelization) with R

Formal Metadata

Title
Step-by-step tutorial to optimization of geocomputing (tiling and parallelization) with R
Title of Series
Number of Parts
27
Author
Contributors
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language
Producer
Production Year: 2020
Production Place: Wicc, Wageningen International Congress Centre B.V.

Content Metadata

Subject Area
Genre
Abstract
Large datasets, e.g. large rasters or vectors, cannot easily be processed in R, mainly because of RAM and storage limitations. In addition, within a High Performance Computing environment we are interested in running operations efficiently in parallel (e.g. using snowfall, future or similar packages), so that the total computing time can be minimized. Within the landmap package several functions are now available to tile large rasters and then process them by applying a custom function per tile in parallel. This is an order of magnitude more complex than running functions on the complete object, hence in this step-by-step tutorial I will explain how to write your own custom functions and combine R, GDAL, SAGA GIS and QGIS to run the processing, and eventually also produce Cloud-Optimized GeoTIFFs to share large datasets.
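To illustrate the tiling-and-parallelization pattern the abstract describes, here is a minimal, hedged sketch in base R. It does not use the landmap package or GDAL (a real workflow would tile a GeoTIFF, e.g. with `gdal_translate -srcwin`, and process tiles with terra/raster); instead it stands in a matrix for the raster and uses `parallel::mclapply` to run a custom function per tile. All names (`process_tile`, `tile_size`, `offsets`) are illustrative.

```r
# Sketch only: tile a "raster" and apply a custom function per tile in parallel.
library(parallel)

# A toy "raster": a 1000 x 1000 matrix of values.
r <- matrix(runif(1000 * 1000), nrow = 1000)

# Build a tiling scheme: row/column offsets of 250 x 250 tiles (16 tiles).
tile_size <- 250
offsets <- expand.grid(
  row = seq(1, nrow(r), by = tile_size),
  col = seq(1, ncol(r), by = tile_size)
)

# Custom per-tile function: here simply the mean value of the tile.
process_tile <- function(i) {
  rows <- offsets$row[i]:(offsets$row[i] + tile_size - 1)
  cols <- offsets$col[i]:(offsets$col[i] + tile_size - 1)
  mean(r[rows, cols])
}

# Run over all tiles in parallel; mclapply forks, which is not
# available on Windows, so fall back to one core there
# (on Windows one would use parLapply with a PSOCK cluster).
n_cores <- if (.Platform$OS.type == "windows") 1L else 2L
res <- mclapply(seq_len(nrow(offsets)), process_tile, mc.cores = n_cores)
```

Because the tiles are equally sized and cover the matrix exactly, the mean of the per-tile means equals the mean of the whole object, which is a convenient sanity check that the tiling introduced no overlap or gaps.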