
Modeling of forest landscape evolution at regional level: a FOSS4G approach


Formal Metadata

Title
Modeling of forest landscape evolution at regional level: a FOSS4G approach
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Production Year
2022

Content Metadata

Abstract
In the last decades the European mountain landscape, and in particular the Alpine landscape, has changed dramatically due to social and economic factors (Tattoni et al. 2017). The most visible impact has been the depopulation of mid- and high-altitude villages and the shrinking of part of the land used for agriculture and grazing. The result is a progressive reduction of pastures and meadows and the expansion of forested areas. Forest plots also become more compact, with the loss of ecotones. The study of this phenomenon is important not only to assess its current impact on the ecological functionality of forest ecosystems, including biodiversity and natural hazards, but also to build future scenarios that take climate change into account. The mountain treeline is gradually shifting upwards, and the monitoring and modeling of these changes will be crucial to plan future interventions and implement effective mitigation plans.

For these reasons, a dataset describing the forest, meadow and pasture coverage of the Trentino region, in the eastern Italian Alps, has been created. A set of heterogeneous sources has been selected so that maps and images cover the longest possible time span over the whole Trentino region with the same quality, providing the information necessary to create a LULC (Land Use/Land Cover) map at least for the forest, meadow and pasture classes. The dataset covers a time span of more than 160 years, with automatic or semi-automatic digitization of historical maps and LULC classification of aerial images. The first set of maps includes historical maps from 1859 to 1936, plus an additional map from 1992 which was not available in digital format and has been digitized for this project: the Austrian Cadastre (1859, 13297 sheets, scale 1:1440), Cesare Battisti's map of forest density published in his atlas "Il Trentino. Economic Statistical Illustration" (1915, single sheet, 1:500 000), the Italian Kingdom Forest Map (IKMF) (1936, 47 sheets, 1:100 000) and the Map of the potential forest area and treeline (1992, 98 sheets, 1:50 000). A new procedure has been developed to automatically extract LULC classes from these maps, combining GRASS and R for the segmentation, classification and filtering with the Object Based Image Analysis (OBIA) approach. Two new GRASS modules used in this procedure have been created and made available as add-ons in the official repository (Gobbi et al., 2019).

The second set of maps consists of aerial images covering the time span from 1954 to 2015. The four sets differ in mean scale, number of bands, resolution and datum: "Volo GAI" (1954, 130 images, mean scale 1:35 000, B/W, resolution 2 m, Rome40 datum), "Volo Italia" (1994, 230 images, 1:10 000, B/W, 1 m, Rome40), "Volo TerraItaly" (2006, 250 images, 1:5 000, RGB+IR, 0.5 m, Rome40) and "Volo AGEA" (2015, 850 images, 1:5 000, RGB+IR, 0.2 m, ETRS89). The "Volo GAI" set has been ortho-rectified using GRASS; the images in the other sets were already orthophotos. The aerial images were classified with OBIA to create LULC maps, with particular focus on the forest, meadow and pasture classes. The same training segments were used across the four sets and the custom classification procedure has been scripted. The number of training segments ranges from 1831 for the 2015 dataset to 2572 for the 1954 imagery set. The evaluation of the classification results for all the maps and images has been carried out with a proportional stratified random sampling approach.
A procedure has been scripted in GRASS to select 750 sampling points, distributed among the strata (LULC classes) proportionally to the area of each class. The resulting points have been manually labeled and used to assess the accuracy of the classification and, where present, of the filtering. For the historical maps, the application of the custom filtering procedure has increased the accuracy from a minimum value of 67% (for the IKMF map) to 93% (for the same map), with a maximum of 98% for the cadastral map. For the imagery datasets the accuracy (percentage of points correctly classified) was between 93% and 94%, the latter value corresponding to the higher-resolution 2015 imagery dataset. Higher accuracy, up to 95%, was obtained for the forest class, which is the main focus of the study. The analysis of selected landscape metrics provided preliminary results about the forest distribution and the pattern of recolonization during the last 180 years. A comparison of the capabilities of the available FOSS4G systems for landscape metrics was performed to select the best analysis tools (Zatelli et al. 2019). Finally, these time series of LULC coverage were used to create future scenarios for the forest evolution in a test area of Trentino over the next 85 years, using both the Markov chain and the Agent Based Modeling approaches with GAMA (Taillandier et al. 2018). Given the large number of maps involved, the great flexibility provided by FOSS tools for spatial analysis, such as GRASS, R, QGIS and GAMA, and the possibility of scripting all the operations have played a pivotal role both in the creation of the dataset and in the extraction and modeling of land use changes. The development of new GRASS add-on modules, based on the scripts created during this study, is planned.
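As an illustration of how such a proportional stratified sampling can be scripted, below is a minimal sketch using the GRASS Python API; the map name, class categories and the particular combination of r.mask/r.random calls are assumptions about one possible implementation, not the authors' actual script.

```python
# Minimal sketch of proportional stratified sampling with the GRASS
# Python API. Map names are hypothetical; the module calls (r.stats,
# r.mask, r.random, v.patch) are standard GRASS modules.
import grass.script as gs

LULC = "lulc_2015"        # hypothetical classified LULC raster
TOTAL_POINTS = 750        # total sample size used in the study

# Area per LULC category: r.stats -a -n prints "category area" lines
stats = gs.read_command("r.stats", flags="an", input=LULC).splitlines()
areas = {int(cat): float(area) for cat, area in (l.split() for l in stats)}
total_area = sum(areas.values())

parts = []
for cat, area in areas.items():
    n = max(1, round(TOTAL_POINTS * area / total_area))  # proportional share
    gs.run_command("r.mask", raster=LULC, maskcats=cat)  # restrict to stratum
    gs.run_command("r.random", input=LULC, npoints=n, vector=f"sample_{cat}")
    gs.run_command("r.mask", flags="r")                  # drop the mask
    parts.append(f"sample_{cat}")

# Merge the per-class samples into one validation layer
gs.run_command("v.patch", input=parts, output="validation_points")
```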
Transcript: English (auto-generated)
So, good morning. I will transform myself into a presenter and show you, basically, the construction of a cartographic data set at the regional level for a specific purpose, which is to study the evolution of the forest. I will give you an introduction on why we are doing this, what kind of materials are available, how we did it, and obviously the results.

The problem is well known: the landscape is changing everywhere, but especially in the mountains and therefore also in the Alps. The main visible effects are a progressive decrease of the pastures, which are abandoned and basically vanishing, and, on the other hand, an increase of the forest areas. Not only does the surface increase, but the shape of the plots also changes: they become more complex. This change matters from many points of view, because the function of the forest changes in time, not only the ecological function related to the forest surface, but also, for example, the protection from natural risks, think of avalanches or landslides. We also have to take into account the fact that the climate is rapidly changing, so there is an additional driver to this change.

So what we have done is to build a cartographic database for a large region, the Trentino region, more than 6,000 square kilometers, so it is quite a large region. We want to use this data set to analyze the modification of the forest, so we need a series of maps which are uniform in terms of land use and land cover classes, which is not always the case. We need a consistent and also high resolution, because we want to apply landscape analysis, so the resolution must be high enough for those tools. Finally, we want to cover the longest possible time span. Some maps already exist, but they do not have all these features, so we had to build a new data set. We have been able to cover 155 years, from an old cadastral map to a recent aerial image; the last image is from 2015. As new images become available, we can add new maps to the data set.
So we have basically two sets of maps. The first one is the set of historical maps, which requires a certain type of processing, as I will show you, and covers the years from 1859 to 1936. There is an additional map which is more recent, so we cannot really call it historical, but it has been processed like the historical maps because it was not available in digital form. Then we have a series of aerial images from 1954 to 2015.

This is the list of the historical maps. As you can see, there are different years, very different scales and very different resolutions. All these maps, except the last one, were already available as digital scans: we do not have access to the original paper maps, and the resolution of the scans varies a lot. One note about the Battisti map: it is a map of forest density, not of forest location. We digitized it anyway, and it has some peculiarities I will show you, but we will not use it for evaluating the modification of the forest coverage.

These are the features of the orthophotos. As you can see, they are more uniform than the historical maps. The main differences are that the first two sets are in black and white, so more difficult to use, and that the first set is available only as plain images, not as orthoimages, so we also had to ortho-rectify them.
So the first step is to ortho-rectify the 1954 image data set. The number of images covering the entire region is 130, but we pre-selected 19 images, because in this kind of set there is a lot of overlap, so we chose the best ones; some of the images are blurred and so on. Then we found at least 16 ground control points for each image, and I do not know if any of you have tried this, but in a mountain region it is very difficult: you have to identify buildings or roads which existed in 1954, in this case, and still exist nowadays, so that you can use the coordinates of the current location to identify the points. This is a very time-consuming task.
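The study uses GRASS's i.ortho.photo tool chain for this step; as a minimal illustration of the general GCP-based workflow, the sketch below uses GDAL's Python bindings instead. File names, coordinates and the CRS choice (Monte Mario / Italy zone 1 for the Rome40 datum) are illustrative assumptions.

```python
# Illustrative GCP-based rectification with GDAL's Python bindings.
# The talk uses GRASS's i.ortho.photo tools for true ortho-rectification;
# this sketch only shows the generic GCP workflow with made-up values.
from osgeo import gdal

# Each GCP ties an image position (pixel, line) to map coordinates (x, y, z)
gcps = [
    gdal.GCP(1663200.0, 5103400.0, 0.0, 120.5, 310.2),
    gdal.GCP(1667850.0, 5101900.0, 0.0, 2480.0, 905.7),
    # ... the study collected at least 16 per image
]

src = gdal.Open("volo_gai_1954_scan.tif")
# Attach the GCPs; EPSG:3003 = Monte Mario / Italy zone 1 (Rome40 datum)
tmp = gdal.Translate("volo_gai_1954_gcps.tif", src,
                     GCPs=gcps, outputSRS="EPSG:3003")
# Warp with a 2nd-order polynomial estimated from the GCPs
gdal.Warp("volo_gai_1954_rect.tif", tmp,
          dstSRS="EPSG:3003", resampleAlg="cubic", polynomialOrder=2)
```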
Once we have all the digitized historical maps and all the orthophotos, the next step is obviously to classify them. We use the standard object-based image analysis (OBIA) approach with GRASS and R; there is a module in GRASS which runs all the classification in R. There are differences between the orthophotos, which are classical images, and the historical maps: in the historical maps you have a lot of spurious elements such as labels, names of places, symbols, political boundaries and so on. The second point is that some of the maps are hand painted, and the colors for the same category vary from one sheet to another and from one part of the map to another, so you have to take this into account as well. Finally, some of the maps have hatching or halftones, and this requires the use of additional artificial bands, such as texture or high-pass filtered versions of some bands.

To filter out all these unwanted features, we have developed some GRASS modules which you can already find in the official GRASS add-ons repository. The first one, r.fill.category, is the one which removes all the labels, symbols and so on. The second one is a module which can be used to estimate the size of the filter to apply to remove all the unwanted objects in your image.
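A simplified stand-in for this cleanup step, using only standard GRASS modules, is sketched below. The study's own add-on r.fill.category is more targeted, replacing the pixels of a given category with the surrounding values; here a majority filter plays a similar role, and all map names, the window size and the category number are assumptions.

```python
# Simplified stand-in for the map-cleaning step with standard GRASS
# modules; values and map names are assumptions.
import grass.script as gs

# Majority ("mode") filter: small spurious objects take the value of
# the dominant class around them
gs.run_command("r.neighbors", input="map1936_classified",
               output="map1936_mode", method="mode", size=25)

# Replace only the unwanted category (say 5 = labels/symbols) with the
# filtered values, keeping the rest of the classification untouched
gs.mapcalc("map1936_clean = if(map1936_classified == 5, "
           "map1936_mode, map1936_classified)")
```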
These are the results of the ortho-rectification: you see the original image on the left and the rectified one on the right. The RMS error is quite low, about one meter and 28 centimeters. We also tested the displacement of some points and found that the mean displacement is about 10 meters. The good news is that the higher values occur in the higher areas, where it is difficult to find control points; but in those regions there is no forest, so for our purposes these high values are not so troublesome, because the errors occur where we are not really interested in the location of the surface. This is the complete coverage of the photos: you can see that some of them are darker, but again, we have a complete data set.
The next step towards the classification is the segmentation. If you have some experience with OBIA, you know that this is the critical step: if you are able to create segments in a good way, then it is quite easy to classify them. Here you can see the parameters for the segmentation, in the first table for the historical map data set and in the second one for the imagery. In GRASS a module is available which tries to guess, or at least suggest, the best combination of the threshold, which is the parameter driving the similarity between the colors of the pixels belonging to the same segment, and therefore to the same class, and the minimum size of the segment area in pixels. You can apply this module, but then you have to modify the values it finds automatically: usually they do not work well directly, but you can use them as a starting point for a better judgment. These are the results after adjustment: as you can see, for the historical maps there is a certain variability in both parameters, while for the images the numbers are more or less the same. This is because the historical maps are very different from each other, so different values are needed.
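A minimal sketch of this segmentation step, assuming standard i.segment usage, follows; the group name and parameter values are illustrative, and, as noted above, threshold and minsize have to be re-tuned for each map.

```python
# Sketch of the OBIA segmentation step with i.segment; names and
# parameter values are illustrative.
import grass.script as gs

# The group can also include artificial bands (texture, high-pass)
gs.run_command("i.group", group="map1859",
               input="map1859_r,map1859_g,map1859_b")

gs.run_command("i.segment", group="map1859", output="map1859_segments",
               threshold=0.05,   # spectral similarity within a segment
               minsize=40,       # minimum segment size, in pixels
               memory=2048)      # MB of RAM the module may use
```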
The next step is the classification: we select some training segments and then classify the images. There are thousands of images and maps, so this is a very time-consuming task, but we have obviously scripted everything. Then, for each map, we selected 750 sampling points using a stratified random sampling approach, which is of course done after the classification. These are the results in terms of accuracy: as you can see, for the historical maps the values are high. The best results are for the color images, the two most recent sets, while the values are lower for the black-and-white images.

These are the results as maps. This is the old Battisti map: as you can see, we only have the forest density for each district, not the location of the forest, so we digitized it because it was an interesting study case for classifying historical maps, but we cannot use it for our purpose. These are the results for all the years, and this is the last year, with the forest coverage in 2015 in green. Here is the same information as a table and as a chart: you can see that there is an increase of the forest from the first year to 1994, and then a more or less constant value, with only a very small increase in the latest years.
Once we have all these maps, we can apply landscape analysis, which means evaluating some metrics on the forest, just to understand how the function of the forest changes in time. What we see is something we expected, but can now quantify: we have far fewer forest patches, but they are larger, because they merge together. The patch density obviously decreases, because there are fewer patches, while the edge density remains more or less the same, because you have larger patches but fewer of them. I will show you some graphs about this, but you have to take into account that for the historical maps we have less information and lower resolution. For example, in the cadastral map the parcels reflect the ownership of the land, not the coverage: each parcel has a label saying forest, pasture, or something like that, so the spatial resolution is lower. The second point is that for the 1954 data set we identified some effects which we still do not understand: some landscape metrics do not behave as we expect, so something is wrong, but we do not know what.

So these are the landscape metrics. The first one is the number of patches, which obviously decreases because there are fewer, larger patches; this is what you see on the right side. The patch density somehow mirrors the patch size, while the edge density is more or less constant, and there are those values for 1954 which still have to be investigated.
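These per-year metrics can be computed in GRASS with the r.li suite, sketched below; the sampling configuration name and the per-year map names are assumptions, and the configuration itself must be prepared beforehand with g.gui.rlisetup.

```python
# Sketch of the per-year landscape metrics with GRASS's r.li suite;
# configuration and map names are assumptions.
import grass.script as gs

for year in (1859, 1936, 1954, 1994, 2006, 2015):
    forest = f"forest_{year}"   # hypothetical per-year forest class maps
    for metric in ("patchnum", "patchdensity", "edgedensity"):
        gs.run_command(f"r.li.{metric}", input=forest,
                       config="whole_region",
                       output=f"{metric}_{year}")
```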
So, in conclusion, we have created this very large data set, and I assure you there is a lot of data here. With this data set we are able to quantify some trends which were already known qualitatively. A comment about the approach: it works well, but you still need some experience, because you can more or less automate all the steps, but you still have to calibrate some parameters, and this is not really possible automatically; well, in principle you can do it, but experience says it does not work well. The fact that the forest patches became larger, and that there are fewer of them, means that we are losing ecotones, so the ecological function of the forest in this area is obviously changing.
What we are doing now: since we have these time series, we can apply some modeling to predict future scenarios. We have done this using Markov chains and agent-based modeling, only on a small area, because these models are very time-consuming to run. We are also trying to understand what happened in 2018, because this area was affected by the Vaia storm, which had a very deep impact on the forest, with the loss of many trees.
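The Markov-chain part of this idea is simple enough to sketch: estimate a class transition matrix from two LULC snapshots and project class proportions forward. The toy numpy illustration below is not the authors' GAMA model; arrays and class codes are made up.

```python
# Toy numpy sketch of the Markov-chain projection; made-up data.
import numpy as np

def transition_matrix(t0, t1, classes):
    """Row-normalized transition probabilities between two LULC rasters."""
    k = len(classes)
    m = np.zeros((k, k))
    for i, a in enumerate(classes):
        for j, b in enumerate(classes):
            m[i, j] = np.sum((t0 == a) & (t1 == b))
    return m / m.sum(axis=1, keepdims=True)

# Two tiny snapshots: 1 = forest, 2 = pasture
t0 = np.array([[1, 1, 2, 2], [1, 2, 2, 2]])
t1 = np.array([[1, 1, 1, 2], [1, 1, 2, 2]])
P = transition_matrix(t0, t1, classes=[1, 2])

# Project class proportions four map intervals ahead
p = np.array([np.mean(t1 == c) for c in (1, 2)])
for _ in range(4):
    p = p @ P
print("transition matrix:\n", P, "\nprojected proportions:", p)
```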
As a general comment, processing all these different and very numerous maps has been possible only because we are using open source software, in particular GRASS, and everything has been scripted: basically we can run all the analyses with one command, come back after a week or a couple of weeks, and find the results; it depends on how fast your machine is. Finally, about the availability of these data sets: one of them is already online, and I guess some of you may have already used it. The 1936 map is online; there is a website where you can see the map and download it for the whole of Italy. The other data sets are not online yet, because we are still having some problems with the copyright of some maps, but we hope, in, let's say, a year or a couple of years, to be able to publish all the data on a website where you can view and download them. So this is more or less everything.