Modeling of forest landscape evolution at regional level: a FOSS4G approach
Formal Metadata
Number of Parts: 351
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/68951 (DOI)
Production Year: 2022
Transcript: English (auto-generated)
00:00
So, good morning. I transform myself into a presenter, and I will show you basically the construction of a cartographic data set at the regional level for a specific purpose, which is to study the evolution of the forest. I will give you an introduction on why we are doing this,
00:21
what kind of materials are available, how we did it, and obviously the results. The problem is well known, in the sense that the landscape is changing everywhere, but especially in the mountains and therefore also in the Alps, and the main effects which are
00:43
visible are a progressive decrease of the pastures, which are abandoned and basically vanishing, and this means that on the other hand you have an increase of forest areas: not only does the surface increase, but also the shape of the plots changes, in the sense that
01:06
they become more complex, and this means that this change has an impact from many points of view, because with a very complex forest the
01:21
function of the forest changes in time, with respect to the ecological function of the forest surface, but also, for example, as a protection from natural risks, think of avalanches or landslides and so on, and we also have to take into account the fact that the
01:42
climate is rapidly changing, so there is an additional driver to this change. So what we have done is to build a cartographic database for a large region, the Trentino region, more than 6,000 square kilometers, so it is quite a large region. We want to use
02:04
this data set to analyze the modification of the forest, so we need a series of maps which are uniform in terms of land use and land cover classes, which is not always the case. We need
02:20
a consistent resolution, and also a high resolution, because we want to apply landscape analysis, so we need a resolution which is high enough to apply these tools, and finally we want to cover the longest possible time span. Obviously some maps already exist, but they
02:42
do not have all these features, so we want to build a new data set to do this, and we have been able to cover 155 years, from an old cadastral map to a recent aerial image; the last image is from 2015. As new images become available, we can add new maps to this,
03:10
and so we have basically two sets of maps. The first one is the set of historical maps, which require a certain type of processing, as I will show you, from 1859 to 1936. There is an
03:26
additional map which is more recent, so we cannot say it is historical, but it has been processed like the historical maps because it is not available in digital form, and then we have a series of aerial images from 1954 to 2015. So this is the list of the historical maps.
03:50
As you can see, there are different years, obviously, very different scales, and very different resolutions, in the sense that all these maps, except the last ones, were already available as digital
04:03
maps, so we do not have access to the original paper maps, we only have the scans, and as you can see, the resolution is very different. One note about one of these maps, which is a map of forest density, not of forest location: we tried to
04:25
classify this map, and there are some peculiarities about it, I will show you something, but we will not use this map for evaluating the modification of the forest coverage. These are the features of the orthophotos. As you can see, they are more uniform with respect to the
04:47
historical maps. The main differences between these images are that the first two sets are in black and white, so more difficult to use, and the first set is available only as images, not as
05:02
ortho images, so we also need to orthorectify them. So the first step is obviously to orthorectify the 1954 image data set. The number of images which cover the entire region is 130, but we pre-selected 19
05:24
images, because in this kind of data set there is obviously a lot of overlap, so we chose the best ones, because some of the images are blurred and so on. Then we found at least 16 ground control points for each image, and I don't know if any of you have tried this:
05:45
in a mountain region this is very difficult, because you have to identify buildings and roads which existed in 1954, in this case, and which still exist nowadays, so that you can use the coordinates of the current location to identify the points; this is a very time consuming task.
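This georeferencing step can be scripted once the control points have been collected; below is a minimal sketch using GDAL's Python bindings with a polynomial warp, where file names, coordinates and the EPSG code are illustrative assumptions, and the full photogrammetric orthorectification (camera model plus DEM) is simplified away:

```python
from osgeo import gdal

# Hypothetical ground control points: map coordinates (E, N, 0) read from the
# current orthophotos, matched to pixel/line positions on the 1954 scan.
gcps = [
    gdal.GCP(660123.0, 5101234.0, 0.0, 512.0, 840.0),    # building corner
    gdal.GCP(662045.0, 5103980.0, 0.0, 1930.0, 455.0),   # road crossing
    gdal.GCP(658770.0, 5099410.0, 0.0, 300.0, 2210.0),   # bridge
    # ... at least 16 points per image, as in the talk
]

src = gdal.Open("scan_1954_017.tif")

# Attach the GCPs (in an assumed UTM CRS) to an in-memory copy of the scan,
# then warp with a second-order polynomial transform and cubic resampling.
gcp_ds = gdal.Translate("", src, format="MEM", GCPs=gcps, outputSRS="EPSG:25832")
gdal.Warp("rect_1954_017.tif", gcp_ds, dstSRS="EPSG:25832",
          polynomialOrder=2, resampleAlg="cubic")
```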
06:06
Once we have all the digital data, the historical maps and all the orthophotos, the next step is obviously to classify them. We use standard object-based image analysis (OBIA) using GRASS and R;
06:21
there is a module in GRASS which runs all the classification in R. There are differences between the orthophotos, which are classical images, and the historical maps, in the sense that in the historical maps you have a lot of spurious elements such as labels, names of places for example, symbols, the political boundaries and so on.
06:50
The second point is that in some of our maps, which are painted, the colors for the same class vary from one sheet to another, from one part of the map to another, so you have
07:03
to take this into account as well. And finally, some of the maps have hatching or halftones and so on, and this requires the use of additional artificial bands, such as texture or high-pass filtered versions of some bands.
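Creating such artificial bands can be scripted with standard GRASS modules; a minimal sketch, assuming a running GRASS session, with map names, window size and texture measure chosen only for illustration:

```python
import grass.script as gs

# Texture band from the scanned sheet (r.texture appends the measure name
# to the output prefix, e.g. *_Contr for the contrast measure).
gs.run_command("r.texture", input="map_1936_sheet05",
               output="map_1936_sheet05_tex", size=5, method="contrast")

# Simple high-pass band: the original minus a local average.
gs.run_command("r.neighbors", input="map_1936_sheet05",
               output="map_1936_sheet05_smooth", method="average", size=5)
gs.mapcalc("map_1936_sheet05_hp = map_1936_sheet05 - map_1936_sheet05_smooth")
```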
07:24
To filter out all these unwanted features, we have developed some GRASS modules, which you can already find in the official GRASS add-ons repository. The first one, r.fill.category, is the one which removes all the labels, symbols and so on. The second one is a module which can be used to
07:47
estimate the size of the filter that you apply to remove all the unwanted objects from your image.
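A minimal sketch of this cleaning step with the r.fill.category add-on is shown below; it assumes a running GRASS session, a preliminary classification in which labels, symbols and boundaries fall into one category (here 99), and parameter names that are assumptions to be checked against the add-on manual:

```python
import grass.script as gs

# Install the add-on from the official GRASS add-ons repository (once).
gs.run_command("g.extension", extension="r.fill.category")

# Replace the cells of the "labels/symbols/boundaries" category with the
# values of the surrounding categories (map names and category number are
# illustrative; parameter names may differ in the actual add-on).
gs.run_command("r.fill.category",
               input="map_1859_sheet12",
               output="map_1859_sheet12_clean",
               category=99)
```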
08:04
So, the results of the orthorectification: on the left you see the original image, on the right the orthorectified one; as you can see, the RMS error is quite low, about 1 meter and 28 centimeters. We also tested the displacement of some points, and we found that
08:23
the mean value of the displacement of the points is about 10 meters, but the good news is that the higher values are in the higher areas, because there it is difficult to find control points; in those regions, however, there is no forest, so for our purposes this high value is
08:44
not so troublesome, because the errors occur where we are basically not interested in the location of the forest surface. So this is the complete coverage of the photos; you can see some of them are darker, but again we have the complete data set.
09:08
So the next step towards the classification is the segmentation. If you have some experience with OBIA, you know that this is the critical step, in the sense that if you are able to
09:21
create segments in a good way, then it is quite easy to classify them. Here you can see the parameters for the segmentation of the historical map data set, in the first table, and for the orthophotos in the second table. In GRASS there is a module available which tries to
09:43
guess, or provide, the best combination of the threshold, which is the parameter which drives the similarity between the colors, let's say, of the pixels belonging to the same segment, and therefore the same class, and the minimum size of the segment, that is the area in pixels. You
10:05
can apply this module, but then you have to modify the values obtained in this automatic way; usually it does not work well, so you can use that as a starting point to make a better judgment, but you cannot use the values directly.
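Once threshold and minimum segment size have been calibrated, the segmentation itself can be scripted with the core i.segment module (the parameter-guessing module mentioned above is presumably the i.segment.uspo add-on); map names and parameter values below are purely illustrative:

```python
import grass.script as gs

# Group the bands of one orthophoto, plus any artificial bands
# (texture, high-pass) created beforehand.
gs.run_command("i.group", group="ortho_2015", subgroup="ortho_2015",
               input="ortho_2015_red,ortho_2015_green,ortho_2015_blue")

# Region-growing segmentation: 'threshold' controls the spectral similarity
# of pixels merged into the same segment, 'minsize' the minimum segment
# area in pixels; both must be calibrated per data set.
gs.run_command("i.segment", group="ortho_2015",
               output="ortho_2015_segments",
               threshold=0.05, minsize=30, memory=2048)
```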
10:26
These are the results after adjustment: as you can see, for the historical maps there is a certain variability in both parameters, while for the orthophotos the numbers are more or less the same, and this is because obviously the historical
10:41
maps are very different, so different values are needed. The next step is the classification, so we have to select some training segments and then classify the image, and you can see there are thousands of segments in the images and maps, so this is a very time consuming task, but we have
11:05
obviously scripted everything. Then, for each map, we have selected 750 sampling points using a random sampling approach, which is done obviously after the classification.
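The validation points and the resulting error matrix can be generated in the same scripted way; a minimal sketch with standard GRASS modules, where the 750 points follow the talk and the map names (including the rasterized reference classes) are assumptions:

```python
import grass.script as gs

# Draw 750 random validation points over the classified map.
gs.run_command("r.random", input="class_2015", npoints=750,
               vector="valid_points_2015", seed=42)

# After the reference class has been assigned to the points by photo
# interpretation and rasterized (assumed here as 'reference_2015'),
# compute the error matrix and kappa statistics.
gs.run_command("r.kappa", classification="class_2015",
               reference="reference_2015", output="accuracy_2015.txt")
```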
11:21
And these are the results in terms of accuracy: as you can see, for the historical maps the values are high. The best results are for the color images, which are the two most recent ones, while the values are lower for the black and white images. These are the results in terms of maps: this is the
11:46
old forest density map; as you can see, we only have the forest density for each district, we do not have the location of the forest, so we tried this because it was an interesting study case for
12:03
classifying historical maps, but we cannot use it for our purpose. And these are the results for all the years, and this is the last year, so in green the forest coverage in 2015, and the same information as a table and as a chart. Here you can see that there is an increase of the forest
12:28
from the first year to 1994, and then you have a more or less constant value, a very small increase in recent years. So the next step, once we have all these maps, is that we can apply
12:46
landscape analysis, which means that we evaluate some metrics for this forest, just to understand how the function of the forest changes in time, and what we see is something we expected, but we
13:02
can now quantify: we have far fewer forest patches, but they are larger because they are merged together; the patch density obviously decreases because there are fewer patches, since they are larger, while the edge density remains more or less the same, because
13:25
you have larger patches, but fewer of them. I will show you some graphs about this, but you have to take into account that for the historical maps
13:44
we obviously have less information, at a lower resolution; for example, for the cadastral map we have, for each parcel (the parcel is not about the coverage, but obviously about the ownership of the land), a label which says forest,
14:04
pasture, or something like that, so we have something which is at a lower spatial resolution. The second point is that for the 1954 data set, which we orthorectified ourselves, there are some effects which we still do not understand, in the sense that
14:28
the landscape metrics do not show the behavior which we expect, so something is wrong, but we do not know what. So these are the landscape metrics: the first one is the number
14:44
of patches, which obviously decreases because there are fewer, larger patches, and this is what you see on the right side, and obviously the patch density somehow mirrors the
15:01
behavior of the patch size, while the edge density is more or less constant, and there are these values for 1954 which are still to be investigated.
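These patch metrics correspond to the r.li landscape analysis modules in GRASS; a minimal sketch, assuming one binary forest map per year and an r.li configuration file (here called 'whole_region') prepared beforehand with r.li.setup:

```python
import grass.script as gs

# Illustrative subset of the years in the data set.
years = [1859, 1954, 1994, 2015]

for year in years:
    forest = f"forest_{year}"  # assumed naming of the binary forest maps
    # Number of patches, patch density and edge density over the region,
    # using the 'whole_region' configuration created with r.li.setup.
    gs.run_command("r.li.patchnum", input=forest, config="whole_region",
                   output=f"patchnum_{year}")
    gs.run_command("r.li.patchdensity", input=forest, config="whole_region",
                   output=f"patchdensity_{year}")
    gs.run_command("r.li.edgedensity", input=forest, config="whole_region",
                   output=f"edgedensity_{year}")
```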
15:21
So, in conclusion, we have created this very large data set, and I assure you that there is a lot of data here, and with this data set we are able to quantify some trends which were already known, but now we can quantify them. A comment about the approach: it works well, but you still need some experience,
15:48
because you can more or less automate all the steps, but you still have to calibrate some parameters, and this is not possible automatically; well, in principle you can do that,
16:03
but experience says that it does not work well. And the fact that the forest plots became larger and that there are fewer of them means that we are losing ecotones, so the ecological function of the forest is obviously changing in this area.
16:23
What we are doing now: well, we have used these maps, and now that we have a time series we can apply some modeling to predict future scenarios. We have done this using Markov chains and agent-based modeling, only on small areas, because they are very time consuming to run.
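As an illustration of the Markov chain idea (a generic sketch, not the authors' actual model), the class shares can be projected forward by repeated multiplication with a transition matrix estimated by cross-tabulating two consecutive maps; all numbers below are made up for the example:

```python
import numpy as np

# Hypothetical classes: forest, pasture, other.
# P[i, j] = probability that a cell of class i at one date belongs to
# class j at the next date (rows sum to 1).
P = np.array([
    [0.97, 0.01, 0.02],   # forest mostly stays forest
    [0.15, 0.80, 0.05],   # pasture is slowly colonised by forest
    [0.05, 0.02, 0.93],
])

shares = np.array([0.55, 0.25, 0.20])  # current class shares (illustrative)

# Project three further steps of the same length as the map interval.
for step in range(1, 4):
    shares = shares @ P
    print(f"step {step}: forest={shares[0]:.3f}, "
          f"pasture={shares[1]:.3f}, other={shares[2]:.3f}")
```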
16:44
So we are trying to do this, and we have already done something on small areas. We are also trying to understand what happened in 2018, because this area has been affected by the storm, which had a very deep impact on the forest, with loss of trees and so on.
17:08
As a general comment, the fact that we have been able to process all these different and very numerous maps has been possible only because we are using open source
17:26
software, in particular GRASS, and everything has been scripted, so basically we can run all the analysis with one command and come back after a week, or a couple of weeks, and find the results; it depends on how fast your machine is. Finally, about the availability
17:47
of this data set: one of these data sets is already online, and I guess some of you may have already used it; the 1936 map is already online, and there is a website where you can see the
18:02
map and download it for the whole of Italy. The other data sets are not online because we are still having some problems with the copyright of some maps and so on, but we hope, in let's say a couple of years, or a year, I'm not sure, to be able to publish all the data on a website
18:25
where you can see the data, download the data and so on. So this is more or less everything.