Post-fire hazard detection using ALOS-2 radar and Landsat-8 optical imagery
Formal Metadata
Title: Post-fire hazard detection using ALOS-2 radar and Landsat-8 optical imagery
Number of Parts: 39
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/52933 (DOI)
Transcript: English (auto-generated)
00:03
I am from Kenya and work with the International Fund for Agricultural Development; my co-author, Dr. Ling Chang, is from the University of Twente (ITC) in the Netherlands. Our research paper is entitled "Post-fire hazard detection using ALOS-2 radar and Landsat-8 optical imagery".
00:21
Our presentation will mainly cover the background, objective, methods, results, discussion and conclusion. I will go straight to the background. The effects of wildfires have attracted a lot of recognition, both globally and locally, especially because of the increased fires in Australia over recent years. Many studies in Australia have focused on the
00:43
factors influencing the spread of wildfires and on how to mitigate them. However, few have examined how variation in the geographical aspect of an area and the type of tree structure influence fire severity estimation. Our main objective is to analyze the use of radar satellite data, in comparison
01:05
with optical imagery, for the identification of burnt and unburnt forest patches. We have three specific objectives. First, to develop a forest fire burn severity map, which describes the levels of fire severity present.
01:20
Second, we will explore the sensitivity of radar, mainly polarimetric decomposition and backscatter intensity, for identifying burnt and unburnt areas. And last but not least, we will compare the degree of spectral contrast, looking at both spectral sensitivity and polarimetric sensitivity, between the burnt and the unburnt areas.
01:42
Our methodology had four main steps. The first was pre-processing of both the radar and the optical data: for radar, radiometric calibration, terrain geocoding and speckle filtering; for optical data, radiometric correction and geometric correction.
02:04
The second step was the burn-sensitive spectral composite, which focused on vegetation indices, namely the normalized burn ratio (NBR). The third was the support vector machine, the machine learning classifier used in our analysis. And the fourth was the training data process.
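As a rough illustration of the radar pre-processing chain just described, a minimal sketch using SNAP's Python interface (snappy) might look like the following; the operator names, parameter values and file paths follow common SNAP usage and are assumptions rather than details given in the talk.

```python
# Sketch of the SAR pre-processing chain: radiometric calibration,
# speckle filtering and terrain geocoding with ESA SNAP's Python API.
# Parameter values and paths are illustrative assumptions.
from snappy import ProductIO, GPF, HashMap

def preprocess_sar(input_path, output_path):
    product = ProductIO.readProduct(input_path)

    # 1. Radiometric calibration to sigma0
    cal_params = HashMap()
    cal_params.put('outputSigmaBand', True)
    calibrated = GPF.createProduct('Calibration', cal_params, product)

    # 2. Speckle filtering (Refined Lee chosen here as an example)
    spk_params = HashMap()
    spk_params.put('filter', 'Refined Lee')
    filtered = GPF.createProduct('Speckle-Filter', spk_params, calibrated)

    # 3. Terrain correction (geocoding) against an SRTM DEM
    tc_params = HashMap()
    tc_params.put('demName', 'SRTM 3Sec')
    geocoded = GPF.createProduct('Terrain-Correction', tc_params, filtered)

    ProductIO.writeProduct(geocoded, output_path, 'GeoTIFF')

preprocess_sar('alos2_post_fire.zip', 'alos2_post_fire_preprocessed')
```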
02:20
Here we looked at how we obtained both our training and our validation sets, and finally we validated the classification through an accuracy assessment against the validation data set. Turning to the methods we used in detail: for the optical-based spectral indices, we used the normalized burn ratio.
02:40
We computed the NBR for the pre-fire and post-fire images and subtracted the two. Following the United States Geological Survey global classification for this index, we grouped the values into four classes: unburnt, low severity, moderate severity and high severity.
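As an illustration of this step, a minimal sketch of the NBR and differenced NBR computation is shown below. The Landsat-8 band choices are the standard ones (band 5 for NIR, band 7 for SWIR2); the file names and the severity thresholds, loosely following the USGS dNBR ranges, are placeholders, since the talk does not give the exact values used.

```python
# NBR = (NIR - SWIR2) / (NIR + SWIR2); dNBR = NBR_pre - NBR_post.
import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype('float32')

def nbr(nir, swir2):
    # Small epsilon avoids division by zero over no-data pixels.
    return (nir - swir2) / (nir + swir2 + 1e-6)

nbr_pre  = nbr(read_band('pre_fire_B5.tif'),  read_band('pre_fire_B7.tif'))
nbr_post = nbr(read_band('post_fire_B5.tif'), read_band('post_fire_B7.tif'))
dnbr = nbr_pre - nbr_post

# Severity breaks loosely following the USGS dNBR ranges (assumed values):
# 0 = unburnt, 1 = low, 2 = moderate, 3 = high severity.
classes = np.digitize(dnbr, bins=[0.1, 0.27, 0.66])
burnt_mask = classes > 0   # later reclassification into burnt / unburnt
```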
03:04
Later we reclassified these into only two classes: the unburnt class remained unburnt, while the low, moderate and high severity classes were merged into burnt patches. Secondly, among our methods we used a support vector machine. Why did we choose a support vector machine?
03:22
Because it enabled us to find an optimal hyperplane that maximizes the margin between two defined classes. For our two classes, burnt and unburnt, it was difficult, especially at the edges of the polygon boundaries, to differentiate which patches
03:42
are burnt and which are unburnt, so the support vector machine suited us best because it works with very few training samples. Since it offers several kernels, among them the radial basis function and the polynomial kernel, we tested a random range of values together with the parameter
04:01
C, the regularization parameter, and settled on the radial basis function (RBF) as the main kernel for the support vector machine.
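A minimal scikit-learn sketch of this classification step is given below. The synthetic samples stand in for the real image-derived features (labels burnt = 1, unburnt = 0), the grid search stands in for the random test of values mentioned in the talk, and the concrete parameter values are placeholders.

```python
# RBF-kernel SVM with a small search over C (and gamma), and the
# two-thirds / one-third training-test split described later in the talk.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Placeholder features/labels; in the study these come from the imagery.
X, y = make_classification(n_samples=600, n_features=6, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, random_state=0)   # 2/3 training, 1/3 testing

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': ['scale', 0.1, 0.01]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X_train, y_train)

print('best parameters:', search.best_params_)
print('test accuracy:  ', search.score(X_test, y_test))
```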
04:25
Moving to the next slide, on validation: our training samples were obtained through visual interpretation of the images before and after the fire, together with the fire polygon vector provided by the Victoria database. The training and test samples were then separated through a random sampling procedure, whereby
04:42
two thirds of the total area were used as training samples and one third as test samples. Turning to the study area: our study area was part of Victoria, Australia. We chose this
05:00
particular area for four main reasons. First, it recently experienced a bushfire of very high intensity. Second, radar and optical images with matching acquisition dates were available. Third, the area has experienced intense fires
05:23
consistently over time, which made it a suitable study site. And fourth, its unique geographical characteristics made it suitable for the analysis. The data sets we used are summarized in the data description slide: the
05:40
pre-fire images for both Landsat and ALOS were acquired around July, and the post-fire images for both ALOS and Landsat in October. Landsat has a spatial resolution of 30 by 30 meters, and ALOS a spatial resolution of 10 by 10 meters.
06:03
The ALOS data were collected in an ascending orbit. Going directly into the comparison of the pre- and post-fire data, that is into our results: looking only at the raw pre- and post-fire ALOS images, it is difficult to see the differences between the two visually.
06:22
However, when you look at the images in terms of their backscatter values, you realize that the pre-fire image had a high volume-scattering value of 63.01, whereas after the fire it was 47.13. So there was a reduction in the scattering.
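A small sketch of this comparison, with placeholder file names for the volume-scattering products, might look as follows.

```python
# Mean of the volume-scattering band before and after the fire.
import numpy as np
import rasterio

def mean_backscatter(path):
    with rasterio.open(path) as src:
        band = src.read(1).astype('float32')
    return float(np.nanmean(band))

pre = mean_backscatter('alos2_pre_fire_volume.tif')    # placeholder path
post = mean_backscatter('alos2_post_fire_volume.tif')  # placeholder path
print(f'volume scattering before: {pre:.2f}, after: {post:.2f}')
```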
06:41
That indicates that the area had experienced fire and that the leaves there had been removed. We also compared the raw Landsat-8 images from before and after the fire using an RGB 7-5-2 band combination, and the difference in that particular area before and after the fire was clearly visible.
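A short sketch of building such a 7-5-2 composite is shown below (Landsat-8 band 7 = SWIR2, band 5 = NIR, band 2 = blue); the paths and the simple percentile stretch are illustrative choices, not details from the talk.

```python
# False-colour RGB 7-5-2 composite for visual pre/post-fire comparison.
import numpy as np
import rasterio

def stretch(band, lo=2, hi=98):
    p_lo, p_hi = np.nanpercentile(band, (lo, hi))
    return np.clip((band - p_lo) / (p_hi - p_lo), 0, 1)

layers = []
for b in (7, 5, 2):                                   # R, G, B channels
    with rasterio.open(f'post_fire_B{b}.tif') as src:  # placeholder paths
        layers.append(stretch(src.read(1).astype('float32')))
rgb_752 = np.dstack(layers)                           # H x W x 3 array ready for display
```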
07:01
When we compared the images after computing the normalized burn ratio index, looking at the size and extent of the changes between the
07:21
pre-fire and post-fire images, we saw that our area contained the classes unburnt, low severity, moderate, moderate to high and high severity. These classes were very important for us, as they formed the training samples for burnt and unburnt areas. The unburnt class remained unburnt.
07:41
The other four classes, low, moderate, moderate to high and high severity, were classified as burnt patches. Moving to the SVM classification of the optical and radar satellite data, one could clearly see a difference between the two. The optical data suggested that most of
08:05
the patches within the vector shapefile were burnt. This is because spectral classification only responds to the vegetation crown cover. However, when you look at the radar data, you could see a
08:21
difference: much of the area was not classified as burnt, most likely because radar responds to the removal of crown leaves and branches, whereas the optical data only look at vegetation cover, that is crown closure. That is one difference. A second difference we noticed was that areas that appeared
08:45
unburnt or only lightly burnt in the optical data were classified as burnt in the SAR data. This could be explained by the same effect: optical data respond to crown closure, while radar data respond to the removal of crown leaves and branches.
09:05
The classification results were also key for us, as the accuracy assessment confirmed that both radar and optical data can classify burnt and unburnt patches in forest fire mapping. High values were obtained for both the burnt and the unburnt class, looking at both
09:25
the producer's accuracy and the overall accuracy of Landsat-8 and ALOS-2. The kappa coefficient for Landsat-8 was 0.8 and for ALOS-2 it was 0.89, so ALOS-2 was a bit higher than Landsat-8. This was very important for us, because it showed the ability of both image types to classify burnt and unburnt areas.
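A sketch of such an accuracy assessment is given below; the label arrays are placeholders for the validation (reference) labels and the classified labels (0 = unburnt, 1 = burnt).

```python
# Confusion matrix, producer's accuracy, overall accuracy and Cohen's kappa.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])   # reference labels (placeholder)
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 0])   # classified labels (placeholder)

cm = confusion_matrix(y_true, y_pred)                # rows = reference, cols = classified
producers_acc = np.diag(cm) / cm.sum(axis=1)         # per-class producer's accuracy
overall_acc = np.trace(cm) / cm.sum()
kappa = cohen_kappa_score(y_true, y_pred)

print('producer accuracy per class:', producers_acc)
print('overall accuracy:', overall_acc)
print('kappa:', kappa)
```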
09:43
However, one caveat is that we could not fully rely on this. We noticed that some areas classified as burnt in Landsat-8, especially areas outside the fire perimeter zone, are actually buildings
10:00
that were classified as burnt, perhaps because of the coloring of their rooftops. Those are things one should consider, and as an alternative we would advise using the raw images or other types of field data to confirm the analysis. In conclusion, we successfully obtained good results for the identification of burnt and unburnt scars using both radar satellite data and
10:25
optical data. The use of L-band data, that is ALOS-2, for the identification of burnt and unburnt areas worked very well. We also looked at the sensitivity, and we realized that with radar data we cannot fully rely on the results, because in some
10:41
areas the backscatter intensity was difficult to interpret because of radar shadow, so we could not say with confidence whether they were burnt or unburnt. Another critical factor is that the polygon around our burnt areas did not really match the fire perimeter zone.
11:00
This is because the polygon was collected at the initial stages of the fire, while our fire zones were based on post-fire analysis. That is another critical point to consider. Another factor that would be quite critical to correct is the local incidence angle and acquisition geometry, which also affect the backscatter intensity,
11:23
especially when doing classification with radar data. Considering all of these factors, the timing of the data sets and the digitization of the fire perimeter are very important in this kind of post-fire study and should be looked into carefully. Another conclusion
11:43
is that both radar and optical data can be utilized in post-fire hazard detection. Last but not least, our next immediate step would be to fuse both optical and SAR data to improve the timeliness and accuracy of burnt area mapping.
12:01
Also, since we used the alpha decomposition in our classification, we would like to test more decompositions, such as the Cloude-Pottier decomposition theorem. Last but not least, we would like to look at ways of improving the correction of the local incidence angle, especially using SNAP, and this will greatly improve our analysis.
12:22
Thank you. Your questions are welcomed.