
Towards automation of river water surface detection


Formal Metadata

Title: Towards automation of river water surface detection
Number of Parts: 156
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
It is well known that climate change impacts increasingly affect European territory, often in the shape of extreme natural events. Among these, in recent years, heat waves due to global warming have contributed to accelerating drying processes. The Mediterranean areas in particular are expected to face extraordinarily hot summers and increasingly frequent drought events, which may clearly affect the population. As a partial confirmation of this forecast, between 2022 and 2023 Southern Europe was affected by lasting drought conditions, with several consequences for ecosystems. For example, the Po River (the longest Italian water stream) recorded the worst water scarcity of the past two centuries (Montanari et al., 2023). Experts agreed on the exceptionality of the phenomenon, while also stating that such events are likely to recur in the near future (Bonaldo et al., 2022). To face them, local authorities expressed the need for tools to monitor the impacts of drought on rivers, so as to be capable of promptly enacting countermeasures. In this context, the authors partnered with Regione Lombardia to build a procedure exploiting Copernicus Sentinel-1 (SAR) and Sentinel-2 (optical) sensor fusion for water surface mapping, applied in the case study of the Po River (Conversi et al., 2023) and based on supervised classification of combined optical and SAR imagery. The current work presents an evolution of the proposed methodology, which includes a considerable effort towards the full automation of the process, a necessary step to make it user friendly for public administrations. The designed procedure, built in Google Earth Engine, is based on the combination of three images, namely the S-1 VV speckle-filtered band (Level 1, GRD) and the spectral indices Sentinel Water Mask and NDWI derived from S-2 (Level 1-C, orthorectified).
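The two optical indices named above can be computed per pixel from Sentinel-2 reflectances. As a minimal illustrative sketch (not the authors' Google Earth Engine code, and using one published formulation of the Sentinel Water Mask as an assumption), the per-pixel features could look like this:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): positive over open water."""
    return (green - nir) / (green + nir + 1e-9)

def swm(blue, green, nir, swir):
    """Sentinel Water Mask, one published formulation: (B2+B3)/(B8+B11).
    High ratio values indicate water."""
    return (blue + green) / (nir + swir + 1e-9)

# Toy reflectance values for a water pixel and a vegetated pixel (illustrative only).
water = dict(blue=0.06, green=0.05, nir=0.02, swir=0.01)
veg = dict(blue=0.03, green=0.06, nir=0.40, swir=0.20)

for name, px in [("water", water), ("vegetation", veg)]:
    features = np.array([ndwi(px["green"], px["nir"]),
                         swm(px["blue"], px["green"], px["nir"], px["swir"])])
    print(name, features)
```

In the actual procedure these two index images are stacked with the S-1 VV band and fed jointly to the classifier.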
Input imagery is selected to ensure complete coverage of the area of interest, mosaicking, if necessary, images coming from different dates; this is a reliable assumption considering that drought is usually a slow phenomenon. The time interval between images is in any case minimized by the code, depending on data quality and availability. Training polygons are drawn by photointerpretation and then fed to a Random Forest-based supervised classifier, together with the three aforementioned images. The outcome of the procedure is a map of the water surface detected over the area of interest, complemented with an estimate of its extent in km2. Results are then validated and correlated with hydrometric records coming from the field, which corroborated the overall performance (Conversi et al., 2023). This paper proposes an advancement of the methodology aimed at enhancing its usability by non-expert users, so as to set the basis for the development of a tool that can be exploited by local stakeholders. An efficient automatic extraction of training samples is achieved by randomly extracting the training set of pixels from a binary mask (water/non-water). This water/non-water mask is derived by combining three sub-masks resulting from the automatic thresholding of the input imagery (VV, SWM, NDWI), obtained with the Bmax Otsu algorithm (Markert et al., 2020). The water/non-water mask includes only the pixels that behave identically for all input images and along the reference period. The thresholding procedure is automated using the Otsu histogram-based algorithm for image segmentation. This methodology defines an optimal threshold value for distinguishing background and foreground objects: the inter-class variance is evaluated and the value that maximizes it is chosen, thus also maximizing the separability among pixel classes (Otsu, 1979).
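The Otsu criterion described above can be sketched in a few lines of NumPy (a minimal illustration of the threshold search, not the Google Earth Engine implementation):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold maximizing the between-class variance (Otsu, 1979)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()                       # probability mass per bin
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                           # weight of the "background" class
    w1 = 1 - w0                                 # weight of the "foreground" class
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)    # background mean up to each bin
    mu_total = cum_mean[-1]
    mu1 = (mu_total - cum_mean) / np.where(w1 > 0, w1, 1)
    bcv = w0 * w1 * (mu0 - mu1) ** 2            # between-class variance per candidate
    return centers[np.argmax(bcv)]
```

On a clearly bimodal distribution (for example, SAR backscatter over water versus land) the returned threshold falls between the two modes.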
A modified version of the algorithm, the Bmax Otsu, was exploited, originally developed for water detection with Sentinel-1. The Otsu algorithm is indeed particularly effective for images characterized by a bimodal histogram of pixel values, while the Bmax Otsu is more suitable in the presence of multiple classes or complex backgrounds (Markert et al., 2020), which is the case for the application presented in this work. The Bmax Otsu is based on a checkerboard subdivision of the original image, controlled by user-selected parameters. The maximum normalized Between-Class Variance (BCV) is evaluated in each cell of the checkerboard, and the sub-areas characterized by bimodality are selected for applying the Otsu algorithm, thus leading to the target threshold value (Markert et al., 2020). As mentioned, the outcomes of the Bmax Otsu procedure are exploited to extract random training samples for the machine learning-based classification algorithm. The best classification performance is obtained with a number of pixels corresponding to 0.02% of the region of interest. Validation was carried out against another classification of the same area obtained with photo-interpreted training samples (Conversi et al., 2023), showing accuracies of the order of 80-90%. The automated version of the methodology for integrating optical and radar images in mapping river water surface thus proved its effectiveness across several reference date intervals. Although automating the training sample selection slightly decreases the accuracy of the overall result with respect to the original approach, the gain in terms of usability is invaluable. Indeed, eliminating the need for the user to photointerpret imagery and draw polygons to train the classification algorithm represents a relevant step towards a standalone tool to be used by public administrations in real applications of river drought monitoring.
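The checkerboard logic of the Bmax Otsu can be sketched as follows. This is a schematic NumPy rendering of the idea in Markert et al. (2020), not their implementation: the cell size and the bimodality cutoff `bmax_min` are illustrative assumptions, not the paper's values.

```python
import numpy as np

def _otsu(values, nbins=128):
    """Otsu threshold and its peak normalized between-class variance (BCV)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / max(hist.sum(), 1)
    c = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    w1 = 1 - w0
    m = np.cumsum(p * c)
    with np.errstate(invalid="ignore", divide="ignore"):
        bcv = (m[-1] * w0 - m) ** 2 / (w0 * w1)
    bcv = np.nan_to_num(bcv)
    k = int(np.argmax(bcv))
    return c[k], bcv[k] / (values.var() + 1e-12)   # threshold, BCV in [0, 1]

def bmax_otsu(image, cell=16, bmax_min=0.75):
    """Split the image into checkerboard cells, keep cells whose normalized BCV
    indicates bimodality, and run Otsu on the pooled pixels of those cells."""
    h, w = image.shape
    pooled = []
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            block = image[i:i + cell, j:j + cell].ravel()
            _, bmax = _otsu(block)
            if bmax >= bmax_min:      # bimodality test on the cell
                pooled.append(block)
    if not pooled:                    # no bimodal cell found: global Otsu fallback
        return _otsu(image.ravel())[0]
    return _otsu(np.concatenate(pooled))[0]
```

Cells lying entirely in water or entirely on land have a low normalized BCV (around 0.64 for Gaussian-like data) and are discarded, while cells straddling the shoreline are strongly bimodal and drive the final threshold.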
Transcript (English, auto-generated)
Good afternoon everyone, I'm Stefano Conversi. I'm here on behalf of my working group, made up of Professors Karian and Riva from Politecnico di Milano and of engineer Norscini from Regione Lombardia, the regional authority of the region in which Milan is located. This work is actually the second step of a larger project, so I will start with a short introduction on remote sensing for event monitoring and how it is used; then I will go through the first part of the work, the integration of optical and radar imagery to enhance river drought monitoring, which was presented last year; and then the new part, the optimization of this process towards automation. We will see why we need automation in our case study, the sensitivity analysis we performed, the selection of the best machine learning algorithm, and finally some conclusions on both parts of the work.

First of all, we know that in recent years Europe has suffered many drought events, and in 2022 in particular a severe drought affected the whole of Europe and, in our case of interest, Italy, especially northern Italy. Here you can see the official Copernicus drought indicator for northern Italy: a widespread alert is visible over the Lombardy and Piedmont regions and, more in general, over the whole Po river basin. The Po is the largest river in Italy and is obviously very important for the many ecosystems along its course. Many impacts were observed on the territory during and after this drought period: a decrease in agricultural production, navigation restrictions, and even a potential risk of drinking water rationing.

Leveraging on this, public authorities requested measures and innovative tools for monitoring rivers and for better understanding the conditions on the territory, both during specific situations such as a drought and in ordinary times. Specifically, Regione Lombardia asked us for a tool for monitoring and visualizing the state of its streams. This is very important because they need maps, and even simple layers, that they can send and use when communicating with local stakeholders: several stakeholders are indeed involved in the management of the river and of the territory itself. We decided to address the issue by using the surface, that is, an estimate of the river surface in square kilometres, as a sort of proxy of the drought condition of the river and of the territory. We therefore proposed a methodology that exploits open satellite imagery from Copernicus Sentinel-1 and Sentinel-2 to arrive at a water coverage map that can be exploited in the local authorities' work.

As for remote sensing, we know that these techniques, and especially satellite data, are widely used for mapping water, with different kinds of sensors and different approaches. The first distinction is between optical and radar sensors. With optical sensors we can explore different wavelengths and intervals of radiation, and we can also derive spectral indices that highlight the presence of water; with radar sensors we study the backscatter signal obtained from these active instruments, and water is generally easy to recognize because of its distinctive response, especially with thresholding. Both families of sensors, however, have possible issues. Optical sensors can suffer from missing information whenever an obstacle lies between the sensor and the territory: in this example, which is one of our areas of interest, a cloud that is not even a dense, compact cloud but a light one completely obscures the information on the ground. Radar sensors, on the other hand, are prone to overestimation, due for example to backscatter issues, background noise, or in some cases the presence of soil moisture: in this example you can see plenty of small patches of water that do not actually exist on the territory.

What we proposed in the first part of the work was therefore to integrate the satellite imagery coming from the two different sources with a machine learning classification algorithm, a supervised Random Forest. We took three bands: one from Sentinel-1, so radar imagery, and two indices derived from the optical source, the NDWI and the SWM (Sentinel Water Mask), two indices built specifically for detecting water. These three stacked bands were fed to the supervised Random Forest classifier, together with a set of training polygons drawn by photointerpretation, which carry the information about what on the territory is water and what is background. We then obtained classified water maps and validated them. Here you can see some information on the values we considered and on the production of the indices. The training polygons were split into training and test sets. As an output we obtained a thematic map of water coverage, from which we estimated the water surface in the area of interest and validated it. Here you can see the results for the two extreme periods. The first is the worst case in terms of water scarcity, August 2022, during the most severe drought period; the second, at the bottom, is nominally the best situation in terms of water amount, although it was not really the best, because, as you can recognize here, it came right after a flood.

Now for the automation part. As I said, this tool is meant to support public authorities and, more generally, all the stakeholders. Public administrations are very complex institutions, a very complex environment in which we cannot expect to find only remote sensing or GIS experts. So, if we want to produce a truly useful tool that almost everyone can use, we first need to remove the necessity for the user to draw the training samples by photointerpretation of the imagery. That was the first point, and we considered all the possibilities for automating the process as much as possible. The longer-term idea is to create a real, user-friendly web app that can easily be inserted into the procedures of public administrations.

What we decided to do is to create a system capable of autonomously producing a map containing only water and non-water pixels, to be intended as a reliable water mask from which training samples can be randomly extracted. So how can we build this kind of mask? Here is a figure from the paper. We consider the three satellite inputs, the NDWI, the SWM and the SAR images, or better, image collections, since we refer to several images within the same period. For each of these three inputs we obtain a single water mask; we then combine these masks and obtain the final, most reliable one.

How do we obtain an automatic procedure for deriving these water masks? We went for automatic thresholding with the Otsu algorithm, an algorithm capable of identifying the pixel intensity value that separates the two classes. It requires bimodal images, so that background and foreground can be distinguished: the algorithm autonomously defines the threshold value, and based on this threshold a classification of the pixels can be produced. However, as I said, the Otsu algorithm works only for bimodal images, and we can easily understand that real imagery is generally not bimodal. How can we solve this issue?

We addressed the problem with a newer version of the Otsu algorithm, presented in 2020: the Bmax Otsu. The Bmax Otsu applies the same algorithm to small portions of the image. First, the image is divided into cells, whose dimension is selected by the user, through a process called chessboard segmentation. Each cell then undergoes a bimodality test: the maximum normalized between-class variance is estimated for each cell, and only the bimodal cells are used for evaluating the threshold of the image, the threshold that divides the two classes. As for the cell dimension, which as I said must be specified, we decided, as I hope is visible here, to use cells such that roughly half of the pixels in a cell represent water and the other half the background. We corroborated this choice and derived a formulation for the grid size based on a single parameter that the user will know, the river width, since the user is obviously interested in a specific river; based on this parameter alone, the whole system works and provides the automated process.

From these masks we then need to extract training polygons, or rather pixels. With this thresholding we obtain three collections of masks, since each sensor provides a series of images. Within the temporal frame we identify the pixels that are consistently classified as water or non-water over the reference period: only the pixels that remain constant enter an intermediate mask. Lastly, we consider the three intermediate masks, and only the pixels common to all three are put into the ultimate mask. In this way we obtain a mask that is as reliable as possible, for sure representing water and non-water in the area of interest, from which we can extract the training points; the number of extracted points was validated and calibrated at 0.15% of the overall number of pixels in the image. Here you can see an extract from our work: the white areas are the areas common to all the masks I mentioned, the yellow ones are those related to water, and the training and test points extracted for water and non-water are shown in blue and red.

We also performed an accuracy estimation and a sensitivity analysis for selecting the right number of pixels to use for training the algorithm. We did that by comparing three machine learning algorithms: the classic Random Forest, already used in the first project; Classification and Regression Trees, the CART; and Support Vector Machines. We compared them on different dates coming from different periods of the year (May, July and February of different years) and varied the number of pixels used for calibration. We noticed that above a certain threshold, which we set at the 0.15% mentioned before, there is no real gain in accuracy, while computational time starts to become an issue; that is why we stopped there.

The second analysis on the results compared the same three machine learning algorithms, each combined with the use of the single sensor Sentinel-1 (radar), the single sensor Sentinel-2 (optical), or the integration of the two satellites; in this case, as an exception, we also added a couple more dates. It can happen that a single sensor has the best accuracy, but this is not consistent: for example, if you compare the S2 optical source in June 2017 and in July 2022, you can see that the same satellite, even with the same machine learning algorithm, delivers completely different results, probably because in that case some clouds, or something else, obstructed the view. So we confirmed that the integration of the two sources is the optimal solution, and it performed particularly well with the SVM over all the different dates.

Here you see the new, second study area, and this product was obtained completely autonomously by the system: as I said before, the only requested input was the parameter describing the dimension of the river, nothing else. In general, we can conclude that the integration of the two sources is quite effective in mapping water, and that this methodology is in principle repeatable with any pair of optical and radar sensors. The validation carried out over the different periods and iterations of the project proves this effectiveness, and we also found correlation with ground truth data such as the hydrometric level comparisons mentioned before. Still, the whole project stands on the availability and quality of the data: without imagery, you cannot obtain good results. Also, given the time needed to receive the information from the satellites, the methodology cannot for now be used for real-time or near-real-time analysis of the territory, but it can indeed be used for medium-term monitoring. Another aspect to consider is that Google Earth Engine requires a specific agreement with a public administration to be used, so if we want to deliver a real product to Regione Lombardia this will be a point to address. Lastly, we will go on with some code refinement: everything is built in Google Earth Engine, as just mentioned, and we want to improve the time window selection, which does not satisfy us yet, and to further reduce the number of parameters the user has to define, so that the system is as autonomous as possible. We also want to validate the system in other areas characterized by a different geomorphology of the territory, because the physical characteristics of the river are very important for obtaining good results; for the moment the system is optimized on these areas, but we want to export the solution as much as possible. And then, as I said before, the real goal is to produce a real tool, a web app, maybe still hosted on the Google Earth Engine platform or on other tools, to be given to Regione Lombardia, so that they have a real instrument for monitoring and managing the risks related to the climate change that all of us are living through. Here is a screenshot from the paper; if you want, you can find the QR code of the first work, the one on the integration of satellites, here the one of this second work, and also the Google Earth Engine code, if you want to see how it is built and how the work was deployed. Thank you; I am happy to take questions.
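The mask-combination and sampling logic described in the talk (temporal consistency per sensor, intersection across sensors, then random extraction of 0.15% of the pixels) can be sketched schematically in NumPy; array shapes and helper names here are illustrative, not the Google Earth Engine implementation:

```python
import numpy as np

def consistent_mask(stack):
    """From a (time, H, W) boolean stack of per-date water masks, keep only
    pixels classified identically over the whole reference period.
    Returns (valid, value): valid marks temporally stable pixels,
    value is their constant water/non-water label."""
    always_water = stack.all(axis=0)
    never_water = (~stack).all(axis=0)
    return always_water | never_water, always_water

def training_points(masks, frac=0.0015, seed=0):
    """Intersect the per-sensor stable masks (e.g. VV, NDWI, SWM) and randomly
    draw `frac` of the image's pixels as labelled training samples."""
    valids, values = zip(*(consistent_mask(m) for m in masks))
    valid = np.logical_and.reduce(list(valids))          # stable for every sensor...
    agree = np.logical_and.reduce([v == values[0] for v in values])
    usable = valid & agree                               # ...and with the same label
    label = values[0]
    h, w = usable.shape
    n = max(1, int(frac * h * w))                        # 0.15% of all pixels
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(usable)
    picked = rng.choice(idx, size=min(n, idx.size), replace=False)
    rows, cols = np.unravel_index(picked, usable.shape)
    return rows, cols, label[rows, cols]                 # sample coords + water label
```

Pixels that flip class over time, or on which the three sensors disagree, never enter the pool from which training samples are drawn, which is what makes the resulting mask reliable enough to replace photointerpreted polygons.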
Thank you very much for this really interesting presentation.
We see many papers in the literature comparing different but fairly similar classification algorithms, and anecdotal evidence suggests that Support Vector Machines tend to be slower and take more resources than other algorithms. Is that an issue here, does it feel too heavyweight?

We actually considered all the different possibilities and combinations, also taking into account the computational time and the required information, and SVM appeared to be the best solution, at least in these two areas of interest. We studied the Po river in two different sections: the first, smaller one you saw before in the river mapping, and this other one, which is much larger and therefore provides more information and potentially more complex situations. Comparing all the procedures over five different dates, in five different years and in different periods of the year, the evidence is that SVM with integration is the most suitable and most reliable product. All of this was validated: the first part, the sensitivity analysis, was validated against ground truth coming from photointerpretation, so it is quite reliable; in the other case, for defining exactly which is the best performing algorithm, we used as a reference the water map obtained with the method of the first iteration, which is still based on photointerpretation, applied to the new area. So it is not strictly a validation, but we tried to use as much ground truth as possible, which is not easy to get, since it requires drawing all the polygons manually.
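The map-to-map comparison described in this answer (scoring the automated map against a reference map from photointerpreted training) reduces to a per-pixel confusion matrix; here is a minimal, purely illustrative sketch:

```python
import numpy as np

def map_agreement(pred, ref):
    """Confusion counts and overall accuracy between two boolean water maps."""
    tp = np.sum(pred & ref)          # water in both maps
    tn = np.sum(~pred & ~ref)        # background in both maps
    fp = np.sum(pred & ~ref)         # water only in the automated map
    fn = np.sum(~pred & ref)         # water only in the reference map
    oa = (tp + tn) / pred.size       # overall accuracy
    return dict(tp=int(tp), tn=int(tn), fp=int(fp), fn=int(fn), oa=float(oa))

# Toy 4x4 example: the two maps disagree on a single pixel.
ref = np.array([[1, 1, 0, 0]] * 4, bool)
pred = ref.copy()
pred[0, 2] = True                    # one false positive
print(map_agreement(pred, ref))     # oa = 15/16
```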
Other questions? Hi, my question is whether you have verified that the flow measurements are consistent with your results, because if you know the section of the river and the flow, you can also retrieve the area.

For now we did not consider the flow; the dynamic aspect is somewhat disregarded. What we did, and what we want to do in the future, is to try to understand, as I mentioned before, whether this system can also work for different kinds of rivers, because especially with this new implementation of the code we can insert different parameters that may be used to fit the whole system to a river characterized by a different morphology. For example, a river with a quite narrow surface, or one with many bends, may be more difficult to map well. This is a problem also because we are working with satellites that give us a 10-metre pixel, so we cannot apply the method to narrow rivers, as we would not be able to see anything. Still, we are working on several different aspects of this project, and we hope to find a solution that works at as many levels of interest as possible; given that the project is triggered by the regional authority, we want to be able to cover as much as possible the principal rivers of our region, and maybe the most complex one is this one, but we are working on that.

Other questions? All right, thanks for the presentation. I have a question on the new study area: are you retraining the models, do you have a separate model for each study area, or do you have a single model that you apply to the new study areas?

We always use the same model, changing only the input information, that is, the specific imagery from the different dates, which is provided directly within the Google Earth Engine environment by Copernicus. So we work with the same system and just change the input imagery.

Other questions? I'm not familiar with that region, so I have to ask: did you also have the problem of aquatic plants, such as reeds obscuring the water, or tree canopies over the water, and did you have to address this kind of issue, or is this not a problem for your study area?

Thank you. We are actually aware that the presence of vegetation, or even buildings close to the river, can create some issues. What I can say is that, since our sources have a 10-metre ground resolution, it is very difficult for vegetation coverage to be large enough to create issues. Nevertheless, we are estimating the validation over the whole area, so it is indeed possible that we have some small issues, given that there can be obstacles for the sensors. Still, combining the two different sensors can also help to remove some of the issues that each of them creates. For example, one of the problems of the SAR for the detection of water is the possible presence of solids, of objects inside the water, that can give strange responses; if we combine this information with the optical one, which does visually distinguish between the water and the object, the results show that the combination can effectively cover the deficit coming from the other sensor. So we are aware that this kind of small issue can still be there, but overall, from our point of view, the results are quite satisfying.