
Facilitating advanced Sentinel-2 analysis through a simplified computation of Nadir BRDF Adjusted Reflectance


Formal Metadata

Title
Facilitating advanced Sentinel-2 analysis through a simplified computation of Nadir BRDF Adjusted Reflectance
Title of Series
Number of Parts
156
Author
Contributors
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
The Sentinel-2 mission, pivotal to the European Space Agency's Copernicus program, features two satellites with the MultiSpectral Instrument (MSI) for high-to-medium resolution (10-60 m) imaging in visible (VIS), near-infrared (NIR), and shortwave infrared (SWIR) bands. Its 180° satellite phasing allows for a 5-day revisit time at the equator, essential for Earth Observation (EO) tasks. Sentinel-2 Surface Reflectance (SR) is crucial in detailed Earth surface analysis. However, for enhanced accuracy in SR data, it is imperative to perform adjustments that simulate a nadir viewing perspective (Roy et al., 2016). This correction mitigates the directional effects caused by the anisotropy of SR and the variability in sunlight and satellite viewing angles. Such adjustments are essential for the consistent comparison of images captured at different times and under varying conditions. This is particularly critical for processing and analysing Earth System Data Cubes (ESDCs, Mahecha et al., 2020), which are increasingly used due to their organised spatiotemporal structure and the ease of their generation from cloud-stored data (Montero et al., 2023).

The MODIS BRDF/Albedo product provides spectral Bidirectional Reflectance Distribution Function (BRDF) model parameters, enabling the calculation of directional reflectance for any specified sensor viewing and solar angles. Building on this foundation, Roy et al. (2008, 2016) introduced an approach leveraging MODIS BRDF parameters, named the c-factor, for the adjustment of Landsat SR data. This adjustment produces Nadir BRDF Adjusted Reflectance (NBAR) by multiplying the observed Landsat SR by the ratio of the reflectances predicted by the MODIS BRDF model for a standard nadir view under fixed solar zenith conditions and for the observed Landsat viewing geometry. Subsequently, Roy et al. (2017) extended this method to multiple Sentinel-2 spectral bands (VIS to SWIR). While the c-factor method facilitates straightforward computation for individual Sentinel-2 images, there is a notable absence of a unified Python framework to apply this conversion uniformly across multiple images, especially for ESDCs derived from cloud-stored data.

To bridge this gap, we introduce "sen2nbar", a Python package specifically developed to convert Sentinel-2 SR data to NBAR. This tool is versatile, aiming to convert both individual images and ESDCs generated from cloud-stored data, thus streamlining the conversion process for Sentinel-2 data users. The "sen2nbar" package, designed for simplicity, facilitates the direct conversion of Sentinel-2 Level 2A (L2A) SR data to NBAR through a single function. To streamline this process, the package is segmented into multiple modules, each dedicated to specific tasks within the NBAR computation pipeline: extracting sun and sensor viewing angles from metadata, calculating geometric and volumetric kernels, computing the BRDF model, and determining the c-factor. "sen2nbar" supports NBAR calculations for three distinct data structures:

1. *Complete scenes via SAFE files*: Users can input a local SAFE file from a Sentinel-2 L2A scene. The package processes this file, generating a new folder in which each spectral band is adjusted to NBAR at its original resolution. The adjusted images are saved as Cloud Optimised GeoTIFF (COG) files, with an option to choose standard GeoTIFF format instead.

2. *Xarray Data Arrays via "stackstac"*: For ESDCs obtained as xarray data array objects from a SpatioTemporal Asset Catalog (STAC) using stackstac and pystac-client, "sen2nbar" requires the xarray object, the STAC endpoint, and the Sentinel-2 L2A collection name. This information allows the package to access STAC for the metadata retrieval necessary to adjust the data cube. The spatial coverage and resolution in this scenario might differ from complete scenes, and "sen2nbar" adjusts only the specific area and timeframe retrieved at the given resolution.

3. *Xarray Data Arrays via "cubo"*: When users have ESDCs formed as xarray data arrays through cubo, which builds upon stackstac and stores the STAC endpoint and the collection name as attributes, "sen2nbar" directly adjusts these to NBAR, using the methodology described in the stackstac case.

For the latter two scenarios, "sen2nbar" works without writing files to disk, instead returning an xarray data array object containing the NBAR values. The package is designed to handle the available bands without errors for missing bands, acknowledging that users may not require all bands and might have generated ESDCs with selected bands. Additionally, if the input arrays are 'lazy' arrays, created using dask arrays (a default in stackstac or cubo), ...
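For reference, the c-factor relation described above can be written as follows (the notation here is ours, added for clarity):

$$
\rho_{\text{NBAR}}(\lambda) = c(\lambda)\,\rho_{\text{S2}}(\lambda),
\qquad
c(\lambda) = \frac{\rho_{\text{MODIS}}\big(\theta_s,\ \theta_v = 0,\ \lambda\big)}{\rho_{\text{MODIS}}\big(\theta_s,\ \theta_v,\ \phi,\ \lambda\big)}
$$

where $\rho_{\text{S2}}$ is the observed surface reflectance, $\rho_{\text{MODIS}}$ is the reflectance predicted by the MODIS BRDF model, $\theta_s$ and $\theta_v$ are the solar and view zenith angles, and $\phi$ is the relative azimuth between them.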
Transcript: English (auto-generated)
I will be talking about practically harmonizing Sentinel-2 data through nadir BRDF adjustments. This is very useful for people who are doing analyses that require this kind of satellite imagery, especially with different acquisition times.
This is really important, and if you are working with data cubes, for example, it will also help you a lot. It's based on Python, so if you work in Python this can be very helpful for you. Let me start by saying what we are going to talk about: first a little introduction to Sentinel-2, BRDF, and the c-factor method
for getting Nadir BRDF Adjusted Reflectance. Then a short part on data cubes, and we end with the tool we are presenting here, which is sen2nbar, for getting Nadir BRDF Adjusted Reflectance, or NBAR, for Sentinel-2 data cubes.
So, starting with Sentinel-2: I guess most of us know what Sentinel-2 is. It is a constellation of two satellites carrying the MultiSpectral Instrument, providing very high resolution, publicly available satellite imagery in different
bands of the electromagnetic spectrum. For example, here are the Sentinel-2 bands: we have 10 meter bands in RGB and also the near infrared, then 20 meter bands in the red edge, the narrow near infrared, and the shortwave infrared,
and then 60 meter bands for aerosols, water vapour, and cirrus. So it's a very useful dataset.
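For orientation, the band groupings by native resolution look roughly like this (band numbers added here as a reference, they are not read out in the talk):

    # Sentinel-2 MSI bands grouped by native spatial resolution (meters).
    S2_BAND_RESOLUTIONS = {
        10: ["B02 (blue)", "B03 (green)", "B04 (red)", "B08 (NIR)"],
        20: ["B05", "B06", "B07 (red edge)", "B8A (narrow NIR)", "B11", "B12 (SWIR)"],
        60: ["B01 (coastal aerosol)", "B09 (water vapour)", "B10 (cirrus)"],
    }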
Then we go to BRDF. BRDF stands for bidirectional reflectance distribution function, and it describes how reflectance changes according to the view and sun angles. We usually kind of expect that reflectance is simple: the sun emits radiation
and the sensor receives it. But it actually depends on a lot of other factors, so it's not the same everywhere; it changes according to where the satellite is, where the sun is, and a lot of other surface characteristics. So reflectance
is different depending on where you look at it from. Fortunately, around 2000, for the MODIS sensor, a kernel-based BRDF model was developed and tested. It is practically the sum of different
kernels, or scattering modes: in this case the isotropic, the volumetric, and the geometric kernel. They describe different components of the BRDF: one
is uniform scattering, one depends more on the kind of medium where the light gets scattered, and one describes how large geometric objects interact with the light and how the shadows are positioned, things like this. There are three spectral parameters, one derived for each of the kernels,
and two kernels that depend on the solar zenith angle, the view zenith angle, which is the one of the satellite's sensor, and the view-sun relative azimuth angle, which is the difference between the view azimuth and the sun azimuth angles.
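In symbols (our notation, added for clarity), the kernel-based model the speaker refers to has the form:

$$
R(\theta_s, \theta_v, \phi, \lambda) = f_{\text{iso}}(\lambda) + f_{\text{vol}}(\lambda)\,K_{\text{vol}}(\theta_s, \theta_v, \phi) + f_{\text{geo}}(\lambda)\,K_{\text{geo}}(\theta_s, \theta_v, \phi)
$$

where $f_{\text{iso}}$, $f_{\text{vol}}$, and $f_{\text{geo}}$ are the band-dependent spectral parameters, and $K_{\text{vol}}$ and $K_{\text{geo}}$ are the volumetric and geometric kernels, which depend only on the solar zenith, view zenith, and relative azimuth angles.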
So that is the intro to BRDF, and then comes the c-factor method. The c-factor method is just a very simple algorithm that uses this BRDF model to adjust surface reflectance to Nadir BRDF Adjusted Reflectance.
It started with two papers by David P. Roy, who developed this method using the previously shown MODIS BRDF model, and it does the following. We have here the sensor and the sun, and we have the corresponding angles:
this is the view zenith angle, this is the sun zenith angle, then we have the solar azimuth here, with the north line and the south line, and here the view azimuth. The difference of these two azimuths is the relative azimuth. What the algorithm does is practically
take the sensor and put it at nadir, so that for all our acquisition times, all our acquired images, we always have this adjusted nadir view. That way our images become comparable across time, and it does this in a very easy way.
This, for example, is the NBAR for a specific band, and as you can see it is just a scaling factor applied to the normal surface reflectance of that band, and this is the c-factor.
And here it's very neat, because the c-factor is just the ratio of the BRDF model we just presented, with the view zenith angle set to zero in the numerator. In that way, the BRDF model we compute here can be used to adjust all of the images that we want to use for our analysis, and that's how we get Nadir BRDF Adjusted Reflectance.
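A minimal NumPy sketch of that scaling for one band, assuming the spectral parameters and the kernel values at both geometries are already available; the numbers and the helper function are illustrative, not sen2nbar's internals:

    import numpy as np

    def brdf(f_iso, f_vol, f_geo, k_vol, k_geo):
        # Kernel-based BRDF model: isotropic + volumetric + geometric terms.
        return f_iso + f_vol * k_vol + f_geo * k_geo

    # Spectral parameters for one band and kernel values evaluated at the
    # observed sun/view geometry and at nadir view (placeholder numbers).
    f_iso, f_vol, f_geo = 0.08, 0.04, 0.01
    k_vol_obs, k_geo_obs = -0.02, -1.10      # kernels at the observed geometry
    k_vol_nadir, k_geo_nadir = -0.05, -1.30  # kernels at nadir view, same sun angle

    c = brdf(f_iso, f_vol, f_geo, k_vol_nadir, k_geo_nadir) / brdf(
        f_iso, f_vol, f_geo, k_vol_obs, k_geo_obs
    )

    sr = np.array([0.043, 0.051, 0.048])  # observed surface reflectance samples
    nbar = c * sr                         # Nadir BRDF Adjusted Reflectance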
This was done for Landsat, with the spectral parameters retrieved from the MODIS BRDF
parameters, so we have them here. It was then extended to Sentinel-2, and for Sentinel-2 we use the same
MODIS-derived spectral parameters as for Landsat, plus parameters for the additional Sentinel-2 bands. Here you can see, for example, how it changes when you use surface reflectance versus Nadir BRDF adjusted reflectance data: this is the surface reflectance, this is the adjusted one, and this here is the difference.
You can see that this is the edge of one of the swaths, and this is the difference between the two; there are clearly effects that we have to consider here, such as what reflectance value we get in these overlapping sections.
If we want to do analyses, they can get messed up if we don't correct for that; it's not negligible. This is especially important if we want to work with data cubes, as was shown two talks ago. Data cubes are aligned with a
spatio-temporal grid. Earth system data cubes are just multi-dimensional arrays of Earth system data that have their own dimensions, a set of grids, data, and attributes, and in the set of grids we have space and time. A very important kind of
Earth system data cube is the Earth observation data cube, and this is even more relevant now because we can easily create data cubes using specifications like STAC: if the data is stored behind a STAC catalog, we can use different tools to get
the data from STAC and convert it into an Earth observation data cube as an xarray, so we can analyze large quantities of data in space and in time. But before analyzing it, it's very important to correct for these effects first.
So that's why we went and created this tool. The tool is called sen2nbar. It is available on GitHub, and if you want to install it you can just do pip install sen2nbar,
or if you are using conda you can install it from the conda-forge channel; it's also available there.
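For reference, those two install routes are (assuming the channel mentioned in the recording is conda-forge):

    pip install sen2nbar
    conda install -c conda-forge sen2nbar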
It works in a very simple way, with SAFE files: if you have your own Sentinel-2 Level-2A product, which is surface reflectance, it gets all the data it needs for computing the c-factor and the Nadir BRDF Adjusted Reflectance from the metadata and the actual pixel values. It uses the granule metadata to do the c-factor computation, and the same metadata to extract
the processing baseline. This is important so we can harmonize Sentinel-2 first, because, if you remember, after processing baseline 4 all the reflectance values are shifted,
so we have to shift them back before doing the correction. Then the c-factor is interpolated to the spatial resolution of each band, the NBAR is calculated, and the result is saved within the same SAFE product.
But it also works for data cubes, in this case data cubes that are created or extracted via STAC. The idea is that we have a data cube with multiple bands, multiple time steps, and a specified bounding box. We can use this data cube to go back to the STAC catalog it was created from, and then we
get the metadata from there. We compute the c-factor for all the time steps included, and we also extract the processing baseline so we can harmonize the data
first. Then we interpolate the c-factor and do the NBAR computation, but this time for the entire data cube, according to the resolution you have, the reference system you have, and the bands you have. It doesn't have to be all of them if you don't need them, and all of this is computed using xarray; if you have it as a lazy array with Dask, it can be computed in parallel too.
Behind the scenes it does the following: it gets the metadata, and from the metadata it extracts the zenith and azimuth view angles and the sun zenith and azimuth
angles. From these it computes the kernels, in this case just the geometric and the volumetric kernel, because the isotropic one is uniform, and then, using the spectral parameters derived in David Roy's papers, it computes the BRDF model, in this case the Ross-Li BRDF model that I showed at the beginning. From that,
the c-factor is computed in a very quick way using multi-dimensional arrays, and then it is as simple as multiplying the c-factor by the surface reflectance
to get the Nadir BRDF Adjusted Reflectance for your whole data cube. So how do you do it? For SAFE files it's very simple: you import nbar_SAFE from sen2nbar.nbar, call that function with the path to your SAFE file, and it does the whole computation.
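A minimal sketch of that call, as described in the talk; the path is a placeholder:

    from sen2nbar.nbar import nbar_SAFE

    # Local Sentinel-2 L2A product (placeholder path).
    safe_path = "path/to/S2A_MSIL2A_example.SAFE"

    # Computes the c-factor from the granule metadata and writes the
    # NBAR-adjusted bands (COGs by default, per the abstract) into a new folder.
    nbar_SAFE(safe_path)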
But if you want to do it, for example, with data cubes that you created with stackstac, here is an example using the Planetary Computer: you just run your usual pipeline. You get your endpoint and collection and
bounds, open your catalog with pystac-client, define the area you want to get the data cube for, run the search, get all the items, and retrieve the stack of images using stackstac. Then, to convert it to Nadir BRDF Adjusted Reflectance, you just use nbar_stackstac:
you pass your stacked data cube, the endpoint, because we have to tell the function where to get the metadata from, so it goes to this exact same endpoint and gets the metadata to compute the c-factor, and also the name of the collection, so it can go to it and easily compute everything. It then returns another data array with the corrected values.
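A condensed sketch of that pipeline against the Planetary Computer STAC (the bounding box and dates are placeholders; the keyword names passed to nbar_stackstac follow the talk's description of its inputs, so double-check the exact signature in the sen2nbar docs):

    import planetary_computer
    import pystac_client
    import stackstac
    from sen2nbar.nbar import nbar_stackstac

    endpoint = "https://planetarycomputer.microsoft.com/api/stac/v1"
    collection = "sentinel-2-l2a"
    bbox = [11.52, 48.13, 11.58, 48.17]  # lon/lat bounding box (placeholder)

    catalog = pystac_client.Client.open(endpoint, modifier=planetary_computer.sign_inplace)
    items = catalog.search(
        collections=[collection],
        bbox=bbox,
        datetime="2023-06-01/2023-08-31",
    ).item_collection()

    # Lazy, dask-backed data cube of the search results.
    da = stackstac.stack(items, bounds_latlon=bbox, resolution=20)

    # Adjust the whole cube to NBAR; sen2nbar goes back to the same STAC
    # endpoint and collection to fetch the metadata it needs for the c-factor.
    da_nbar = nbar_stackstac(da, stac=endpoint, collection=collection)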
And this is another example using cubo. cubo is just a high-level wrapper for creating AI-focused data cubes, and by AI-focused I mean
data cubes where the lengths of the spatial dimensions are the same, like the image patches or chips you usually use in AI models, for example 128 by 128 pixels, at different time lengths.
That's what cubo does: it's a package that creates data cubes on top of stackstac, but with spatial dimensions of equal length. For this case you just import nbar_cubo from sen2nbar.nbar, pass your cube, and it returns the same data cube but corrected.
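And a sketch for the cubo case (coordinates, dates, and bands are placeholders; the cubo.create parameters are quoted from memory of its documented helper, so treat them as illustrative):

    import cubo
    from sen2nbar.nbar import nbar_cubo

    # Small, AI-ready cube around a point of interest (placeholder values).
    da = cubo.create(
        lat=50.0,
        lon=10.0,
        collection="sentinel-2-l2a",
        bands=["B04", "B08"],
        start_date="2023-06-01",
        end_date="2023-08-31",
        edge_size=128,   # 128 x 128 pixels
        resolution=10,
    )

    # cubo keeps the STAC endpoint and collection as attributes,
    # so nbar_cubo only needs the cube itself.
    da_nbar = nbar_cubo(da)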
These are the three main functions. And just to show how important this is, let's look briefly at the effects on reflectance in one single cube. This is an NBAR RGB composite for a single cube
at a forest site, and the rest of what you see here are the differences between surface reflectance and Nadir BRDF Adjusted Reflectance. You can see that for RGB the effect, or the difference, is maybe not that big, but when you start
getting closer to the near infrared, for example red edge 1, red edge 2, red edge 3, and the near infrared are quite affected by the BRDF effects. Maybe not that much in space in this case, because this is a very small cube, just about 200 by 200 pixels, but if
you look at the temporal dimension, which is this one here, you can see there are pretty big differences, and if you want to do temporal analysis it's very important to take this into account. And it's not just reflectance itself:
if you compute derived products from it, for example vegetation indices, you will also see an effect, but it depends on the index. For example, NDVI here is almost not affected, because it normalizes everything, but for indices that are not as normalized you will start seeing the effects
more strongly. For example, this is the near-infrared reflectance of vegetation, which is a bit more affected; with kernel NDVI you start seeing more points affected, both in space and in time. And as I showed before, the red edge is quite affected: this is the inverted red-edge chlorophyll index,
which uses the three red edge bands in combination with the near infrared, and if you check the differences between the index using NBAR and using surface reflectance,
you see that the differences are actually pretty big here. So if you are using spectral indices, it's probably a good idea to do the correction before performing your analysis. That's all, so to finish I just wanted to
give the final remarks: the BRDF effects in Sentinel-2 imagery are not negligible and we should correct for them; the c-factor method can be used to convert surface reflectance to Nadir BRDF Adjusted Reflectance, and this minimizes the BRDF effects; and that's why we developed sen2nbar, an open-source Python tool that uses the c-factor method
for converting Sentinel-2 surface reflectance to Nadir BRDF Adjusted Reflectance. Finally, sen2nbar works with single SAFE files, but it also works for data cubes created from cloud-stored STAC data catalogs. And with that, I just want to say thank you.
You've also been fantastically on time, so let's make use of the time. Do we have questions? We have lots of time for questions.
Thank you for this interesting presentation. I was looking into this methodology of adjustments,
and it seems that it really needs to be done. What is your take, should it be done centrally? For example, on the Copernicus data store they could distribute already-adjusted imagery, as we know they are producing products that are already more user friendly. This seems like something we shouldn't be doing on each computer on
our own, because it could be done just once and published as ready data, and so on. Sorry, can you repeat the beginning of the question? It was pretty hard to hear you from here. Yeah, do you think this adjustment should be done centrally, like by ESA, and
pre-distributed through the Copernicus data store, instead of us downloading files and running it again and again? I mean, that would be great, yes, but that also depends on the user. This is just one algorithm, right, and there might be others out
there, with people developing other things. So I think in the end this is more a choice of the user on what to do, and this is one tool that does it in a simple way, for data cubes for example, and also for the files they deliver in the Level-2A product. But I think it will depend more on the user side. More questions?
Maybe to extend the previous question: I'm definitely not an expert on BRDF, but since you are, can you expand a bit on the state of BRDF in the community? What are the biggest remaining challenges, and why is there no official product published, for example?
Right, okay. Actually, the only official product that I know of is the MODIS one; they have NBAR, and it uses the BRDF model that I presented at the beginning. It's actually a pretty simple model, it has just three components, and it does not consider, for example, different land cover types and how they influence the BRDF effects,
and many other things. So it's actually not that simple to have a BRDF model, and this one is a very simplified version of what we could get. That's also why I say maybe it's not
up to, for example, ESA to say okay, we deliver this one and that's it; it's probably a whole branch of research, and also specific to each sensor. Thank you.
We still have one, two, three more minutes. Hello, I would like to ask: are Landsat 8 and Sentinel-2 data then similar, given their differences, if they use this
processing? Sorry, can you repeat the question again? Are Sentinel-2 data and Landsat 8 data similar if you use this processing? Okay, you're asking if Landsat and Sentinel-2 are using the same...
No, whether they are similar, whether they are similar enough to use together. Okay, okay, thank you. Well, I wouldn't say they are exactly similar; they surely have similar bands, but they
also don't have the same spectral response function for each of the bands. So although this algorithm was developed for Landsat and can also be used for Sentinel-2, it is also dependent on the reflectance values themselves
and how they are calibrated. In that case it kind of helps to harmonize both by using, in this case, the same model, yes, but all of them are ultimately based on the spectral parameters of the MODIS BRDF model; it's just a combination of a lot of things.
Yeah, that is still a tricky situation. Although there are some projects going on that try to align the later Landsat and Sentinel missions, the frequencies, the spectra, are not directly comparable, so you cannot just
use one and mix it with the other; you always have to be a bit careful there. Yeah, but it's getting better, I mean, there's all this research on data fusion, but that would again be a different mix, and then you would have to recalibrate it again differently. Okay. How much of your day do you code?
Per day? No, of your normal work day: how much time of your normal work day do you spend coding on things like this? Well, coding most of the time, like 90 percent of the time, in code. Okay. If you have another question... oh yeah.
Sorry, I didn't see that. I want to go back to the previous question a bit, I didn't completely understand: Sentinel-2 has two different platforms, so can I use this to
really compare the results from these two platforms, A and B? You want to compare... Yeah, exactly. You just said there are differences between Landsat and Sentinel because they are different platforms, so you have to be careful with comparisons using this method, but now you have Sentinel with two different platforms, and I didn't quite
understand how that works then. I'm not quite sure I understood the question. I think, maybe for clarification, we can try to find each other in the break.