
EO Data Challenge proposal


Formal Metadata

Title
EO Data Challenge proposal
Number of Parts
295
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
During this 90-minute slot, the teams of the EO Data Challenge will briefly present the results they obtained. The following talks will be presented: Visualization and Analysis of Big Multidimensional Geospatial Data on the Web (Candan Eylül Kilsedar), STAC for the decentralized Web (Volker Mische), WebGL for in-browser GeoTIFF processing (Iván Sánchez Ortega), Citizen science in support of landslide detection and monitoring (Vasil Yordanov), EO Data Challenge results (Bang Pham Huu), EO Data Challenge results (Ivian Adrian Albu), LeafS - LEveraging Artificial Intelligence for Forest Sustainability (Teodora Selea).
Transcript: English (auto-generated)
OK, so the first presentation is coming from Candan. I'll be short and just give the floor to her, please. Thank you.
So Candan, please start. OK. Hi, welcome. I am Candan Eylül Kilsedar. I'm a PhD student at Politecnico di Milano. And my submission is about EO-based retrospective time series analysis. I got support from CS Romania, by Laurentiu Nicola,
on this project. So what is this about? I'm sure you are all familiar that EO and EO-based data are typically big in volume and available for multiple points in time. And these spatiotemporal data sets are hard to find.
Data need to be downloaded and stored on your hard drive, and software for visualization and analysis purposes needs to be installed on your computer, which all requires GIS expertise. So, keeping these in mind, I developed a web GIS that enables visualizing and analyzing spatiotemporal EO-based data on the web
using free and open source software, to eliminate this need for expert knowledge. It also allows anyone to get insights related to human activities on Earth and climate change. I used four different data sets, which are restricted to Italy because of the limited hardware
that I had. These are GlobeLand30 for two different years, 2000 and 2010, with 30 meter resolution; a land cover map from ISPRA with 10 meter resolution for only one year; the Global Human Settlement Layer for four different years with 40 meter resolution;
and a built-up area map from ISPRA, again for four different years, with 10 meter resolution. Geovisualization is a transformation of cartography, and it allows three- and four-dimensional data representation.
We all know that using virtual globes for visualization is a new frontier; for this reason, I used CesiumJS for creating the virtual globe. And I have more or less three different approaches that I followed for the visualization and analysis of the spatiotemporal data sets.
One is visualization, and for that I used animation to detect the land cover and soil consumption changes visually. I used the OGC standard WMTS and an image mosaic through GeoServer, and also the timeline and animation widgets of CesiumJS for this.
This is how it works. You can see the global human settlement layer here for Rome, and it shows how the human settlement changed from 1975 to 2010.
And this visualization is on a virtual globe, so you also have the terrain. For the analysis part, I used rasdaman, and specifically the OGC standard WCPS, Web Coverage Processing Service, and the datacube technology. As you know, rasdaman contributed a lot to developing this standard,
and one person from rasdaman is here. It allows multiple operations; I won't go into the details. They are range subsetting, induced operations, condensers, and coverage constructors. I am firstly using trimming and condensing.
I allow a user, using the virtual globe, to select two different years, draw an area on the virtual globe, and select for which land cover class to calculate the change, for which pixels,
actually. So I will just show you the screenshot. In the screenshot, the user selected two years, 2000 and 2010, and then stated that they want to calculate the change for permanent snow and ice in the drawn area. rasdaman allows you to calculate
the amount of change for this land cover class for these two different years and the drawn area. This is the query for only a single year, and then I do the same operation for another year, and then basically calculate the difference.
This is pretty fast. This is the whole Lombardy region of Italy, and it takes only a couple of seconds. This screenshot shows that from 2000 to 2010, permanent snow and ice cover decreased around 60%, according to the GlobeLand30 data set. I also use slicing.
So the user can click on a pixel, which is a coordinate, actually, and gets the amount of change for that pixel for all the years that are available in that data set. For instance, in the first one, for the ISPRA built-up area map, for that coordinate you can see how the built-up area changed.
And in the second one, you can see an area which was once cultivated land become artificial surface, again using WCPS.
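To make the kind of WCPS request just described a bit more concrete, here is a minimal sketch in Python, with an assumed rasdaman endpoint, coverage name, axis labels, and class code (the real ones depend on how the GlobeLand30 data were ingested); it counts the pixels of one land cover class inside a bounding box for a single year, and the change is then simply the difference between two such counts.

```python
import requests

WCPS_ENDPOINT = "https://example.org/rasdaman/ows"  # assumed petascope endpoint
COVERAGE = "GlobeLand30"                            # assumed coverage name
SNOW_ICE = 100                                      # assumed class code

def count_class_pixels(year, lon_min, lon_max, lat_min, lat_max, class_code=SNOW_ICE):
    """Count pixels equal to a land cover class code inside a bounding box
    for one year, using WCPS trimming plus the count() condenser."""
    query = (
        f'for c in ({COVERAGE}) return count( '
        f'c[ansi("{year}-01-01"), Lon({lon_min}:{lon_max}), Lat({lat_min}:{lat_max})] '
        f'= {class_code} )'
    )
    resp = requests.get(WCPS_ENDPOINT, params={
        "service": "WCS", "version": "2.0.1",
        "request": "ProcessCoverages", "query": query,
    })
    resp.raise_for_status()
    return int(resp.text)

# Change for the drawn area between the two selected years:
# delta = count_class_pixels(2010, 8.5, 10.5, 45.0, 46.5) - count_class_pixels(2000, 8.5, 10.5, 45.0, 46.5)
```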
Lastly, I also overlay VGI data, volunteered geographic information, on the raster maps. This is related to GlobeLand30 again, and users can query each point to get the information collected by the users and make a quick visual inspection of
whether GlobeLand30, the official data set, is correct or not. And this can be a preliminary step for the validation. The data is collected using the Land Cover Collector application that I developed, which can also be found on my GitHub.
And this project, final words: this project has been developed within the Urban Geo Big Data project. And the source code of this web GIS is available online on my GitHub. And also, you can find it online by following this link. Thank you very much, Candan. You've been quite in time.
And because, as I said at the beginning, this is a special session with not-so-long presentations, not with the same length as the others, I'll take only one quick question, if there is one for Candan. If not, I'll ask the next speaker
to start his presentation. Volker, are you ready? Yeah, I just got the information about where the slides are. So, just a sec. OK. So everyone who submitted them would have them available.
Operator. Yeah. Here you go. So, yeah, it makes things easier. Yeah, I just need to start the demo.
From the wireless channel back there? Yeah, it's behind us, yeah. And people should also sign the thing. Yes. OK, all right. Where's full screen?
Yes, full screen mode. All right, I tried to keep it quick, so we save some time. So that's the title that we can read. All right, so for the intro: the title is quite complex. STAC stands for SpatioTemporal Asset Catalog.
To keep it short, it's a simple metadata catalog optimized for discovery and search. Then I talk about the decentralized web. That's a complex topic, but in my case, here it means a decentralized, content-addressed system. And the project that I actually did for this challenge, I came up with it myself.
The project was making the STAC Browser, a web UI to browse those catalogs, work on a content-addressed system. The idea behind it was: STAC is an upcoming standard. I wanted to make sure that it works on content-addressed systems, because I think that's the future. And I don't want to have an upcoming standard that is not
prepared for the future. So I won't spoil if it's ready for the future or not. We'll see at the end. So quick about content address systems, because many of you probably don't know the details about it. It's about which data it is and not where the data is.
So location addressing is like where the data is stored. Think about the world wide web. This is kind of the typical example for a location address system. And the problem is, if you have a link somewhere, it might be gone. It might be some other contents. You all know it from browsing the web. And with content addressable, you identify the data
with an identifier. And it doesn't matter if it's on your local machine, if it's on the web, if it's somewhere else. It's kind of like an ISBN number for a book. So it really describes the contents and not the location where it is. The nice properties are that it's a hash. So you can just automatically get the data with an identifier.
And then once you have the data, you can run certain computation on it and then get, again, the identifier back. And if it's the same thing, you know it's actually the same data. Also, in the content address space, data is immutable. This means you know that the data hasn't changed,
because it has the same identifier, it's the same data. And you kind of get implicit versioning. As I don't have that much time, I just skip over it. And I want to give you an example to make it a bit clearer, perhaps. So STAC is anchored in JSON, as you can see here. But what we really care about is the links.
This is basically how you build up the catalog. And those links look like this. So they have a relation, and they have an href, which is a URL. And the nice thing is splitting those two things apart, because what often happens is links are just really URLs.
But then you encode within the location what it is. So for example, if you have slash item, you would encode in the location what data it is. And in STAC, what they do is they encode the relationship in a separate field. And that's very powerful, because what you could do is, if I now want to make this content addressable instead
of location addressable, I first need to throw away those. But those are details; I come to them later. The links point to a child, and what you can just do is replace that with a content-addressable link, which looks like this. And you might wonder, that's not a URL. It is a valid URL.
And that's pretty nice. So IPLD is the system that I've worked on. And this is just a hash. So now it doesn't matter if this is in your local directory, on the web, or somewhere else. But you still know the relationship, that it is a child of your catalog. If you do such a system, there are certain restrictions
on the links. So for example, if you build such a hash of the data, such a link, you need to know the data, obviously, so in order to derive the identifier, which means that you can only link to children, because if you want to look at the parent, you don't
know what the parent is before you've looked at the child. It's kind of hard to explain in 10 minutes. So if you have any questions afterwards, feel free to see me. So the advantages that you have with embedding the links in your data is that it also becomes part of the data, which
means if a link to a child is changing, also the data itself is changing, which means that changes kind of bubble up to the root of the catalog. So if you have an identifier of the catalog,
you know that you have exactly the data that someone else is seeing. And if you change anything within the catalog, you will get another identifier and can send out this link. So you can always make sure that people see exactly the same version. And you can also kind of go back in time, because if you just sent the older identifier
of the catalog, you would see the old version. So you kind of get consistency for free. And the technology that I've used for this stuff was the Interplanetary Linked Data, IPLD. This is actually what I work on on my day job. And it's open source. It has open specifications.
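As a rough illustration of this conversion step (the small script for processing existing catalogs is mentioned just below), here is a minimal sketch under stated assumptions: the `cid_for()` helper and the in-memory `STORE` are hypothetical stand-ins for real IPLD/IPFS hashing and storage, and only child and item links are rewritten, leaving the rel field untouched.

```python
import hashlib
import json
from pathlib import Path

STORE = {}  # stands in for a real content-addressed store (e.g. IPFS)

def cid_for(data: bytes) -> str:
    """Hypothetical stand-in for real IPLD/IPFS content identifiers:
    the identifier is derived from the bytes themselves."""
    return "bafy-demo-" + hashlib.sha256(data).hexdigest()[:16]

def rewrite(catalog_path: Path) -> str:
    """Recursively rewrite child/item hrefs to content-addressed links.
    Children are processed first, because a parent's identifier depends on
    its (already rewritten) children; the rel field stays the same."""
    doc = json.loads(catalog_path.read_text())
    for link in doc.get("links", []):
        if link.get("rel") in ("child", "item"):
            child = (catalog_path.parent / link["href"]).resolve()
            link["href"] = "ipfs://" + rewrite(child)
    data = json.dumps(doc, sort_keys=True).encode()
    cid = cid_for(data)
    STORE[cid] = data  # in a real run, the bytes would be added to IPFS
    return cid

# Usage: root_cid = rewrite(Path("catalog.json"))
```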
It has an open implementation in JavaScript, in Go, and soon in Rust. I've used the existing STAC Browser and really only did minor modifications. It was like 10 lines of code changed, and it just worked. And to process, I used existing catalogs
and just processed them with a small script to make sure those links are then actually content-addressed links. But if you want to know more, those are the links. So about all the stuff that I've been talking about. And there's also, if you want to know more about all this content-addressed thing, I gave a talk yesterday, which
was called GeoData on IPFS, which was recorded. So feel free to watch those or catch me. I'm here for the full conference. Thanks for your attention. Thank you very much, Volker. Thank you. Again, we are in time. I would have 1,000 questions for that,
but I don't have the time. So I can take one quick question for Volker, if you want to raise it now. Yes, Alessandro, please. Are you in touch with the people who are working
on the standard at the OGC? Yes, so I'm not really with the OGC people, but I'm in touch with the creators of the STAC standard. So I've also attended the biweekly meetings and so on, and I've known them well for the past 10 years. So yeah, I'm in touch with them. That was one of my questions, too. But it's good that the answer is positive.
So with that, I pass the floor to Ivan. I know what each of them is presenting, but I'll let you be surprised by Ivan as well. Ivan, please. Yes, so hi, everybody. I'm Ivan, and I'm going to talk about WebGL2 and 32-bit GeoTIFFs. This is not new.
I have been doing Leaflet.TileLayer.GL since something like 2016, but it was only able to process 8-bit rasters. This is kind of old technology, with which you can load 8-bit images in any web browser, because all the images that we usually see on websites are 8 bits per channel, RGBA.
And that's the problem with the technology: right now, we can only do that, and we can only load JPG and PNG. This is not new. I want to emphasize this is not new. This has three years of history: Mapbox and Tangram have been doing this with a lot of workarounds. So instead of having a float32 texture or something
like that, they would pack a 32-bit integer into the four channels of an RGBA image. And for us computer people, that's kind of understandable. For GIS people, that is madness, I think. So on the other hand, we have this technology
called WebGL2, which is the same as OpenGL ES 3, which is based on OpenGL something. It's a whole mess of things. But it can handle more kinds of textures, namely 32-bit floating point, and also, in some constrained instances, 16-bit integer, and so on.
The problem, the main problem, is that only 54% of browsers as of today can handle this technology. And I was very scared when it didn't work in one of the browsers on this laptop, which is not mine, but I'm happy it kind of works. That situation is not going to improve because of Safari, and Apple has some political stances
on the technology stacks. But it kind of works. So what I was asking myself when the Earth observation channel, the Earth observation challenge, was launched, is: can WebGL2 load 16- or 32-bit-per-sample cloud-optimized GeoTIFFs and do raster processing on them?
And I'm proud to say, yes, it can. So it's demo time. I have the demos here. If anybody wants to try this at home on your laptops, be aware it will work in 54% of the cases. But you are most welcome to try. So I will just show quickly what I can do.
This is a 32-bit floating point 8 megabyte digital elevation model from the Spanish Geographical Institute. And I'm doing the hill shading in real time. Because I'm doing it in the client, I can raise the sea level like this.
And I'm not requesting any more images from any server. By the way, this web page is running off a Raspberry Pi, which costs like $30. I don't need AWS cloud services. I don't need any kind of big raster processing machine ever. And it's like, ooh.
OK. And then I have this other demonstration. This is using Sentinel-2 cloudless infrared and red bands provided by EOX, one of the challenge partners. And what I'm doing here is NDVI in the browser in real time. And these GeoTIFFs, they weigh around 100 megabytes each.
Because they are cloud optimized, I can load them even though my Raspberry Pi is on a residential DSL connection. So no big data centers here. And of course, because I'm doing this in the browser, I can tweak the parameters of the NDVI, ooh, ooh,
like ooh, right? And anybody here has epilepsy problems? I hope not. If you have, please look away now.
I can do NDVI 30 times per second in the browser. I do not need any big raster processing. The GPU in any of your laptops is powerful enough to do any kind of raster processing on full detail,
on full-precision GeoTIFF data, several times per second. I'm not saying that you have to rewrite your stacks to do this. I want to say, and I wanted to make a technology demo to say, this is something that we should take into account.
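For anyone unfamiliar with the band math being evaluated here, this is not the speaker's WebGL implementation, just a minimal NumPy sketch of the same per-pixel arithmetic that his fragment shader runs on the infrared and red bands:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
    Inputs are arrays of the same shape; output values lie in [-1, 1]."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / np.maximum(nir + red, 1e-6)  # avoid division by zero

# Tiny synthetic example; in the demo the same arithmetic runs in a WebGL2
# fragment shader on float32 textures, once per pixel per frame.
print(ndvi(np.array([[0.8, 0.3]]), np.array([[0.1, 0.25]])))
```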
This will allow me, or anybody, any geographer, to try new raster geoprocesses way faster. I don't have to wait three minutes for the process to finish before I tweak a parameter and run it again. This can help us, and we should have this in mind when designing the technology.
And I tried to make a few more demos; I didn't really have the time. The code is in some Git repository there. There are some problems there: I have to get the integer 16 textures into float32 fields, because of various issues.
We could do things like packing the textures as 8-bit and just passing them, and try to make it more transparent. Resampling in geotiff.js takes the CPU, and that's low-hanging fruit; this is something that we should work on. WebGL is also very good at resampling.
For anyone unsure, resampling is scaling up and down and making all the pixels match, using nearest neighbor or interpolating pixel values. And that's it, really. That's all I had. Thank you very much, Ivan. Any questions for Ivan?
I'll take one. Two quick questions. One is whether we can see the pixel values: if you move the mouse over one pixel, can you see the NDVI value from there? And the other one is whether you've looked into tone mapping the 32-bit or 16-bit samples,
because they can look dark or washed out otherwise. Right now, I do not think you can see the output pixel values because you have to map that to actual color. So you have to do this workaround of outputting the color, then querying the color, and converting that color to a numerical value that you can read. That's something that has to be done.
Right now, as far as I'm aware, you cannot output a float32 texture and read those values. Also, WebGL and all kinds of GPU processing only use 24-bit precision for the internal calculations. So if your geo-process really needs a lot of precision,
you might have some losses there. I don't think that's the case, because as far as I'm aware, Earth observation only cares about 12 bits. But you have to tweak your things around, really. And packing things is a problem. I would love to see this kind of difference between the technical part of the geo-processing
and the actual geosciences part of the process. There's a lot of friction there, I feel, and I think we should work to ease that friction. When you're designing a system for raster processing, it's not only about how much data you can take, what the end result is, how fast you can do calculations.
It's also about how easy it is for the geographical sciences people to develop new algorithms and try new algorithms. You have to tell apart the ease of use of development, technical development, or geosciences development. There are a lot of things one can focus on. And I think that designing raster algorithms
should be faster and have less friction with the technical part. That's what I was trying to ease. OK, with this, I thank, again, Ivan. And I'll invite Vasil to take the floor for his demo.
Yeah, I will ask. We are ahead of time, so we see people coming in and expecting a talk, but it's already gone. The next talk should be at 11:40 and not now. So yeah, like 10 minutes ahead of schedule. Yeah, OK. So probably people are coming in and seeing the talk is already gone.
OK, I'm sorry for that. But yeah. Just so you don't have to make a 10-minute break or something. Oh. Just this one? Yeah, just a second, please.
OK. Yeah, apparently, we're a bit ahead of schedule.
And people who are looking for a presentation are a bit puzzled. But well, I'm just waiting for the door to close.
I don't know.
And at the end, I will ask all the speakers to sign the agreement that is at the speaker desk there.
OK, please take a seat somewhere. And Vasil, please, you have the floor. OK, thank you very much. I'm Vasil, coming from Politecnico di Milano.
And I want to present to you our submission, along with our team, including Eduardo Pesina and Vladislav Ivanov. The submission is called Application of Earth Observation for Landslides. But before presenting the application itself, I should introduce you to the problem that we are trying to solve here.
When we are talking about hazard management, and especially landslide hazard, a really important key source of information is the landslide inventory. It contains information on past events, including the location and additional information
interesting for researchers and academia. The problem is that the landslide inventory should always be continuously updated; old inventories can be incomplete and can be hard to interpret, in different manners. And when we want to implement automatic landslide detection
systems using machine learning and Earth observation data, we need a reliable training set. Here is an example with the data of a landslide inventory in a valley in northern Italy. When interpreting it, a lot of questions can arise and it can be problematic.
In the meantime, Earth observation is changing drastically with new technologies. But it's not just the technologies that should be developed, but also the management of the data, because we receive big amounts of data each day. And this is also relevant for the geosciences,
which actually go hand in hand with Earth observation for hazard management. So our solution, our answer for the challenge, is to integrate all those three disciplines: Earth observation, geoscience, and crowdsourcing.
While Earth observation and the automatic models can produce a reliable landslide inventory, in situ measurements can polish and give additional information and additional interpretation of the data, while the crowdsourcing and volunteering will benefit the whole database.
So in this manner, as part of our application, we created this landslide survey mobile application, where a user can go on site and tag a landslide in the field.
A user can choose a simple mode or an expert mode, depending on the level of geological knowledge, while he will be guided with different questions through the process of registering a new landslide. In addition, since most landslides occur in mountain areas, the application can work offline,
since an internet connection can be absent. The idea is that massive use of the application could generate a big database that can be used for further machine learning training and validation sets, using Earth observation data.
The mobile application can be found on a dedicated page on GitHub. The client itself is written in JavaScript, HTML, and CSS, wrapped with Apache Cordova, while the server is in JavaScript,
and the database is document-oriented MongoDB. And here are some screenshots from the application. You can see the login screen, where you have to register first. Then the query for inserting a new landslide. A user can review the entries, and then the entries
can be reviewed on an OpenStreetMap base map. As I said, the mobile application is just part of the whole system. And once a new entry of a landslide is stored in the mobile application, background service processing will start.
Well, firstly, a comparison with an existing inventory, an already existing database, will be made, to see whether the new entry already exists or not. After that, additional advanced machine learning techniques for automatic change detection
will be implemented, using a fusion between optical and radar satellite images. And here we're planning to use Sentinel-1 and 2, because for the scope of our work, their spatial and temporal resolution are quite good. In addition, artificial intelligence models
will be implemented for automatic landslide characterization. In the next step, it will also pass an accuracy assessment, and another comparison will be performed between the mobile application entry and the Earth
observation output, to see what the level of coherence between them is. In the next step, the entry will be updated in the database with all the gathered information, and a score will be assigned according to a scoring list,
so a user of the database will understand the level of reliability of the entry. And finally, the user will be asked whether he wants to produce a susceptibility map for the area of interest, where he can incorporate Earth observation data or ground-based data, either predefined or user-uploaded.
Here is just an example of a landslide that occurred last year in northern Italy. All the images are derived from Sentinel-2, and the false RGB is just for visualization
purposes of the landslide. On the left are the pre-event images, while on the right are the post-event ones. And you can clearly see the landslide scar that was left. Also, looking at the vegetation index, the values changed on average from 0.8 to 0.
Looking in more detail at those two pictures, one can notice that the NDVI values changed not only in the landslide area, but also in other parts. This could be due to shadows or clouds. So this is just highlighting that this methodology and this technology is not sufficient by itself.
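As a rough sketch of the pre/post NDVI comparison described here (not the team's actual pipeline, and with an arbitrary example threshold), candidate change pixels could be flagged like this:

```python
import numpy as np

def ndvi(nir, red):
    nir, red = nir.astype(np.float32), red.astype(np.float32)
    return (nir - red) / np.maximum(nir + red, 1e-6)

def candidate_change(pre_nir, pre_red, post_nir, post_red, drop_threshold=0.4):
    """Flag pixels whose NDVI dropped by more than a threshold between the
    pre-event and post-event images. The threshold is an assumed example value;
    as noted in the talk, shadows and clouds also trigger such drops, so this
    alone is not sufficient and better thresholds / extra techniques are needed."""
    delta = ndvi(post_nir, post_red) - ndvi(pre_nir, pre_red)
    return delta < -drop_threshold  # boolean mask of suspected landslide pixels
```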
And better thresholds should be defined, and other techniques should be implemented as well. We have defined five target groups that we are planning to approach during conferences and in academic journals, and dedicated presentations
and courses will also be held. Our practice showed that collaboration between academia and local authorities in affected areas is highly appreciated and works quite well. So, to just show the timeline of how we're going, we divided the application into three main phases:
the mobile landslide survey app, the satellite landslide detection, and the susceptibility mapping. Up to now, we have the mobile application completely working and constantly improving, while the other two phases are still currently under development. So up to now, we have a landslide survey app
that can be used by professionals and non-professionals. The output can generate quite useful data, in large amounts, and it can benefit local authorities and academia. We would like to thank our mentors from Politecnico and Deimos for the support
and help throughout this process. Thank you for your attention. Thank you very much, Vasil. Thank you very much. Very good timing. So we have time for questions.
Looking in the room. Yes, please, Alessandro. A quick one. Did you receive requirements from the end users in the implementation of the user interface, in terms of which types of input they expect to be provided from the user interface? From some community that has provided you some?
Yeah, because we have a geologist in the team. So basically, we put a geological field paper as a questionnaire in the app. So this is the difference between the expert and non-expert mode of the application. In non-expert mode, it's quite simple, a few questions,
while the expert mode has a little bit more detail. And it also has questions with suggested answers. Thank you. I think I can take one more question, if there is one. Thomas, you need to wait for the mic.
Yeah, thank you. Just a little one. Regarding the prediction of landslides, is there anything you can tell us? I mean, this susceptibility issue you mentioned already. But this is basically related to the existing landslides, and you predict probably the risk that there will be further ones, no? But predicting landslides in areas which have not yet been affected,
is this something you can also deal with? Yeah, actually, we can do this. That's why the inventory is quite important for this purpose, because on the basis of the previous events, we can study and, let's say, predict the susceptibility to a certain level in areas that were not affected till now.
OK, if there are no more questions, OK. I thank, again, Vasil, and I invite, I think it's Bang Pham now?
Seem to be the right one. OK, should be working already.
Hello? Hello? OK, welcome, everybody. So today, I will present my demo about using KOMPSAT data from Korea to support farming in Germany. My name is Bang Pham, and I'm working for the rasdaman group. I mean, team, yeah.
So first, I would like to thank the partners: of course rasdaman, who has provided the rasdaman database as open source for 20 years, and also SIIS, the data challenge provider, which provided the commercial satellite images,
and also Crayola, for providing me the cloud for the demo. And also NASA, for providing the Web WorldWind 3D satellite globe, with which I could make the demo nicely. OK, so this is the data challenge with KOMPSAT. Basically, we have a lot of satellite data every day, in terms of volume and velocity.
But we need to somehow process the data so we can gain value from it efficiently and quickly. And if we cannot do that, it means we just store a lot of data for nothing. So I would like to focus on the main problem, which is in Germany:
my target users are the farmers. The thing is, they may have a lot of large fields, and they cannot check them every day, like going to the field to check the crop health. And that's why they need something smart from technology in agriculture, for example, to use satellite images, high resolution images
like KOMPSAT-3, to monitor the crop health, for example. And that is my proposed solution: I need to build a web GIS which targets German farmers, and also the government, so they can monitor
the crops from their home or their office. They don't have to go to the field to do something which is not efficient. OK. So traditionally, users need to download the data to their PC, then process all the data,
and that's very limited by their system hardware. That's why we need to somehow have a database, rasdaman in this case, to store all the data as a 3D time series from all the satellite images which we can collect every day. And by tiling the data into smaller tiles like this one,
we can enable the user to query the data efficiently via, for example, on the left-hand side, the OGC Web Coverage Service (WCS) standard, which allows the user to select the area of interest and query the data for only one time slice, for example, or a span from month to month or year to year.
And to do that efficiently, rasdaman supports WCPS, the Web Coverage Processing Service, a processing extension of the OGC WCS standard which allows the user to query directly on the datacube. For example, you can see here on the bottom
a query that calculates and filters the data, something like the near-infrared band being greater than 127, for example. So the workflow is: first I get the data from KOMPSAT,
and then I try to import it into rasdaman. Now I know I can do it easily, of course. And then, because I'm not a satellite analyst, I had to figure out how to make some nice demos from it, with only three-band data at very high resolution. Yeah. Then the last one: after I had some ideas for making
a demo, I created a web GIS client to show the demo. So that is the result. I have done this in only one week; I didn't have much time to do it. Yeah, I have a lot of work to do. And yeah, so this is the demo. You see the globe on the right-hand side. On the left-hand side, we have some demos about OGC WCS
and WCPS. And yeah, so basically these are some demos you can see later. And the thing is, you can still get access to this live demo at this URL, which is shown on the globe. Yes, yeah.
Hopefully, you can try it in the next month. OK, and now I switch back to the real demo. So on the top side, you can see that, on the ground,
we have a WMS KOMPSAT-2 layer, which is based on a 3D datacube, which you can slide over the time series. For example, here, I selected another time slice, in another month. And because the data is imported with pyramids,
it is possible to query the collection very, very quickly. For example, when you are at a higher distance, you query the lowest-resolution collection, and when you are at a lower distance, you query the highest resolution. Yeah, it's a WMS time map on a 3D coverage.
And on the left-hand side here, you can see that I make some band combinations, also over the time series. And yes, and that's the way I get the false color of the KOMPSAT-3 data.
And yes, I tried to create another band combination. You can see that it returns the data very quickly, because it only focuses on the subset which I chose on the left-hand side. And here, you can see, this is also about WCS trimming and subsetting on the coverage.
Like here, I tried to query a smaller selection, yeah. And also, yeah, try another band combination; because there are only four bands, there's not much to show here. OK. So here is another little demo with Web WorldWind. So it's basically a 3D globe.
We can rotate and pan around to see more detail. In flight mode it would be much nicer if this were like a mountain, but here there are only fields, so you cannot really see the elevation going up or down. Yes.
Next. OK, I can also get the band values by clicking on the overlaid data. You can see on the left-hand side the value: when I click on the image, it returns the value. Now, I tried to demo some WCPS queries, which
are, of course, more powerful than the ones so far. For example, here you can see the formula for the band combination for NDVI, which is a template. And then I click on 'see result'. I get only this one band, no color.
And yes, that's why you can see only the one band, the red band, here. And now for the color: based on the value of this NDVI, and because I just depend on the value of this one band, I can create color values for it.
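To give a flavor of such a WCPS band-math request, here is a minimal Python sketch with an assumed endpoint, coverage name, band names, and date (the real ones depend on how the KOMPSAT data were ingested into rasdaman):

```python
import requests

WCPS_ENDPOINT = "https://example.org/rasdaman/ows"  # assumed petascope endpoint
COVERAGE = "kompsat3_timeseries"                    # assumed coverage name
DATE = "2019-06-01"                                 # assumed time slice

# NDVI = (NIR - Red) / (NIR + Red), evaluated on the server for one time slice
# and returned as a PNG; band names "nir" and "red" are assumptions.
subset = f'c[ansi("{DATE}")]'
query = (
    f'for c in ({COVERAGE}) return encode('
    f'((float){subset}.nir - {subset}.red) / ((float){subset}.nir + {subset}.red), '
    f'"image/png")'
)

resp = requests.get(WCPS_ENDPOINT, params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": query,
})
resp.raise_for_status()
with open("ndvi.png", "wb") as f:
    f.write(resp.content)
```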
And then I can create a color scheme like this one. OK, this one is the same demo that we saw before. Now, I try to extend the query with more trimming.
Yep. So I select another date, another time slice. Yep. And I try to calculate another index, which is the leaf area index. And you can see that this is the color scheme which
I used in the WCPS query. And it returned in, I think, 200 milliseconds. All right, so I tried to create some demo WCPS queries on this one, and you can go to the website to play with them. Here, rasdaman can also downscale on the server,
and this can return the image quickly, and also upscale if you want. And here, for example, you can try to clip the raster by a polygon. This will be slower. So you can see that on the top, it's clipped by the polygon.
And yeah, so I have my last demo, which is calculating NDVI over a year. So it's about a time series, sliding over time within this year. And that is it for my demo. Thank you very much for your attention.
Thank you, Bang Pham. Yeah, questions for him? Yes, Corinne. Have you used rasdaman with NetCDF files? Yeah, we support NetCDF files.
OK, and is it working well? Have you had issues or not? Yeah, it has been demoed; you can check our two projects. We have partnered with ECMWF; they have also used NetCDF and the GRIB format for a few years. It's no problem.
I think we can take one more question for Bang Pham, if there is one. Yeah, Anca. Thank you so much for the presentation. Are you thinking of maybe extending this to bring some features that can make this tool give more actionable information
to the farmers? Really, I think the purpose was that I made this demo for the FOSS4G conference, and that's all. But I mean, the code is public on GitHub, so anyone can try to follow it and make it extensible.
Please keep the mic. They can download my code here and they can try to extend it with more features. If they have time, I think it would be very nice if someone continued with this project, because I have other things to do as well,
so I cannot continue working on this anymore. OK, thank you very much. Thank you very much, Bang Pham. With this, I go to the next team that will present.
If you find your presentation. Yeah, that seems to be there and ready.
Working, yeah. Yeah, please use the mic. OK, so hello, everybody. I'm Alina, and together with Adrian and another colleague, who at the moment is away volunteering,
we've tried to do a forest change detection, or more exactly, a land cover change detection, in the Apuseni Mountains in Romania. We are rookies in the open source world, so please have mercy. Actually, it's our first analysis in open source.
OK, so our motivation came from this. As you can see, these are simple screenshots from Google Earth of the Apuseni Mountains. And you can clearly, with the naked eye,
see the forest loss from 2003 to 2017. Please, next slide. So this is a situation that should be of international concern.
OK, next. OK, so what we have tried to do is to combine a bit the GIS semi-automated methods and a deep learning automated method. For the semi-automated method, this is the study area:
it's the western part of the Carpathian Mountains in Romania, the Apuseni Mountains, more exactly. Next, please. The data sources that we used were Landsat 5 TM and Landsat 8 from the years 1992
and 2019. These dates were very, very important for us, because I don't know if you are familiar with Romanian history. In December 1989, Romania exited communism,
and democracy was taking its rightful place. But this also came with some issues. Yeah, some issues.
For vector layers, we used the CORINE Land Cover as ground truth. And the software that we finally, after many, many tries, decided to use was SAGA GIS. So the conceptual scheme for our unsupervised classification
was quite simple: import the data, clip the area, conduct the unsupervised classification with k-means clustering, reclassify (which was quite easy, because we just exported the classes to a table and then, with a drag and drop, put it back into SAGA GIS), and then compare the classes and create the confusion
matrix. So as a result, for 1992 versus 2019, we had five classes: the forest, pastures, the unclassified,
which is mainly the houses, the roads, and buildings. And this is, yeah, you can see the classes.
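As a rough sketch of the unsupervised step described above (using scikit-learn rather than SAGA GIS, and with an assumed band-stack array standing in for the clipped Landsat scene), the clustering part could look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_classify(bands: np.ndarray, n_classes: int = 5, seed: int = 0) -> np.ndarray:
    """Unsupervised classification of a band stack.
    bands: array of shape (n_bands, height, width), e.g. clipped Landsat bands.
    Returns a (height, width) array of cluster labels, which would then be
    reclassified by hand into land cover classes (forest, pastures, ...)."""
    n_bands, h, w = bands.shape
    pixels = bands.reshape(n_bands, -1).T              # one row per pixel
    labels = KMeans(n_clusters=n_classes, random_state=seed).fit_predict(pixels)
    return labels.reshape(h, w)

# Example with random data standing in for a clipped Landsat scene:
classes = kmeans_classify(np.random.rand(6, 100, 100))
```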
And as you can see, the changes from 1992 to 2019. So yeah. So basically, my wife and her friend
wanted to show how you can use a semi-automated method and do a proper classification with tools that are freely available for everyone; we are here for the open source part. So that was it. And now I'll move to the automated method. This is something that I'll be talking about. I'm also a new beginner in the whole field of deep learning,
of machine learning. So I said, let me see what we can find. With limited information, can I make it work? So I found someone's code on GitHub; I will not take credit for it. He has used the U-Net model for deep learning,
pretty much Python 3.7. I spun up an AWS server with an NVIDIA Tesla GPU and 64 gigabytes of RAM. And then I used the training data included, which were 24 satellite images with eight
bands, Sentinel-2. And all of those 24 satellite images were run through the training program. And this was the model used. This is the U-Net architecture. It looks quite fancy, but it's actually quite easy to implement.
And it has come up throughout the whole conference; it's very useful and very easy to use in any image classification and segmentation task. These are a few different samples. So there were five classes that the software was looking for.
So: roads, buildings, water, trees, and others. And this is an example of a successful run. So for us, the main importance was to classify trees. This is why we're also called tree fitting.
And as a conclusion, basically right now, we see that everyone can buy a CPU or a GPU at very affordable prices, and ultimately you can run deep learning on your own machine. And through this presentation, we've shown that anyone with limited information,
or at least with a bit of will, can go online on GitHub, find all the sources that they need, and do image classification and segmentation through various methods. Thank you.
Yeah, my mic is back. Thank you very much. Are there any questions for this one? No. Thank you. If not, I thank you very much again.
And I ask the next presenter to prepare. In the meantime, I will tell you that after the challenge call, we had a number of applications registered, just so you know a little bit about the mechanism,
if you didn't check it on the website. Then we had a first check and announced the ones who registered their applications. Then there was a phase in which they were matched with mentors.
The development process started, and then they were asked to send some results in the form of presentations and source code. And for the presentations, they were also asked to send recorded presentations,
so the jury could better understand what the results are. And in the end, we got the results. The jury met and will have a final evaluation today. And the winners will be announced in the awards
ceremony that will start at 4 today, 4 PM. With this, I pass the floor to Teodora. Teodora, please use the mic. No, I think it's on. Should be already working. Her presentation, I think, is the last one for today.
OK, please. Hello, everyone. I am Teodora. And today I'm going to present LeafS, a tool designed to leverage artificial intelligence for forest sustainability. So deforestation has been a hot topic in the past few years.
In Romania only... is it OK here? I have to speak here. OK, so I speak here. And a smile is mandatory. Yes. So in Romania alone, this is a map of the tree cover loss. From 2001 to 2016, over 300,000 acres of forest
have been lost. However, this tree cover loss is due both to illegal deforestation activities and to legal forestry activities, meaning that we have to supply society with the
wood products that it demands. So what is the answer to this? The answer is to have sustainability, to have a sustainable management of the forest. This sustainability actually means that we certify forest owners, whether we talk about private forest owners or
government authorities, with a forest certification: that they manage their own forest in a sustainable way, that they are going to take care of the regeneration of the forest; and it guarantees, both to the consumers and to the industry, that they take their wood products from a
certified forest. So LeafS is not the usual forest change detection tool. LeafS is actually intended to support this forest management in a sustainable way, and to support the process of sustainability in the forest area.
Through LeafS, we want to allow users to easily monitor the data and to easily get statistics. And we do this by using deep learning algorithms. So we are mainly focused on the forest segmentation task.
In order to do this, we need data sets and deep learning models. For the data sets, we have used two: one is a publicly available data set that has recently been released, called SEN12MS, which contains Sentinel-2 data.
And then, after pre-training our models with that data set, we have used our own composed data set from Sentinel-2 data over the Romanian area, with the CORINE Land Cover ground truth classes for it. Next, using this, we have used convolutional deep
learning models. We have provided our own implementations of W-Net, HSN, and an optimized U-Net version. And we have also used an already available DenseNet version. So our results, shown there, are composed by an ensemble of these models.
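Since the talk mentions Keras implementations of encoder-decoder segmentation models, here is a heavily simplified sketch of a U-Net-style binary forest segmentation network; this is not the team's actual architecture, and the patch size and channel count are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(size=128, bands=10):
    """Minimal U-Net-style encoder-decoder for binary (forest / non-forest)
    segmentation of multispectral patches. Purely illustrative."""
    inp = layers.Input((size, size, bands))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)                       # encoder: downsample
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)                       # decoder: upsample
    m1 = layers.Concatenate()([u1, c1])                  # skip connection
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # per-pixel forest prob.
    return Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```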
So we had the data, we took it to the deep learning models, we identified forest classes and forest patterns, and then we displayed the result and computed the statistics for it. OK, so actually, for the non-deep-learning people in the room, there is a video of how a deep learning
model actually learns how to identify forest. You can see that it starts from nothing and then gradually it can identify more and more areas of the forest. Now I want to show you a short demo of the app. I don't know if the link... let's see if the link works.
Okay, so moving forward, here. So here, you can view our interface. So, how do I do that?
Okay, now I'm going to try to, I already have it here,
so I'll open it from here. Yes, now let's move forward. So, for example, this is, let me just get here. So this is our main proof of concept application.
So you can view the forest and you can also manage your forest here, meaning that we can provide the management for the full area of the forest, or a user can define his own private forest area and view only the area he's interested in.
Then, of course, we have the layers. We can add the layers; you can see the true color image or you can see the actual prediction from our models. We also have data available for some of the parts from different years, so you can see how the forest actually changed during the years. And, of course, here is like the whole view
of the Romanian forest. Here you can see how the prediction actually looks when it's displayed, and then here, we can actually zoom in and compute some statistics for an area that you want.
So we click on draw, we select some area there that we want, and then we send it for computing, we wait for the statistics part, and then we have the statistics of the different canopy coverages for that specific area.
Okay, so now I'm going to get back to the, yes, it's working, presentation. Okay, hoping it will work again.
So how did we do that? First of all, we used some technology for the raster data handling: Rasterio, Fiona, Shapely, and GeoPandas, because our code is mainly based on Python.
Then, of course, for our deep learning methods we use the Keras framework for the implementation, and then our web application uses OpenLayers, Twitter Bootstrap, GeoServer, and PostGIS. So who is this application actually intended for? We mainly designed LeafS to create a community
around the forest sustainability area. So we want to address the authorities and the private forest owners, for them to use LeafS as a tool for better monitoring their own forests. Another problem that is currently happening,
at least in Romania, is that it's hard to distinguish between illegal and legal activities. If we had a mapping of the actual forest area, the authorities and the forest owners would actually know, if there is a change in an area, whether deforestation is allowed in that zone or not.
Then we want to address the industry part, because we want to reduce the cost of forest certification. So for now, if you want to get the forest certification, you have to pay for an auditor to come. But there are studies saying that some of the criteria
for the forest certification can be assessed from remote sensing data. One of them is tree canopy coverage, for example, and that can also be done, as you can see, from our interface. And in addition to this, it will also reduce the bias attached to when a human auditor comes to certify,
because we compute some of the data with our tool. And then we want to create a responsible consumer community by promoting the brands that use sustainable forest sources.
So in our future work, we have a few directions. We got in contact with the PEFC association, and we do want to integrate the certification part, I mean, part of the certification process,
by using deep learning applied to remote sensing data. Then we want to actively monitor even the illegal forest activities and actively send notifications if this happens. We also want to create a mobile application intended for consumers
to see the sustainable forest companies that are near them and to promote the certified labels. And then, of course, we want to give back to the remote sensing deep learning community,
because the deep learning remote sensing community has two major problems: first, the lack of labeled data sets, and we plan to continue developing our data set and to release it; and then the lack of pre-trained models on multispectral data, meaning not only red, green, blue channels.
So for now, we have four models trained on two data sets, which are the SEN12MS data set and our data set, and we do hope for them to be a first step for future research in this field. Okay, thank you.
Thank you very much, Teodora. We still have time for questions. I'm looking here. Oh, Sandro. A question for Teodora. With respect to the accuracy that you can generate,
do you have some numbers with respect to the different models that you have trained? Yes, so for now, I mean, the results that we have now are only made with a small data set, only with Sentinel-2. The data set still has a lot of no-data images,
so we still have to curate it and prepare it for release. And we did no post-processing, so only from what we have now, we have like a 0.74 Jaccard score on the whole area. Against CORINE? Against CORINE. No, against CORINE, yes. Against the CORINE data set.
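For readers unfamiliar with the metric, the Jaccard score (intersection over union) quoted here compares a predicted forest mask against a reference mask; a minimal sketch:

```python
import numpy as np

def jaccard(pred: np.ndarray, ref: np.ndarray) -> float:
    """Jaccard index (IoU) between two boolean masks:
    |intersection| / |union|. A value of 1.0 means perfect agreement."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return np.logical_and(pred, ref).sum() / union

# Example: a predicted forest mask evaluated against a CORINE-derived mask.
print(jaccard(np.array([1, 1, 0, 1]), np.array([1, 0, 0, 1])))  # -> 0.666...
```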
Okay, we can take one more, or even more. If not, I think some of the speakers are still in the room. No. For the previous speakers? Yeah, please. You are here. And Bang Pham is also here, so if you still have questions, yeah?
For the previous speakers, we still have time for that. If not, I will finish the session here, asking the speakers not to forget to sign the video recording agreement.
And yeah, I think with this, I thank you very much, all the speakers, again, for that. As I said, the jury will have a final discussion today, and the winners will be announced
in the awards ceremony that will start at four. Yeah? Okay, then I thank you all again for coming. These were some of the challenge participants' presentations.
Not all of them are here; not all of them had a presentation today, simply because they are not here. Yeah, well, we'll have some winners, and you'll know them this afternoon. Thank you very much. Again, we'll stop here, and I wish you a good lunch.
Thank you.