
Struggle with WebGL to render vector data


Formal Metadata

Title
Struggle with WebGL to render vector data
Number of Parts
295
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
The treatment of spatial information has evolved from centralizing and publishing it in a single repository via standards such as WMS, to serving the information as-is so that it can be processed by browsers via WFS. However, the WFS protocol has some performance shortcomings related to the format in which the information is served, giving way to more efficient formats for serving vector information such as .pbf. This format allows the transmission of large amounts of information to the local browser client. This increasingly large information requires the use of specific rendering technologies such as WebGL. The present work gives a state of the art of the existing WebGL libraries and a real test field on which the data have been tested, showing the results obtained and the most suitable solution. The following frameworks have been considered for the representation of large amounts of data:
* OpenLayers
* Mapbox GL JS
* Deck GL
* kepler.gl
From the tests executed to represent large amounts of data, Mapbox GL has emerged as the most flexible tool in terms of performance and capabilities.
Transcript: English (auto-generated)
So we get to the final talk of the final session of the second day. We have, sorry, I forgot your name. Enric, sorry. Okay, sorry for forgetting that, it's too much to keep track of.
So, please go ahead. I don't know where to put myself, but go for it. Okay, thank you very much. I'm sorry about the fonts on the slides. Sorry, Enric, you have to speak into the microphone. Okay.
So, sorry about the fonts that we have chosen, but it seems there is an issue with the projector. Okay, so let's go with the presentation. The presentation is about how we are handling PBF data using WebGL.
First of all, I should say that this project has been funded by the Copernicus programme, specifically by the Emergency Management Service area. So, we will analyze the problems and the main goals to achieve.
First, an introduction to give you some background on what the Copernicus Emergency Management Service is doing. When there is a natural disaster, you can think of a typhoon, an earthquake or, for example, floods, a complex system is put in place.
They fly the satellites over the zone and a team of quite a lot of people starts to digitize, much like the humanitarian mapping initiatives with OpenStreetMap. And they are complementary; they also use the OpenStreetMap data.
So, finally, as a result of this mapping of the areas, maps for field work are produced. But, well, they are produced as PDFs. So, this evolved into a map viewer where the people in the field could choose what to see and at which zoom level, move around, query and so on.
Interact with the information. So, at that moment, the Copernicus team started to think about an appropriate architecture for all these map viewers.
In principle, they thought of a classical approach where you have a database, a GeoServer, map tiling services and so on. But they have some constraints about time, about keeping the service up, scaling and so on.
And it was decided not to use a database, not to use any server backend for WMS services like GeoServer and so on.
And to consume only static data. So, how can you consume those data without any server? Well, without any server that is transforming the data and so on.
For the vector data consumption, we have a lot of features to manage for each of the disasters, about half a million features when a disaster occurs. So the use of WebGL was mandatory. The same goes for the OpenStreetMap data that they are using.
And that was when the analysis of the architecture to use came in. WMS, tiled WMS and WFS would need a server, if you do not cache them, to serve all this data.
Which means that you would need to maintain that infrastructure. So, given that today our laptops are quite a server themselves in performance and memory, the idea was to use the STAC standard to serve the vector data.
And have all the rendering take place in the client. And because of the high number of features to be rendered, the use of WebGL was really mandatory.
So, the proposed stack was as described, and PBF was the selected format. We are also consuming raster data as COGs. We again had the same discussion about whether to use WMS, tiled WMS and so on.
And finally, what we did is to also use the STAC data catalog and produce the COGs, consuming them through a Radiant Earth tiling proxy in a first approach.
And now we are moving to reading the COG file directly. So, on the server side, what we have is a folder structure. Well, just to let you know, you have folders for each disaster: this is a disaster, this is the area of interest, these are the several versions, and you have folders in each of them.
And following the STAC specification, at each of the levels you have a catalog file which points to the child and the parent folders, and that's it. On top of it, we have everything ready to put in place a client consuming that architecture
that simply goes there. We have an open API service to know which emergencies have been activated.
From these emergencies you navigate down to the data. And once you are there, the client is autonomous to navigate up and down the whole infrastructure. So, all the folders have the PBF tiles and also the COG tiles.
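As a rough illustration of this static navigation, here is a minimal TypeScript sketch of walking such a catalog from the browser, assuming a hypothetical root URL and the standard STAC convention of rel="child"/rel="parent" links in each catalog.json; the folder names and helper functions are made up for the example.

```ts
// Minimal sketch of walking a static STAC-like catalog from the browser.
// The URL and folder layout are hypothetical; only the rel="child"/"parent"
// link convention comes from the STAC catalog specification.
interface StacLink {
  rel: string;   // "child", "parent", "item", ...
  href: string;  // relative or absolute URL
}

interface StacCatalog {
  id: string;
  description?: string;
  links: StacLink[];
}

async function loadCatalog(url: string): Promise<StacCatalog> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Failed to load catalog: ${url}`);
  return res.json();
}

async function listChildren(catalogUrl: string): Promise<string[]> {
  const catalog = await loadCatalog(catalogUrl);
  // Each catalog.json only points to its parent and children,
  // so the client can navigate up and down without any server logic.
  return catalog.links
    .filter((link) => link.rel === 'child')
    .map((link) => new URL(link.href, catalogUrl).toString());
}

// Usage (hypothetical root): listChildren('https://example.org/ems/catalog.json')
```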
And then what we had to do is research the existing frameworks to develop the whole solution on the client side. In principle, we thought about OpenLayers, of course. It has PBF rendering and good performance in general,
but it didn't use WebGL for the vector tiles at the time. So the performance is good in general, but when it has to manage half a million features,
it really couldn't handle that amount of data. So, we made a proof of concept and we saw that the performance was not good. It's also important to say that this has been quite a research project, because we have not seen, or have not been able to find, such an architecture
with the intended performance and such a number of features in a production environment that we could be guided by.
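For reference, a minimal sketch of the kind of OpenLayers vector tile setup such a proof of concept would use; the tile URL is hypothetical, and this is the Canvas-based rendering path that, at the time, did not use WebGL.

```ts
import Map from 'ol/Map';
import View from 'ol/View';
import VectorTileLayer from 'ol/layer/VectorTile';
import VectorTileSource from 'ol/source/VectorTile';
import MVT from 'ol/format/MVT';

// Canvas-based (non-WebGL) vector tile rendering in OpenLayers.
const pbfLayer = new VectorTileLayer({
  source: new VectorTileSource({
    format: new MVT(),
    // Hypothetical static tile pyramid served next to the STAC catalog.
    url: 'https://example.org/ems/activation/tiles/{z}/{x}/{y}.pbf',
  }),
});

// Assumes a <div id="map"> element on the page.
new Map({
  target: 'map',
  layers: [pbfLayer],
  view: new View({ center: [0, 0], zoom: 3 }),
});
```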
So, then we moved on to try a proof of concept with kepler.gl. kepler.gl, I don't know if you are familiar with it: you have Mapbox GL at the bottom, you have deck.gl, which is a layer with React.js, and then you have kepler.gl on top of it.
We analyzed it and we said, wow, it's cool. If it works, we have everything done, because it has everything built in: it has filtering, querying, everything you can do with WebGL, we'd have it done. So, in principle, it was okay.
But the thing is that kepler.gl is oriented to, I don't know if you have used it, but you upload a JSON, you upload a file, and then you display it. And for us, for our defined STAC architecture where the PBFs are tiled and so on,
we needed to do some development there. Well, it's oriented, as I say, to a single JSON, but we had to load the HTTP sources for the PBF tiles,
then manage them at the different zoom levels, manage the tiling at each zoom level, and also merge the tiles as they arrived. We contacted the community. They didn't have anything like this ongoing,
and we tried to make the proof of concept. But for PBF tiles, we did not have the time and resources to make it work. So, even with the community's support,
and although this is also of interest for kepler.gl, we had to leave it behind. So, what we did was go a step back. kepler.gl is built on deck.gl, so let's go to deck.gl, because kepler.gl is constrained to the JSON on top of it,
the single file. So, let's go to deck.gl, which is not constrained by that. deck.gl, again, is built on Mapbox GL and has React.js components. So, in principle it seemed okay, fine, we can go with it.
Indeed, we started programming. We made a plugin with deck.gl. The performance was good. We were able to load the PBF tiles, manage the tiling, and so on. But in certain cases,
as the React components were already built, the way it manages the calls and the callbacks to the server caused some blocking. So, we had some issues that we were not able to solve.
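As an illustration of the deck.gl attempt, a minimal sketch using deck.gl's MVTLayer, which requests and renders a {z}/{x}/{y} PBF template with WebGL; the URL, colors and view state are assumptions, and the React wrapper and the callback issues mentioned above are not reproduced here.

```ts
import { Deck } from '@deck.gl/core';
import { MVTLayer } from '@deck.gl/geo-layers';

// WebGL rendering of tiled PBF data with deck.gl.
const emergencyLayer = new MVTLayer({
  id: 'emergency-pbf',
  // Hypothetical static tile pyramid; MVTLayer requests tiles as you pan/zoom.
  data: 'https://example.org/ems/activation/tiles/{z}/{x}/{y}.pbf',
  getFillColor: [230, 80, 60, 120],
  getLineColor: [60, 60, 60, 255],
  lineWidthMinPixels: 1,
  pickable: true,
});

// Deck creates its own canvas if none is supplied.
new Deck({
  initialViewState: { longitude: 26.1, latitude: 44.4, zoom: 5 },
  controller: true,
  layers: [emergencyLayer],
});
```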
In all of this, I'm not saying that any of these options is wrong. Just that within our time and with our constraints, we were not able to make it work, okay? So, again, we had to take the last step back and get to Mapbox GL.
And then, on top of it, we started to build, let's say, our own deck.gl and our own kepler.gl. We have developed all these tools, these plugins, as React components. We have integrated Mapbox GL in React.
And the performance is very good. And finally, something that is maybe quite obvious to all of you: if you are using PBF, try to put all your layers in a single PBF. Maybe you have struggled with it, maybe not,
but if, for example, you have ten layers in ten separate PBFs, you will have ten calls to the server per tile. And when it is 50 layers, it quickly comes to 500 calls. Then you get blocked, even by the browser itself.
So, that's just a hint, you probably know it, but store all your layers in a single PBF. So, now what we have is a viewer consuming a serverless infrastructure that can retrieve the PBF data and render it.
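A minimal Mapbox GL JS sketch of this single-PBF approach: one vector source whose tiles carry several source-layers, so each visible tile costs one request no matter how many thematic layers you draw. The tile URL, source-layer names and paint properties are hypothetical.

```ts
import mapboxgl from 'mapbox-gl';

// Depending on the mapbox-gl version, mapboxgl.accessToken may need to be set first.
const map = new mapboxgl.Map({
  container: 'map',
  style: { version: 8, sources: {}, layers: [] }, // start from an empty style
  center: [26.1, 44.4],
  zoom: 5,
});

map.on('load', () => {
  // One vector source: every thematic layer travels inside the same PBF tiles,
  // so each visible tile costs a single request regardless of the layer count.
  map.addSource('activation', {
    type: 'vector',
    tiles: ['https://example.org/ems/activation/tiles/{z}/{x}/{y}.pbf'],
    minzoom: 0,
    maxzoom: 14,
  });

  map.addLayer({
    id: 'flooded-area-fill',
    type: 'fill',
    source: 'activation',
    'source-layer': 'flooded_area', // layer name encoded inside the PBF
    paint: { 'fill-color': '#3b9ddd', 'fill-opacity': 0.5 },
  });

  map.addLayer({
    id: 'affected-roads',
    type: 'line',
    source: 'activation',
    'source-layer': 'roads',
    paint: { 'line-color': '#d04444', 'line-width': 1.5 },
  });
});
```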
So, at first we had very loose styles. And the next step was to render the layers with an appropriate style, and to do it on the client side.
Because all we have is the raw data in the PBF. So, for these data layers that are stored in a single PBF,
what we have done is to define a specific style for each of them, and also default styles, because each activation, each emergency has different needs in terms of layers, names, and so on. So, what we have done is to add to the structure shown before,
to this structure, I'm sorry about this, a folder with the styles, so the browser also picks up the styles and renders them as it receives the PBF tiles.
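One possible wiring for this, sketched under the assumption that the styles folder holds a JSON array of Mapbox GL layer definitions (the URL and file name are hypothetical): the client fetches it and adds the layers to the map, and they are rendered as the PBF tiles arrive.

```ts
import mapboxgl from 'mapbox-gl';

// Hypothetical: styles/layers.json holds an array of Mapbox GL style layers,
// each already referencing the shared vector source and its source-layer.
async function applyActivationStyles(
  map: mapboxgl.Map,
  stylesUrl: string,
): Promise<void> {
  const layers: mapboxgl.AnyLayer[] = await (await fetch(stylesUrl)).json();
  for (const layer of layers) {
    if (!map.getLayer(layer.id)) {
      map.addLayer(layer); // rendered client-side as the PBF tiles arrive
    }
  }
}

// Usage (hypothetical path next to the tiles):
// applyActivationStyles(map, 'https://example.org/ems/activation/styles/layers.json');
```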
And again, the performance is quite good. Now, this has been done for the PBF data that is produced by the data providers when an emergency is activated.
But also, of course, OSM has data that is very useful for these kinds of emergencies. So, we followed the same approach for the OpenStreetMap data and also integrated a PBF in the viewer.
So, when you have an emergency, you consume the PBF data produced by Copernicus and also the PBF data produced from OpenStreetMap. Again, we render it all in the viewer and we apply the styles using the tags and so on,
but in the client. And the performance is also good. How do we do it? We do it with Maputnik. Again, I don't know if you are familiar with this tool, but Maputnik allows you to load a PBF
and offers you a very straightforward interface so you can define a style for a layer in real time, considering the tags, the filters, the colors, the scales, and so on. Just as if you were using a desktop application.
And this is it. What we have achieved is being able to render the data. I'm sorry, but I think it doesn't show...
Wow, I will need to be in front of... Okay, and what we have here is the... I don't see where the mouse is, sorry.
What we have here is... Is this where I left off? Okay. The thing here is that we are requesting all the data from the server in real time. We have quite a poor connection here, I'm sorry for this.
But everything that you are seeing on top of the satellite imagery is PBF tiles that are being downloaded in real time, and they are represented as they arrive. As you can see, the representation is instant once the information arrives.
So the only constraint here is the bandwidth that you have for this data. We are using WebGL, and Mapbox GL also provides such... Yes, please, I don't have enough hands.
So there is this kind of functionality: you can look at the labels and so on, all of these tricks that people like to play with. And then you can move to another area and zoom in and out. So they can easily move around with all of this information
without having a static map as they had before with the PDFs. Another... I'm sorry, how can I change...
Can I change the tab? The tab, I'm not seeing the... Okay, and what you are seeing here is a COG that is being displayed
also in real time. We have not talked much about COGs because this presentation complements another one from this morning with regard to the server part. But here we have a COG that is being consumed through the Radiant Earth proxy,
which does the tiling of the COG for us and translates it into an image that can be interpreted by the browser as it is. We are migrating this COG consumption to be client-only.
A client-only consumer, let's say, without any proxy, so there are no server dependencies when you are retrieving the COG, using geotiff.js as the library
that you have been able to see in the Opera room. So, in principle, we will have all of this working as serverless as possible. You only drop the data in the folder structure and you retrieve it.
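A minimal sketch of that client-only COG reading with geotiff.js (the library behind geotiff.io), assuming a hypothetical COG URL and pixel window; the library uses HTTP range requests, so only the needed parts of the file are downloaded and no tiling proxy is involved.

```ts
import { fromUrl } from 'geotiff';

// Read a small window out of a Cloud Optimized GeoTIFF directly in the browser.
// geotiff.js issues HTTP range requests, so no server-side tiler is needed.
async function readCogWindow(url: string) {
  const tiff = await fromUrl(url);       // opens the file lazily
  const image = await tiff.getImage();   // first (full-resolution) image
  console.log('size:', image.getWidth(), 'x', image.getHeight());

  // Hypothetical pixel window; a real viewer would derive it from the map view.
  const rasters = await image.readRasters({
    window: [0, 0, 256, 256],
  });
  return rasters; // typed arrays, one per band
}

// Usage (hypothetical COG next to the PBF tiles):
// readCogWindow('https://example.org/ems/activation/cogs/scene.tif');
```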
Imagine you only have an Apache server or an Amazon S3 bucket to serve the data. That's it, that's the goal. No complex architecture, no performance issues, because everything is done on your own server, which is your computer. And that's it, thank you.
Thank you very much, Enrique. So, if anyone would like to contribute, please remember to speak into the microphone. Anyone? You're craving the dinner, I imagine.
All right, so then we'll close this session. I would like to thank the speakers first of all, and the technical team, who were very helpful even though we could not see them; they were giving us some important hints. And thank you all for coming to this session. And I hope that you have an enjoyable dinner later tonight.
See you then.