From Planetary Scale to Street Level Detail: Instant 3D Map Data Fusion with VTS

Formal Metadata

Title
From Planetary Scale to Street Level Detail: Instant 3D Map Data Fusion with VTS
Number of Parts
295
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
A global 3D map works with data from various sources: DEMs, orthomosaics, vectors or even 3D city models. The data is acquired by diverse sensors and processing techniques and varies vastly in resolution and geospatial coverage. Need for partial updates may arise as more recent or higher resolution data becomes available. Integrating all this data into a single, meaningful cartographic or VR product gives rise to the data fusion problem on an ever growing scale. VTS geospatial software stack provides a simple yet powerful way to tackle common 3D data fusion scenarios. In this talk, we will provide a hands-on demonstration.

Transcript: English (auto-generated)
OK, so welcome to the talk about instant 3D map data fusion, called From Planetary Scale to Street Level Detail.
I'm Tomas Kavan, and I'm from Melown Technologies. Let me briefly introduce our company. Melown Technologies is a software development company in the 3D mapping business, and we have two main projects. The first is our in-house proprietary photogrammetric
system using computer vision and machine learning, with which we create digital counterparts of the real world. In these images you can see, first, the input imagery; second, the computed mesh; and third, the mesh finalized using machine-learning approaches.
The second project, which is called VTS Geospatial and which is fully open source, takes the results from our photogrammetric system, fuses them with other data into a virtual landscape,
and streams it to end devices and systems. These can be desktop, web, or mobile clients, or, for example, Unity or one of the major GIS systems.
VTS Geospatial is basically the blue arrow in the middle. There are various applications where our project can be used, from virtual reality and augmented reality applications, through interactive simulations and gaming,
to geospatial projects with a focus on 3D mapping. And lately, we have proudly become part of the Hexagon family, more specifically the Hexagon Geosystems division.
But let's get to today's topic, which is instant 3D map data fusion. Why do we need to fuse data together? To answer this question, please consider the following use cases.
The first is, for example, a 3D map service with a global focus, but with emphasis on one country. There is a global context with an Earth model and satellite imagery, but in the same map there are more detailed layers
with various data like 3D models, VHR orthophotos, and so on. The second use case can be a very small presentation of a 3D city model as part of a promotional web page.
And as the third, I have chosen a hobby web application displaying a topographic model of Mercury. In this case, we even have a different planetary model with various orthophoto layers.
Each of these three use cases needs to fuse data together, and this is where our motivation for data fusion comes from: to display heterogeneous data together.
The data can be heterogeneous in a couple of ways: different types of data, different timestamps of acquisition, or, for example, different resolutions.
We might want to display vastly different data in one map. For example, we want to display a 3D map with a vector cadastral layer, like you can see here.
Or we want to display the progression of some activity over time in one 3D map. For instance, in these photos we can see the 3D model of a construction site with visible progress.
And of course, we want our map to include data sets with very different GSD. We may want to display satellite imagery for distant views, but once we get closer to a more detailed view,
we want to display VHR data sets. To sum up our motivation for data fusion: we want to display heterogeneous data together because we want to find connections and relations
in the data. We also want to watch for changes over time. And we want to see details and the big picture together.
And what is important about 3D maps? 3D maps can make these relationships and changes much more obvious. So now that we are properly motivated, let's advance to specific techniques
of server-side data fusion. The first one is called no fusion at all. With this technique, all data are streamed to the client, and the fusion
is left entirely to it. On one side, this brings high independence in how the data is used on the client. But on the other side, it has serious design flaws.
Fusing all that data together is a very resource-consuming process for the client. And specifically in a streaming environment,
the bandwidth demands are usually unreasonably high in this case. The other extreme is the fuse-them-all approach. It is the most resource- and bandwidth-saving approach,
but the data must then be considered static, since each change results in reloading the whole view again. Neither approach suited our needs.
So we started with fuse-them-all and had a very, very lightweight client. But very soon, maintaining the large fused data set became tedious, due to the fragile update operation
and the sheer data size. The client had very limited options for working with the data, and it was impossible to switch the textures displayed on DEMs.
So we came up with a feature set which our framework must have, and that is how VTS Geospatial was born.
VTS Geospatial is an integrated platform for 3D map application development, with a virtual landscape streaming and rendering engine.
And of course, it's fully open source under the BSD 2-clause license. The strongest points of VTS are, first, data scalability: you can fuse together countless 3D and 2.5D
data sets. Second, high-performance streaming servers with bandwidth-optimized dynamic triangular irregular meshes, orthophoto generation, and static tile streaming.
And it provides lightweight and fast client libraries for web and desktop. There were more talks about VTS yesterday, and there will be another two in the following slot in this room.
So please come and see what VTS is in detail. Usually, when you have a large number of data sets,
to get good maintainability, all of the data sets need to be kept separate. To give a good user experience, fusion must be done on the server. But there is also a need to switch between data sets
without reloading the whole view. On the following slides, I will show you how VTS deals with this. Let's get back for a while to our first use case.
First, please let me briefly introduce Mapy.cz. It's a Czech-language, general-purpose web mapping service launched by a company called Seznam.cz in 2005, and it first came up with a 3D map covering the whole Czech Republic in 2015.
The service is composed of a large number of various data sets. There are different digital elevation models, like Artbody, Viewfinder Panoramas 1 and 3,
and a more detailed Czech terrain digital elevation model. There is satellite imagery, namely Blue Marble, Bing Maps Aerial, Landsat, and so on. And of course, Mapy.cz has VHR data
for the Czech Republic, Slovakia, and Austria. In the same map, there are rasterized base maps for outdoor use, for winter, and many, many different use cases.
And of course, there are 3D models: the whole Czech Republic at 12.5 centimeters per pixel resolution, and major cities at 10 centimeters per pixel resolution.
And lastly, there are vector layers, for example OpenStreetMap tiles and a peak-list vector layer. Now, I will explain how those various data sets are interpreted in VTS Geospatial.
In VTS, terrains are considered surfaces. A terrain itself usually doesn't contain any visual information and should be displayed with at least one bound layer. Bound layers are, simply put, orthophotos or textures.
They need some surface to be displayed on. Both types here are dynamically streamed from a VTS Map Proxy server and are based on GDAL rasters or WMS
or WMTS services. To fuse a terrain with others, it needs to be added to VTS storage, which is the second backend component of VTS. The map configuration file describes
how the bound layer is applied. I will not go into detail, because it's just about editing one configuration file. But here you can see the result: starting from a distant view, we have one bound layer
with a global scope. Once we get closer, a second bound layer with better resolution but limited scope appears.
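For orientation, here is a minimal sketch of how that wiring might look in the map configuration, written as a JavaScript object literal. The identifiers are hypothetical and the key names follow my reading of the VTS documentation, so treat this as illustrative rather than authoritative; the bound layers themselves would be defined as VTS Map Proxy resources.

```javascript
// Illustrative sketch only: identifiers are hypothetical, and the key names
// should be checked against the current VTS map-configuration reference.
const view = {
  surfaces: {
    // the terrain surface, draped with two bound layers
    "country-dem": [
      "satellite-global",   // global-scope imagery for distant views
      "country-orthophoto"  // higher-resolution imagery with limited scope
    ]
  },
  freeLayers: {}            // none yet; vector layers come later in the talk
};
```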
If we return to our use case, we have covered a significant portion of the data sets with those two instruments: digital elevation models are surfaces, and satellite imagery, VHR data, and base maps
are simply bound layers. The 3D models are considered surfaces as well, but a 3D model can contain its own visual information,
called a texture. The 3D models are static, pre-generated data sets, converted to VTS using an encoder from a general format like VEF or SLPK.
3D models are fused with other surfaces when they are added to the VTS storage. Again, there is an example configuration; the details are not important.
But here you can see the boundary between different data sets. The light part is the city-center 3D data set; the darker part is the whole city at slightly lower resolution. You can see that everything is very well fused together.
And the beauty of VTS is that you get fused data from the server, but you are still able to work with the data sets independently, like here, where you can turn one of the 3D models off and on.
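To make that independence concrete, here is another illustrative sketch in the same hedged spirit (hypothetical identifiers, key names to be verified against the VTS docs): the view simply lists several surfaces, and switching one 3D model off amounts to requesting a view without it.

```javascript
// Illustrative sketch only: identifiers are hypothetical.
// Both the terrain and the 3D models are surfaces; the client combines them freely.
const fullView = {
  surfaces: {
    "country-dem": ["country-orthophoto"], // textured terrain
    "city-model": [],                      // whole-city 3D model (carries its own textures)
    "city-center-model": []                // detailed city-center 3D model
  }
};

// "Turning off" the detailed model is just asking for a reduced view;
// no data is re-fused on the server.
const reducedView = {
  surfaces: {
    "country-dem": ["country-orthophoto"],
    "city-model": []
  }
};
```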
Now, we know the 3D models are surfaces as well
and are fused with other surfaces in the VTS storage. The last thing to complete the family are vector layers. In VTS terminology, these are considered free layers,
since they can be displayed on their own, similar to a surface. They are streamed dynamically from the VTS Map Proxy server and are based on, for example, Mapbox vector tiles or OGR features.
And the magic in this is that VTS Map Proxy enriches the 2D vector data with a height coordinate obtained from a digital elevation model.
Here is an example again. If we turn on the vector layer here, it's displayed over the surface, but it respects the height information. So, for example, you can see that the tower is gone
but the houses come up, and you can see the houses and where their boundaries are. It blends together pretty smoothly.
And this is actually the whole mix. Vectors are free layers and those are not fused with surfaces at all.
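To illustrate where that height enrichment is configured, a rough sketch of a Map Proxy geodata resource follows, again as a JavaScript object literal. The field names are recalled from the vts-mapproxy resource documentation and the paths are invented, so verify everything against the current schema before use.

```javascript
// Rough, hedged sketch of a vector free-layer (geodata) resource for VTS Map Proxy.
// Field names are from memory and paths are invented; check the vts-mapproxy docs.
const peakListResource = {
  group: "example",
  id: "peaklist",
  type: "geodata",               // a free layer, displayed on its own
  driver: "geodata-vector",
  definition: {
    dataset: "/data/vectors/peaklist.shp",   // 2D OGR source (hypothetical path)
    demDataset: "/data/dem/czech-terrain",   // DEM used to add the height coordinate
    styleUrl: "//example.org/styles/peaklist.json"
  }
  // reference-frame and credit settings omitted for brevity
};
```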
The live map configuration is pretty large. Currently, it has 1,463 lines and contains 14 bound layers, 9 surfaces, and 3 vector layers. And now to the magic: it contains 34 so-called glues.
Glues are the concept with which VTS Geospatial handles the fusion, and they allow the client to have fused data while at the same time working with the data sets independently.
It's a fairly complex concept, but here is a very small demonstration of how it works. You can see that the glue consists only of the boundary tiles where both data sets are present. And that's probably all.
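As a purely conceptual illustration (this is not actual VTS client code, and the helper below is hypothetical), the way a client might resolve a tile when two surfaces overlap could be sketched like this:

```javascript
// Conceptual sketch only: this is not actual VTS client code.
// The glue "A;B" is pre-generated in VTS storage and covers just the boundary
// tiles where surfaces A and B meet; hasData() is a hypothetical helper.
function pickTileset(tileId, hasData) {
  if (hasData("glue:A;B", tileId)) {
    return "glue:A;B"; // boundary tile: serve the pre-fused geometry
  }
  if (hasData("B", tileId)) {
    return "B";        // B sits above A in the view's surface stack
  }
  return "A";          // everywhere else: fall back to the base surface
}
```

Because the glue exists only where the data sets actually meet, the rest of each surface stays untouched and can still be streamed and toggled on its own.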
I wanted to show you a demo from mapy.cz, but I don't have time for that, so you can try it yourself at this URL. And for now, thank you for listening; we may now open the floor for questions.
Maybe I missed it, but what is used for rendering in the front end?
What is used for rendering in the front end? We have a couple of clients for rendering: there is a JavaScript renderer, a desktop rendering client, and a Unity plug-in.
So it has its own front-end renderers, including for its vector layers, for example. That's a very good question. On that topic, there is another talk, which is called Battle of 3D Renderers.
It will be held today, in this room, in the following slot. So if you want answers to your questions, please attend it.
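For readers who want to try the JavaScript renderer, a minimal embedding sketch is shown below. It follows the pattern used by vts-browser-js as I recall it, but the map-configuration URL is a placeholder, so check the project's README for the exact, current usage.

```javascript
// Minimal embedding sketch, assuming the vts-browser-js library is already
// loaded on the page and a <div id="map-div"> exists.
// The mapConfig.json URL is a placeholder, not a real endpoint.
var browser = vts.browser('map-div', {
  map: 'https://example.org/store/map-config/my-map/mapConfig.json'
});

// The same map configuration (surfaces, bound layers, free layers, glues)
// also drives the desktop client and the Unity plug-in.
```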
Sorry, can you say that again? Is that possible? You mean height information from the vector layer? If yes, then of course: you can encode your 3D model into VTS
and then display it. It's two-dimensional. It's two-dimensional, yeah. Can it be three-dimensional? Then there is no need to enrich it
with height information, and it can be shown, I believe. Concerning the 3D model, the 3D city model,
you obviously use high-resolution laser scan data? No, it's only from aerial imagery. Only from aerial imagery, yeah; that's the beauty of this model. But the question is, can you populate the landscape
with trees and vegetation? Is it going to have a vegetation model? We already have vegetation data for it. That's more a question for my colleague.
So, whether you can populate it with the vegetation data that you already have. Basically, we are now adding semantics to our first project, so that's going to be reflected in VTS too.
In VTS, you can already stream LOD2 buildings; we have a provisional format for that, and it will also include trees in the future. So I guess it shouldn't be a problem in the future to convert GIS vegetation data to this format. It will be something pretty standard, I guess.
So it should be possible. You probably won't stream every individual tree in the forest; it will probably be based on some instanced rendering in the end, so it should have good performance.
OK, thank you. That's all.