TIB AV-Portal

Using Mapillary data for editing maps


Formal Metadata

Using Mapillary data for editing maps
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Subject Area
This talk gives an overview of the different data endpoints of Mapillary, for example images, object detections (e.g. street signs), objects, and vector tiles. We will look at different integrations, such as the OpenStreetMap iD editor, JOSM, Wiki Loves Monuments and others, that use portions of this data to improve or document physical spaces. The talk will also cover different Open Source integration libraries, such as OpenSfM, MapillaryJS and the iD editor.
Keywords Mapillary AB
[Moderator] Good morning, everyone. Today we start this session block with a talk about Mapillary, and you will get an overview of what it is and how you can use it.

Good morning. My name is [inaudible]. I'm really happy to be here; the venue, the program and everything is fantastic. I want to talk about what we do at Mapillary, how you can use this data to improve maps and to get data out of the platform, which is probably the main objective at FOSS4G, and which open source projects we contribute to so that you can use it. So we have
basically a service that crowdsources street-level imagery from any device: panoramic rigs, consumer-grade cameras, mobile phones, whatever. We then use computer vision to generate more data from that imagery, and we make it available through APIs. We also give the imagery back under a Creative Commons ShareAlike license, pixelated where necessary as the minimum requirement for preserving privacy. And we integrate with OpenStreetMap and other mapping alternatives, where you can take that data and derive new data that then goes into OpenStreetMap, for instance. We can't really do automatic edits, because that's not what these initiatives are about, but we can provide suggestions and the underlying data, so that others can point out: this is a traffic sign, or this is a new way.
Yesterday, actually, we crossed 80 million photos. As far as I know, that is more than Panoramio, which was the greatest photo collection of its time, so we passed that yesterday. There is a lot of mapping going on all over the world, especially in Europe, as always, and in the US, but Africa and South America are pretty active now too. The Red Cross and others are also using the platform to map areas hit by catastrophes, with before-and-after views, and to monitor climate change and that kind of thing. Since we're not stitching images together in the background but keeping a database of individual photos, you can actually build timelines and filter, say, by user or by time of day. It looks a bit like this right now.
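Because the platform stores individual photos with their capture timestamps rather than stitched panoramas, building such a timeline or a daylight-only view is just a query over metadata. A minimal sketch, with an invented data layout of (image id, timestamp) pairs, not the actual Mapillary API:

```python
from datetime import datetime

def filter_by_hour(images, start_hour, end_hour):
    """Keep only images captured within a time-of-day window, e.g. to build
    a daylight-only timeline of a place. `images` is a list of
    (image_id, captured_at) pairs with datetime timestamps."""
    return [(img_id, t) for img_id, t in images
            if start_hour <= t.hour < end_hour]
```

The same filtering idea extends to per-user or per-date windows for before-and-after comparisons.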
This is San Francisco, a town very close to where I live. Here the municipality is actually sharing professional street-level data that they acquired from people driving around with measuring devices. That data is very good, and it intersects with user-contributed data that is a bit lower quality, device-wise.
Still, many people use the phone apps, because you have everything in one device. What we need is GPS; the viewing direction can be inferred if you tell us the camera was pointing along the direction of the track, or at a fixed offset, say 90 degrees to the right. And then of course there are professional rigs. We use the focal length of the cameras together with the GPS data to calibrate the camera model, and from that we are able to intersect these images.
You can build quite a lot of interesting things with this. This, for example, is a Ricoh Theta, a 360-degree camera, mounted together with a mobile phone, because these consumer-grade cameras don't have the best resolution yet. They are very convenient, but they have something like 5,000 pixels spread out over 360 degrees, which gives you fairly pixelated images when things are too far away. So we probably need better hardware coming there.
Behind the scenes, the first thing we do when the images come in is try to detect faces and license plates. Right now these are static detectors; we are working on self-learning deep nets to do this. We then blur them in the image. Other things we detect we only blur on the fly, in the viewers and in what you get out; we don't want to destroy original data, but in the face and license-plate case we actually do it at ingestion. We also generate thumbnails in four different sizes to minimize traffic on Amazon, and depending on what you want, you can pull them at up to 2048 pixels. We then do 3D
reconstruction from this. If you look at this transition, it's actually not alpha blending: these are the different textured parts of the reconstruction blending into each other. So this is what it
looks like behind the scenes: a point cloud. I can actually show you one from yesterday. These are the keyframes that went in, and these are the points reconstructed from the overlaps of different images and from the calibration. What we're really building is a global sparse point cloud of the world, textured at the same time. We're now starting to investigate how to import LiDAR data, that is, dense point clouds, and then you would of course get the ability to texture dense point clouds, which is super interesting too. And this is calibrated, so depending on how good the incoming data is, you can actually measure in these point clouds: you can find out how far it is between this point and this point, which is interesting for municipalities, since they can measure roads and tunnels and that kind of stuff. Let me show you one of these examples.
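Because the cloud is metrically calibrated, measuring reduces to plain geometry on the recovered coordinates. A minimal sketch, assuming points picked from the cloud are already in meters (this is an illustration, not Mapillary's measurement tool):

```python
import math

def point_distance(p, q):
    """Euclidean distance between two picked 3-D points; in a metrically
    calibrated point cloud the result is in meters."""
    return math.dist(p, q)

def polyline_length(points):
    """Length measured along a chain of picked points, e.g. the centerline
    of a road or a tunnel."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))
```

For example, a road segment picked as three points can be measured as one polyline rather than two separate distances.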
Yesterday I went out just around here, instead of going to the party. Let me just reload so it gets the right view; this one is a bit older, but the good thing is it has the point cloud view.

So this is one sequence. You can see the actual building, and the trees there being reconstructed, and you can actually walk through it. This is not just a video; this is live on the site, all the time, just so you know.
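The reconstruction shown in this demo rests on repeatedly triangulating features matched between overlapping images. A minimal sketch of the classic midpoint method, assuming camera centers and unit viewing rays are already known; this is a textbook illustration, not Mapillary's actual OpenSfM code:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: given two camera centers (c1, c2) and the
    viewing rays (d1, d2) toward the same matched feature, return the 3-D
    point halfway between the closest points of the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # zero only for parallel rays
    s = (b * e - c * d) / denom  # parameter along ray 1
    t = (a * e - b * d) / denom  # parameter along ray 2
    return (c1 + s * d1 + c2 + t * d2) / 2.0
```

A full structure-from-motion pipeline runs this kind of step for every matched feature across many calibrated images, which is how the sparse cloud accumulates.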
When the next person comes along and overlaps this, the point cloud will get enriched. The big building was kind of smeared out, because we only had one perspective; there's no depth perception from a single pass. As soon as someone else captures it from the side, this will be adjusted to form the actual building. Right now we only have one perspective of it.
What we're also doing now is starting to use deep learning to detect objects in these images. And since we have the depth information, these are not just 2D detections: from these detections, by interpolation and so on, we can partly stabilize the point cloud. We want to take out things that are volatile. For instance, we don't want to match on sky; the sky is segmented here, the blue thing. Sky segmentation has very good accuracy with deep nets, it's very easy to learn, and it's high value for us to take out, because sky creates false overlaps between images. The same goes for cars, people, and other moving objects we can recognize: we want to exclude them from the point matching so the point cloud gets more stable. And of course there are other objects we want to detect: street signs, vegetation, park benches, buildings, what not. Right now the point cloud is sparse, but it is sufficient for street signs: you can't exactly match every traffic light or lamp post, but if you see a traffic sign in three or more images, you can interpolate and put it on the map where it actually is, as opposed to in the image plane. We call that object merging: there is the detection phase, and then the merging phase, where we merge different detections into one object, one three-dimensional thing. This is what we're building right now, and we're rebuilding parts of it because it explodes the database: we have run 20 million images through one detector, with roughly 100 detected shapes per image, which means about 2 billion shapes, so we need to come up with a better storage mechanism. But that's
a big data problem, and those are good problems to have. This is how it looks in the viewer. We do this for images, and people can also upload videos, where it's of course much more effective because you can do object tracking. Here you see the confidence levels for vegetation, for cars, and so on. OK, let's see.
From this, here is a more concrete example of what a municipality wants to do: recognize certain objects, place them into the scene, and have them on the map. That gives them the ability to, for instance,
validate databases, or see when an object was first seen, when it was last seen, whether it's still there, and what condition it is in, because this is ground truth. You can send students or garbage trucks around with cameras and then assess the data you need. It doesn't need to be professional grade; it's enough to get a hint of what is there. It's good enough, and it only gets better the more data comes in, so we start from the low-quality data range and work our way up.
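The merging phase described earlier, combining per-image detections into single map objects, can be sketched as a greedy clustering of the triangulated 3-D positions. The two-meter radius and the data layout are invented for illustration; the production system described in the talk obviously handles this at a very different scale:

```python
import math

def merge_detections(positions, radius=2.0):
    """Greedily merge per-image detections (triangulated 3-D positions of,
    say, one traffic-sign class) into map objects: a detection within
    `radius` meters of an existing object's centroid joins that object,
    otherwise it starts a new one."""
    objects = []  # each: {"points": [...], "centroid": (x, y, z)}
    for p in positions:
        best = None
        for obj in objects:
            if math.dist(p, obj["centroid"]) <= radius:
                best = obj
                break
        if best is None:
            objects.append({"points": [p], "centroid": p})
        else:
            best["points"].append(p)
            pts = best["points"]
            best["centroid"] = tuple(sum(c) / len(pts) for c in zip(*pts))
    return objects
```

Each resulting object carries its supporting detections, which is exactly what a first-seen/last-seen or condition assessment would be computed from.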
So this is what the new website looks like. It's built on an open source project, MapillaryJS, where we open-sourced the whole viewing experience. We're now adding the 3D placement of markers, so we know that this one is farther away than that one, and you can actually see the blurring.
The marker API is not exposed like that yet, but the blurring is: everybody can suggest blurs. For instance, this is a panoramic image, so if I mark here, you see that I'm not marking a rectangle; a rectangle in image coordinates is not a rectangle in space. I could even drag it like this, which would blur the whole lower part of the sphere rather than a rectangle. So it's quite non-trivial to do these kinds of 3D things.
OK, so how do we make this data available? We are open-sourcing as much as we can, or as much as makes sense, of the core code, and for the data we basically have three big outlets.

One is MapillaryJS, the visual part. It uses the APIs, the textures and the actual images to make a Street View-like experience, as you saw, plus more, because you can modify it, measure, and place markers. It's becoming kind of a 3D framework where you can put a Mapbox or Leaflet map, or anything 2D, on top and get 3D in the background.

The second outlet uses the Mapbox vector tile format for almost any data. The big advantage is that vector tiles have bounding boxes built in, so you don't need that as an extra parameter; they are highly optimized, and they are styled on the client. If you then have a 2D map, it consumes the same data, because the 2D and 3D maps inspect the data for different reasons. And the vector data here is created both dynamically and statically: our dynamic APIs, not just the pre-rendered tiles, return the same format, so you can drill down to any data you want, add it to the map, and style it. It's very convenient.

And then there are other APIs, for example for private projects. We provide those for people who don't want their data open, like construction businesses that need private data, along with the normal JSON-based REST API.
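As a concrete illustration of the tile addressing that such vector-tile APIs rely on, here is the standard Web-Mercator slippy-map scheme for finding which tile contains a coordinate at a given zoom. This is the general convention used by Mapbox-style tiles, not a Mapillary-specific API:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Return the (x, y) index of the Web-Mercator tile containing a
    WGS84 coordinate at the given zoom level."""
    n = 2 ** zoom  # tiles per axis at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

A client that already knows which tile it is rendering gets the bounding box for free, which is the advantage mentioned above.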
On the open source side: as I said, the viewer is open source, MapillaryJS, and we're working very actively on it because we use it ourselves pretty intensively. It's also made for embedding and for easily showing off your own footage. We're adding filters to it now, so you can say: I only want the footage from me, or from these three users, or from this period (because otherwise the weather was bad), or for this location, whatever you need. So it's very good for embedding.

The 3D reconstruction from images that you saw is called structure from motion. We have open-sourced this too and are actively working on it. It lets you do city-level 3D reconstruction from imagery, and it's built on top of OpenCV, which is why it's called OpenSfM. And then we are
opening up the data as much as we can and are allowed to. We can't put all the originals online, for privacy reasons, but we can put the blurred originals online, and through the APIs you get the metadata too. We have a special license for OpenStreetMap and other open mapping activities, so they can derive any data they want from the imagery, which is what we want to give back. And the business licenses are compatible with these as well, so all this footage is usable in those activities. So, that's it from me. Any questions?
I could also show you the OpenStreetMap integration, but that's probably beyond the five minutes we have left.

[Moderator] Thank you very much. We are now open for questions.

[Question] Is the semantic segmentation system open source?
Yes, to a big part. It's partly part of OpenSfM, and partly it consists of implementations of the papers that are out there, so much of it is actually research. At the stage we're at right now, we're finding the best models for segmenting the data, and some of the segmentations are done in several steps: you first find areas via, for instance, color gradients, static methods, and then pipe them into another segmenter that does actual deep learning. So there are different stages in there, depending on what you want to do. None of this stuff is secret, but we chain it in particular ways, so the question is how to best open source the plumbing. We recently published our findings in a paper; we actively participate in academic research on this, because it's so bleeding edge. Right now it's mostly academic papers, so look at the papers from our researchers, who regularly present at conferences. There are public segmentation benchmarks now, and we think we're in fourth place according to the state of the art. As time goes on we'll have something we can actually put into code, but right now it's very much in flux.

[Question] As an OpenStreetMap contributor, I'd want to add attributes, say a posted sign or a turn restriction, based on the Mapillary database. Is that a purely manual process, or are you providing tools to facilitate it?

So, we have been thinking about holding tagging data in our own database, and we decided against it, because it's very hard to do right for everybody. What we do instead is that OpenStreetMap has source tags that refer to the UID of every image, and every object has a UID too, so in the future you could even refer to recognized objects and say: this is the basis for tagging this as a lamp post, and so on. We would like there to be links into the database rather than holding all the folksonomy that people come up with; I think that's one of the big problems for OpenStreetMap, holding all the wrongly tagged things, the misspellings and so on. What we will do is enable full-text search on all the comments and everything, so you can search hashtags. Later on you could actually put OpenStreetMap hashtags into a comment and be able to filter out everything that goes that way. So we're kind of taking the Twitter approach, but we're not finished thinking about that.

[Moderator] Thank you. Time for one more short question.
[Question] For a single image, you don't really know the exact direction the image was taken in, do you?

We do know the track from mobile phones, and we normally use the reconstruction to determine the direction better. Also, when you hold the camera steady in one direction relative to the moving direction, that is normally a far better source for the viewing direction than anything else, because compasses drift. Especially in cars, where there's a lot of metal around, a compass is basically useless unless you take special measures. So what I would stress is: lock the camera direction to the travel direction wherever you can. Beyond that, we can, with semantic learning for instance, turn images back when we detect that, OK, the sky is on the lower side of this image, so it's probably rotated, and we rotate it back. We can also say: the compass claims you're looking this way, but the tower is on the other side, so we turn the image into the right direction. But for that we need sufficiently good surrounding data; from just one sequence you can't really tell, because you have no existing model to compare against.

[Question] Is that direction available in the API?

Yes, it's called the compass angle.

[Question] But you don't have pitch and yaw information?

We currently don't store that. In the EXIF there is the direction at capture time, and that's what we use. I think we do estimate the yaw and pitch direction in the reconstruction, so that might become available, but we're not using it right now. That's another thing to improve in the future: to actually have a height model, also for aerial images and so on. We're not going there right now; we have a lot to do with just ground-level imagery. But eventually that will come, along with a concept like OpenStreetMap's levels, where a bridge has two levels, one down there and one up there, so the images don't match. That kind of thing.

[Moderator] OK, thanks again.
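The heuristic from that last answer, trusting the compass only while it roughly agrees with the direction of travel, could be sketched like this. The 30-degree tolerance is an invented example value, and this is an illustration of the idea, not Mapillary's actual pipeline:

```python
import math

def track_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing of travel, in degrees (0 = north,
    clockwise), between two consecutive GPS fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def angle_diff(a, b):
    """Smallest signed difference between two headings, in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def corrected_heading(compass_deg, track_deg, tolerance=30.0):
    """Keep the phone compass heading while it roughly agrees with the
    direction of travel; otherwise fall back to the GPS track bearing,
    since compasses drift badly near large metal masses such as cars."""
    if abs(angle_diff(compass_deg, track_deg)) <= tolerance:
        return compass_deg
    return track_deg
```

A fixed mounting offset (the 90-degrees-to-the-right case mentioned earlier in the talk) would simply be added to the track bearing before the comparison.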