Speeding up the RapiD map editor with WebGL and PixiJS
Formal Metadata
Title | Speeding up the RapiD map editor with WebGL and PixiJS
Number of Parts | 351
License | CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers | 10.5446/69046 (DOI)
Production Year | 2022
FOSS4G Firenze 2022 · 198 / 351
Transcript: English (auto-generated)
00:00
…that works on Rapid. So if you were here for the previous talk, you saw a little bit of our editor. We're gonna introduce you to a few other things that we've been up to lately, namely the v2 version of Rapid that's gonna be coming out later this year. You had a pretty good introduction to what Rapid is in the previous block,
00:22
but for the folks who haven't been here, it's basically the canonical OSM editor known as iD, but with some extra datasets added on top of it. So datasets from all kinds of non-governmental organizations, some government entities, as well as global datasets from Meta.
00:43
Even Google actually submitted 50 million buildings across Africa late last year. And the data is served up and conflated by a Meta-hosted backend, which is part of our value add: we're not just shipping you shapes that already exist on the map. We very conveniently hide those for you so you don't have to be looking at two versions
01:01
of the same building. And we also have the integration with Esri's ArcGIS open data API. So really quickly, this is what you'll get if you look at Rapid, which is pretty much identical to what you get in iD. This is just the OSM data that you see drawn in handy vectors on the screen.
01:20
As you move into Rapid, you will actually get these very nice AI predictions from Microsoft that are pretty high-fidelity and they're getting better all of the time. So if you saw this data maybe a few years ago when it first came out, I think you'll be pleasantly surprised by the level of improvement that Microsoft has put in. And finally, we're also looking at an area where there are Esri address points.
01:42
So right away, if you're a new mapper, this is gonna save you tons of time. There's probably a few hundred vertices in all these buildings that you would have to sort of laboriously add and then square and then add the address points, type a whole bunch of data in, where all of that is pretty much just done for you. And we also have a dataset picker.
02:00
Chris just covered this, so I don't wanna belabor the point, but there's a lot of data in here, I think on the order of hundreds of millions of different points and shapes for you to add to the map and potentially edit. And there's more of these being added all of the time. So what else we have been up to is really the focus of today's talk, and that is something new,
02:21
which is Rapid v2, which Brian and I have been rewriting all of this year to use WebGL instead of our older rendering stack. If you were to, say, use Rapid 1.0 over Florence today, you would have to zoom very far in before you got decent performance.
02:41
Right now, if you were at zoom 16, you're looking at like two frames a second as you pan the map around. So we have a little bit of a demo here. On the right, you see Old Rapid just clicking the map and attempting to pan. And this is not a particularly busy scene, but it still takes two frames a second. After a few performance improvements
03:02
and converting the engine over to WebGL, we get some pretty substantive performance gains and we aren't done yet. So we're pretty happy about this. So I'll hand things over to my colleague and he'll take you through how we did all this. All right, thank you. All right, so like the slide says, right, Rapid is slow.
03:26
We know that there are faster technologies for rendering complex scenes in the web browser. And this is something that we've kind of wanted to improve for a long time. So last year, we really started exploring this idea of replacing the renderer with something better.
03:40
So we looked into a whole bunch of WebGL rendering frameworks and we settled on one that's called Pixy.js. Pixy is a popular open source framework, which is really geared towards doing 2D graphics and video games in the browser. So it's kind of a replacement for Flash or SVG, which makes it a really good fit for what we're doing.
04:00
And I gotta say, it's been a real joy to work with. So let's start, we'll talk about the old code. This is the code that we replaced. We used D3.js, which I think a lot of you know, but anyway, D3 is a visualization engine, but it's very dependent on binding your data to the browser's DOM.
04:20
The DOM, when we say DOM, that is the document object model, which is kind of how the browser organizes everything that it's showing to you. So this slide shows sort of the classic D3 example, where you've got some data up top, like a list of cities, and then your D3 code will turn that into an SVG document, which has some rectangles,
04:40
and then you see it in the browser as a chart. And D3 is also really great about handling if your data changes, so it can be very dynamic. You change the data, and then the rectangles change. It's a really powerful visualization framework. And it works for maps too, like a lot of you know.
05:00
This is traditionally how Rapid, and iD before it, did all of its rendering, so you start with some data, like the nodes and ways that we would receive from the OpenStreetMap API, or maybe some GeoJSON that contains building footprints. And then your D3 code is going to run through it and turn it all into paths, like it does here, and you see it in the browser as a map.
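To make that concrete, here is a minimal sketch of that old D3-to-SVG approach. It assumes a features array of GeoJSON footprints, a d3 projection, and an app-level select() handler, all of which are illustrative names rather than Rapid's actual code:

import * as d3 from 'd3';

// D3 binds each GeoJSON feature to an SVG <path> element in the DOM.
// Assumed inputs (illustrative, not Rapid's real code): features (GeoJSON
// footprints), projection (a d3 geo projection), select() (an app handler).
const path = d3.geoPath(projection);

d3.select('svg')
  .selectAll('path.building')
  .data(features, d => d.id)            // key by feature id so updates stay stable
  .join('path')
  .attr('class', 'building')
  .attr('d', path)                      // every feature becomes DOM plus CSS-styled geometry
  .on('click', (event, d) => select(d));

Every one of those path elements lives in the DOM, which is exactly what gets expensive as the data grows.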
05:23
And you can do things with this SVG, like you can style them magenta like we did here. You can listen for events when the user hovers over the buildings or clicks on them, and respond to them just like normal browser events. But it starts to break down performance-wise, as we've seen,
05:43
once you push more and more data into it. So the DOM API, this is again how the browser manages everything that it shows you, can only handle so much, and it's notoriously slow to be changing the content of a page while you're on it. So you generally don't want a web application
06:01
to be changing the DOM if you can help it. Because when your application changes what the browser is showing, the browser has to recalculate everything like where it is and what it's styled like, and that can be very time consuming. So we're not gonna do that anymore. This is what our new rendering pipeline in Rapid looks like.
06:22
Our familiar map data still flows into it from up top, but now we're generating a Pixi scene graph in memory. Pixi then turns all of that into WebGL draw calls. And it's actually very, very good at what it does. For example, it will actually batch together similar things.
06:40
So like all these buildings that are magenta, it's not gonna draw them one by one. What Pixi will do is it'll just load up a buffer with pixel data, and it'll just say, hey WebGL, set the color to magenta and just draw whatever's in the buffer. So you end up with a very efficient pipeline, and it gets drawn onto a very efficient WebGL canvas instead of into one of those chunky SVG documents
07:02
like we were doing before.
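To give a rough idea of what the Pixi side of this pipeline looks like, here is a simplified sketch, not Rapid's actual code; buildings is assumed to be an array of polygons already projected to screen space:

import * as PIXI from 'pixi.js';

// One WebGL canvas for the whole map, instead of thousands of SVG elements.
const app = new PIXI.Application({ width: 800, height: 600, antialias: true });
document.body.appendChild(app.view);

const buildingLayer = new PIXI.Container();
app.stage.addChild(buildingLayer);

// buildings is assumed to be polygons already projected to screen coordinates.
for (const building of buildings) {
  const g = new PIXI.Graphics();
  g.beginFill(0xff26d4);                    // magenta fill; similar fills batch together
  g.drawPolygon(building.flatCoordinates);  // flat array: [x0, y0, x1, y1, ...]
  g.endFill();
  buildingLayer.addChild(g);
}
// Pixi flushes the container as a small number of batched WebGL draw calls.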
07:21
As we've refined the renderer over the past few months, I put up some words here. This is work that's still ongoing. We realized that it made sense to really think about these two steps in the pipeline as discrete pieces, and so that's why the slide says app-cull-draw. If you're familiar with any really old-school renderers like Panda3D or even SGI's Performer, you might know these terms. So app is work that we do to prepare what belongs in the scene, cull is when you're removing things that have gone out of view and the user just can't see them, and draw is the conversion or translation
07:42
of that scene into WebGL. So the idea is that we've set up a game loop in the browser, almost like a video game, where we have a requestAnimationFrame callback. That means that on every frame of browser time, our code gets called and it has an opportunity
08:00
to do the next tiny little chunk of work so that we're keeping the browser happy and not overloading it. It can still be interactive and it doesn't drop frames like it used to. This is a side-by-side comparison. Again, the difference in performance has been just really impressive, especially as we work with scenes
08:21
that have added more and more data to them. So in most cases, we're getting around a 10 times improvement in speed, give or take. Once you get past a certain point, obviously Chrome is not gonna go any faster than 60 frames per second, so you can only speed it up so much. But even basic editing tasks like this
08:41
where you're drawing points and lines, it becomes so much more responsive than it used to be. So just to kind of like demonstrate how much better the new rendering pipeline is, this is kind of a really busy scene. Somebody on Slack a couple months ago shared this. It's someplace like near Miami, Florida,
09:00
but it has like 45,000 OSM features in one little view. People are mapping all the buildings, all the roads, all the parking spaces and trees, right? And look, OSM has really grown up a lot, and so some parts of the world have become really micromapped at a level of detail that was really unthinkable
09:20
even when ID launched 10 years ago. And with Rapid, it's not just about the OpenStreetMap data, but we wanna layer more data on top of this data, right? We wanna layer on the AI-detected buildings, the roads, you know, whatever other data that we get through our partnership with Esri, whatever other QA layers that people wanna see,
09:41
or even OSM notes, and you know, on top of that Mapillary imagery, Street View, and so it's just a lot of data when it all comes together. So here's that place from the previous slide. This is what it was performing like in Rapid version one. I actually did have to zoom in a little bit
10:00
just to have something happen, otherwise it would just be completely frozen. This is zoom 18. But I guess we're getting around four frames per second here, give or take, with occasional pauses where it just stops. And this is what it looks like in the new Rapid. It just slides around very smoothly. We're getting around, like Chrome is reporting
10:21
around 35 to 40 frames per second. So that's about that 10x improvement, right, going from four to 40. The frame rate that's being displayed up in the corner is Chrome's frame rate, and our code actually uses a few tricks to render less frequently than that. So we're gonna get into those details.
10:41
This is a little bit detailed, but what we're seeing here is called a flame chart. I ran both of these scenes in Chrome's profiler so you could really see exactly where it's spending its time. And I zoomed in, so what we're looking at right here is just one second's worth of data. So we've got two frames, right, that's not very impressive,
11:02
and most of that work here is being done just updating the DOM and cleaning up memory, because those DOM elements really get churned up a whole lot. And then up in the corner, if you see that purple, it says Recalculate Style. That's Chrome trying to run all of our thousands
11:21
of CSS rules against everything in the scene just to get the map styled correctly, just to decide the widths of the lines, the colors, things like that. So this is the profiling trace of the new WebGL renderer. We're getting around, like I said, 40 frames per second. And remember when I said we use some tricks to actually render less frequently than that?
11:40
It's kind of obvious on this chart. You'll see there are stacks where the work is being done. Those are our app and cull passes over the scene graph. And all those tiny little slivers in between are where we let the user pan the map, but we defer the redrawing until one of these other times.
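A simplified sketch of that loop might look something like this; the appPass, cullPass, and drawPass functions are placeholders for the real work, and the actual scheduling in Rapid is more involved:

// Run app and cull work every browser frame, but only redraw when something
// changed and enough time has passed since the last draw.
let dirty = true;
let lastDraw = 0;
const DRAW_INTERVAL = 1000 / 30;      // cap our own redraws at roughly 30fps

function tick(timestamp) {
  appPass();                          // decide what belongs in the scene
  cullPass();                         // drop things that have scrolled out of view

  if (dirty && timestamp - lastDraw >= DRAW_INTERVAL) {
    drawPass();                       // translate the scene graph into WebGL
    lastDraw = timestamp;
    dirty = false;
  }
  requestAnimationFrame(tick);        // stay responsive between draws
}
requestAnimationFrame(tick);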
12:02
It's also really interesting that you'll see there's a lot of space where our app is doing nothing, so Chrome is almost idle. That leaves more space for us to perform other tasks in the browser like validation or user interface code.
12:21
So we're really only spending a few milliseconds of time to set the graph up. And then at the bottom, those green bars, you can see that's the GPU doing its job. And this scene, we're actually kind of GPU limited, which is a thing that has never happened to us before, but it's exciting because that means that now
12:40
there are other ways that we can speed the scene up even more than what I just showed you, either by just drawing the data a little bit smarter or maybe filtering some of it out to draw less data. So in summary, this new renderer really represents a change in how we think about data in Rapid. Before where the DOM and the styling
13:01
were really locked together, now that's no longer true. There's no SVG, there's no CSS, and the drawing of the scene happens much, much faster. And we have more control over when these things happen, like when we draw and when we free memory. It's just better all around. So thanks to this new renderer,
13:21
the rewrite also gives us opportunities to improve some things in Rapid that I was never really happy with. So we're gonna talk about those now. We took a fresh look at label placement. Before, the labels really had to be placed exactly next to the pin, but now we try a whole bunch of placements
13:40
around the label pin. So this is especially great for situations like this where they're all lined up. It will actually stagger the labels so that you'll see more of them. Here's another, this is kind of a before and after. It shows like before, you only get a few labels, but now you get a whole lot more labels. And this is really useful if you're mapping POIs, right?
14:03
Like businesses and shops. And we'd really like to see more people mapping more of this stuff. So it improves the user experience a whole lot. We also improved the line labeling algorithm. The old line labeling algorithm was kind of limited.
14:20
It would just pick a few spots and you would either get a label or not. And now Rapid can test a whole lot more spots along the line and even place multiple labels on a single line, like if it stretches out very far. So this image is kind of like a debugger's view of that process. Here, the red dots are places where a label would just cover something.
14:40
So we don't wanna put a label there. Yellow are places where there's just not enough width for the label that we wanna place. Magenta are places where we think the line might be a little too bendy to make the label look right. So we avoid those. And then green are the spots where a label would look fine.
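In code terms, the placement test sketched by those colors might look roughly like this; candidateSpots, coversSomething, segmentLength, bendiness, and MAX_BEND are all illustrative stand-ins, not Rapid's actual algorithm:

// Walk candidate positions along a line and keep the ones where a label fits.
function placeLineLabels(line, labelWidth) {
  const placements = [];
  for (const spot of candidateSpots(line)) {
    if (coversSomething(spot, labelWidth)) continue;   // "red": would hide map data
    if (segmentLength(spot) < labelWidth) continue;    // "yellow": not enough room
    if (bendiness(spot) > MAX_BEND) continue;          // "magenta": line too bendy
    placements.push(spot);                             // "green": a label looks fine here
  }
  return placements;
}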
15:01
So the great thing about all this labeling work that I just talked about is that now Rapid can put labels on other things too. We could never do this before; we were just limited to labeling OSM features. This is a screenshot showing one of the Esri building layers and one of the Esri address layers, and we're actually putting labels on all of the address points
15:21
so you can see what they are. And this is great too, because all of those labels are placed coherently, right? They avoid the labels from OpenStreetMap features and everything just kind of fits in the scene really well. Okay. This next slide shows another cool little performance trick
15:41
that we are doing. As we zoom out, we can actually replace building shapes with sort of these standalone, like little rectangles. In this animation, I had to turn them blue so that you could actually see what's happening. But in the real Rapid, they stay red, so the user doesn't really notice. But it's cool because Pixi can draw
16:01
a whole lot of textured sprites nearly for free. So this is a good way of not overloading the GPU as you zoom out, because it can just draw hundreds of thousands of squares almost in an instant.
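Here is a hedged sketch of that trick using Pixi sprite tinting, so every box shares one base texture and batches into very few draw calls; LOW_ZOOM_THRESHOLD and buildingGraphics are illustrative, not Rapid's actual code:

import * as PIXI from 'pixi.js';

// Below some zoom threshold, swap each detailed building polygon for a tiny
// tinted sprite. All sprites share PIXI.Texture.WHITE, so they batch extremely well.
function buildingDisplayObject(building, zoom) {
  if (zoom < LOW_ZOOM_THRESHOLD) {
    const sprite = new PIXI.Sprite(PIXI.Texture.WHITE);
    sprite.tint = 0xff4040;                    // stays red in the real editor
    sprite.width = 4;
    sprite.height = 4;
    sprite.position.copyFrom(building.centroid);
    return sprite;
  }
  return buildingGraphics(building);           // the full polygon at higher zooms
}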
16:21
Speaking of zooming out, a lot of us know the Web Mercator projection sort of stretches everything out at the poles. So this idea of what a low zoom is can be kind of relative. Going forward, we're gonna be making more of our rendering decisions based on a thing we're calling effective zoom. That's kind of like asking what the zoom would be for this feature if we were at the equator. And I'll give you an example here.
16:41
So this is a building in Ghana and a building in Norway. They're actually the same size building, but because of the stretchiness of Mercator, the one in Norway looks a whole lot bigger. So it'd be like, if you were editing in Norway, zoom 14 might be like what zoom 16 is closer to the equator.
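One way to express that idea, as a best guess at the math rather than Rapid's exact formula, is to correct the nominal zoom by the Mercator scale factor at the feature's latitude:

// Web Mercator stretches the map by 1/cos(latitude), so a feature at high
// latitude looks more zoomed in than the nominal zoom level suggests.
function effectiveZoom(nominalZoom, latitudeDeg) {
  const latRad = latitudeDeg * Math.PI / 180;
  return nominalZoom - Math.log2(Math.cos(latRad));
}

effectiveZoom(14, 0);    // 14 at the equator (Ghana)
effectiveZoom(14, 69);   // about 15.5 in northern Norway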
17:01
It was always kind of unfair that if you've used iD or Rapid before, you might hit a point where it says zoom in to edit and all the data is gone. Going forward, we're not gonna do that anymore. Here we're actually showing editable features at zoom 15, which is a thing we couldn't do before.
17:21
We're still rethinking what it actually means to edit at different zooms, using this idea of an effective zoom to guide those decisions. So users might need to zoom in to actually see more detail or to specifically work with the features that have been simplified. But we wanna move away from this idea that all editing has to happen only at 16 and above.
17:43
And finally, we feel like we're really just getting started, right? This is a little silly video of something I made last week. We hacked it together: it's a WebGL shader to just kind of make the buildings look all squishy. We're probably not really gonna do this unless we do like a Halloween release or something.
18:03
But the point of this is that we have new opportunities to style the map in ways that we couldn't before, either to draw attention to certain features or just make them stand out or look special. So we're excited to play with this. All right, so I'm gonna turn it back to Ben
18:21
who's gonna talk about some of the challenges we faced with this rewrite. So really quickly, we'll go over some challenges we had while implementing all this stuff and then a short call to action. So one of the things in computer science that you do not like to be told as a programmer is that you might have to work with an infinitely large dataset. That means you have to worry
18:42
about how many textures you're using and your memory usage. So we had to spend considerable time figuring out how to use texture atlases so that we weren't literally overflowing the GPU with texture references. Also, Brian touched on this a bit: now that we have a faster renderer,
19:02
we can't just render everything at every zoom because we're just going to be burying the user in data. One of the first things I did when we got this renderer stood up was zoom over to Manhattan, where we have way too many features. And if we just label everything, you get the following picture, and obviously we can't do that, right?
19:20
So we have to maybe work more closely with some cartographers to figure out what we show at given zooms. Another challenge is just WebGL itself. We're not entirely sure that support is even across every platform. There was a horrible bug in Safari a few months ago that caused Rapid to crawl at like one frame per second at most, but other browsers and other OSes worked fine.
19:44
Finally, we would also like to be able to just ask the browser to do things later. We can do that with requestIdleCallback, to say: maybe load some tile data, but don't do it right now, wait until you have an idle frame. It's great for performance because it doesn't stop your rendering, but it also makes the code more complex.
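A minimal sketch of that pattern with the browser's requestIdleCallback API; pendingTiles and loadTileData are illustrative names:

// Queue low-priority work and only run it while the browser is otherwise idle,
// with a timeout so it eventually runs even on a busy page.
function scheduleIdleWork(task) {
  requestIdleCallback((deadline) => {
    while (deadline.timeRemaining() > 5 && pendingTiles.length) {
      task(pendingTiles.shift());    // e.g. parse or cache one tile of data
    }
    if (pendingTiles.length) scheduleIdleWork(task);   // continue next idle period
  }, { timeout: 1000 });
}

scheduleIdleWork(loadTileData);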
20:03
So very quickly, next steps. We have an alpha test. This just got released a couple of days ago. You can go and use the alpha. You can use an improved bug nub in the lower right where we have a nice bug ingestion form up on GitHub and try it out. Let us know what you think of it. Let us know if anything breaks.
20:20
As this is an alpha, we do expect there to be bugs and we are hoping that you will help us find them. A few places where you can come find us: we're in the OSMUS Slack, and you can find us at Map with AI on Twitter or on our Facebook group. And finally, we also have birds of a feather sessions, two of them going on over in room 12 now. So if you have questions about Mapillary,
20:41
about Daylight or about Rapid, come to room 12. We'll be happy to chat more with you. Thanks.