
Fast-Track Ember with Fastboot + Embroider


Formal Metadata

Title
Fast-Track Ember with Fastboot + Embroider
Series Title
Number of Parts
23
Author
License
CC Attribution 3.0 Unported:
You may use, adapt, and copy, distribute, and make the work or its content publicly available in unchanged or adapted form for any legal purpose, provided you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
Ember Octane has clearly introduced a paradigm shift in the Ember community and has contributed towards boosting the user experience. But what is next? Can we improve the user experience even further? Yes indeed! Featuring Fastboot + Embroider :)) This talk will demonstrate the possibilities that Server Side Rendering and Code Splitting unlock together for improving the user experience. In the end, you will have a good grasp of the concepts as well as the tools and techniques required to build Fastboot + Embroider powered Ember apps.
Transcript: English (automatically generated)
Hello, everyone, and welcome to our talk,
Fast-Track Ember with FastBoot and Embroider. I'm Suchita. Hey there, I'm Thomas. So first, let us do a little bit of an introduction of ourselves. My name is Thomas. I work on the web framework team at LinkedIn. And during my work, I interact a lot with Ember,
and I also enjoy contributing to the Ember community. Outside my coding life, I enjoy making special drinks at home, and I'm also practicing latte art. I also love playing games. My all-time favorite is the game Transistor, and I'm currently playing Final Fantasy VII Remake and have been loving it.
And next I'll hand it over to Suchita. Oh, thank you so much, Thomas. I'm definitely going to ping you after this for the latte art. And hi everyone, I'm Suchita. I'm a senior engineer at LinkedIn, and I'm also working with the web infrastructure team, like Thomas. And outside of work, I'm a core member of the FastBoot working group.
And in my free time, I love playing cricket. It's a game very similar to baseball and super popular in India. And I'm a huge mechanical keyboard fanatic. Here is my latest creation, where I swapped the Gateron Brown switches for Holy Pandas and updated it with new keycaps.
And also during the pandemic, since we were all working from home, I uncovered a newfound passion, which is gardening. Let me share a few creations of mine from last fall. I sliced up a tomato and planted it, and on day 67, I started seeing the first tomato blooms showing up,
and the rest is history. We had so many tomatoes that they ended up in almost all of our recipes. I also grew some Thai chilies last fall. So this is about us, what we do, and what we do in our free time. So now let's get back to our main topic, which is fast-tracking Ember with FastBoot and Embroider.
So last year, I presented the journey of Ember from 1.x to Octane, where we saw the paradigm shift that Octane introduced in the Ember community. This year, we will take you through the story of a team that migrated to Octane and is now looking to further improve their app
in various aspects. We will first get to know the team and their requirements. We will then move on to understand how FastBoot powers the server-side rendering experience for the users. Then we will talk about how we can optimize at build time with Embroider to further improve the user experience,
and also the developer experience. And last, we'll talk a little bit about what is on the roadmap for FastBoot and Ember. All right, Suchita, should we get started? For sure. So now let's begin the story. Zoe is an Ember developer who is super enthusiastic about Ember
and is leading the web team at an awesome company that uses Ember.js as its framework. The current status of this team is that they recently migrated to Ember Octane and are super happy with it. However, soon after, they got a few new requirements, which is very normal in our industry, right? Once we finish one feature requirement, we get a new one.
So that's what happened with these people and let's see what were the requirements. So first requirement was to deliver faster content to the page. The next requirement is to improve the user engagement, especially for the countries with slower networks. The last requirement is to improve the SEO
or the search indexing factor for a specific route. So looking at these requirements, the first thing that comes into our mind is server-side rendering. So what is server-side rendering? To give you a brief overview, let's take a step back and see how client-side rendering works.
So when a client makes a request or when a browser makes a request for a page to the server the server returns the HTML shell along with the JavaScript and CSS tags. Like you can see here, the body of the HTML returned by the server is empty. And once the JavaScript executes on the client-side,
it renders the HTML on the client. On the other side with SSR, when a user makes a request for a page to the server, the server takes in the request, runs the app in the server, builds the HTML and returns the HTML to the client as a by-product. Hence, instead of an empty HTML shell or empty body shell,
you will see a page source filled fully with statically rendered HTML. In AMBER, this is possible with a library known as fastboot. So fastboot has been there for a while and most of us know that it provides an SSR solution for AMBER apps.
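To make the difference concrete, here is a minimal sketch contrasting what the browser receives in each mode. The markup is illustrative only, not FastBoot's actual output:

```javascript
// Client-side rendering: the server ships an empty shell;
// nothing is visible until app.js downloads and executes.
const clientSideRendered = `
<html><head><script src="app.js"></script></head>
<body><!-- empty until app.js runs in the browser --></body></html>`;

// Server-side rendering: the body arrives already filled with HTML,
// so the browser can paint before any JavaScript runs.
const serverSideRendered = `
<html><head><script src="app.js" defer></script></head>
<body><h1>Latest Articles</h1><ul><li>Hello Ember</li></ul></body></html>`;

console.log(serverSideRendered.includes('<h1>')); // true: content is in the payload
console.log(clientSideRendered.includes('<h1>')); // false: content comes later, via JS
```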
But today, let me showcase what makes FastBoot so special and fast. You can imagine the Ember application container as a stateless container that does not contain any user-specific data. Instead, it only contains the parsed classes,
components, routes, registry information, et cetera of your app. When a user request comes in, it is associated with an application instance. So what is an application instance? You can think of an application instance as a stateful container, as opposed to the stateless one, containing information like user data,
services, models, et cetera, which is specific to the request that came in. In the meanwhile, if there are other requests coming in, then instead of blocking the entire thread serving the initial request, we put this app instance in the background using async I/O and spin up new app instances
from the same app container that you see in the background. This is huge. Since we are cutting down the cost of re-evaluating the whole app for every request, we are just reusing the same application container. And that is how we can serve multiple requests at the same time, as different app instances, concurrently.
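Here is a minimal plain-JavaScript sketch of this shared-container, per-request-instance model. It is not FastBoot's real implementation; it only illustrates why request state must live on the instance rather than in shared scope:

```javascript
// Module scope stands in for the shared application container:
// every app instance sees this same binding -- it leaks between requests.
let currentUser = null;

class AppInstance {
  constructor() {
    this.user = null; // instance scope: isolated per request -- safe
  }
  handleRequest(user) {
    currentUser = user; // BAD: visible to every concurrent request
    this.user = user;   // GOOD: scoped to this instance only
  }
}

// Two concurrent requests, each with its own instance:
const a = new AppInstance();
const b = new AppInstance();
a.handleRequest('alice');
b.handleRequest('bob');

console.log(currentUser); // 'bob' -- alice's request now sees bob's data
console.log(a.user);      // 'alice' -- instance state stayed isolated
```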
And once a request has finished being served, we destroy the app instance and its state. One thing to note here is that because these app instances run inside a sandbox, we prevent state leakage between app instances, which is great. But with that being said, one thing to be mindful about
is that since the application instances share the same container, we should avoid putting anything on the global scope or on the class scope, because otherwise it might end up leaking between the instances. So that's one thing we should be careful about. So now that we understand the mental model
of how FastBoot works, what its benefits are, and how fast it is, let us see how we can measure the success of leveraging FastBoot in our app. Google recently announced a few Core Web Vitals metrics that provide guidance on determining the quality of user experience on the web, where LCP, the largest contentful paint,
is used for measuring loading performance, FID, the first input delay, is used for measuring interactivity, and CLS, the cumulative layout shift, is used for measuring visual stability. For the purposes of this talk, we will be focusing on the LCP metric, since that's where FastBoot really helps.
So what exactly does LCP measure? It basically measures how much time it takes before the user sees the most important content on their screen, the hero element of the screen, so to say. Anything that is not in the viewport is not part of the calculation. If you look carefully at the scale on the left
that rates this metric, anything above four seconds is considered poor. Now, just imagine a user having to stare at a blank screen or a loading screen for more than four seconds. It's certainly not ideal and not a good user experience, and it might well lead to the user abandoning the page.
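Google's published thresholds for LCP (good at 2.5 seconds or under, poor above 4 seconds) can be captured in a tiny helper, applied here to the two measurements from the demo later in this talk:

```javascript
// Rate an LCP measurement against Google's Core Web Vitals thresholds:
// good <= 2.5s, needs improvement <= 4.0s, poor above that.
function rateLCP(seconds) {
  if (seconds <= 2.5) return 'good';
  if (seconds <= 4.0) return 'needs improvement';
  return 'poor';
}

console.log(rateLCP(2.9)); // 'needs improvement' -- the with-FastBoot demo
console.log(rateLCP(9.5)); // 'poor'              -- the without-FastBoot demo
```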
But if we could improve the time it takes to paint the content, then we can certainly improve the user experience and give users faster feedback. So now let's see the role that FastBoot plays in improving the LCP metric, and connect these two aspects together with a side-by-side comparison of a world before FastBoot,
which you can see on the left, and after FastBoot, which you can see on the right. Here, for demonstration purposes, both of these apps are simulated on a slower network, just to show you the impact. So let's see what happens when the user hits the same route at the same time
before and after FastBoot. You can clearly see on the right that the FastBoot-enabled app is already showing the content, whereas we are still waiting for the non-FastBoot app. Now it is showing up. The LCP measurement for the after-FastBoot case was 2.9 seconds,
whereas for the non-FastBoot case it is 9.5 seconds. This clearly shows the value that FastBoot brings to the table. It would not only give users a better experience, but would also help retain users who would otherwise have left the page after seeing a blank screen.
So now let us understand why we saw such a shift in the time taken to load the initial page before and after FastBoot. To do that, let's look at the timeline of the before-FastBoot world first. Before FastBoot, the browser first waits for the HTML and CSS required for the page to load. It then loads the JavaScript required for the page.
And then it makes any required API requests, here the articles call, as you can see in the example, to do the first page load. And that's where you see the LCP, or the first content showing up on the screen. With FastBoot, however, the timeline is a little bit different.
Here we can see that as soon as the HTML and CSS are loaded and the browser has finished parsing, the content immediately shows up on the screen, and the LCP is marked. This happens while the JavaScript files are still being loaded. This means we are not blocking the render until the JavaScript loads,
but rather showing, or flushing out, the content as soon as it is available. Now that we know how beneficial FastBoot is, I would specifically like to highlight one aspect of FastBoot which helped very much with this demonstration.
To do that, first let's look at the before-FastBoot case. If you look carefully, we are making two XHR calls, namely articles and tags, where the articles call is required to display the list that you can see on the left, and the tags call is required to show some data at the bottom of the screen. This is normal, right? You make API calls
and then render their responses on the screen. That's the usual scenario. After FastBoot, however, I'll show you something different. Here you can see that, surprisingly, we are not making the articles call, but you're still seeing the data on the screen. And just so you know, for the purposes of this demo, we deliberately left the tags call lazy-loaded,
just to show that FastBoot gives developers a choice of whether they want an API call to be made on the server side, or whether they want to lazy-load that network call on the client side only. So, coming back to our example, we can still see the articles list showing up
without making the XHR call. What is making this happen? That's where the concept of the shoebox comes into the picture. Let me walk you through how the shoebox works, and then I'll show you an illustration of what it looks like. When a user makes an HTTP request,
FastBoot receives the request, runs the Ember app on its side, and begins the process of serving the request. In the meanwhile, if the app has any API calls that need to be made for serving the initial page, which is articles in our example, then FastBoot will make that call to the data center, and the data center will respond with the data.
And here is where the real magic happens. FastBoot will also store this API response in something called the shoebox. Now on the client side, when the app loads again and the time comes to make the API call for articles again, instead of making the XHR call,
the client will query the shoebox instead and say, hey, shoebox, do you have a matching response for this particular API request? And if the shoebox has a matching response, it will just return it then and there. You can see the clear advantage here: instead of making an extra network call,
we are eliminating it. Not only that, we are also responding with the data almost immediately, so the render time also improves. So now, where do we store this information, and where does the client get it from? We basically embed the API response inside the index.html itself
so that the client can query it later. It is stored inside a script tag named shoebox, which you can see here on the screen. So this was a little bit about what the shoebox is and how it works in a nutshell. All of this is great, right? We saw all the awesomeness that FastBoot has to offer, but now we come to the main question.
How hard is it to adopt FastBoot, or all of the concepts that I just showed you? Do we need to make a lot of changes? Well, good for you, it's super simple. All you need to do is run one simple command, ember install ember-cli-fastboot, and boom, your app is FastBoot-enabled.
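As a quick reference, the setup steps mentioned in this talk look roughly like this. The package names and the environment variable are as given by the speakers; check each project's README for current instructions:

```shell
# Enable server-side rendering with FastBoot
ember install ember-cli-fastboot

# Opt in to the experimental rehydration mode for this serve
EXPERIMENTAL_RENDER_MODE_SERIALIZE=true ember serve

# Add the shoebox wiring via ember-data-storefront
ember install ember-data-storefront
```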
We also have an opt-in feature for smoother rehydration on the client side. What I mean is, currently, when the server side responds with the HTML, Glimmer VM will re-render the whole DOM tree on the client side again. However, if you enable the experimental rehydration feature
by setting EXPERIMENTAL_RENDER_MODE_SERIALIZE to true, then Glimmer will only update the parts of the HTML that need correction on the client side, which means things will be faster if you use this experimental feature. Now, to leverage the shoebox, you can add a package called ember-data-storefront and extend it in your application's adapter layer,
which will do all the wiring required for your app to be shoebox-ready. So all of this is awesome. Now let's see whether we were able to address the use cases, or the requirements, that the team had. Let's do a quick recap. The first requirement was that
they needed faster content delivered to the end user, and by improving the LCP factor, we were able to achieve that. So that's great. Next, by displaying the content sooner on the page, we saw better engagement from the users due to early rendering, which addresses the next requirement. And finally, since SEO is supported
by FastBoot inherently, the search indexing requirement was fulfilled as well. So now everyone is happy. A few moments later... Surprise, new requirements come in. This time, the good news is that the company and the product are doing great.
And now we're going to build more features and more pages. But the problem for the team is that building more pages and writing more code means more code going into the JavaScript bundle, which is automatically delivered to every user's browser. So the time to fully boot up the client-side Ember app
will increase. So the team is wondering: can we ship only the JavaScript needed for the initial page that the user is requesting? That way, when we are working on new pages, we won't negatively impact the performance of our homepage. And there is an answer from the Ember community,
which is to optimize the build of the Ember app. So let's introduce Embroider, a modern build system for Ember.js applications. There are a lot of optimizations Embroider provides for your Ember app. Let's talk about some of the features. The first thing I want to highlight is code splitting.
Traditionally, what the Ember CLI build pipeline does is look at all the Ember code in your application and all your dependencies, transpile that code, and ultimately build it into two separate JavaScript files: one for your own application and one for all your dependencies
combined together. What Embroider does is take advantage of the ES module syntax long adopted by the Ember community, which is the import and export syntax you have in your components, helpers, and all the other files. It runs a static analysis
over all those modules and figures out exactly what code is needed for a specific route. For example, for our homepage, we have the application route, the index route, and the services and templates for them. And an article route, which the user won't necessarily go into
on the initial visit, can be put in a separate chunk, and all the components used only by the article route will go together into that chunk. This time, when the user visits the initial homepage, we won't be delivering any of the code for the article route to the end user.
And you can also choose to include any route in the initial bundle if that route is important and the user might go to it immediately. So the benefit of route-based code splitting is that we can deliver a smaller initial JavaScript asset to the end user, and when we are working on another route,
we don't have to worry about impacting the performance of the homepage. The other thing I want to call out is tree shaking. Imagine you are using an Ember addon which provides a lot of useful helpers or components for your Ember app. The problem is that not everyone is going to use every piece of functionality provided by the addon.
If you are only using two of the helpers provided by the addon, traditionally Ember CLI will bundle all the helpers into a single JavaScript asset, most of which you don't need. What Embroider does is analyze the imports to see exactly which methods,
helpers, or components you're using, package only those into the final chunk, and deliver only those to your end user. This is great. So now you may be wondering, how would I use Embroider? Let's see how you can enable Embroider in your Ember app. The first thing you need to do is, of course, install the packages with Yarn or npm.
Then you can start leveraging Embroider in your ember-cli-build.js, which is where the build for your Ember app is configured. What Embroider does is run a multi-stage build. The first two stages make sure all your Ember addons are compatible
with the new build pipeline, and compile away all the Ember-specific code. Then you can really start leveraging all the benefits from the wider JavaScript community. In this case, we're using the webpack bundler to bundle all our JavaScript asset files. You can use the options provided by Embroider to progressively opt in
to all the optimizations Embroider provides. You can turn on the staticAddonTrees, staticHelpers, and staticComponents options to tell Embroider that you are not using dynamic component syntax, so Embroider can safely perform the static analysis and build out all the chunks for your files.
And then you can start enjoying route-based code splitting by using the Embroider router. What it does is read which routes you want to split away, and then, when the Ember router is running and enters a new route, it will fetch those routes from the server.
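Pieced together, a minimal setup along these lines might look like the sketch below. The option names are taken from Embroider's documentation, but treat them as assumptions to verify against the current docs; the `article` route is a placeholder:

```javascript
// ember-cli-build.js -- sketch of an Embroider build with route splitting
const EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function (defaults) {
  const app = new EmberApp(defaults, {});
  const { Webpack } = require('@embroider/webpack');
  return require('@embroider/compat').compatBuild(app, Webpack, {
    staticAddonTrees: true,     // resolve addon code statically
    staticHelpers: true,        // no dynamic helper lookups
    staticComponents: true,     // no dynamic {{component}} lookups
    splitAtRoutes: ['article'], // ship this route as a separate chunk
  });
};
```

In the app itself, the router then extends `@embroider/router` instead of Ember's built-in router, so the split routes are fetched lazily when entered.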
You can choose which routes to split and which not to split, so you have full control. Enabling Embroider also gives you the ability to use dynamic import. Compared to static imports, which are the import statements you write at the very top of a JavaScript file,
you can call import() as a function that returns a promise anywhere in your components or any other JavaScript code. Once the promise resolves, you get access to the module you want to use. So at runtime, you have full control over the exact timing at which you import that module,
fetch its JavaScript file from your server, and start using it. Now you might ask, can I use this today for my Ember app? Will it break anything? The answer is: yes, you can start using it. We have been using it at LinkedIn for a while, and we can share some of the benefits we are seeing
from enabling Embroider on a large LinkedIn application. We are really seeing Embroider bring faster speed to our end users. With Embroider enabled, we are seeing a 35% reduction in the initial JavaScript size.
And when the JavaScript is delivered to the end user's browser, the final parsed size of all the code combined is almost half the original size. The reduced JavaScript bundle size translates to a faster page load time. What we are seeing is that, from the beginning to a fully booted client-side Ember app,
there's a 6% reduction in page load time, which is roughly 200 milliseconds on a simulated slower network. And most of the benefit, as expected, comes from faster download and parsing times. That's what Embroider's code splitting
and static analysis bring us. All the metrics we are seeing were collected by TracerBench, a controlled performance benchmarking tool we use a lot at LinkedIn to ensure we don't introduce performance regressions. Under the hood, it runs the initial page visit multiple times
and gives us a statistically confident result. In this case, TracerBench tells us with more than 95% confidence that Embroider is indeed making the app much faster. And the benefit for developers is also obvious for us.
After enabling Embroider, we are seeing that the cold build time has dropped by 58% from the original hundred-something seconds. And most impressively, the library load time has dropped from 3.5 seconds to less than one second. So our development speed is also skyrocketing.
To recap, after enabling Embroider, we're shipping less code to the end user, which translates to a faster page load. And as a bonus, our developer experience is also improving because of the faster build speed. This is awesome. I think I can see the team already celebrating.
They seem to be super happy with the FastBoot and Embroider adoption. So Thomas, let's talk a little bit more about what's next in the world of FastBoot. Sure, there are a lot of things we can do to make the FastBoot and Embroider experience much better.
A few things on my mind: we have the FastBoot App Server, which is a production-ready, out-of-the-box server-side deployment option for people using FastBoot, and we want to add worker threads to it so we can further improve the server-side throughput. We also want to support streaming back
the initial HTML, so people don't have to wait for all the HTML to be generated on the server side before it starts shipping back to the browser, which will give us a better, faster Ember app. We also want to make the combined FastBoot and Embroider experience a bit better. Right now, if you load not the homepage
but another page from the browser, what Embroider does is fetch the JavaScript for the separate chunk after the Ember app boots up. What we want to do is perform this step in the FastBoot world as well, so that when the user gets the HTML, the browser can in parallel fetch all the necessary JavaScript
for the Ember app to boot up. Awesome. And to give you some more information, we are also planning to stabilize the rehydration feature and enable it by default for all users, and to build better testing infrastructure to make it easier for apps to test against FastBoot.
And of course, we want to continue the effort of improving the developer experience, for example by adding more instrumentation and enhancing the documentation, et cetera. Cool. And here are some more resources for folks to check out. You can learn more about FastBoot at its homepage and about Embroider at its GitHub repo.
There are a lot of instructions and more details there. And all the code for the demo app we showcased today is available on GitHub, so you can see it for yourself. We really hope FastBoot and Embroider, these modern toolings, can provide a better Ember developer experience
and real value for our end users. And we really hope you are as excited as we are about the future of Ember. Oh wait, but that's not all. We have one more thing. What is that? As a bonus, we are hosting a virtual FastBoot meetup where we will do a deep dive into FastBoot
and its various concepts and aspects. Please feel free to sign up for the meetup on ember-fastboot.com. We can't wait to see you all there. Thank you very much, everyone. Thanks.