Performance Testing for Modern Apps

Transcript
Thanks everyone for joining. This is Performance Testing for Modern Apps, and we're really talking about the art and the tools of the trade for testing performance on the server side, but also understanding client-side performance. There are a bunch of notes at the bottom of my slides, and the slides are online, so if I go a little bit fast there's plenty to read later. The reality is that top engineering organizations treat performance not as a nice-to-have but as a critical feature of the product, because they understand it has a direct impact on the business's bottom line. Most developers don't really think about this until their service is large and growing, and I'd like to change that.
You can find me on Twitter and at my site. So why does performance matter? Bing found that making searches over two seconds slower

resulted in a 4.3 percent drop in revenue per user. Shaving 2.2 seconds

off the landing page load time increased Firefox downloads by 15.4 percent: that's roughly 60 million additional downloads a year, just because the page was a bit faster.

Speeding up a fundraising site significantly increased donation conversions by 14 percent. But the most impressive metric that I've come across is

that decreasing the end-user latency of Amazon.com's operations by just 100 milliseconds results in a 1 percent improvement in revenue. So whether it's Bing, Firefox, or Amazon.com, all of these engineering organizations understand that real-world performance directly impacts the bottom line.
So the question is: how fast is fast enough? At 0.1 seconds a response feels instantaneous; it feels like you're flipping a page in a book. You should strive to keep your load times around one second, where users still feel they're moving seamlessly, and after ten seconds you really start to lose the attention of users. There have been a bunch of performance studies to understand the attention spans of users in applications, and

responsiveness really is key to a great user experience. Everyone has probably had the experience where you go to check out on an e-commerce site, you hit the checkout button, and then it just waits for a long time: you very quickly lose faith, and often you're afraid to hit the button again in case the order gets charged twice. So again, how fast is fast enough? 100 milliseconds feels instantaneous, like flipping a physical page in a book. Between 100 and 300 milliseconds, delays become perceptible and users start to notice. After about one second you start to interrupt the user's flow. Users expect a site to load in two seconds, and after three seconds 40 percent will abandon your site. This comes from performance research Nielsen did a long time ago, and for mobile applications the numbers are even worse.
This is really hard, because modern applications are really complex. When you have a hundred microservices talking to each other, and you're making calls to external providers, a shipping provider, a payment processing provider, a fraud detection provider, it's really hard to have great performance. With that complexity exploding, how do you even test for this? I think the best companies

treat performance, and really uptime, as a critical differentiator. If you look at the major enterprise companies, they all treat uptime as a critical metric, and I think we've all encountered this: if you provide a service to others, it's enforced by an SLA. So the goal is to treat performance as a feature, and we'll go through the tools of the trade for performance testing to do exactly that.

The first thing is to understand your baseline performance. Raw Python is going to be pretty fast. When you add Django, or whatever framework, that's additive overhead, and then your application code adds a bit more overhead on top. What you really want to understand, on a specific set of hardware resembling production, is: what's the performance of a static asset being served straight from the web server, what's the performance of a Python hello world with just the overhead of your framework, and what's the performance of your actual application? Oftentimes what you'll find is that the individual business transactions in your application have very different performance. The home page can be highly cached, whereas the checkout process talks to a bunch of third-party providers, so it's going to be inherently slower. You should understand the performance of those business transactions and how each affects users. Do that by measuring the static-asset threshold, the hello-world baseline, and the application benchmark.

If you have Apache installed you already have access to Apache Bench; if not, you can `apt-get install apache2-utils` on most platforms and the `ab` command becomes available. Apache Bench is a very simple tool for benchmarking the performance of applications. This isn't specific to Django, and it isn't specific to Python; you can use most of these tools across different application platforms. `ab` is really crude, and really simple.
If you just want an idea of how fast a particular transaction is, like the home page, it's easy to test with a single concurrent user. In this case we run Apache Bench with `-c` for concurrency: one user going as fast as possible for ten seconds against the demo app, `ab -c 1 -t 10 http://demo.example.com/`. With concurrency 1 there's no delay between requests; as soon as we get a response, the next request fires. The output looks like this:

you get the requests per second, which is a very useful metric, and, probably more important, the latency, the average response time per transaction. The balance you're after, in figuring out how much load your infrastructure can support, is understanding when you max out your requests per second and the latency starts to rise. You can always serve more transactions, just very slowly, but you don't want users waiting ten seconds just because a thousand of them showed up. So you really need to understand the balance of those two numbers.
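Those two numbers, throughput and average latency, are what every tool in this talk reports, and they come straight from the raw per-request timings. Here is a minimal sketch of that arithmetic; the function name and the numbers are made up for illustration, not taken from any real run:

```python
import statistics

def summarize(latencies_ms, duration_s):
    """Summarize a benchmark run the way ab/siege report it."""
    return {
        "requests": len(latencies_ms),
        "requests_per_sec": len(latencies_ms) / duration_s,
        "mean_latency_ms": statistics.mean(latencies_ms),
        # crude p95: the value 95% of requests were at or below
        "p95_latency_ms": sorted(latencies_ms)[int(0.95 * len(latencies_ms)) - 1],
    }

# A run that served 650 requests in 10 seconds with ~150 ms responses:
stats = summarize([150.0] * 650, duration_s=10)
print(stats["requests_per_sec"])   # 65.0
print(stats["mean_latency_ms"])    # 150.0
```

The point of tracking both: a server can keep raising requests per second while the mean and tail latency quietly climb past what users will tolerate.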
With Apache Bench it's really easy to start increasing the concurrency. In this case I'm testing 10 users, so the concurrency level is 10, again for ten seconds, and what you'll see is that the requests per

second, in this case about 65 requests per second, has gone up, and the average latency has risen to about 151 milliseconds. `ab` is great as a quick-and-dirty way to load test a server, but I prefer siege.
Siege has a similar format. You can `apt-get install siege` on most platforms, or build it from source; it's pretty straightforward. You can run `siege -c 10 -t 10S http://demo.example.com/`, again a concurrency of 10 users for a time of ten seconds. In these examples I'm only testing one endpoint, load testing the home page, and the metrics are very similar to Apache Bench:

for 10 concurrent users we get about 65 transactions a second, and the average response time is about 150 milliseconds. You can keep increasing the concurrency until you max things out and the latency starts to skyrocket. What you really want to understand is the maximum requests per second the machine can handle before the average response time starts to increase. That's fine for a very simple application, but most real applications aren't one endpoint.
A real application has many endpoints: a home page, login, logout, add-to-cart, a checkout process, order processing, and so on. So it's very useful to be able to crawl the entire application to discover all the URL endpoints, because most of us inherit applications rather than building them from scratch, and you often don't know where all the functionality lives. One approach is a transparent HTTP proxy: you make requests through the proxy and it reliably records all of them. If you're not familiar with an application and you want to find its endpoints, it's very easy to combine such a proxy with `wget` to emulate a search-engine spider; the goal is to hit all the URLs of the application. You run the HTTP proxy on a local port, point `wget` at it, and every URL you access gets recorded into a urls.txt file. `wget` has a spider mode that lets you emulate a search-engine spider: it fetches the home page and recursively crawls the links. So you have a quick and easy way to discover the functionality of the application: run the proxy and `wget --spider --recursive`, discover all the public URLs, and you end up with a list of unique URLs that looks like this.

This is a simple e-commerce application, and you can see an about page, a change-currency endpoint, account registration, browsing by category, browsing by tag, and so on. What we really want is to benchmark traffic across all of these URLs so that we can understand the performance of each transaction.
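As a self-contained illustration of the crawling idea (the talk uses a proxy plus wget; this stand-in uses only the Python standard library, a made-up domain, and a hard-coded HTML snippet):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collect same-site links from an HTML page, like one level of a spider crawl."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.urls = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        url = urljoin(self.base_url, href)
        # Keep only URLs on the same host, the way a polite spider would.
        if urlparse(url).netloc == urlparse(self.base_url).netloc:
            self.urls.add(url)

html = '<a href="/cart">Cart</a> <a href="/about">About</a> <a href="http://other.example/x">x</a>'
collector = LinkCollector("http://demo.example.com/")
collector.feed(html)
print(sorted(collector.urls))
# ['http://demo.example.com/about', 'http://demo.example.com/cart']
```

A real spider would fetch each discovered URL and feed its HTML back through the collector until no new URLs appear, then write the set out as urls.txt.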
Processing an order is going to be far slower than hitting the home page, especially when the home page is highly cached. So we use siege again, this time feeding in the urls.txt file, so we can benchmark traffic across all the unique URLs, and

we increase the concurrency incrementally as before. In this case we run `siege -c 50 -t 3M -f urls.txt`, 50 concurrent users for three minutes across all the different URLs in the file, and what we end up with is

something pretty similar to what we had earlier: the average response time for each one of these transactions, plus the transaction rate. Now, this is only so useful, because oftentimes you don't want to just walk through a list of URLs; you want to perform a real transaction, and that's where transaction scripting comes in.
I'm a big fan of multi-mechanize. It's an open-source framework for performance and load testing, and what makes it really useful is that it allows you to script transactions. I can actually post credentials to the login form, maintain the cookie jar and the HTTP session along the way, so I can log in, then add something to the cart, then check out. I can script an entire transaction, or a set of transactions, and then start load testing those as well. It's very easy to install: `pip install multi-mechanize`, and you have access to the library.

You can bootstrap a new project with `multimech-newproject`, and I presume everyone here is pretty familiar with Python, so this should be pretty straightforward. The idea is you import an HTTP client, for example the Requests library, and the only structure multi-mechanize imposes is a Transaction class: whatever you put inside the Transaction class's run() method, it will execute many of those transactions at a high level of concurrency. I can do this for a single URL or a series of URLs. The first example is the same one we just did with siege and Apache Bench, testing the demo site as a single endpoint: inside the run() method we simply make a GET request to the home page. But oftentimes you want to do much more than hit one URL, so the next script uses mechanize. Again, the only thing that matters is what you put inside run(). Here we instantiate the mechanize browser, which will emulate HTTP state and maintain a cookie jar for us; by default it also respects robots.txt exclusions, which can block endpoints on your own site, so you may need to disable that. Multi-mechanize also has a custom timers API, and this is really useful: based on the custom timers it will generate graphs that let you understand, for each of these transactions, the average response time and the number of requests per second. So for each step we create a custom timer, one for the home page, one for the cart functionality, and obviously you'd adapt the script to the specifics of your application; it becomes quite easy to script complex transactions. Here we record the start time, use the cookie-jar-aware browser to open the home page,

and read the response; end time minus start time gives us the latency, and we record it in the custom timers. Then we do the same thing fetching the cart. We could go further and do a POST to log in, fetch products by tag, add that set of products to the cart, and process an order, but that doesn't fit on the slide.
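The custom-timers pattern described above can be sketched in a few lines. This is the shape of a multi-mechanize transaction, not a full script: the fetch() helper and the timer names are stand-ins for a real mechanize or urllib call against your own endpoints:

```python
import time

class Transaction:
    """Shape of a multi-mechanize transaction: whatever run() does is executed
    concurrently by the framework, and custom_timers feeds the report graphs."""
    def __init__(self):
        self.custom_timers = {}

    def fetch(self, url):
        # Placeholder for br.open(url) / urllib.request.urlopen(url).
        time.sleep(0.01)

    def run(self):
        start = time.time()
        self.fetch("http://demo.example.com/")
        self.custom_timers["Homepage"] = time.time() - start

        start = time.time()
        self.fetch("http://demo.example.com/cart")
        self.custom_timers["Cart"] = time.time() - start

t = Transaction()
t.run()
print(sorted(t.custom_timers))  # ['Cart', 'Homepage']
```

Each named timer becomes its own line in the generated response-time and requests-per-second graphs, which is what lets you see that "Cart" is slow while "Homepage" is fine.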
you have descriptive agrees maybe scenarios
as you want so this is just 1 transaction cycle homepage transaction I could have about a bias that from a shopping cart transaction file support request transaction and all these different user scenarios and then once had to run the scenarios I can use the META-NET and this
command line to run these at different levels of concurrency for so you simply run Montenegrin demo so this files called them
about I've just as the transaction cost inside of it and once you and went to
whatever level of concurrency you've configured and then we generate a list of output that looks like this for each 1 the custom timers and you'll get the response time of Canada so the average response time versus the request perspective the notion of hands so this is pretty easy for uh the scripted transaction by which I mean this is the running all the transactions are part of a single server Anderson and I need you are
have a production application that's more than one machine behind a load balancer? How do you load test that?

That brings me to probably my favorite open-source project: Bees with Machine Guns. You have to be a bit careful with this one, because, as its notice says, it's a felony to point this at anybody else's site; it is effectively a distributed denial-of-service tool. So what is Bees with Machine Guns? It's a

utility for arming (creating) many bees (micro EC2 instances in Amazon Web Services) to attack (load test) targets (web applications). If you're behind a load balancer with tens or hundreds of machines behind it, how do you generate enough traffic to actually stress it? You're not going to do that from a single machine; you'll max out your own network throughput before you can generate enough load. The idea of Bees with Machine Guns is to make it trivial to distribute a load test, and it's very easy to use.
You simply `pip install beeswithmachineguns`, and the puns can be fun. It

relies on boto, the Python library for Amazon Web Services. If you've never used AWS, you can go to aws.amazon.com/free and get a free tier's worth of computing power; then all you need to do is configure your access key ID and secret key, which gives the boto library your AWS credentials. Bees with Machine Guns relies on boto to spin up EC2 instances. Once you've configured that you're good to go, and you can pick whichever region you want; in this case I'm running off the US West coast. You can stand up two machines, or

two hundred; it just depends on the level of concurrency you want to generate. The `bees up` command looks pretty fancy, but the part that really matters here is `bees up -s 2`: I want to spin up two servers so I can distribute the load test. The other flags cover things like the security group (here the default group), the availability zone (us-west-2b, so only that zone is used), the SSH key pair name, and the login user; you just need to know those values for your account. So: spin up two servers to load test with.
The developers have a great sense of humor, so when you connect, the output reads like a military operation: the bees have been armed and are ready for the attack. Once the swarm has assembled, you can check how

many machines are actually up, running, and ready to use with `bees report`. Then we start with a simple test of just a thousand requests against a single endpoint: `bees attack -n 1000 -c 50 -u http://demo.example.com/`, that is, a thousand requests total at a concurrency level of 50 against this URL, so 50 concurrent users until we've made a thousand requests. It's something very similar to siege,

just with a bit more concurrency, and the response reads: the bees are joining the swarm, organizing the attack, each bee will fire 500 rounds, 25 at a time. One nice touch: if you use a framework like Django, there are many operations, template compilation, caches to warm, that only happen on the first request. When you do a load test you usually want to understand warm performance, after that first uncached request has been fired, and Bees with Machine Guns automatically handles this: it makes one request first to prime the cache, and then starts the load test. The result here is about 306 requests per second completed, with a mean response time of 163 milliseconds at 50 concurrent requests. So again, the goal is to increase the level of concurrency until we see the latencies start to spike; that's the real limit of how much load this particular hardware running this application can support. But we can very easily
increase this to 10,000 or 100,000 or 500,000 requests. In this case we scale up to 100,000 requests, a thousand at a time: `bees attack -n 100000 -c 1000 -u http://demo.example.com/`, and the bees attack again, same as before but at much higher concurrency.

What we find is that the app still supports around 500 requests per second, but at this concurrency the latency starts to spike, with requests now averaging around 360 milliseconds. That just tells me where we start to fail over on latency and users wait longer. You decide how much money you want to spend, how much the business can support, and set your thresholds there. What you really want is to be able to assert that your hardware can handle the load before the latency starts to spike, especially under a sustained, normal workload.
Again, the developers have a good sense of humor, because sometimes things don't work out: here we told the bees to fire 500,000 rounds, 500 at a time, and the report says the offense was sustained but never completed; some requests never finished in

time. The target, in other words, basically survived. And whether or not things

go well, when you're done you tell the bees the mission is complete and it's time to go home, and the swarm awaits new orders. The benefit of the cloud is the cost model: you only pay for what you use. So you can spin
the bees down, and `bees down` will terminate the EC2 instances you just spun up. So if you want to easily run a production-scale load test, you can spin up a couple of machines for an hour or so and tear them all down when you're finished. To recap the toolbox so far: we talked about Apache Bench and siege for when you want to load test a single endpoint; we talked about multi-mechanize for when you want to script transactions across different endpoints; and we talked about Bees with Machine Guns for when you want to generate massive amounts of concurrency. But sometimes you want to do all those things at the same time, and that brings me to Locust. Locust is a relatively new open-source framework for load testing; I'm a big fan. It's again written in Python, and it has a website at locust.io; pretty much all my slides on it are copied directly from there. You can `pip install locustio`, and it gives you the best of both worlds: it can run tests distributed

across many servers, it allows you to script complex transactions, and, even better, it has a bunch of great reporting and ships with a web application, so you can do all of this very easily and in an automated fashion. There's a trap in performance testing, which is treating it as a one-off: most of the time people only think about performance right before launch. How many of you know about healthcare.gov? That's the best recent example I've seen of launching a broken website because nobody did real testing: a lot of money was spent, some testing happened only at the very end because people suspected it wasn't going to go well, and it didn't. The lesson is: don't test performance for the first time at launch time. This should be part of the software development lifecycle, a continuous process. Everything you do affects the performance of the code, and just as you write unit and functional tests in a continuous integration setup, you should write performance tests, because you should understand when a change is going to significantly impact the end-user experience as part of every release. So, more about Locust: with Locust you can very easily script a website's tasks. Just like multi-mechanize's Transaction class, you define tasks; in this case the tasks fetch pages, and we need to log in first.
You define a user behavior class: in this case we want to load the index and the profile pages, and for the profile we need to be logged in. Locust has an on_start hook, like a setup method, that runs when each simulated user starts; there we automatically log in by

sending a POST request with the username and password to the login endpoint at the start. Then task number one fetches the home page, and the second task fetches the user profile. So, fairly naturally: land on the home page, then view the user profile. Tasks can also be weighted, so you control how often each one runs relative to the

rest. The great thing is that Locust also comes with a great UI, because for each one of these endpoints you want to understand the average requests per second and the average latency per transaction, live, while the test runs. By letting you script complex transactions, generate high levels of concurrency in a distributed way, and drive it all from a UI, it makes this whole workflow very easy to automate. Check it out at locust.io.
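The weighted-task idea mentioned above can be sketched in plain Python, with no Locust dependency; the task names and weights here are made up for illustration:

```python
import random

# Locust-style weighted tasks: a task with weight 2 should be picked
# roughly twice as often as a task with weight 1.
tasks = [("index", 2), ("profile", 1)]

def pick_task(tasks, rng):
    """Pick one task name, honoring the integer weights."""
    population = [name for name, weight in tasks for _ in range(weight)]
    return rng.choice(population)

rng = random.Random(42)  # seeded so the sketch is repeatable
picks = [pick_task(tasks, rng) for _ in range(3000)]
print(picks.count("index") > picks.count("profile"))  # True
```

In Locust itself you express the same thing declaratively by decorating or registering tasks with weights, and each simulated user loops forever picking its next task this way.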
There are many tools; this talk isn't only about Python tools, I just hand-picked these. There's Gatling, there's wrk, there's Tsung, a ton of different server-side tools; use whatever works for you and your use case. But that's just the server side. What about the client side? When most people think about performance, they think only about the server side, because that's where the application runs. In reality, in modern applications most of the latency comes from the client side, not the server side. The 200 milliseconds you wait for the response from the server is nothing compared to the two seconds you spend downloading JavaScript and CSS resources, waiting for the page to execute JavaScript, and waiting for the browser to paint the page and make it usable. When we talk about the end-user experience, it's the user's perceived experience that matters: the user doesn't care whether the time is spent on the server side or the client side, only "am I waiting, or can I use this right now?" Google's home page is a good example: the search box renders and is usable almost immediately, which is exactly what the user wants. So when you think about performance testing, don't just focus on the server side; also understand how the client side impacts users.
Google has invested a ton of money in performance engineering, because it's really important to them; they understand that performance impacts the business. One of the great tools they've released is PageSpeed Insights. PageSpeed Insights analyzes your website against a set of PageSpeed rules and tells you how to optimize it: your pages should be using far-future expires headers, you should be minifying your CSS and JavaScript, you should be using gzip compression for delivering CSS and JavaScript. Even better, they made it extremely easy

to automate this with the PageSpeed modules for nginx and Apache: mod_pagespeed will rewrite your responses in real time to make them more optimized. Most of the time, though, I don't recommend running the PageSpeed module. It's the cheap, easy way to do it; as an engineer, if you're lazy and you don't want to fix anything, you install the module and it does all the hard work for you. But you should really use build automation and make this part of your development cycle, so that when you package your application for production you automatically minify your CSS and JavaScript, and you automatically ship the nginx configuration to serve compressed responses. You can do all of that very easily without mod_pagespeed, and that way you've actually fixed the root problem instead of papering over it.
And PageSpeed isn't just an optimization module for your web server; they also have a great website where you can get actionable advice on how to improve performance. Go to the developer site, developers.google.com, find PageSpeed Insights, type in your URL, and it will give you tips on how to improve performance; and the end user's perceived performance is usually what matters most. It's available as a web page and as a standalone

API, it's available in the Chrome developer tools, and there's also a good Node package: if you're familiar with the Node ecosystem, you can `npm install psi`, which lets you automate this as part of your front-end workflow. Once you've installed it you have access to the `psi` command

line, and you can simply run `psi` with your domain. What it does is run PageSpeed Insights using the API underneath and give you formatted output: your PageSpeed score (here 73, which is not too stellar) and the analysis, JavaScript that could be minified, resources missing caching headers, uncompressed responses, and so on. This is a great way to integrate with your continuous integration setup: the analysis is just another task, and you can start to set benchmarks against it, for example, if our page ever goes over 500 kilobytes, something should fail, just like a failing test.
If you want to integrate directly, there's of course a REST API available as well. Sometimes you want to actually understand what's going on in the browser, and wbench is a tool that makes it really easy to understand what's happening inside the browser, specifically page load times. It's a Ruby tool: you simply `gem install wbench`, and wbench effectively uses the browser's automation driver, so it runs inside an actual browser instance and loads the requested page three times. In this

case we ran it against my own site, and you'll see how it breaks the load down. As it happens,

a vanity URL service I use, in front of things I don't really expose publicly, means each one of those requests ends up in a redirect, which makes everything take a really long time when my website loads. What you see in the output are the end-user timings from the browser's timing APIs. Browsers expose a set of timing APIs. The navigation timing API tells you how much time is spent doing DNS lookups, how much time in SSL negotiation, how much time waiting for the browser to download CSS and JavaScript, and waiting for the page to paint and become usable. There's also a resource timing API: if you've added social widgets to your website, like the Tweet button or the LinkedIn share button, you'll realize these can really hurt performance; it gives you performance timings for each individual JavaScript and CSS resource you include on your site. And they just released a user timing API for custom timings, so you can take whatever measurements you want in JavaScript and report them.
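The phase breakdown those timing APIs give you is just subtraction over a set of timestamps. A sketch of the arithmetic in Python, with made-up Navigation Timing-style marks (milliseconds since navigation start):

```python
# Hypothetical Navigation Timing marks, as a browser would report them.
timing = {
    "domainLookupStart": 1, "domainLookupEnd": 28,
    "connectStart": 28, "connectEnd": 70,
    "responseStart": 120, "responseEnd": 180,
    "domContentLoadedEventStart": 450, "loadEventEnd": 1200,
}

phases = {
    "dns_ms": timing["domainLookupEnd"] - timing["domainLookupStart"],
    "connect_ms": timing["connectEnd"] - timing["connectStart"],
    "ttfb_ms": timing["responseStart"],        # time to first byte
    "page_load_ms": timing["loadEventEnd"],    # full load event
}
print(phases["dns_ms"], phases["connect_ms"])  # 27 42
```

This is exactly the kind of breakdown wbench prints: a slow "connect" phase points at the network or SSL, while a big gap between responseEnd and loadEventEnd points at heavy client-side assets.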
On the client side, good performance is increasingly about your front-end tooling and dependencies: use build tools like Grunt or Gulp for automating your front-end workflows, and Yeoman generators for bootstrapping projects. If you're starting something new today, the Yeoman generators make it really easy to do all of this stuff we've been talking about, with best practices built in by default. It doesn't

matter which flavor you pick; use whatever works for you. OK, a quick show of hands.
How many of you would call yourselves professionals? Hopefully all of you. And how many of you find out you have a problem when a user complains directly to customer support, versus having performance monitoring built in, so you find out before users come through saying "hey, I can't check out on your website"? The reality is that most people don't measure performance. I work for a performance company, and I have a pitch, but my goal here is simply to say that this is really impactful and there's more to it than you might know. You should be tracking performance in development and in production. Even the few companies that do track performance tend to track it only in production, using some variety of the APM tools out there. But if you're going to do load testing, the whole point of load testing is to generate useful information, and you can only capture useful information if you monitor. So before you go and do the load testing and the client-side performance testing, instrument everything: your components, your databases, your caches, your third-party services, your infrastructure. Oftentimes it's not the application that breaks; it's not the Django app, it's the API call to some payment provider erroring out, the cache miss, the slow database lookup. Unless you're instrumenting all of it, you can generate all this load without gaining any meaningful insight from it. So instrument your applications and your infrastructure, so that when you run a performance test you get useful insights. There are a ton of different open-source tools for this;

pick your favorite stack, something like StatsD feeding Graphite, but the reality is that you should be doing some level of performance testing and some level of monitoring, and

you can end up with a dashboard that looks like this.

I'm covering all of this fairly quickly, but the idea is: if you instrument your applications, all the API calls you're making, all the database lookups, your caches and their hit ratios, you can capture your stats and understand where you need to focus on improving performance when you do run a load test. Running a load test for capacity planning, to know whether you're going to fall over on launch day, is one thing; but making it part of another level of functional testing for your applications is really the goal, and you can only do that if you actually have some level of instrumentation to get meaningful insights.
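The instrument-everything idea boils down to wrapping the calls you care about with a timer that ships its measurement somewhere. A minimal sketch: the `metrics` list below stands in for a StatsD/Graphite client, and the function names are hypothetical:

```python
import time
from functools import wraps

metrics = []  # stand-in for a StatsD/Graphite client

def timed(name):
    """Record how long each call takes, the way an APM/StatsD timer would."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return fn(*args, **kwargs)
            finally:
                metrics.append((name, (time.time() - start) * 1000))  # ms
        return wrapper
    return decorator

@timed("db.lookup")
def slow_lookup():
    time.sleep(0.01)  # pretend this is a database round trip
    return 42

print(slow_lookup())   # 42
print(metrics[0][0])   # db.lookup
```

With a real client you'd replace the `metrics.append` with `statsd.timing(name, elapsed_ms)`, and the same decorator works on cache gets, external API calls, and view functions alike.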
And the last part is about the end users' performance: you have to monitor the actual experience of your end users. If you have an application hosted on Amazon Web Services on the US West Coast and a bunch of users in Australia, those users in Australia might be having a very serious problem and you're never going to see it unless you actually monitor the end-user experience. There are a bunch of open-source tools that make that possible. There's the Episodes framework, from the folks behind Page Speed, and what it essentially does is capture the JavaScript timings from inside real users' browsers, so you see very quickly if users in a specific location or on a specific browser are having performance problems. That's the kind of problem that won't show up in your exception tracker, whether that's Sentry or whatever logging platform you use; the user just has a bad experience and calls customer support, and they think it's your fault even when the cause is a particular browser. So check that out, and check out webpagetest.org. I'm a fan of WebPageTest for a couple of reasons. One, it allows you to very easily check performance across different browsers, from specific locations, under specific network conditions, so you can ask: from New Zealand, over 3G, how does my application load? One of its advantages is that you pick where you want to run the test from and what browser, and it generates detailed test results. Another thing I like is that it's a collaboration among a bunch of technology companies. It's also useful for QA purposes: it will give you a screenshot of the final rendered page, but also a video of the rendering process. Oftentimes what you want to optimize is the critical rendering path, the key resources the browser needs so it can paint the page as fast as possible, because that's what matters most to the user. You'll often see either a sudden flash where the page gets painted all at once, or a one-second delay before anything shows up and then the page gets progressively built, depending on how you architected it. Those things are usually very hard to test, and WebPageTest makes them very easy to test and QA.
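WebPageTest also exposes a REST API, so these runs can be scripted. Here is a minimal sketch that just builds a test-submission URL; the location label and API key are placeholders, not values guaranteed to exist, and no network call is made:

```python
from urllib.parse import urlencode

def wpt_test_url(target, location="ec2-ap-southeast-2:Chrome",
                 connectivity="3G", api_key="YOUR_API_KEY"):
    """Build a WebPageTest runtest.php request URL (no network call made).

    The location label and API key are placeholders; check the WebPageTest
    docs for the labels and parameters your instance actually supports.
    """
    params = {
        "url": target,
        "location": f"{location}.{connectivity}",  # e.g. test over 3G
        "video": 1,   # capture the filmstrip/video of the rendering process
        "f": "json",  # ask for a machine-readable response
        "k": api_key,
    }
    return "https://www.webpagetest.org/runtest.php?" + urlencode(params)

print(wpt_test_url("https://example.com"))
```

Submitting that URL with a real key kicks off the same location/browser/connectivity test you would configure by hand in the UI.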
Like I said, if you've used the Chrome developer console or the Firefox Developer Tools, you've probably watched the waterfall of your client loading its resources, and you can see where you're spending time. I mentioned that I still had an old tracking service on my site that went through a chain of redirects and hurt my performance, and without a tool like this I never would have realized that was happening. It also showed me I was making separate requests for 20 different icons on my homepage, and my homepage is only 28 requests total, so that's a lot of waste. OK, moving on: sitespeed.io. Sitespeed is very much like Google PageSpeed, but it's much more automated and much more comprehensive. You can check out sitespeed.io; what it allows you to do is analyze your website's speed and performance. Similar to PageSpeed, it gives you actionable advice on how to improve, like "you should cache these CSS assets, combine them, and minify them"; there's a whole set of rules that sitespeed incorporates and will test for. This is really good because, as I said, if you want to wire this into a Jenkins or Travis CI workflow, you have to have some real metrics to gate on, and sitespeed makes it very easy to capture each one of these individual metrics and then fail the build based on them. So you can say: fail if the average end-user load time is over 2 seconds, or if you're including too many JavaScript files, or whatever conditions make sense for your app. It's a pretty comprehensive list of rules, too: not just CSS and JavaScript that's uncombined, uncached, or uncompressed, but also things like the number of DNS lookups and SSL negotiation times. It's very comprehensive, so take a look at it.
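A performance budget gate like that can be sketched in a few lines. The metric names and limits below are examples I made up, not sitespeed.io's actual output format; the point is simply that CI fails when any collected metric exceeds its agreed threshold:

```python
# Example budget: fail the CI build when collected page metrics exceed
# agreed thresholds. Metric names and limits are illustrative only.
BUDGET = {
    "page_load_ms": 2000,  # average end-user load time must stay under 2 s
    "requests": 80,
    "js_kb": 500,
}

def budget_violations(metrics, budget=BUDGET):
    """Return human-readable violations; an empty list means the build passes."""
    return [
        f"{name}: {metrics[name]} > {limit}"
        for name, limit in budget.items()
        if metrics.get(name, 0) > limit
    ]

report = budget_violations({"page_load_ms": 2300, "requests": 42, "js_kb": 610})
print(report)  # two budgets blown: page_load_ms and js_kb
```

In a Jenkins or Travis job you would exit non-zero when the list is non-empty, which is exactly the "fail based on metrics" workflow described above.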
And then again, you should be monitoring your production applications. I think a lot of you are familiar with New Relic; AppDynamics is a company that works with enterprise applications; there's Compuware; there are a bunch of these application performance management companies. The goal of these companies is effectively to map out both the server-side performance and the client-side performance and help you understand exactly what's happening. I've talked about a bunch of open-source tools that let you do various pieces of this; the commercial tools will tell you where your traffic is coming from in real time, capture the individual metrics, and give you visibility into the client side. It doesn't really matter whether you go with open-source tools or a commercial package: you should be doing some level of performance monitoring.
I've talked about a bunch of open-source tools for writing all these load tests yourself, but oftentimes it's much easier to buy this than to build it, so let me quickly run through a couple of load-testing services in the cloud. I'm a big fan of these services; they have a global footprint, and they make it very easy to just write a load test that says "go to this URL with thousands of users" and get back a report on what that looks like. Loader.io is probably one of my favorites in terms of just being easy to get started. Blitz.io is another: it's a paid service, but they give you a decent free plan where you can just say "ramp up to 500 users against this URL" and it plots the response time versus the level of concurrency. And then there's BlazeMeter.
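That ramp idea, increasing concurrency and charting latency against it, is easy to picture in code. This is a rough sketch of the concept only, not any vendor's implementation, and it uses a stubbed request function instead of real HTTP traffic:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for an HTTP request; swap in a real client to hit your app."""
    start = time.perf_counter()
    time.sleep(0.005)  # simulated server response time
    return time.perf_counter() - start

def ramp(levels):
    """Run the request at increasing concurrency; return avg latency per level."""
    results = {}
    for users in levels:
        with ThreadPoolExecutor(max_workers=users) as pool:
            latencies = list(pool.map(lambda _: fake_request(), range(users)))
        results[users] = sum(latencies) / len(latencies)
    return results

for users, avg in ramp([1, 10, 50]).items():
    print(f"{users:>3} users -> avg {avg * 1000:.1f} ms")
```

A real service does the same thing from many locations at once: ramp the user count and watch where response time starts to climb, because that knee in the curve is your capacity.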
BlazeMeter works well if you have complex interactions or you want to test multiple pages. Sometimes, in the vein of open source, we try to write all of our code ourselves, but oftentimes this is a business problem you're trying to solve, and remember that building it yourself isn't always the right solution. And I can't emphasize this enough: test for failures. The reality is that most people test under ideal conditions, but at some point you realize it's an imperfect world and things will happen, so you should be prepared for them. Netflix understands this very well: they released a suite of tools called the Simian Army, and one of those tools is Chaos Monkey. What it essentially does is create chaos inside of your AWS setup. What happens if you lose your caching layer? How does your app respond if you drop half of your caching instances? What happens if a third-party API you rely on slows down 3x? Do you retry with exponential backoff and queue things up, or do you just block on it? You don't really discover those issues until you have a production outage, and oftentimes that's when you learn the most, when things are failing, and it's the most painful lesson: "we really should have queued that up," or "we should really retry that request if it fails, not just ignore it." So test for failures; there are tools that make this very easy to do. What happens if you run a load test and kill half of your memcached instances? Does everything fall through to the database, and can the database handle it?
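The retry-with-exponential-backoff pattern mentioned above is a few lines of code. This is a generic sketch of the pattern, not Netflix's implementation; in production you would also cap the delay, add jitter, and decide which exceptions are actually retryable:

```python
import time

def retry(fn, attempts=4, base_delay=0.05):
    """Call fn, retrying on failure with exponential backoff (0.05s, 0.1s, ...)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    """Simulates a third-party API that fails twice, then recovers."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("third-party API unavailable")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt
```

Whether you retry, queue, or degrade gracefully is a design decision; the failure test is what forces you to make that decision before the outage does.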
So, performance matters: be clear about it inside your organization and treat performance as a feature. There are a bunch of tools and tips for improving front-end performance, but a basic one is the 14 KB rule: you want the initial response small enough to arrive in the first round trip, with everything you need to start painting the page, and then progressively build the page from there. There are plenty of tools to help with this; the core approach is to use asset pipelines to build and deploy for production. In the Django world there's a ton of tooling, django-compressor for example, that makes it easy to incorporate best practices like minifying CSS and JavaScript and combining all those assets. So the best practices for performance: do capacity planning, load test the server side, and optimize performance on the client side; again, users spend more time waiting on the client side than on the server side, so don't assume that because the server can respond to a request quickly, users are going to have a good experience. You need a starting point so you can understand where you're starting from, so instrument everything, and don't just instrument and monitor production: instrument and monitor your development environments too, because you don't want to fix it when it's already hurting you, you want to fix it early on. It's very easy to "learn" these lessons when the executive team is yelling because you're losing money in revenue per hour and everything is broken and suddenly you have all the resources you want, but that's also the most painful way to do it, so start earlier. And measure the difference of every change: whether that's upgrading Django from 1.7 to 1.8, upgrading your Postgres version, or moving from one data center to the next, everything affects performance. Everybody cares about performance; they just care about it from a different perspective. Business people care about revenue and operations; engineers care about the end-user experience. Keeping both perspectives in mind is useful. So measure the difference of every change, make performance testing part of your software development process, something automated that happens every time, and really understand how failures impact performance.
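The 14 KB rule above can be checked mechanically. The figure is an approximation of TCP's roughly 10-segment initial congestion window, so treat it as a budget rather than an exact limit; this sketch just asks whether a gzipped payload fits:

```python
import gzip

FIRST_RTT_BUDGET = 14 * 1024  # rough size of TCP's initial congestion window

def fits_first_round_trip(html: str) -> bool:
    """True if the gzipped payload should arrive within the first round trip.

    14 KB approximates the ~10-segment initial congestion window; it is a
    budget, not an exact limit.
    """
    return len(gzip.compress(html.encode("utf-8"))) <= FIRST_RTT_BUDGET

small_page = "<html><body>" + "<p>hello</p>" * 100 + "</body></html>"
print(fits_first_round_trip(small_page))  # highly repetitive markup -> True
```

If the critical HTML and inlined CSS stay inside that budget, the browser can start painting after a single round trip instead of waiting on a second one.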
Now, one caveat I'll mention: some of the best practices for HTTP/1.1 are actually going to hurt you in an HTTP/2 world. Things like concatenating all of your CSS and JavaScript, or spriting a bunch of images into one file, are hacks around the shortcomings of HTTP/1.1, and HTTP/2 largely eliminates the need for them. So be conscious that the protocols are changing and that you should adapt your optimizations accordingly. Ilya Grigorik has a great book, High Performance Browser Networking, that I highly recommend if you're interested. So again: wire performance testing into continuous integration for both the server side and the client side, understand the performance implications of every deployment and package upgrade, and monitor the end-user experience in both development and production. And with that, I'm happy to take questions.
Audience: Thanks so much for the talk. I have a question: given the complexity of the modeling involved in creating a load test, and the enormous potential investment of time, is there any value in taking a feature-ramp approach to load testing?

Answer: What I try to do is basically replay production traffic, parallelized based on what users are actually doing, because users will very quickly surprise you. Go take a look at your production access logs, find the most common user paths, and replay that traffic. If you only test the feature branch, writing a test just for that feature works quite well, but what you're going to miss is the complexity of the overall system. The most common use cases are the things you built originally, and testing those together is what tells you whether you have sustainable performance. So I definitely think you should replay production traffic rather than just what you think users are doing; when you test individual features in isolation, users often use them in ways you don't expect. The reliable source of truth is the access logs, from whatever load-balancing solution you run, HAProxy or whatever sits in front.

Audience: When I said feature ramp, I meant something more specific: releasing to, say, 1 percent of users.

Answer: Ah, OK. Yes, that's how a lot of companies operate, slowly releasing features, and it's a great way to test, especially in a multivariate-testing era. You see that a lot with very substantial changes: you slowly roll them out with some sort of automated performance test, and you can very quickly learn by doing custom instrumentation around the metrics that matter for that feature. If you're introducing a new feature, you can use something like StatsD to create custom metrics around it. That's generally how we release features: track some important measurements for that specific feature and then slowly roll it out to more users. That's a much better approach. Thanks, everyone.
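The replay approach in that answer starts with mining the access logs for the most common request paths. A minimal sketch (the log lines and their format here are simplified examples, not any particular server's output):

```python
from collections import Counter

SAMPLE_LOG = """\
1.2.3.4 - - [10/Sep/2015:10:00:01] "GET / HTTP/1.1" 200 512
1.2.3.4 - - [10/Sep/2015:10:00:02] "GET /login HTTP/1.1" 200 128
5.6.7.8 - - [10/Sep/2015:10:00:03] "GET / HTTP/1.1" 200 512
5.6.7.8 - - [10/Sep/2015:10:00:04] "POST /api/search HTTP/1.1" 200 2048
9.9.9.9 - - [10/Sep/2015:10:00:05] "GET / HTTP/1.1" 200 512
"""

def top_paths(log_text, n=3):
    """Count request paths in (simplified) access-log lines."""
    paths = []
    for line in log_text.splitlines():
        try:
            request = line.split('"')[1]      # e.g. 'GET / HTTP/1.1'
            paths.append(request.split()[1])  # the path component
        except IndexError:
            continue  # skip malformed lines
    return Counter(paths).most_common(n)

print(top_paths(SAMPLE_LOG))
```

Weighting a load test by these counts gives you a traffic mix that reflects what real users do, rather than what you assume they do.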
Metadata

Formal Metadata

Title Performance Testing for Modern Apps
Series Title DjangoCon US 2015
Part 22
Number of Parts 46
Author Whittle, Dustin
Contributors Confreaks, LLC
License CC Attribution 3.0 Unported:
You may use, modify, and reproduce the work or its content, and distribute and make it publicly available in unchanged or modified form, for any legal purpose, provided you credit the author/rights holder in the manner they specify.
DOI 10.5446/32786
Publisher DjangoCon US
Release Year 2015
Language English

Content Metadata

Subject Area Computer Science
Abstract The performance of your application affects your business more than you might think. Top engineering organizations think of performance not as a nice-to-have, but as a crucial feature of their product. Unfortunately, most engineering teams do not regularly test the performance and scalability of their infrastructure. Dustin Whittle shares the latest techniques and tools for performance testing modern web and mobile applications. Join this session and learn how to capacity plan and evaluate performance and the scalability of the server-side through Siege, Bees with Machine Guns, and Locust.io. We will dive into modern performance testing on the client-side and how to leverage navigation/resource timing apis and tools like Google PageSpeed and SiteSpeed.io to understand the real world performance of your users. We will cover how HTTP2 and modern browsers change the game for performance optimization with new best practices. Take back an understanding of how to automate performance and load testing and evaluate the impact it has on performance and your business.
