A Nice Problem to Have: Django Under Heavy Load

Transcript
Hi everybody, thanks for coming. I want to talk to you today about running Django under heavy load. Everything about Django focuses on getting you to launch: you build your application, you hit 1.0, and everybody loves what you've done — and then you suddenly find your servers under assault. So I want to give you some useful ideas for when that day comes.
A little about who I am: I work for a business acceleration consultancy based in the DC area, and we work with a lot of very large enterprises there. Personally, I'm the principal Python architect, in charge of the Python discipline. I've previously worked for SocialCode and for a division of Match.com, among others, so I've had a fair amount of experience running Django under heavy load.
I'll talk about the basic logical problem of drinking from the fire hose, about measurement and metrics, about the difference between sledgehammer solutions and scalpel solutions, and about how to analyze the risk and potential for disaster of any particular choice. Then I'll talk about three very common bottleneck problems that people run into and some ideas on how you might solve them.
So: drinking from the fire hose.
Ultimately, when you have more traffic than you know what to do with, you need to respond faster, and there are five logical ways to do that. You can handle fewer requests, which might sound like the opposite of what you want, but it can actually help. You can ask fewer questions during your requests: if you're not asking your data sources as much, they don't have as much to answer. You can make the questions you do ask get answered faster. You can increase concurrency, building capacity by doing more things at the same time. And you can offload processing to your client.

Handling fewer requests is where downstream caching comes in; it can reduce your load tremendously, and downstream caching is better than upstream caching for this, because you're caching entire pages. The trick is that you don't serve stale content: what you do serve needs to be fresh by whatever your application and business needs define as freshness, and cache invalidation is a genuinely tricky problem.

Asking fewer questions during requests is where you optimize your database queries: you use select_related and prefetch_related, you eliminate unnecessary queries, and you denormalize to precomputed or materialized views.

Answering questions faster is partly what the upstream cache is for — it's great for not having to ask as many questions at all — but it's also where you put on your DBA hat: you do your best to optimize your indexes, to understand the query planner, and to tweak your server configuration. And your database doesn't have to do everything. I can't tell you how many high-load sites I've seen that still store sessions, task results, and messages in their database; it turns out those can be something like 80 percent of the writes you end up doing. Consider alternate data stores — not just multiple data stores, but multiple kinds of data stores — because different data engines are better at answering different kinds of questions.

You can increase concurrency by throwing iron, and consequently money, at the problem, and if you have an application which isn't CPU-bound, switching to an evented I/O handler can absolutely increase your concurrency.

Finally, you can build your own little botnet out of your clients by pushing template rendering onto your clients' browsers; by putting more of the computation into that kind of distributed computing, you take load off your own servers. The added advantage is that you get something in front of the user's eyes faster. When I was at Match.com, we found that a 100 millisecond reduction in page load time increased sales by 4 percent; simply getting something in front of the user's eyes faster can make the site feel faster and make for happier users. So where do we start? You have to identify your bottleneck first.
You can't identify a bottleneck — and you can't prescribe a remedy — if you don't have metrics.
Prescribing a remedy requires diagnosing a particular bottleneck, because every system — anything you're running anywhere in your ecosystem — could be that bottleneck, and one bottleneck can bring your entire application to a halt. Proper diagnosis means you actually have insight, metrics, and visibility into what your application is doing, and where.

As for tools: New Relic loves DjangoCon, and they're not even paying me to say this, but you should be using New Relic. There is no better generalized platform for getting a first look into what your application is doing, what your systems are doing, what your network is doing, where you're spending your time, and what your users are asking for. If you're starting to diagnose a bottleneck — or if you want to see bottlenecks before they start — start with New Relic.

Next, statsd, the open-source Node.js daemon released by Etsy, the arts-and-crafts site. statsd uses a UDP protocol for counters, gauges, and timers, writes that information to a round-robin database called Whisper, and then relies on a Python library called Graphite to draw graphs. Graphite is great because you can take any one of your metrics — or several of them — put them on the same graph, and apply whatever statistical analysis you want. Those make really nice charts to hang up in your office; people come in and they're impressed that you can look at your key performance indicators, because you know your application and you know the things that might be your problem at any given time. That gives you more granularity and more application specificity than New Relic can. Additionally, Brightcove released a Python set of tools for system monitoring — CPU, memory, disk, network — called Diamond, which can also feed statsd for statistical analysis.

And if you don't know what your database is doing — where it's spending its time, how many table scans you're doing, how many index scans you're doing, how many rows you're returning — you can't figure out how to fix it. In PostgreSQL, look at the pg_stat_activity view; it has basically everything you need. MySQL's SHOW STATUS is effectively the same thing. If you're using something like Microsoft SQL Server, it's complicated enough and you can afford it, so I won't talk about it here. The bottom line: if you don't know what your app is doing — when, how often, how fast, or how efficiently — you're flying blind. The bottleneck could be anything: any one of your systems, any link in your network, any one of your services, or the application you yourself wrote. You need accurate, detailed, insightful metrics to find the bottleneck slowing your system down.
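(For illustration — this isn't from the talk — here's roughly how you might peek at pg_stat_activity from a Django shell. Column names are for PostgreSQL 9.2 and later; older releases use procpid and current_query instead of pid and query.)

    from django.db import connection

    # List non-idle backends, longest-running statements first.
    with connection.cursor() as cursor:
        cursor.execute("""
            SELECT pid, state, now() - query_start AS runtime, query
            FROM pg_stat_activity
            WHERE state <> 'idle'
            ORDER BY runtime DESC
        """)
        for pid, state, runtime, query in cursor.fetchall():
            print(pid, state, runtime, query[:80])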
Most of the solutions that can be prescribed generally are sledgehammers: blunt objects that work in a variety of situations but lack any sort of fine touch. We're talking about Django under load generically here, but in reality this is very, very application-specific.
This is where you need to be the engineer and not just the developer. Blunt objects will get it done for a little while, but you really need to know your application inside and out to know the places where you can alleviate pressure. And you're not trying to fix all the problems; you're trying to buy some time until the next problem rears its head, because the next problem will rear its head. This is a never-ending cycle: everybody dreams of a finished 2.0, and I have yet to find anybody who's really reached it.

Some problems are common, though, and if you ask around, other engineers run into the same kinds of problems and reach for the same kinds of solutions. Search engines are fantastic for broad text search; doing n-gram matching in your database is not exactly a great idea. Columnar databases are coming back into fashion, which is fantastic — they're great for numerical aggregation over data — but they're not so great for single-record access; conversely, if you find yourself going to SQL for single records all the time, you might want to look into a key-value store. Query optimization can give your database more headroom than you knew it was capable of having. If your domain has naturally shardable data, sharding is fantastic. And keep in mind that not all data stores need to be available for live queries: if you're running HBase or Neo4j, a lot of the time those databases themselves aren't hit live by users; instead you run your repeated queries against those data stores and save the results into something more instantly accessible.

Ultimately the point is to alleviate bottlenecks, and by alleviate we don't mean shift them somewhere else — we mean actually adding processing capacity to your site. Shifting the bottleneck is a very common problem: you alleviate the problem in one place by pushing it someplace else. For example, horizontally scaling your number of web heads is great — you just put a load balancer in front — but if you also switch to evented I/O and go from 3 traditional gunicorn workers to 100 gevent greenlets, suddenly you're putting 30 times the number of connections on your database, and each connection has a memory cost you need to be prepared for.

So what does a good solution look like? Again, it's very application-specific, but a good solution is surgical: it narrowly addresses your bottleneck in the minimally intrusive way. When you're building an application you usually go for the 80/20 — the 10 to 20 percent of effort that achieves 80 to 90 percent of the result — but that's not what we're after here; you're looking for the 99/1, the place where you can touch one percent of your system and get rid of 99 percent of the bottleneck. You don't want to add substantial operational risk or cost, and you need to know that whatever you're rolling out, you can test it, your developers can develop against it, you can migrate your data into it, and you have a disaster mitigation strategy. Running in that sort of high-volume production environment, you have to be exceptionally conservative about your operational strategy: you need to know how you're going to test, how you're going to migrate, what happens in the worst-case scenario, and that your plan will actually work.
So how do we do that risk analysis and plan for disaster mitigation?
Every new system we add — a new database, a new caching layer, any sort of new system — has a cost. It's a cost because it's a new point of failure. It's a cost because all of a sudden you might be introducing new race conditions that are nearly impossible to debug. It makes for more difficult development and testing, because your developers suddenly have to put more systems on their dev instances to make sure everything works correctly together. It makes for more complicated troubleshooting, because you're looking through more logs, more systems, and more interactions. And because the number of interactions between systems grows not arithmetically but multiplicatively, each new system you add puts a greater and greater cost on the complexity and the risk of the application as a whole.

So the important questions to ask are these. How are we going to bootstrap this into production? You've got everything in SQL right now and you want to bring Elasticsearch online; you have to get your existing data into Elasticsearch, and you also have to keep getting the data you're still putting into SQL into Elasticsearch. Bootstrapping any new system, any new data store, requires a lot of forethought and planning. How do we recover from disasters? Most of you know I love Redis, but I don't like letting Redis write to disk; so if Redis crashes, it loses all the data it has in memory, and I need to know what my plan is — I need to know that my ops people can be woken up at three in the morning, because suddenly the site isn't working. How battle-hardened is the solution? If you're adopting something that doesn't even have a 0.1 on PyPI, you're doing it wrong; you need to go with solutions that other people have tried and tested and found reliable — and even better if there's a company behind it that provides commercial support, because somebody who knows the third-party system you're using inside and out can provide customization and tuning and can help you minimize the mean time between failures and the mean time to recovery. And how much does it cost? I don't mean licensing cost; I mean operational cost. You have to set it up, build Chef configurations for it, put monitoring in for it, develop new metrics for it, get your operational staff to understand how to use it and build runbooks for it, and provision servers and network for it. Make sure you understand what the new systems you're bringing online are going to cost you. And it's worth asking: are we just shipping the bottleneck someplace else — taking the pressure off this group of systems and simply putting it on that other group? Those are the questions you should be asking yourself before you put any of these bottleneck-mitigation strategies in place.

As an example, take the sledgehammer of throwing hardware at the problem: you can go for bigger EC2 instances and bigger databases, you can go for provisioned IOPS, you can double your number of web heads — all sorts of things. The pros: you're not introducing any new points of failure you didn't already have, and you're not really introducing race conditions you didn't already have; testing tends not to be complicated, because you've already tested these configurations; it's easy to bootstrap; and most installations can scale back down as much as they need. But it's expensive, it's increasingly expensive, and nobody wants to have to justify why the hosting bill suddenly
went from five thousand dollars a month to twenty-five thousand. Additionally, certain kinds of solutions have diminishing returns. Adding a read slave is a fantastic way to take load off of your master — all of your read-only selects go to the slave and everybody's happy — but each additional slave helps less, because the master has to keep feeding the replication stream, so eventually that solution doesn't work for you anymore. And scaling can itself be bottleneck shifting: once you widen the base, you increase the number of things talking to your data systems, network systems, and web services, so you increase the load there, and you need to make sure those systems don't suddenly become the bottleneck.

Another example: you want to add Elasticsearch to replace expensive SQL queries that are really, clearly, searches — icontains is not good for everything. On the downside, it's a relatively young technology for most teams; you need to bootstrap this new data system; you need to index your entire dataset, and if you're using Haystack that's a lot of database load all of a sudden; batched index updates could introduce new race conditions, or alternatively, real-time search index updates could slow down a lot of your writes. On the upside, testing is pretty straightforward — you can construct a corpus from past search histories — and it scales. So if those are risks you're willing to take and the solution fits your specific application, it might be a good way to mitigate some of the database load.
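(To illustrate that real-time-versus-batched trade-off — a sketch, not a slide from the talk — django-haystack 2.x lets you choose when index updates happen; check your version's documentation for the exact class paths.)

    # settings.py -- choose how search index updates are triggered.

    # Real-time: every model save/delete immediately updates the search
    # index, keeping results fresh but adding latency to writes.
    HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'

    # Alternative: leave the default BaseSignalProcessor (which does
    # nothing on save/delete) and update the index in batches from cron:
    #   python manage.py update_index --age=1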
The first and most common bottleneck is usually the database.
And it's usually your queries: you're asking way too many questions. You probably didn't design your indexes right, because what you thought you were going to be asking the database isn't what you're actually asking it. You probably didn't tune your database, because before you have scaling problems you don't really think about doing that. And you probably never thought about sharding, because you don't prematurely optimize — but it might be a good solution.

So, N+1 queries hiding in Django code. We all use the object-traversal features of the ORM freely, and prefetch_related and select_related are very helpful here; you should know the difference between them, and I won't go deep into it — there are three talks about them at this conference. You might be doing queries inside helpers without realizing it, or calling functions inside a for loop that themselves do queries, so you can end up writing really bad query patterns without noticing. Here's an example of over-querying through innocent object traversal, based on a real-world scenario: to render some JSON, we grab a photo set and build a dictionary of JSON representations of all the photos in it, as well as the owner of that photo set. In the naive version that's three queries: one for the photo set, one for all of the related photos, and one for the associated user. If we bring in select_related and prefetch_related, we can cut that down to two: we get the photos associated with the photo set in a single prefetch query, and we get the user object via select_related on the original query, so we eliminate that extra query. One big pitfall to keep in mind — it's noted in the docs, but it will bite you: once you've prefetched the photos, you cannot filter, you cannot order, you cannot do anything that clones the queryset, because you lose your prefetch cache, in which case you've actually made your application slower.

A second pattern I see all the time: for loops are very readable, and it's natural to iterate over them without thinking about what your queries are doing. Here's an application listing the photo sets for a user, along with the most recent photo in each in order to make a thumbnail. In this case it's doing N+1 queries — one for every photo set the user has — which is ridiculous, and it's a good example of where denormalization can help. If you put a foreign key on the photo set pointing to its most recently created photo, and you update that reference either in the photo's save method or with a SQL trigger, then you always know which photo is the latest; as soon as you grab the photo set you already have the reference to the latest photo object, and you can grab everything using select_related in only one query.
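(The model and field names below are hypothetical stand-ins for the example just described — a minimal sketch of the before and after.)

    # Hypothetical models: PhotoSet has an 'owner' FK to User, and Photo
    # has a FK to PhotoSet with related_name='photos'.

    # Naive version: one query for the set, one for the owner, one for
    # the photos -- and it gets worse inside a loop over many sets.
    photo_set = PhotoSet.objects.get(pk=pk)
    data = {
        'owner': photo_set.owner.username,                   # extra query
        'photos': [p.url for p in photo_set.photos.all()],   # extra query
    }

    # Better: select_related pulls the owner in the same query, and
    # prefetch_related fetches all photos in one additional query.
    photo_set = (PhotoSet.objects
                 .select_related('owner')
                 .prefetch_related('photos')
                 .get(pk=pk))
    data = {
        'owner': photo_set.owner.username,                   # no extra query
        'photos': [p.url for p in photo_set.photos.all()],   # prefetch cache
    }

    # Pitfall: photo_set.photos.filter(...) clones the queryset, discards
    # the prefetch cache, and triggers a fresh query.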
One more example: grabbing the episodes of a television series, found by title, that are airing in the next few days. There may be more than one matching series, and the first approach — iterating over the series and using the nice generative syntax on each one — is readable, but it's far more database-intensive; I can't even tell you how many queries it runs without knowing the data. Reorienting the query to start from Episode and filter across the relation is far more efficient: a single query that gets all the data we were after. A special note on this one, because it's a little-known fact: the app this came from was written for Django 1.4, so it's not their fault — the series-to-episode relation is a generic foreign key with a generic relation, and in 1.4 you could not query across it at all. In Django 1.5 we quietly introduced multi-column joins into the query engine, and the only place we exposed them was in the generic relations machinery, but we didn't do it all the way through: in 1.5, even though these queries used a multi-column join, you couldn't use the double-underscore syntax to filter across it; in 1.6 you could, except not in the reverse direction; and in 1.7 you can query in both directions. So, developers: a lot gets fixed just by upgrading. (We're actually running 1.6 with that behavior patched back in, but on 1.7 you can turn the query around, lean on the multi-column join, and get a much more efficient query.)

Next: are your indexes appropriate to your questions? You need to know the database you're working with, you need to understand how the query planner picks indices, how those indices can be used for different types of queries, which indices are more effective than others, and whether you're carrying indices you never lean on at all. First, the database engines, for all that they speak SQL, have very different idiosyncrasies. For example, MySQL limits per-row index size to about 700-something bytes, which might sound fine unless you have a unique constraint across several large varchar columns — you can exceed that very quickly. InnoDB is terrible at subtree locking: when you're updating an index, if the node you're updating isn't a leaf, it locks the entire subtree against any other writes, which means MySQL has far more lock contention on index updates than almost any other database engine. PostgreSQL, with its effectively unlimited row size, doesn't have that particular problem; it has a roughly 8K-per-column limit and does some magic under the hood with what's called a TOAST table: when you create a large text field or a blob field, it chunks the value into 8K blocks, stores them in separate rows, and recomposes the data on the fly, which amounts to a hidden join. So if you can avoid pulling a big text field — if you don't have to fetch the EXIF data every time you query a photo — you save yourself that join and a lot of processing time.

Multi-column indexes are underused in Django, partly because they aren't as visible as the per-field db_index flag. What's interesting about a multi-column index is that it can still serve queries that constrain on a leading subset of its member columns: if I have a three-column index and I query constraining on columns 1 and 2, I can use that three-column index to do it. In MySQL the order matters, so constraining on columns 1 and 3 isn't as performant as 1 and 2. The point is, rather than putting db_index=True on 80 percent of the fields in your table, look at which columns your queries actually constrain on together and create deliberate multi-column indexes, reducing the total number of indexes you have to maintain.
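(For reference — an illustration, not a slide from the talk — Django 1.5 and later expose multi-column indexes through Meta.index_together; newer versions use Meta.indexes instead. Field names here are hypothetical.)

    from django.db import models

    class Photo(models.Model):
        owner = models.ForeignKey('auth.User', on_delete=models.CASCADE)
        taken_at = models.DateTimeField()
        is_public = models.BooleanField(default=True)

        class Meta:
            # One composite index serving queries that constrain on
            # (owner) or on (owner, taken_at), instead of separate
            # single-column indexes on each field.
            index_together = [
                ('owner', 'taken_at'),
            ]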
Related to that is cardinality. I see BooleanField(db_index=True) in code all the time: that column has a cardinality of 2, and you gain essentially nothing by indexing it. If you have a field with eight distinct values — an enumerated type — it's usually not worth an index either. Indexes take up space; more indexes mean slower writes, because every single index has to be updated, and they mean greater lock contention, because you're competing with other writes. And when a query executes, the database engine generally uses only one index per table per query: even if you're constraining on three columns and all three have single-column indexes, only one of those indexes is used, so you gain nothing from the other two unless you make a multi-column index.

If you want to know which indexes you're actually leaning on, learn to use the query analyzer — honestly, it's your best friend for making things fast. Every query you run on any kind of regular basis, you should EXPLAIN it: the EXPLAIN syntax tells you exactly how the database plans to execute that query the next time you run it. And by simply printing queryset.query in the Django shell, you get the exact SQL Django is about to send to your database engine. The output is somewhat cryptic, so let's talk about the different kinds of operations you'll see in PostgreSQL. A plain index scan walks the B-tree of the index and, for every match, goes and gets the associated tuple from the table. An index-only scan answers the query entirely from the index, so it never has to touch the table. A bitmap index scan goes through the B-tree, but instead of going tree-row, tree-row, tree-row, it gathers everything from the tree, hashes it into a bitmap, and then fetches all the associated tuples from the table. All of those are good news: if you're getting index scans, you're getting at your rows fast. If you see sequential scans, congratulations, you're scanning the entire table, and you'd better hope there aren't a lot of rows — if you're constraining on an unindexed column and that's the only constraint on that table, you're scanning the whole thing. Joins show up as loops: nested loops, where candidates meeting the cross-constraint are checked row by row, or hash joins, which, similar to the bitmap index scan, build a hash table to reduce the number of lookups. Those are the terms you'll see in the PostgreSQL query plan; they're extremely well documented — the PostgreSQL developers have written detailed explanations of exactly what's going on under the hood — and you can often reduce the cost of your queries just by trying different permutations.
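(A minimal sketch of that workflow, reusing the hypothetical Photo model from above. str(queryset.query) interpolates parameters naively, so treat the SQL it prints as a starting point rather than exactly what the driver sends.)

    # In the Django shell: see the SQL a queryset will run.
    qs = Photo.objects.filter(owner_id=42).order_by('-taken_at')[:20]
    print(qs.query)

    # Then ask PostgreSQL how it plans to execute it (in psql):
    #   EXPLAIN ANALYZE
    #   SELECT ... FROM "app_photo"
    #   WHERE "app_photo"."owner_id" = 42
    #   ORDER BY "app_photo"."taken_at" DESC LIMIT 20;
    #
    # On large tables, look for "Index Scan", "Index Only Scan", or
    # "Bitmap Index Scan" rather than "Seq Scan".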
You're also expected to tune your database for what you're doing; nobody should be running a production database on the out-of-the-box configuration, so if you haven't tuned yours, you're doing it wrong. But resist the temptation to simply jack up all the settings. The biggest reason is that most of those settings are per connection, and at scale you're running 200 to 500 database connections concurrently: if you ask for a 50-meg join buffer and you're doing 200 connections, you're out of RAM on the entire server, and that's not going to work. So figure out which settings are per connection, which are per server, which ones address the types of queries you're actually running and which don't, and tune carefully, incrementally, and with tests so you can see the impact on performance.

On larger queries, every join and sort needs memory, and past a certain size both MySQL and PostgreSQL give up on doing it in memory: they write temporary tables out to disk and do the join or sort there. Doing it in memory is far faster, so you need to know what in a query will force an on-disk join or sort. For example, in MySQL, if your query includes a text field, you're going to disk every time — it doesn't matter what else you do. Understanding what in your database engine sends you to disk versus letting it work in memory helps a lot. If you're running your own hardware, you'd better be battery-backing your RAID controller: out of the box, the database won't consider a transaction complete until it's been synced to the physical medium, but with a battery-backed RAID controller you can say that once the controller has accepted the write — not necessarily the physical medium — you can move on, because if power gets cut to the machine, the battery keeps the controller writing its buffer until everything is on disk. Also, to make your partitions faster, use SSDs: at Match.com we found, if I remember right, a 20 to 30 percent increase in database server performance just by switching to SSDs.

Can you shard? Some datasets are naturally shardable. In the Flickr-style examples I've been using, every user's photos, photo streams, and comments can be contained to that user and are looked at in the context of that one user, so it's suitable to put almost all of one user's data on a single database server and break the users up across many servers. Django DB routers are perfect for handling this: you can route based on some identifying information inside the record to know which shard you need to be working with. The important thing is that this means switching to UUIDs as your primary keys instead of auto-incrementing integers; otherwise you won't have unique primary keys across shards. That can cause some problems with third-party apps, because the Django foreign key machinery is basically expecting a positive integer on the other side. And if you don't shard now, at least plan for the future: if you take your one database and split it in two, what happens when you split it into three? We're talking about moving data between shards and changing the algorithm by which you assign records to shards, so make sure the data you're sharding has a plan for when you need more shards. Finally, if you are sharded, you should never be doing cross-shard queries; there are tools that will do it for you, but it's not a good idea. Instead, take any data that would require cross-shard queries and denormalize it into an alternate data store. In the Flickr example, listing all of my friends' recent photos probably crosses shards — anything listing multiple users definitely crosses shards — so store a copy of that data in an alternate data store, and when you need it you're still hitting one particular place.
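(Here's a minimal sketch of what such a router might look like; the shard-selection function, settings aliases, and field names are hypothetical, and a real implementation also has to deal with migrations and relations more carefully.)

    import zlib

    # settings.py (hypothetical): one alias per shard, e.g.
    # DATABASES = {'default': {...}, 'shard_0': {...}, 'shard_1': {...}}
    NUM_SHARDS = 2

    def shard_for_user(user_uuid):
        # Deterministic mapping from the user's UUID to a shard alias.
        return 'shard_%d' % (zlib.crc32(str(user_uuid).encode()) % NUM_SHARDS)

    class UserShardRouter(object):
        """Route per-user data (photos, comments, ...) to its shard."""

        def db_for_read(self, model, **hints):
            instance = hints.get('instance')
            if instance is not None and hasattr(instance, 'owner_uuid'):
                return shard_for_user(instance.owner_uuid)
            return None  # no opinion: fall through to 'default'

        db_for_write = db_for_read

        def allow_relation(self, obj1, obj2, **hints):
            # Permit relations only when both objects live on the same shard.
            if obj1._state.db == obj2._state.db:
                return True
            return None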
The third major strategy we're talking about is opening up beyond the ORM and SQL.
Don't get me wrong: SQL is great, and for 99.9 percent of the applications out there you will never need anything better. It's easy to make it ACID compliant, and that durability part is really important. It has an infinitely flexible query language capable of constructing any number of ways to look at your data, including ones you can't even think of right now. It's been around a long time; it's easy, it's fantastic. The RDBMS is space-age technology from back when we actually sent rockets into space — since the 1960s we've been perfecting RDBMS technology, and it's extremely reliable, versus something invented in the last couple of years that sometimes is, well, not so great.

But there are things SQL is not great at. SQL sucks at aggregation: columnar databases are far faster at that. SQL sucks at search operations: icontains is very limited. SQL sucks at single-record access: key-value and object stores are far better at that sort of thing. SQL sucks at highly dimensional data, because it involves a whole lot of joins and a whole lot of duplication, whereas a database that supports high-dimensional data out of the box — a document database — might work much better. And SQL sucks at graph traversal, because you have an unknown number of joins; Neo4j or Titan are fantastic solutions if you have data that is better modeled as a graph.

The tricky part is that you need a single source of truth. You cannot have multiple data stores without a single source of truth: the authoritative record, the thing you fall back to if there's any contention. The source of truth is what you use to bootstrap the other data sources and what you use to recover from disasters. Anything that's an alternate data store should be considered a derived data store — in essence, a cache, a denormalized dimension, something you can throw away.

The other headache with denormalizing across data stores, especially across different architectures, is race conditions. For example, if you keep a count in Redis of something that's backed by individual rows in Postgres, incrementing and decrementing it, it's very easy to end up with the counter out of sync with the rows you actually have. Personally, I use Redis even for that: it has a pretty well tested set of atomic operations you can use. You can do timestamp-based locks, as long as you're running NTP and your clocks are reliable, but leaning on Redis — a single-threaded server with atomic transactions — is probably the easiest and cheapest way to ensure consistency across a subset of your data.

And if you do go to other data stores, don't forget that we're all engineers here: you're going to be hiring new engineers and teaching this to the rest of your team. So if you do this, model your data and your APIs appropriately, and as best you can, follow the example of Django, because the Django people you hire will have to work with whatever you introduce. Haystack is the most fantastic example of this: it mirrors the query structure of Django about as closely as it can for performing searches, so it's a very shallow learning curve to switch from ORM queries to search queries.
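(To illustrate that last point — a sketch with hypothetical app, model, and field names — a Haystack search reads almost like an ORM query.)

    from haystack.query import SearchQuerySet
    from myapp.models import Episode  # hypothetical model

    # ORM-style chaining, but executed against the search backend
    # (Elasticsearch, Solr, ...) instead of the relational database.
    results = (SearchQuerySet()
               .models(Episode)
               .filter(content='robots')       # full-text match on the document field
               .order_by('-air_date')[:20])    # assumes air_date is indexed

    for result in results:
        print(result.object.title)  # result.object is the Django model instance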
And the last general strategy I'll suggest: a 200 is OK, but a 304 is faster.
A basic refresher on how downstream caching works: responses from your origin contain three interesting headers — Last-Modified, ETag, and Cache-Control — and downstream caches make decisions about freshness based on what you return. When clients come back to that caching server, they provide the headers If-Modified-Since, with the timestamp, and If-None-Match, with the ETag. If the cache has a copy it considers fresh — meaning the max-age hasn't expired — and the timestamp or ETag from the request headers matches, it simply returns the cached copy and never even talks to your application. If the cache has a copy that is stale, meaning the max-age has expired, it doesn't automatically pull a fresh copy; it hits your origin server again with If-Modified-Since and If-None-Match, and if your origin server comes back and says 304, the cache considers its stale copy fresh again and serves it to the client. One quick caveat, because I was wrong about this for a good long while: in Cache-Control you can say must-revalidate, and it does not do what you think it does — it does not require the caching server to revalidate the data it has every single time. max-age=0 is the only way to do that.

But doing this well requires going beyond the Django cache middleware. The middleware is a blunt object: it saves bandwidth, but it doesn't save processing, because it has to run the entire view to figure out whether the ETag has changed — the ETag is based on a hash of the content that comes back. You know your views, you know the data they access, and you know how that data is related, so you can do better. In a well-instrumented view you can use shortcuts to check whether the data has changed since the last time the user asked; if it hasn't, you return a 304, and you save yourself a ton of processing time and a ton of database access.

This is tricky, though — of all these solutions, it's the trickiest. Cache invalidation is hard, because it isn't just about tracking the timestamps of individual records; it's tracking the timestamps of every record you're going to return in a particular view. One way to do it is to keep the modification timestamps for particular objects in Redis — and again, you have to be careful about updating related content — so that when a request comes in, all you do is check the timestamps of the required records in Redis, take the maximum last-modified timestamp, and compare it with what came in on the request; if it matches, you just return a 304 and you're done. You can also keep an updated datetime on each of your models and touch it across related objects, using either signal handlers or overridden save methods, to figure out the maximum updated timestamp and quickly return a 304. And if you want to be absolutely sure you never serve stale data back to users, return Cache-Control with max-age=0, which forces any and all upstream caches to come back and revalidate every single time.
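(Django ships a view decorator for exactly this pattern, django.views.decorators.http.condition. Below is a sketch of the Redis-timestamp approach described above; the key scheme, view, and model names are hypothetical.)

    from datetime import datetime

    import redis
    from django.views.decorators.http import condition

    redis_client = redis.StrictRedis()

    def photoset_last_modified(request, pk):
        # Max last-modified timestamp we keep in Redis for this photo set
        # and everything the view renders for it. Returning None means
        # "unknown", in which case the full view runs.
        ts = redis_client.get('photoset:%s:last_modified' % pk)
        return datetime.utcfromtimestamp(float(ts)) if ts else None

    @condition(last_modified_func=photoset_last_modified)
    def photoset_detail(request, pk):
        # Only executed when the client's If-Modified-Since is older than
        # the timestamp above; otherwise Django answers 304 for us.
        ...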
There are some really bad pitfalls here, though. You can't use per-user cookies to determine content, because multiple users will be served the cached content if you're not varying on the cookie. You could vary based on the cookie, except that if you're using any analytics or advertising platforms, they change the cookie every single time the user hits the site, so you'd basically not be caching at all. This makes using user session and preference data extremely hard, it makes mobile-versus-desktop detection extremely hard, and it makes locale-specific data carried in the headers, like timezones, extremely hard. What it really means is that the browser needs to be the one to look at the cookies for the user's preferences, not Django. The browser can take the data out of the cookies and include it somewhere in the URL string it accesses over XHR, in which case the cache server sees the unique data in the URL and varies the content on that. On top of that, it means you can have different cache lifetimes for the surrounding chrome of the page — which probably won't change much, since it carries no user-specific data — versus the XHR request, which does carry the user-specific data and will probably be more volatile.

So, to bring it back around: we're talking about responding faster. You can handle fewer requests by leveraging downstream caching. You can ask fewer questions during requests by optimizing the queries you're asking. You can answer faster by making sure you're leaning on appropriate indexes and properly tuned databases. You can increase concurrency, in some cases, with evented I/O. And you can use JavaScript and your clients' CPUs to offload processing and take pressure off your web servers.

And now the pitch: my team is about eight Python engineers, and we're hiring. We're not just Python folks — we have front-end JavaScript developers, graphic designers, UX and information architects, QA, and project managers — and the projects we work on are large, influential, and often public-facing web sites. So if you're interested in working for a cool consultancy in the DC area, please do feel free to write me an email.
Thank you.
Metadata

Formal Metadata

Title: A Nice Problem to Have: Django Under Heavy Load
Series Title: DjangoCon US 2014
Part: 22
Number of Parts: 44
Author: Ginsberg, Joshua (jag)
Contributors: Confreaks, LLC
License: CC Attribution - ShareAlike 4.0 International:
You may use, adapt, copy, distribute, and make publicly available the work or content, in unchanged or modified form, for any legal purpose, provided that you credit the author/rights holder in the manner they specify and that you distribute the work or content, including modified versions, only under the terms of this license.
DOI: 10.5446/32859
Publisher: DjangoCon US
Release Year: 2014
Language: English

Content Metadata

Subject Area: Computer Science
Abstract: "Don't prematurely optimize. Get your project to v1.0." This is a mantra often repeated in the Djangoverse. But what happens after v1.0 launch when your awesome site is being crushed by traffic? Scaling Django under load means finding bottlenecks, leveraging new tools, and customizing code. This talk will show you how it's done.
