Your App Server Config is Wrong


Transcript
...thank you all for coming. You can vote for your favorite open source project, and donate dollars to those projects — there's a QR code for that up here. I also enjoyed the spirit of saying thanks to the people in the community that helped you with your journey as a developer, so we have these postcards where you can write a thanks and give it to that person, or you can just post them up on these whiteboards, and then we'll tweet them or figure out some way to make that public. Right after this talk there's a break, and people from the Rails core and contributors teams are doing office hours upstairs, so if you have questions or want to meet those folks — Aaron, Eileen, Rafael, and other people — you can come by and get those questions answered. I know a lot of people came by and tried to get shirts and we ran out within, like, the first 30 minutes, maybe even less, but we'll have some more shirts tomorrow, so if you do stop by tomorrow we'll hopefully have a shirt for you. And with that, I'll hand over the stage.

Thank you. So, this is a sponsored talk, but I do not work for Heroku — I'm not a Heroku employee — they were just very nice to give me this slot.
This talk is called Your App Server Config Is Wrong. It's a talk about application servers. When I talk about app servers, I'm talking about things like Puma, Passenger, Unicorn, and WEBrick — these are all application servers, the things that start and run your Ruby application. But first, a little bit about who you're listening to right now. I recently moved to Taos, New Mexico, basically just for the skiing. I'm also a motorcycle rider — I've ridden my motorcycle cross country three times on dirt roads; this is my motorcycle taking a nap in the middle of nowhere, Nebraska. I was also on Shark Tank when I was 19 — this is me on Shark Tank; one of my readers made this GIF, which I enjoyed very much. I'm also a part-time meme lord: I like making spicy programming memes like this one. My blog looks like this, and I write about Ruby performance topics — making Rails applications run faster. I also have a consultancy called Speedshop, where I work on people's Ruby applications to make them faster, more performant, use less memory, use fewer resources. And I have written a book — a course — about making Rails applications faster, at railsspeed.com, called The Complete Guide to Rails Performance.
Incorrect app server configuration is probably the most common issue that I see on client applications. It's really easy to kneecap yourself by having an app server config which is not optimized, and it's easy to over-provision when you have a bad app server config, which makes you buy more dynos and more resources than you actually need. It's very easy to spend a lot of money on Heroku — which is great for them — because it's easy to scale out of your problems by just cranking the dyno slider all the way to the right: now I don't have a performance problem anymore! But if you're spending more dollars per month on Heroku than you have requests per minute, you probably are over-provisioned. You don't have to spend $5,000 a month on your 1,000 RPM app — I mean, maybe if you have some really weird add-on that you genuinely need, then you have to, but this is a rule of thumb, and I've been able to get client apps to at least that point. The other thing that can happen with an incorrectly configured app server is the opposite: you're not using all of your resources — your dynos are too small for the settings that you have.
So, first, some definitions. I use the words container and dyno interchangeably, because that's what a dyno is — it can be a container, or a big AWS instance, or whatever: you get some proportion of a larger server. This is a Heroku talk, so I'm going to use Heroku terminology and say dyno, but almost none of what I'm going to discuss is unique to Heroku; I'm just going to discuss it in Heroku terms. As for "worker": in Puma — which I'm now a maintainer of, along with Richard — we have workers; they're also called workers in Passenger, and Unicorn uses a different word, but basically all of the top three modern Ruby application servers use a forking process model. What that means is that they start your application — your Rails app gets initialized — and then they call fork, and that process creates copies of itself. Those copies are what we call the workers. So that's probably the main config setting: how many processes — workers — do we run per dyno?
What about a thread? I just want to draw the difference here, because it's very important in regular C Ruby (MRI): the difference between a process and a thread. Processes run independently, so two processes can process two different requests at the same time. Two threads cannot process two requests truly in parallel, but we can do things like this: start a request and begin creating a response in one thread, and then, while we're waiting for a database call to return, release the global VM lock, pick up another request in a different thread, do some work there, and then go back to the original thread. So we can do some limited concurrency in Ruby — it's usually just I/O — but in general: one thread, one request.
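You can see that limited I/O concurrency directly. In this small experiment, two threads each "wait on I/O" (sleep stands in for a database call); because MRI releases the GVL while a thread is waiting, the two waits overlap and total wall time is roughly one wait, not two:

```ruby
require "benchmark"

# Two threads each wait 0.2s. MRI releases the GVL during the wait, so
# the waits overlap: elapsed wall time is ~0.2s, not ~0.4s. CPU-bound
# Ruby code would NOT overlap this way under the GVL.
elapsed = Benchmark.realtime do
  2.times.map { Thread.new { sleep 0.2 } }.each(&:join)
end
puts format("%.2fs", elapsed)
```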
OK, so here's the overall process; I'll go through each of steps one through five. First, we determine, theoretically, how many concurrent workers we need — how many requests do we need to be able to work on concurrently. Second, we determine how much memory each worker — each process — is going to use. Third, we choose which container size — which dyno size — we want to use, and how many workers, how many processes, we'll put in each dyno. Fourth, we check connection limits: how many connections will we make to the database, and will we go over those limits. Fifth, we deploy, and we monitor our queue depths, queue times, CPU usage, memory usage, the rate at which our processes restart, and how many times we hit timeouts.
This is a little hobbyhorse of mine: Little's Law, a concept from queueing theory that gets used a lot in, say, factory management — when they want to know how many packing machines they need on the floor, they use things like Little's Law. It's a very small equation, which is why it's very small on this slide — this is the fancy Greek-letter version, L = λW. You can google Little's Law and look at the process-engineering version of it, but the version we're going to use here just says: the number of things inside a system at any given time is, on average, equal to the rate at which they arrive, multiplied by the time they spend in the system. Translating that into Ruby application server terms: the number of requests that we are serving at any given time is, on average, the number of requests we get per second times our average response time.
And dividing the average number of requests in the system by how many workers we actually have gives us an idea of how much we're utilizing the workers that we have. We'll work through an example in a second. One caveat: this is just an average — it kind of assumes that requests arrive at equal intervals, like a request will arrive every 300 milliseconds. That's not the case, of course; we know that requests arrive in bunches, randomly distributed. So this is just a starting point, a guideline.
To give an example: I found these numbers in an old Envato presentation from 2013 — Envato runs ThemeForest, if you've ever used that; it's a big Rails app. They said they received 115 requests per second, with an average 147-millisecond response time, and they used 45 workers — 45 processes — of whichever application server they use. So what we do is multiply the number of requests per second, 115, by the average time it takes to complete one request. Make sure your units are the same — this is in seconds and that is in seconds — and that gives me 16.9. So on average, Envato was processing about 17 requests at any given point in time, and they used 45 workers to do that. 16.9 divided by 45 is 37% — they were using about 37% of their workers at any given time.
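The arithmetic above, written out — the numbers are the Envato figures from the slide:

```ruby
# Little's Law: L = lambda * W
arrival_rate  = 115.0   # requests per second
response_time = 0.147   # average response time, in seconds

in_flight   = arrival_rate * response_time  # requests in the system, on average
utilization = in_flight / 45                # they ran 45 workers

puts in_flight.round(1)          # ~16.9 requests in flight
puts (utilization * 100).floor   # ~37% of workers busy
```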
What I tell people to do is this calculation for themselves. You know how many requests you get per minute — that's right on the Heroku dashboard — and you know your response times, which are also on the dashboard. Multiply them together, and then multiply that again by a fudge factor of 5 — so you're assuming 20% utilization of your theoretical capacity — and that gives you your initial estimate of how many processes you need. The 5 is a fudge factor that takes into account the fact that your requests don't come in uniformly, one after the other, 200 milliseconds apart or whatever your numbers are.

If that was all very confusing: I find Heroku's dyno load number, which they calculate on their own dashboard, to be fairly accurate as a starting point. It's at the bottom of the dashboard. On these axes here, from 0 to 8, the dark blue line is the average load over one minute, and the lighter line is the maximum load over the last minute. Just look at that max number — so here it looks like my max load is about 5 dynos' worth. Of course, this does reflect however your formation is configured at this particular moment, so it's just a starting point. And what you'll probably find with dyno load — what most of my clients find — is that this number is a lot lower than the number of dynos they actually use, because their app servers are not configured correctly. We'll get to how to fix that.
That's step 1: estimating a worker count. So now we know how many processes we need — say 45 processes to serve the load. How do we divide that among containers? Do I want to use 1x dynos, 2x dynos, perf-M, or perf-L? What's the right choice?
I find most people mess up container sizes because they have an incorrect mental model of how Ruby uses memory. Most people think a Ruby application's memory graph should look like a flat line. That's not true — it looks like a logarithm. A regular Ruby application's memory usage over time will have a pretty steep start-up period — this is when we're requiring code and building up caches, like the Active Record statement cache, and creating other long-lived objects — and after a while it'll start to level out. But it never goes completely flat, and I don't want you to think it ever will. This is probably partly why Heroku restarts your dynos every 24 hours: if they just let them run forever, this line would just keep creeping up. That doesn't mean you have a memory leak — if memory usage isn't flat, it doesn't necessarily mean you have a leak; I'll talk a little more about that in a minute. We just need to be aware that that line never completely levels out, so you're going to have to run at a bit less than max memory — you can't run right next to 100%; you have to give it some headroom.

A common mistake I see here is to use things like puma_worker_killer and give it a RAM number — say, "kill my Rails process when it's using more than 300 megabytes." If you set that number too low, your memory graph, instead of looking like that long red logarithmic line, looks like this purple sawtooth: it goes up to here, the process kills itself, goes back down, kills itself again. People see that memory graph and think, "wow, look at that sawtooth pattern, I must have a memory leak" — when really what's happening is they're not letting the processes live long enough to get to that stable point. People also sometimes use puma_worker_killer as a rolling-restart tool — you can give it, say, a six-hour limit and say "restart my processes every 6 hours" — and that can produce this kind of memory graph as well. So what I'm telling you is: let your process run for 24 hours, and if you have to tune the number of processes per dyno down to do that, do it. This can be just a temporary thing — turn WEB_CONCURRENCY down to 1, let that process run, and see what it looks like after 24 hours. If it eventually starts to level out, that's the real number for how much memory you need per process.
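As a sketch, the temporary measurement described above is just two Heroku CLI commands (the value you restore at the end is whatever your normal setting is):

```shell
# Temporary diagnostic: run one worker per dyno so a single process can
# live the full 24 hours between Heroku's daily restarts.
heroku config:set WEB_CONCURRENCY=1

# ...watch the memory graph for 24 hours and note where it plateaus...

# Then restore your normal worker count (3 here is just an example).
heroku config:set WEB_CONCURRENCY=3
```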
So: deploy 1x or 2x dynos, one worker per dyno, five threads per worker, and look at the average memory usage after 24 hours. The average Rails app will come out somewhere between about 256 megabytes and 512 megabytes. If you're getting numbers like 512 — that's not great, but that's kind of what happens with big, old, mature Rails apps: they use a lot of memory, and that's what you get. And there's really no magical way to reduce that number. I have another really long talk, which I gave earlier this year, about reducing memory usage — but there is no magic way to do this; it's a long, hard process.
OK, so that's step 2: we determined how much memory we use per process, per worker. Now, how do we decide what size container to put it in? As a guideline, you want your processes to be sitting at about 80% memory usage in that dyno — not hitting 100% and starting to swap, but sitting somewhere around two-thirds to four-fifths of the total memory capacity of your dyno.

These are the main dyno types you'd use in production — I'm not including hobby and free, for obvious reasons. The main difference most Rails applications really care about is the memory. You probably can't read the numbers here, and I won't read them all out. Heroku's 1x and 2x dynos are on shared hardware, and although 1x and 2x dynos technically have the same CPUs, the 2x dyno gets two times the amount of CPU time — and so on and so forth for perf-M and perf-L, which get correspondingly more CPU capacity than a 1x dyno. From what I understand — Terence told me this, so wave at him if it's wrong — 2x dynos and 1x dynos have access to eight hardware threads, but the perf-M dyno only has access to two. That's kind of an interesting, weird difference between perf-M and all the other dyno types, although perf-M does have a more dedicated share of that CPU time. And to a large extent, the whole reason perf dynos exist is that you do not share CPU time with other people's apps — so you should get more stable performance from a perf dyno, because you don't have someone else's badly tuned Rails application sitting alongside you on whatever server is actually in back, crowding you out of CPU time. Another interesting thing I noticed when comparing perf dynos to 1x and 2x is that the perf-M dyno costs $250 a month, which makes it a little bit less cost-effective than the other dyno types. Perf-L dynos are roughly as cost-effective — in terms of dollars per compute unit and dollars per gigabyte of RAM — as the 1x and 2x dynos, but perf-M takes a little bit of a hit.
And, as I mentioned, the perf-L has eight CPUs, which might mean it can support higher thread counts than the perf-M — I'll get to thread counts in a second. So: if, based on Little's Law, you need more than 25 processes, I would recommend using perf-L. The performance dynos do get more stable, consistent performance than 1x and 2x, since you don't share the server with anybody else. Otherwise, try to use 2x. The reason you don't want to use 1x dynos is that you should be aiming to have at least three workers — three processes — per dyno; and if you can't fit three workers inside a 2x dyno, you might have to use perf-L or perf-M. The reason you need three workers per dyno is the way Heroku does routing. A request can be routed to any random dyno in your formation, so if you only have one worker per dyno, and Heroku randomly routes a request to a dyno whose worker is already working on someone else's request, that request is going to sit there and wait until the other one is done. This goes back to an old queueing theory result: at a grocery store, instead of having multiple checkout lines — like at Wal-Mart, where you have ten separate lines — it's more efficient to have one line feeding multiple checkouts, like Whole Foods does. The more workers we have per dyno, the more efficient routing we can get out of Heroku. So generally I've found that if you have at least three workers per dyno, you're maximizing your routing performance.
As I just said, if you're struggling to fit three workers in a 2x dyno, you can try reducing your thread count — in Puma or Passenger Enterprise, the multithreaded application servers, reducing the thread count to 3 if you're running higher thread counts can help. Or you can use jemalloc. Sam Saffron and Discourse have been sort of the pioneers in using jemalloc for production Ruby applications — you can google that and read about it, but you've got to try it yourself. It can sometimes reduce memory usage by 5 to 10%, which might be that extra little bit of headroom you need to squeeze into a 2x dyno. There is a jemalloc buildpack, which I help to maintain, so you can do this on Heroku — search "jemalloc buildpack heroku" and you'll find it and learn how to use it.
If you have a bit of background in application server management, you might think that the maximum number of processes you should run per dyno should equal the core count — that you should run eight processes if you have eight cores, because in theory we can only run eight processes at one time on an eight-core machine. But what I've found in production is that that's not really the case. Ruby applications can really benefit from having a worker count three to four times the number of cores available. I know of a Rails application in production that runs 30 to 40 workers on a perf-L dyno — more than four times the cores available — and it also runs node processes in the same dyno, so there's tons of stuff competing for the CPU time. For whatever reason — I don't know if it's just a lot of waiting on I/O — it works. So if you've heard that advice before, that processes must equal core count: don't restrict yourself; it can be three to four times that number.
Thread counts: 3 to 5. The way we set this now in Rails is RAILS_MAX_THREADS. More than 5 threads per process tends to just fragment memory too much, and with high thread counts it's also really difficult to keep yourself under your connection limits. For Rails to connect to your Postgres database, for example, each thread needs its own connection to the database. In general we keep the number of threads per process equal to the size of the database pool — Rails does this by default. But if you have, say, 20 threads per worker, and the connection limit for your database is only 100, it's really easy to outstrip that limit really quickly. So I've found that thread counts of 3 to 5 are a really good compromise between processing requests concurrently, keeping connection limits out of reach, and avoiding memory fragmentation.
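Putting the process and thread settings together, a config/puma.rb following these rules of thumb looks roughly like this — the env var names are the Heroku/Rails conventions, and the fallback numbers are just the guideline values from this talk, not universal defaults:

```ruby
# config/puma.rb -- rule-of-thumb settings, overridable via env vars.

# At least 3 workers per dyno for good routing; scale with WEB_CONCURRENCY.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 3))

# 3-5 threads per worker; keep this equal to the database pool size.
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads max_threads, max_threads

# Boot the app once in the master, then fork workers (copy-on-write).
preload_app!
```

Keep the pool in config/database.yml set to the same RAILS_MAX_THREADS value, so each thread can get its own connection without exhausting the pool.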
How do you know that your app is thread-safe? I get this question all the time, because people are afraid of making their app multithreaded. What I — and even the other maintainers of Puma — recommend is: just start slow. Try two threads. If things start breaking, just change that config variable back. If you use minitest, you can try minitest/hell — you require it at the top of your test helper, and it runs each test in a new thread; if that doesn't break things, you're pretty good. And at the end of the day, if you're running MRI, it's probably fine. I don't see many people actually running into weird multithreaded bugs, and when they do, they know it's the fault of something like mutating a Redis global in a controller, or storing state in class variables — generally they find it's something really obvious, an "oh, I should have realized that." The other question is, "are all of my libraries thread-safe?" Same thing — as a library author, I really pay attention to thread safety and go through our code to make sure of it. And because in MRI, any time you execute Ruby code it happens with the GVL held around it, it's actually kind of difficult to run into threading bugs. That's not to say it doesn't happen — and it is annoying when it does — but don't be so afraid of it that you never try.
OK, we've got a container size and our worker count. Now let's make sure that we're not going to run over our connection limits. Things that use connections: Active Record's database pool, and you probably also have connections between your dynos and Redis and your cache. I think most of the memcached add-on providers on Heroku don't limit connections much, and the newer Redis providers don't really limit them much either — Redis To Go used to limit them really heavily. Postgres — your database — is really the main connection pool that needs to be watched, because those limits are very easy to hit. In database.yml, you need one connection per thread — that's what the generated default gives you. But you may need more than one database connection per thread if you use things like rack-timeout, which most people do on Heroku because of the 30-second limit. What can happen is that rack-timeout raises while we're waiting on a Postgres query to return, and when it raises, that connection can get lost. So you may need up to double the number of database connections per process that you have threads — if that's a problem for you. You'll know it's a problem if you're getting errors saying Active Record spent too long waiting for a connection and didn't have one available. These are the Heroku Postgres plans and how many connections they support — note that even the larger sizes are all still limited to 500 connections. If you need more than 500 connections to Postgres, Heroku provides, I think, a buildpack — the pgbouncer buildpack — which you can add to your app, and which will pool those connections for you, so you can share a smaller number of connections across more processes and threads than you actually have.
So let's just do the math to figure out how many dynos would outstrip our connection limit. As an example: a perf-L dyno with 20 app workers, each of those workers with 5 threads, is 100 threads — and 100 DB connections — per dyno. If I have 5 of those dynos, that's 500 connections, and I've hit the standard Heroku Postgres connection limit.
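The math from the example, as a tiny helper (the method name is my own, just for illustration) — it assumes one DB connection per thread:

```ruby
# Hypothetical helper: total DB connections a formation will open,
# assuming one connection per thread (rack-timeout can push this higher).
def total_connections(dynos:, workers_per_dyno:, threads_per_worker:)
  dynos * workers_per_dyno * threads_per_worker
end

# 5 perf-L dynos x 20 workers x 5 threads = 500 connections.
puts total_connections(dynos: 5, workers_per_dyno: 20, threads_per_worker: 5)
```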
So now we've checked the connection limits: we know the maximum number of dynos we can scale to before we hit them.
And now we're ready to deploy. Here's what to watch after deployment. First: memory. This is a pretty typical pattern that I see — memory is fine, then it blows up when someone hits, say, the CSV export controller, and it looks like that. That dark purple is swap. That's really bad; you don't want to see that. It means you're using too much memory and you need to back off the number of processes per dyno. This is not a memory leak — it's one bad action. And the only way you can really track it down — if you see a curve like this, where it's flat and then some action someone used blew it out to double the number — is to install an APM that does memory profiling. New Relic does not do this very well, as much as I love them for everything else. Skylight and Scout are both commercial services that have memory profilers in production; they can tell you "hey, this controller action allocated 18 million objects," and you say "that's really bad, let me go fix it." An open-source alternative is the oink gem, which basically writes to your logs — "this action did such-and-such memory things" — and then you run its analyzer over the logs, and that gives you statistics about which controllers allocate how much.
So: if you're running out of memory, scale down WEB_CONCURRENCY — that's the dial Heroku has set up for you by default. If you're not using, say, 75% of the available RAM, you can scale WEB_CONCURRENCY up. You can also tweak thread counts — fewer threads will use less memory. You might think that because threads share memory, all you need to create an additional thread is 8 megabytes of stack. But the way malloc works — and this is something that changed in the Cedar-14 stack — is that glibc allocates what are called arenas to threads when they contend, and what it really means at the end of the day is that you can get really bad memory fragmentation in high-thread-count, heavily multithreaded programs. You can control that with the MALLOC_ARENA_MAX environment variable. I can't do it justice in the time I have — if you google "MALLOC_ARENA_MAX heroku," Terence wrote a really good explanation of what it is — but this is really only relevant for people running high thread counts, like maybe your Sidekiq processes running 25 threads or whatever. Also, jemalloc, which I talked about earlier, tends to do a good job with this. Here's a client example of what MALLOC_ARENA_MAX did to a Sidekiq process. This Sidekiq process would balloon from 256 megabytes to a gig over 24 hours — that's really bad — and then right here they changed MALLOC_ARENA_MAX to 2, and it almost completely stabilized the memory usage.
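On Heroku, trying this is a one-liner; 2 is the value used in the client example above:

```shell
# Cap glibc's per-thread malloc arenas to reduce fragmentation in
# high-thread-count processes (e.g. Sidekiq with 25 threads).
heroku config:set MALLOC_ARENA_MAX=2
```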
Next, watch queue times. New Relic will tell you how much time, on average, a request spent queuing — how much time it was not actually being processed. Less than 10 milliseconds is good; more than that is bad. If you have high queue times, that just means you need more dynos — that's the time when you want to scale. Watch CPU usage: if your CPU usage is low, you may benefit from a higher thread count. And watch restarts. If you're using puma_worker_killer — maybe because you have a leak you can't fix — you need to be watching how often those processes are restarting. What I find is that some people install these automatic killer tools and don't notice it's restarting, like, every other request. That's really bad — you're going to really hamper the performance of your application if your processes don't get to live very long. At least 6 hours between restarts is a good goal, and if puma_worker_killer or whatever is killing a process more quickly than that, you need to change its settings or use a bigger dyno. Next: timeouts.
We all know Heroku's router has its 30-second timeout: if your application takes longer than 30 seconds to respond, it basically gives up on you and says "I will not return this response anymore." So we have things like rack-timeout to deal with that. But if you have a lot of controller actions which tend to time out frequently, and you don't have time to fix them, a good band-aid is to change to a dyno formation where you're running more workers per dyno. As an example, I had a client that had some controller actions which took like 10 or 15 seconds to complete — admin stuff — and what would happen is a bunch of these requests would come in, one after the other, and they would back up all the other requests behind them. The admin action would take 15 seconds, a bunch of requests would pile up behind it, and now all of those requests take 15 seconds plus whatever time they would normally take. When you have problems like that — 95th-percentile times that are really high — you're going to benefit from having more workers per dyno. That's because, while Heroku will route randomly to whichever dyno it wants, your application server will not. They all work a little differently here — Passenger probably has the best model for this — but even Puma will do a better job of routing requests to open processes which don't have any work to do. So with this customer: they had been running 2x dynos, I put them on perf-L dynos, and they almost completely eliminated their timeouts and reduced their average response time by 20%. You probably have rack-timeout, but there's also a server-level setting — in Puma it's worker_timeout, and in Passenger Enterprise it's passenger_max_request_time — where the server can kill a process after it's been stuck for a certain amount of time. In Puma this is on by default at 60 seconds; in Passenger it's not on by default, so you have to turn it on yourself. Consider reducing it, because your requests probably don't need to take a minute — and if they do, you've got other problems.
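For reference, here's what that knob looks like in config/puma.rb. One caveat I'd add to the talk's framing: Puma's worker_timeout is a hung-worker check-in limit (a worker that hasn't checked in with the master within the window gets killed and replaced, default 60), not a strict per-request deadline — so treat this as a sketch of the idea, with rack-timeout still handling per-request limits:

```ruby
# config/puma.rb
# Kill and replace any worker that hasn't checked in within 30 seconds
# (Puma's default is 60). Passenger Enterprise's analogous setting is
# passenger_max_request_time.
worker_timeout 30
```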
So that's it — that's the process, those are the steps. This is the slide people will want a picture of, but I'm @nateberkopec on Twitter and I'll tweet the slides out. And my website — my blog slash consultancy — is the Speedshop blog. Thank you very much.

Metadata

Formal Metadata

Title Your App Server Config is Wrong

Series Title RailsConf 2017
Part 74
Number of Parts 86
Author Berkopec, Nate
License CC Attribution - Share Alike 3.0 Unported:
You may use, modify, and reproduce the work or content in unchanged or modified form, distribute it, and make it publicly available for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or this content, including in modified form, only under the terms of this license.
DOI 10.5446/31303
Publisher Confreaks, LLC
Publication Year 2017
Language English

Content Metadata

Subject Area Computer Science
Abstract As developers we spend hours optimizing our code, often overlooking a simpler, more efficient way to make quick gains in performance: application server configuration. Come learn about configuration failures that could be slowing down your Rails app. We’ll use tooling on Heroku to identify configuration that causes slower response times, increased timeouts, high server bills and unnecessary restarts. You’ll be surprised by how much value you can deliver to your users by changing a few simple configuration settings.
