
Tracing, Fast and Slow: Digging into & improving your web service’s performance

Speech Transcript
Good afternoon. Does anyone actually get the reference in my title? It references a really awesome book, Thinking, Fast and Slow, which I highly recommend. So yes, my name is Lynn Root, and I'm a site reliability engineer at Spotify; I also do a lot of open source evangelism internally. You might know me from PyLadies as well. Unfortunately this talk will take up the whole time, so if you have questions or want to chat, come join me for a convenient coffee break right after this.

I'm going to start with a question: has anyone read the Site Reliability Engineering book, a.k.a. the Google SRE book? If you haven't, I highly recommend it. The takeaway, though, is that nearly every chapter seems to say "use distributed tracing." With the prevalence of microservices, one team may not own all of the services that a request flows through, so it's imperative to understand where your code fits into the grand scheme of things and how everything operates together. There are three main needs for tracing a system: performance debugging, capacity planning, and problem diagnosis, although it can help address many other issues as well. While this talk has a slight focus towards performance debugging, these techniques are certainly applicable to other needs. I have a bit of a jam-packed agenda today. I'll start with an overview of what tracing is and the problems we can try to diagnose with it, talk about some general types of tracing we can use, and cover key things to think about when scaling up to larger distributed systems. The inspiration for this talk stemmed from me trying to improve the performance of some of my own team's services, which sort of implies we don't really trace ourselves yet, so I'll run through some questions to ask and approaches to take when diagnosing and fixing your own services. Finally, I'll wrap up with some tracing solutions that help profile performance. As I mentioned before, we won't have time for questions.
In the simplest terms, a trace follows the complete workflow from the start of a transaction or request to its end, including the components it flows through. For a very simple web application, it's pretty easy to understand the workflow of a request. Then add some databases, separate the front end from the back end, maybe add some caching and an external API call, put it all behind a load balancer, and then scale it up to tens, hundreds, or thousands of instances, and it gets kind of difficult to piece together the workflows of requests.
Historically we've been focused on machine-centric metrics, including system-level metrics like CPU, disk space, and memory, as well as app-level metrics like requests per second, response latency, database writes, et cetera. Following and understanding these metrics is quite important, but there's no view into a service's dependencies or its dependents. It's also not possible to get a view of the complete flow of a request, nor to develop an understanding of how one service performs at scale. The workflow-centric approach allows us to understand the relationships of components within an entire system: we can follow a request from beginning to end to understand bottlenecks, hone in on anomalous tasks, and figure out where we need to add more resources. So, looking at a very simplified
system, where we have a load balancer, a front end, a back end, a database, and maybe an external or third-party dependency, once we add redundant systems it gets particularly confusing to follow a request. How do we know which component of this system is the bottleneck? Which function call is taking the longest? Is another app on my host causing distortion of machine-centric performance metrics, something like the noisy-neighbors problem? There are so many potential paths a request can take, with potential for issues at each and every node and edge, and this becomes mind-numbingly difficult to untangle if we continue to be machine-centric. Tracing allows us to get a bigger picture of the system and to address these concerns.
And when you get to the magnitude of what we're operating at Spotify, you can see that tracing, if we did it, would help us a lot.
So, real quickly, there are a few reasons why we'd trace a system. The one that inspired this talk is performance analysis: trying to understand what happens at the 50th or 75th percentile, the steady-state problems. This helps us identify latencies, resource usage, and other performance issues, and we're also able to answer questions like: did this particular deploy of a service have an effect on the latency of the overall system? Tracing can also close in on anomalous kinds of request flows, the 99.9th percentile; those issues can still be related to performance, or tracing can help identify problems with correctness, like component failures or timeouts. Profiling is very similar to the first one, except here we're interested in particular components or aspects of the system and don't necessarily care about the full workflow. Fourth, we can answer questions about what a particular component depends on and what depends on it, which is particularly useful for complex systems. With dependencies identified, we can also attribute particularly expensive work to an edge, like component A adding significant workload with disk writes to component B, which is helpful when attributing cost to teams, services, or component owners. And finally, we're able to create models of our entire system that allow us to ask what-if questions, like what would happen to component A if we did a disaster recovery test on component B.
There are various approaches to tracing, and I'll only highlight three of them here. The first is manual tracing, which is also very simplistic: you're just generating trace IDs and adding them into your logs. These are very simple things that can be added to your web service, especially one that doesn't depend on components you don't have access to. You won't get any pretty visualizations or help with centralized collection beyond what you typically have with your logs, but it can still provide insight.
So this is a Flask example, super simple, using a decorator. You can simply add a UUID to each request, received as a header, then log it together at points of interest, like the beginning and end of a request, and at any other in-between components or function calls where you want to propagate it. This is exactly what I ended up doing for my service, which was maybe the inspiration for this talk; I seem to do a lot of conference-driven development. If the app is behind an nginx server that you're able to manipulate, you can also turn on its ability to stamp each request with the X-Request-ID header, as you see here with the add_header and proxy_set_header directives, and you can also very simply add the request ID to nginx's logs as well.
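To make this concrete, here is a minimal sketch of that decorator approach, assuming Flask and the nginx X-Request-ID header mentioned above; the function and logger names are illustrative, not the exact code from the slides:

    import logging
    import uuid
    from functools import wraps

    from flask import Flask, g, request

    app = Flask(__name__)
    log = logging.getLogger("tracing")

    def traced(view):
        """Attach a trace ID to each request and log entry/exit points."""
        @wraps(view)
        def wrapper(*args, **kwargs):
            # Reuse the ID stamped by nginx if present, otherwise mint one.
            g.trace_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
            log.info("trace=%s start %s", g.trace_id, request.path)
            try:
                return view(*args, **kwargs)
            finally:
                log.info("trace=%s end %s", g.trace_id, request.path)
        return wrapper

    @app.route("/")
    @traced
    def index():
        return "hello"

The same g.trace_id can then be logged from any helper function the view calls, which is all the propagation a single service needs.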
Next, there is black box tracing. This is tracing with no instrumentation across the components; it tries to infer the workflows and relationships by correlating variables and timing within already-emitted log messages, and the relationship inference is done via statistical or regression analysis. This is easiest with centralized logging and a somewhat standardized schema for log messages that contain an ID and a timestamp. It's particularly useful when instrumenting an entire system is too cumbersome, or when we can't otherwise instrument components that we don't own. As such it's quite affordable, and there's very little to no overhead, but it does require a lot of data points in order to correctly infer relationships, and it also lacks accuracy in the absence of instrumentation in the components themselves, as well as the ability to attribute causality with asynchronous behavior and concurrency. Another approach to black box tracing can be through network tapping, for example with sFlow or packet logging at the iptables level, which I'm sure the NSA is quite familiar with themselves. The final type of tracing is metadata propagation. This approach was made popular by Google's research paper on Dapper: components are instrumented at trace points to follow causality between functions, components, and systems, or even via common RPC libraries, which will automatically add metadata to each call. The metadata that is
tracked includes the trace ID, which represents one single trace or workflow, and a span ID for each and every point in the particular trace: request sent from client, request received by server, server response, and so on, plus the span's start and end times. This approach works best when the system itself is designed with tracing in mind, but not many people do that right now. It avoids the guesswork of inferring causal relationships; however, it can add a bit of overhead to response time and throughput, so sampling traces limits the burden on the system and on data point storage. Sampling anywhere between 0.01% and 10% of requests is often plenty to get an understanding of the system's performance. So when you have many microservices and are scaling out with many more resources, there are a few points to keep in mind when instrumenting a system, particularly with the metadata propagation approach.
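As a rough illustration of the kind of metadata that gets threaded through each call under this approach (the field names here are generic, not tied to any particular tracer):

    import time
    import uuid
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class SpanMetadata:
        # One trace_id ties every hop of a single workflow together; each unit
        # of work (client send, server receive, ...) gets its own span_id.
        trace_id: str
        span_id: str = field(default_factory=lambda: uuid.uuid4().hex)
        parent_span_id: Optional[str] = None
        start_time: float = field(default_factory=time.time)
        end_time: Optional[float] = None

        def child(self) -> "SpanMetadata":
            # Propagate the trace_id downstream, recording causality
            # via parent_span_id.
            return SpanMetadata(trace_id=self.trace_id, parent_span_id=self.span_id)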
These are the items to keep in mind, and I'll go into detail on each in a second. We want to know what relationships to track, essentially how to follow a trace and what is considered part of a workflow; how to track them, since constructing metadata to track all the relationships is particularly difficult and there are a few approaches, each with their own benefits and drawbacks; how to reduce the overhead of tracing, where the approach one chooses is largely defined by what questions you're trying to answer, and there may be a clear answer but not without its own penalties; and finally how to visualize, where the visualization that's needed is also informed by what you're trying to answer with tracing. Right, so: what to track.
Looking within a request, we can take two points of view: either the submitter point of view or the trigger point of view. The submitter point of view largely focuses on one complete request and doesn't take into account whether part of that request is caused by another request's actions. For instance, evicting a cache here, even though it was actually triggered by request two, is still attributed to request one, since the cached data comes from the first request. The trigger point of view focuses on the trigger that initiates the action: in the same example, request two evicts the cache written by request one, and therefore the eviction is included in request two's trace. Choosing which to follow depends on the answers you're trying to find; it doesn't really matter which approach is chosen for performance profiling, but following trigger causality will help to detect anomalies by showing the critical path. Right, and how to track:
essentially, what is needed in the metadata? This boils down to the fact that it's very difficult to reliably track causal relationships within a distributed system. The sheer nature of a distributed system implies issues with ordering events and traces that happen across many hosts, and there might not be a globally synchronized clock available, so care must be taken when deciding what goes into the metadata that is threaded through a trace.
Using a random ID, like the UUID in the nginx X-Request-ID header, will identify causally related activity, but then the tracing implementation must use some sort of external clock to collate traces. In the absence of a globally synchronized clock, or to avoid issues like clock skew, looking at network send and receive messages can be used to reconstruct causal relationships, because you can't exactly receive a message before it's sent, and a lot of tracing implementations use this as a very simplistic approach. However, this approach lacks resiliency: there's a potential for data loss from external systems, or an inability to add trace points to components that are owned by others.
Tracing systems can also add a timestamp derived from a local logical clock to the workflow ID. This isn't exactly the local system's timestamp; rather, a counter or a sort of randomized timestamp is paired with the trace message. With this approach, the tracing system doesn't need to spend time on ordering the traces it collects, since the order is explicit in the clock data, but parallelization and concurrency can complicate understanding these relationships.
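A toy logical clock of the kind just described might look like the following; this is a sketch of the general idea, not any particular tracer's implementation:

    class LamportClock:
        """Counter paired with each trace message so collectors can order
        events without relying on synchronized wall clocks."""

        def __init__(self):
            self.value = 0

        def tick(self):
            # Called before emitting a trace point locally.
            self.value += 1
            return self.value

        def observe(self, remote_value):
            # Called when a trace message arrives from another host; jumping
            # past the sender's value keeps "receive" ordered after "send".
            self.value = max(self.value, remote_value) + 1
            return self.value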
One can also add the previous trace points that have already been executed to the metadata itself, to understand all the forks and joins. This also allows immediate availability of the trace as soon as the workflow completes, with no need to spend time collating and establishing the order of causal relationships; but as you can imagine, the metadata will only grow in size as it follows the workflow, adding to the payload. So this essentially boils down to the following. If you really care about the payload size of requests, then a simple unique ID is the way to go, but at the expense of needing to infer relationships. You can then add a timestamp of sorts to help establish explicit causal relationships, but you're still susceptible to potential ordering issues if trace data is lost. Or you may add the previously executed trace points to avoid data loss and understand the forks and joins of the request, gaining immediate availability of trace data since causal relations are established along the way, but suffering in payload size. Then there's also the fact that there are no popular open source tracing systems that actually implement that last one. Tracing will have an effect on response time and storage no matter what you choose; for instance, if Google were to trace all web searches, then despite its intelligent tracing implementation it would impose a 1.5% throughput penalty and a 16% increase in response time. I won't go into very much detail, but there are essentially three basic approaches to sampling. The first is head-based, which makes a random sampling decision at the start of the workflow and then follows it all the way through to completion. The next one is tail-based, which makes the sampling decision at the end of the workflow, implying some caching is going on until the sampling decision is made; it needs to be a little bit more intelligent and is particularly useful for tracing anomalous behavior. Finally, there's unitary sampling, where the sampling decision is made at the trace point itself, which therefore prevents the reconstruction of a full workflow. Head-based is the simplest and probably the most ideal for performance profiling, and both head-based and unitary sampling are most often seen in current tracing implementations; I'm not quite sure if there's a tracing system that actually implements tail-based sampling.
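A head-based sampler can be as small as the following sketch, assuming the upstream decision is propagated as a flag in the request metadata (the names are illustrative):

    import random

    def head_sample(upstream_decision, rate=0.01):
        # Decide once at the edge of the system; every downstream hop honours
        # the propagated flag so a workflow is either kept or dropped whole.
        if upstream_decision is not None:
            return upstream_decision
        return random.random() < rate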
Right. Which visualizations you choose to look at depends on what you're trying to figure out. Gantt charts are popular and quite appealing, but they only show requests from a single trace; you've probably seen this type before if you've looked at the network tab of your browser's developer tools.
If we're trying to get a sense of where the system's bottlenecks are, request flow graphs, which are basically directed acyclic graphs, will show workflows as they're executed, and unlike Gantt charts they can aggregate information from multiple requests of the same workflow. Another useful representation is the calling context tree, used to visualize multiple requests of different workflows; this reveals both valid and invalid paths that a request can take, and it's best for creating a general understanding of system behavior. So the takeaway here is that there are
a few things we need to consider when we trace a system. We should have an understanding of what we want to do, what questions we're trying to answer with tracing. Certainly there will be other realizations and questions that come out of a traced system; for example, with Dapper, Google is able to audit systems for security, asserting that only authorized components are talking to sensitive services. But without understanding what you're trying to figure out, you may end up approaching your instrumentation incorrectly. The answer to this question will help identify the approach to causality, whether from the trigger point of view or the submitter point of view. Another important question: how much time can be put into instrumenting the system, and can you even instrument all its parts? This informs the approach you use for tracing, black box or not. If you can instrument all the things, or at least some of them, then it becomes a question of what data you should propagate through the entire workflow. And finally, how much of the flows do you want to understand? If you want to understand all requests, then you should be prepared to take a performance penalty on the service itself, and then you can have fun storing all that data. Or is a percentage of the flows okay, and if so, how do we approach sampling? That's informed by your answer to the what-do-we-want-to-know question; for general performance, head-based sampling suffices. We also need to think about whether you want to capture the full workflow of a request or only focus on a subset of the system, which also informs your sampling approach, unitary or not. So, in terms of performance and understanding where bottlenecks are: you want to try to preserve the trigger-causality metadata, as it surfaces issues like the critical path to the bottleneck; head-based sampling is fine, as we don't need intelligent sampling, and even with very low sample rates we can get a good idea of where a problem lies, since we essentially care about the 50th or 75th percentile; and finally, a request flow graph is ideal here, since we don't care about anomalies and we want big-picture information rather than looking into particular individual workflows. So most often,
once you're tracing a system, the problem will reveal itself, as will the solution, but not always. So I do have a few questions to ask yourself when figuring out how to improve your service's performance. The first one: are you making multiple requests to the same service? Round trips and network calls are expensive; perhaps there's a way to set up batch requests, or to accept batched requests on your end. Perhaps a service doesn't need to be synchronous, or blocks unnecessarily. For example, if you run some big social networking site, can you grab a user's profile photo at the same time as you populate their timeline, while you also fetch their messages? Is the same data being repeatedly requested but not cached? Or maybe you're caching too much, or not the right data, or the expiration is too high or too low. What about your site's assets? Could they be ordered better to improve loading time? Can you minimize the amount of inline scripts, or maybe make your scripts async? Are there a lot of distinct domain lookups that add time waiting for DNS responses? How about decreasing the number of actual files referenced, or maybe minifying and compressing them? There's a bunch of stuff that can be done on the front-end part. And then finally, perhaps you can use streaming when returning large amounts of data; are you otherwise able to have your servers produce elements of the response as they are needed, rather than trying to produce all elements as fast as possible?
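For the social-network example above, the idea of not blocking unnecessarily could be sketched with asyncio roughly like this; the fetch coroutine is a placeholder for real downstream calls:

    import asyncio

    async def fetch(what):
        # Stand-in for an HTTP call to a downstream service.
        await asyncio.sleep(0.1)
        return what

    async def build_profile_page(user_id):
        # Issue the independent downstream calls concurrently instead of one
        # after another, so total latency is roughly the slowest call, not the sum.
        photo, timeline, messages = await asyncio.gather(
            fetch("photo"), fetch("timeline"), fetch("messages")
        )
        return {"photo": photo, "timeline": timeline, "messages": messages}

    asyncio.run(build_profile_page(42))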
Right, now for probably the most interesting part: the actual tracing systems that are out there. There is an open standard for distributed tracing, OpenTracing, allowing developers to instrument their code without vendor lock-in; they do this by standardizing the tracing span API. One criticism I have of OpenTracing is that it doesn't prescribe a way to implement more intelligent sampling other than a simple percentage and setting priorities. There's also a lack of standardization for how to track relationships, whether submitter or trigger; it's really just a standardization for managing the span itself. But mind you, it's a very young specification that's still evolving. There are a few popular self-hosted solutions that do support the OpenTracing specification. Probably the most widely used is Zipkin from Twitter, which has implementations in Java, Go, JavaScript, Ruby, and Scala. The architecture is basically the instrumented app sending data out of band to a remote collector, and there are a few different transport mechanisms, including HTTP, Kafka, and Scribe; for propagating data from a service, the current Python libraries only support a subset of these transports. Zipkin does provide a nice Gantt or waterfall chart for individual traces, and we can view a tree of dependencies, but it's essentially only a tree with no information like latencies or anything else.
Using py_zipkin, on which other libraries are based, you can define a transport mechanism, like the HTTP transport here, which can be as simple as posting a request with the content of the trace, or you can otherwise write one for Kafka or Scribe. Beyond that, it's just a simple context manager around any place you want to trace.
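A trimmed py_zipkin sketch in the spirit of its documentation, with placeholder service, span, and endpoint names, looks roughly like this:

    import requests
    from py_zipkin.zipkin import zipkin_span

    def http_transport(encoded_span):
        # Simply POST the encoded span to a Zipkin collector.
        requests.post(
            "http://localhost:9411/api/v1/spans",
            data=encoded_span,
            headers={"Content-Type": "application/x-thrift"},
        )

    def handle_request():
        with zipkin_span(
            service_name="my_service",
            span_name="handle_request",
            transport_handler=http_transport,
            port=5000,
            sample_rate=10.0,  # percentage of requests to trace
        ):
            pass  # the work you want timed goes here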
Jaeger is another self-hosted system that supports the OpenTracing specification and comes from Uber. Rather than the application's client library reporting to a remote collector, it reports to a local agent daemon, which then sends the traces on to a collector. Unlike Zipkin, which supports Cassandra, MySQL, and Elasticsearch, Jaeger only supports Cassandra for storage. The UI is very similar to Zipkin's, with really pretty waterfall graphs and dependency trees, but again nothing to help aggregate the performance information you're interested in. The documentation is also horribly lacking, unfortunately, although they do have a pretty decent tutorial to walk through. The client library for Python is a bit cringeworthy.
This is a trimmed example from the docs, just to give the gist. You basically initialize a tracer, and then the opentracing Python library is used to create standard child spans as context managers. But their use of time.sleep for yielding to the I/O loop is a bit of a head-scratcher. The docs also mention support for monkey-patching libraries like requests and urllib2, so all I can say is: use at your own risk. After I presented this at PyCon a couple of months ago, like the day after, they created an issue, and they've since added a comment in the code explaining that line, which I still don't quite get.
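For reference, a minimal jaeger_client setup in the style of that tutorial looks roughly like the following; the service and span names are placeholders, and the trailing sleep is the documented workaround for letting spans flush before closing:

    import time
    from jaeger_client import Config

    config = Config(
        config={
            "sampler": {"type": "const", "param": 1},  # trace everything (demo only)
            "logging": True,
        },
        service_name="my-service",
    )
    tracer = config.initialize_tracer()

    with tracer.start_span("parent") as parent:
        with tracer.start_span("child", child_of=parent) as child:
            child.set_tag("example", True)

    time.sleep(2)  # yield to the I/O loop so buffered spans get reported
    tracer.close()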
There are a couple of other self-hosted options that I'm not that familiar with, and a few more that don't have Python client libraries. In case you don't trust your own systems, there are a few hosted services out there as well. There's Stackdriver Trace from Google, not to be confused with Stackdriver Monitoring. Unfortunately, Google has no Python gRPC client libraries to instrument your apps, although they do have a REST API if you feel so inclined. But they do support Zipkin traces: you can set up a Google-flavored Zipkin server, either on their infrastructure or on yours, and have it forward traces to Stackdriver. It's actually pretty easy; I was able to spin up a Docker container and send traces within a couple of minutes. Annoyingly, they have a storage limitation of 30 days, the same as with their logging. My last criticism is their UI: they have simple plots of response time over the past few hours, and a list of all traces is automatically provided, but you have to manually create analysis reports for each time period you're interested in to get all the fancy distribution graphs; they're not automatically generated, unfortunately.
And then finally, Amazon also has a tracing service available called X-Ray. Like I said, it looks like they do not explicitly support Python, only Java, .NET, and Node.js at the moment, but it has support for sending traces to a local daemon which then forwards them to the X-Ray service. What's nice about X-Ray, despite it being proprietary and not OpenTracing compliant, is that you're able to configure sampling rates for different URLs of your application, based on either a fixed number of requests per second or a percentage of requests; however, configuring these rules isn't the easiest. What's also almost redeeming is the visualizations: there's the typical waterfall chart, but they also have a request flow graph where we can see average latencies, captured traces per minute, and requests broken down by response status. So basically X-Ray seems pretty cool and probably the most useful of all of these, but it will take some time to instrument your app, and it introduces vendor lock-in. I'll also briefly mention that there are services that do application performance measurement; I don't have experience with these, but something like New Relic might be of interest to you.
Right, a quick opinionated wrap-up. Like many here, if you run microservices, you should be tracing them; it's otherwise very difficult to understand your entire system's performance, anomalous behavior, and resource usage, among many other aspects. However, good luck: whether you choose a self-hosted solution or a provided service, the documentation is all-around lacking. Granted, it's a very young space, very much growing as the OpenTracing standard develops, and as I mentioned, language support isn't 100% even, so keep that in mind. There's a lack of configuration for relationship tracking, more intelligent sampling, and the available visualizations, but it is indeed an open standard that can be influenced, or you might feel so inclined to implement your own, to which: godspeed.
Finally, all of this, with some pretty graphs and more, is in a blog post of my own. Thank you.

Metadata

Formal Metadata

Title Tracing, Fast and Slow: Digging into & improving your web service’s performance
Series Title EuroPython 2017
Author Root, Lynn
License CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, modify, and reproduce the work or its content in unchanged or modified form for any legal and non-commercial purpose, and distribute and make it publicly available, provided that you credit the author/rights holder in the manner they specify and that you pass on the work or content, including in modified form, only under the terms of this license
DOI 10.5446/33744
Publisher EuroPython
Publication Year 2017
Language English

Content Metadata

Subject Area Computer Science
Abstract Tracing, Fast and Slow: Digging into & improving your web service’s performance [EuroPython 2017 - Talk - 2017-07-11 - Anfiteatro 1] [Rimini, Italy] Do you maintain a Rube Goldberg like service? https://s-media-cache-ak0.pinimg.com/564x/92/27/a6/9227a66f6028bd19d418c4fb3a55b379.jpg Perhaps it’s highly distributed? Or you recently walked onto a team with an unfamiliar codebase? Have you noticed your service responds slower than molasses? This talk will walk you through how to pinpoint bottlenecks, approaches and tools to make improvements, and make you seem like the hero! All in a day’s work. The talk will describe various types of tracing a web service, including black & white box tracing, tracing distributed systems, as well as various tools and external services available to measure performance. I’ll also present a few different rabbit holes to dive into when trying to improve your service’s performance
