Log all the things!

Speech transcript
So, good morning everyone. I'm here to talk about logging all the things, and while that seems somewhat obvious, I want to take it a little bit further and explore what logging is, what it can be, and what the important aspects are to keep in mind when you're doing logging, especially centralized logging, and also the motivation for it.

Every good talk begins with definitions, so: what is a log, what do we mean by logs during this talk? Essentially, when you come to think about it, a log is any sort of message, any sort of document, any piece of data that has a timestamp and doesn't change after it has been created. That applies to many things. The obvious case is lines in a log file, but a Twitter feed is essentially also a log, and so is any stream of events as they happen. It also covers what happens in your organization on the business side: all the invoices you send out, all the money that comes back in, all these transactions can be considered logs and can be treated in very much the same way. You can actually use the same system for them, and I'm trying to convince you here that it would be beneficial. Metrics can be viewed the same way as well: your CPU usage, your free memory, all this information is traditionally stored next to the logs in some separate system, but in reality it's the same kind of data, except that instead of a textual representation of what's going on you have a number. You want to treat it and work with it in exactly the same way as you would with logs. So I will probably keep saying "logs" throughout the talk; just keep in mind that it applies to anything that has a timestamp and doesn't change once created. That's what logging means for us today.

So why should we care, why do we talk about logs and metrics so much? Well, currently any company out there generates huge amounts of data: all the different events that are happening, every incoming request from a user hitting the load balancer, then the web server, then the server serving the static files, then the database. And that's just a simple website; imagine something more complicated, imagine, God forbid, that you have something like microservices, and try to track all of the different services and all the different requests going back and forth. That's a lot of data, and those are just the technical data. We also have a lot of business data: how is your business doing, how is your traffic, what is happening on the business side of things. A lot of the time that data just goes to waste, if it is recorded at all. Some of it you always have to keep because the service depends on it, but a lot of it goes to waste when it could be used.

But let's start from a simple question: what happened last Tuesday, preferably at three in the morning?
Say a customer comes to you and says: "Hey, I use your service, I really like it, but last Tuesday at three in the morning this annoying thing happened to me and it really bugs me." How do you find out what actually happened? The first approach is typically grep: you have some log file and you grep through it. That's fine on your local machine if you're looking for something that just happened. But if you're looking at a production system, it's not that nice, because you would have to go to multiple machines; sure, we can all write SSH scripts and things like that, and we can grep multiple log files too. It gets a little hairy, but you can still do it. What's much harder is any sort of analysis or discovery, because with grep you already have to know what you're looking for. If you don't know what you're looking for, there is no way grep will help you.

And lastly, the crucial part of the question: Tuesday at 3 AM. Who here can correlate all the different log files to find what happened at three? I see no hands raised, and that's probably fairly accurate, because time is fun, especially when you're dealing with it in logs. The nice thing about time formats is that everybody likes their own, and people don't really like to share. We had a sample here of some of the formats you might see in very common log files, and some of them are quite interesting. For example, postfix just assumes it will never run for more than a year, since its timestamps don't include the year at all; that's not really the kind of assumption I'm looking for in my systems. And you can see they're all different, and some of them are not even sortable. So how does this work if I want to grep for something that happened Tuesday at three in the morning? The obvious answer is: you cannot. So we need something else instead of grep.
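To make the timestamp problem concrete, here is a small Python sketch; the sample lines and formats are illustrative, not taken from the talk. It shows how differently common log files can write the same instant, and why a postfix-style stamp without a year cannot even be parsed into the right year, let alone sorted reliably:

    from datetime import datetime

    # Three ways the same moment can appear in different log files.
    samples = [
        ("Jul 19 03:00:00", "%b %d %H:%M:%S"),                    # syslog/postfix style: no year at all
        ("19/Jul/2016:03:00:00 +0000", "%d/%b/%Y:%H:%M:%S %z"),   # Apache access-log style
        ("2016-07-19T03:00:00+0000", "%Y-%m-%dT%H:%M:%S%z"),      # ISO 8601-ish: sortable as plain text
    ]

    for raw, fmt in samples:
        parsed = datetime.strptime(raw, fmt)
        # The first one silently defaults to the year 1900: neither grep nor sort can fix that.
        print(raw, "->", parsed.isoformat())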
And we want more than grep anyway. We don't just want to look at individual log files, at individual events from individual sources; we want to be able to correlate different events. That's why I gave you such a broad definition of what a log actually is and what it can be, because only once you get logs and data from multiple sources into one place do you really get to see the interesting stuff.

For example, if you compare the logs from your load balancer and from your web server and just look at the raw numbers, you can immediately spot certain behaviors. If the traffic on the load balancer is going way up and the web server traffic stays steady, that's probably a good thing: it means you have some sort of caching on the load balancer and it's working. But if you see them rising together, it means the caching you have in place doesn't work, and that's something that's nearly impossible to discover without having both of these systems in one place where you can compare the numbers. The same goes for the relationship between your web server and your database: you also want some sort of caching there, and you definitely don't want to scale linearly, where more web requests mean proportionally more database requests, because that doesn't scale well. So you need to be on the lookout for these kinds of patterns, and this is again something very difficult to discover otherwise. Also, what happens when you see a rise in errors on your web server? Does it maybe correlate with a new deploy, or with a new employee getting on board, or a new client, or something like that? And then there are the more business-flavored questions: we bought all these ads, we got all this traffic from somewhere, do we really have something to show for it? For a lot of these things you can go to external services, but external services are external; they don't know your system, so they may be difficult to tie in to the rest of your infrastructure.

So this is what we want: to be able to look at what happened Tuesday at 3 AM, and to be able to answer all these questions and do the correlation. How will it look, what's the ideal shape of this kind of system? We need a central storage, something that can handle the different data coming from different sources and that can handle the amount of data. We also need the data to be enriched: we don't just want the raw data, the raw text line from the log file, that's not interesting. We want it parsed, and we also want to do some enriching. For example, sticking with the web server example: we have a URL, and we want to match that URL to the article, or to the product and the shop, and to the category of that product, because once we have that we can immediately see much more in our data. The same when we have an IP address or the user agent: we might want to see which country the user comes from. And additional stuff too: we see some cookie there, so was that user logged in, and who was the user? Once we have all of this, our data immediately tells us much more.
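To make that enrichment idea concrete, here is a small illustrative sketch (the field names are my own, not something shown in the talk) of what one web-server log line might look like before and after parsing and enrichment:

    # The raw line, as it sits in the access log.
    raw = ('1.2.3.4 - - [19/Jul/2016:03:00:00 +0000] '
           '"GET /products/42 HTTP/1.1" 404 512 "-" "Mozilla/5.0 ..."')

    # The same event after parsing and enriching: this is what you actually want to store and search.
    enriched = {
        "@timestamp": "2016-07-19T03:00:00+00:00",
        "client_ip": "1.2.3.4",
        "geoip": {"country": "DE", "city": "Berlin"},          # looked up from the IP address
        "request": {"method": "GET", "path": "/products/42"},
        "product": {"id": 42, "category": "books"},            # resolved from the URL
        "response": {"status": 404, "bytes": 512},
        "user": {"id": "alice", "authenticated": True},        # resolved from the session cookie
        "user_agent": {"name": "Firefox", "os": "Windows"},
    }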
And once we have this information, of course we want to be able to search it, to filter, to get results back. So if you know you have an annoyed user who can never find anything on the website, you can easily do a search and ask: from this user, did I see any 404s? Maybe there is something wrong with the routing. We also want to be able to analyze all of it, not just look at individual records but spot patterns, visualize the data, and discover interesting things.

So essentially, what we've designed here, what our wish list adds up to, is centralized logging; that's the technical term for this kind of system, and it consists of several steps that will not surprise you at all by now. We need to collect the data. We need to parse the data if it comes in a textual format: extract the different fields that are otherwise hidden in the text and create a structure from it. We need to do the enriching, like looking up the IP address and all the other things. We obviously need to store the data somewhere that is capable of doing search and aggregations. And finally, and most importantly, we need to visualize the data, because we humans are pattern-recognition machines: it's very easy for us to spot an anomaly in a pattern, but it's very hard for a computer to do so. You would have to instruct the computer specifically what to look for, or you would have to have a very, very smart computer, and smart computers are expensive, especially in time.

So how can we accomplish this using the Elastic Stack? Elastic is the company I work for; we produce all of these things, and don't worry, this isn't a sales pitch, everything is open source. This is how it maps. In the center of everything, to store the data and do the search and analysis, we have Elasticsearch, which is the data store that can handle this amount of data. For visualizations we have Kibana; we'll see pretty screenshots later. And for collection and parsing we have two parts, Beats and Logstash, and they are a little bit different. Beats are more like lightweight agents that sit on your machines, collect the data, and send it somewhere, either for further processing to Logstash or directly into Elasticsearch. Logstash is much more capable, with many more options, but it's also much heavier to run. Just to demonstrate what I mean by that: a Beat is a small agent written in Go, a statically compiled binary that you can just upload somewhere and it works. Logstash is written in Ruby, runs on the JVM (and I'm pretty sure that's very popular in this crowd), and is far more sophisticated. If you really need more from the system, this is typically how it would look: you use Beats to collect the data and then Logstash for doing the parsing and enriching, because that's what it's all about. So that's the overview.
Now let's get into it. The first step in the process is Beats. Beats is sort of a family of products: there are several different beats, and most importantly you can create your own. Beats are written in Go, and we even have a beat generator, so you can run a command that creates all the scaffolding and all the boilerplate code for you, and you essentially just have to write the one function that actually collects the data. We also have several different types of beats out of the box, so let's see some examples.

The first one is Metricbeat: something that regularly goes and collects some data. It has different modules, and in an example working configuration you state what you want: say, every second collect the system info from this host, and we also have a module for Apache where every 30 seconds we want to do the same kind of thing; the beat's modules know how to go and fetch that information. We also have Filebeat, which is essentially: here is a log file, tail it. Optionally you can say that if a line doesn't begin with the expected pattern, it should be merged with the line before. The problem there is stack traces, things that span multiple lines; this way we can group them together already at the beat level, when we're first collecting the data, because doing it later is a hard problem once you have data coming from multiple sources and have to figure out which lines actually belong together. It's simplest at the source (I'll sketch that grouping rule below). And then my favorite beat is Packetbeat: all you have to tell it is "I have this protocol running on this port", and Packetbeat just keeps monitoring the network and logging what's going on. Because it understands the protocol, it can give you more information; for example, it understands the Postgres protocol, so it can tell you "this is a SELECT, this is a transaction, this is a SELECT going to this table" and log all of that in a structured manner.

Those are all the inputs you can have, and finally you have an output: the output is either Elasticsearch directly, or, as in this example, we just take the data and send it to Logstash for further processing over its own protocol, so that Logstash can do some more work on it.
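The Filebeat multiline option mentioned above is easiest to picture as "a line that doesn't start a new event belongs to the previous one". Here is a rough Python sketch of that grouping rule; the "new events start with a timestamp" regex is an assumption, and Filebeat's real configuration keys look different:

    import re

    # Assume every new event starts with a timestamp like "2016-07-19 03:00:00".
    NEW_EVENT = re.compile(r"^\d{4}-\d{2}-\d{2} ")

    def group_multiline(lines):
        """Merge continuation lines (e.g. stack traces) into the event they belong to."""
        event = None
        for line in lines:
            if NEW_EVENT.match(line):
                if event is not None:
                    yield event
                event = line.rstrip("\n")
            elif event is not None:
                event += "\n" + line.rstrip("\n")   # indented traceback line, wrapped message, etc.
        if event is not None:
            yield event

    # for event in group_multiline(open("app.log")):
    #     ship(event)   # hypothetical: send the whole multi-line event downstream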
So let's go to Logstash itself. What is Logstash? It's the data ingestion pipeline: there are inputs, there's a bunch of filters, and then there are outputs. It's really just that, a pipeline.

So what are the different inputs? There are many, many different ones, but the most interesting, at least for me, are all the different queues (Redis, Kafka, ZeroMQ and so on), all the different ways we can get data out of a queue. You can also take data straight from the network: just open a TCP socket and listen to whatever comes in. There are the specialized ones like the beats input, which is pretty obvious, or even syslog or log4j. You can even go to S3 or some other storage system. So there are many, many different types of inputs for getting data and ingesting it into the pipeline.

Then, sort of the meat of it, are all the different filters you can apply to your data. This is just a small sampling of the filters that exist, and highlighted are the ones that, again, I personally consider more interesting. For example anonymize: if you're consuming something that can potentially contain sensitive information, like e-mail addresses, stuff that you don't necessarily want exposed to everyone in your company but you still want people to be able to inspect the logs, use anonymize, and everything goes through a one-way hash. All the same e-mails will lead to the same hash, but nobody will be the wiser about who it actually was (there's a tiny sketch of this idea below). I've already talked about the geoip filter: it takes an IP address and gives you back the country and the city the user came from, which you can visualize very nicely on a map of where your traffic is coming from; you can even see, for example, which users around the world have the best experience, the best latency, or the worst. Grok is there when you want to parse arbitrary text; json is kind of obvious, if you have JSON entries it just parses them. And useragent: if you've ever seen a user-agent string in your log files, you know it's a nightmare to make sense of, even for people; the useragent filter parses it into structured information, so you get "this is Chrome, this version, running on Windows".

And then there are a number of different outputs. The crucial one is probably Elasticsearch, but there are many others: you can write to a different queue to be processed by another system, you can write to a completely different storage if you're so inclined, you can even write to MySQL or something like that. I don't know why you would do that, but it might actually make sense for some of the data, because what you can do with Logstash quite easily is say: put all the data into Elasticsearch, and if you see some fatal error, send that to me over e-mail, and if it's really, really critical, just ping PagerDuty and have my pager go off so I can jump on it right away. So you can have multiple different outputs with filters, and you can be alerted immediately about what's going on, in real time.

So that's Logstash. It's really not that hard: a config with inputs, some number of filters, and some outputs. The only interesting part is that you can have multiple outputs, and obviously multiple filters and multiple inputs. And then the data goes into Elasticsearch.
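Coming back to the anonymize filter for a second: the idea is simple enough to sketch in Python. Run the sensitive field through a one-way hash, so equal values stay correlatable but the original cannot be read back. This illustrates the principle, it is not Logstash's actual filter implementation:

    import hashlib

    def anonymize(value, salt="replace-with-a-secret"):
        """One-way hash: the same e-mail always maps to the same token, but can't be recovered."""
        return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

    event = {"user_email": "alice@example.com", "action": "login_failed"}
    event["user_email"] = anonymize(event["user_email"])
    print(event)   # the address is gone, but all of this user's events still share one token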
So what is Elasticsearch? Again, just as a high-level overview: it's a distributed search and analytics engine, it's open source, and it's document-based. By document we mean that anything you can express as JSON, we can index and we can search on. It's based on Apache Lucene, which is the library that does all the heavy lifting. There are obviously clients in any of your favorite languages, and my guess is that your favorite language is Python, so we do have a Python client for Elasticsearch that you can use. The nice part about Elasticsearch is that it is distributed, and it has some qualities that make it very well suited for the logging use case.

So how does it look inside? At the very highest level, Elasticsearch is a clustered solution: you have a number of nodes that work together, and from the outside it's completely transparent, you don't really care what's happening inside. As a client you can always talk to any of the nodes in your cluster, and they will all answer the same questions in the same way. So you don't have to worry about any of this, but it's nice to know how it works so that you can reason about what your expectations should be.

In the cluster, the data is stored in indices, and each index is essentially a collection of shards. What we do is say: we have this index, which is just a logical grouping of documents, and we also configure how it's laid out, say split into five shards, with each of these shards stored twice in case we lose a node, so we can keep going. These shards are the unit of scale in Elasticsearch. So in this example cluster we have two indices: one with four shards and one replica each, so two copies of every shard, and one with only two shards and no replicas, because we don't care about that index that much. Those shards are what actually lives on the nodes, and the cluster keeps rearranging them: if I add one more node, the cluster will say "oh, I have a free node" and move some of the shards over to it. There is a primary and a replica, but that's just a logical distinction; the shards are exactly the same and do exactly the same amount of work, so again something you typically don't have to worry about.

But here is what this means, and it's a very important thing: when you search across one index, in this case you have to go to all four of its shards, and that's OK, Elasticsearch handles that. And it is the exact same operation if I search four shards no matter where they come from: they can be inside one index or inside four indices; the only thing that really matters is the number of shards. This allows for some interesting things. We can create a new index every day, with any number of shards. Typically you would start with one shard when you're starting your system and grow the number of shards as one shard stops being enough, and then when you search, you just search over as many indices as you need data for: if you want data for the last seven days, you just search over the last seven daily indices.
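Since the talk mentions the official Python client, here is a minimal sketch of writing one event into a daily index and then searching across all of them with elasticsearch-py. The index and field names are made up, and the exact call signatures vary between client versions:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])

    # Index one structured log event into today's index.
    es.index(
        index="logs-2016.07.19",
        body={
            "@timestamp": "2016-07-19T03:00:00+00:00",
            "level": "ERROR",
            "message": "payment failed",
            "user": "alice",
        },
    )

    # Search across every daily index at once: to Elasticsearch it's just a set of shards.
    result = es.search(index="logs-*", body={"query": {"match": {"user": "alice"}}})
    print(result["hits"]["total"])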
This also means that you can treat the indices differently. For the current index, the one for today, you will have more replicas, and you will keep it on the nodes that live on the stronger boxes, the boxes with SSD drives and everything, because those indices are doing the most work: they're actively indexing new data. As the data gets older, say a week-old index, you just back it up: you do a snapshot that gets stored on S3 or something like that, and you remove the replicas. That means that at this point, if you lose a node, you will lose some data, but that's OK: you have a backup, and this data is not that important anymore, so it's fine to trade a little stability for money sometimes. A month-old index you might as well move to the big boxes with huge spinning disks, where everything can live. Then you can even close the indices: they will still live on disk, they will not take up memory, and they will not be available for search, but you can make them searchable again very easily, just by opening them. And finally, after some amount of time, you can delete the data. So you have a very clear plan for how to gradually degrade your data and make it use fewer resources while still keeping it. Sure, it means that a search across all the data will be slower, but that's OK: 90 percent of your users will probably just want to search today, or yesterday, typically actually just the last hour; they just want the dashboard for the last hour that they can put up on the wall and have it sit there, refreshing every minute. So that's one nice feature of Elasticsearch that's very relevant to the logging use case, and how you can make use of it.
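That retire-and-delete plan is exactly what the Curator tool (which comes up in the Q&A) automates, but the core of it is small enough to sketch directly with the Python client. The daily index naming pattern and the age thresholds here are assumptions:

    from datetime import datetime
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])
    today = datetime.utcnow().date()

    for name in es.indices.get(index="logs-*"):
        day = datetime.strptime(name, "logs-%Y.%m.%d").date()
        age = (today - day).days
        if age > 30:
            es.indices.delete(index=name)      # old enough: drop it entirely
        elif age > 7:
            es.indices.close(index=name)       # stays on disk and reopenable, but out of memory
        elif age > 1:
            # yesterday and older: stop paying for replicas
            es.indices.put_settings(index=name, body={"number_of_replicas": 0})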
And speaking of dashboards: the last part of the Elastic Stack is Kibana. Kibana is a small JavaScript application that provides visualizations for your data in Elasticsearch.
It doesn't even have to be log data. And you can see immediately here what I talked about: there is a conspicuous gap here in the data, and you spot it instantly, because again, you're human; that's what visualizations are for. In this one the data is split by country; for each country we split the users of our website again by whether they're authenticated or not, and for each of these two groups we ask what browser they're using. Immediately you can see very different things for different countries: we have China here, where we have mostly authenticated users and some unauthenticated ones, and over here we have some country, I don't know which, where almost nobody is logged in. And you can see that immediately, because it just pops out; again, the human pattern-recognition thing.
If you use the geoip filter in Logstash, you can see on a map where your users are coming from, just by clicking around with a mouse.
And you're not limited to just pretty pictures: you can actually drill down to the individual records, and you can search. In this case I'm looking for responses that went to IE 6 and are 400 to 600 kilobytes in size, and I can see the individual records, the individual URLs. You can see we're using data from the US government here; they actually publish their logs publicly. So you can drill down, click into a record, and see all the different values.
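Underneath, that drill-down is just a filtered search. Here is a hedged sketch with the Python client, with the field names (user_agent.name, user_agent.major, bytes) assumed rather than taken from the demo:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])

    result = es.search(
        index="logs-*",
        body={
            "query": {
                "bool": {
                    "filter": [
                        {"term": {"user_agent.name": "IE"}},
                        {"term": {"user_agent.major": "6"}},
                        {"range": {"bytes": {"gte": 400000, "lte": 600000}}},
                    ]
                }
            }
        },
    )
    for hit in result["hits"]["hits"]:
        print(hit["_source"].get("url"), hit["_source"].get("bytes"))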
Putting all of these things together, this is how it looks logically: you collect the data with Beats, you send it to Logstash, which then stores it in Elasticsearch, and you visualize it with Kibana. The ultimate architecture would be that, instead of just arrows between the components, you have a queue in each arrow, something like Kafka sitting between Beats and Logstash and between Logstash and Elasticsearch. But that's only once we're talking hundreds of thousands of requests per second, or millions per second. If you only care about thousands per second, you can just wire it up directly like this and you'll be perfectly fine. If you need more capacity, you add more machines, more nodes at each level: you can obviously have more than one Logstash, and you should have more than one Elasticsearch node to get any sort of high availability. So this is how it works, and it's really not that difficult to set up. You can start with everything on one machine; when you're starting out I recommend you use just Beats and Elasticsearch alone, no Logstash, and it will just work. Only when you discover you need more, like doing the enriching and so on, do you introduce Logstash; that's a minimal change in your configuration, and you can keep growing from there, for example by moving Logstash and Elasticsearch onto separate machines.

So how does Python come into it, what are the concerns when you're logging from Python? The first important one is: enhance your logs. Don't just log "this happened", but also, for example, how long it took; if I'm running queries against the database, how many were there and how long did they take. Also include some metadata: which user are we working with, what is the page we're currently on, to stay with the web example. And ideally, log it as JSON, because if you log plain text you will have to parse it later, so you're both serializing into text and then parsing it back out; both of those things are pretty error-prone and they take a lot of CPU. No human is going to look at the individual message anyway; we'll be looking at it through a dashboard, and we care about the individual fields, not about one textual representation that includes all of them.

The way to do this is a Python package called structlog, actually created by Hynek; he's somewhere around, I believe he's giving a talk in one of the other tracks. What structlog enables you to do is exactly that: add structured info to your logging, proper fields with names and values. With that you can track information through your services: if you have, for example, a load balancer, you can attach a session ID as an HTTP header and follow a request even if it has to go to two different web servers; in the end you put them back together and track that one request across your different systems. You can add a little comment to each of your SQL queries, again to match it back to the request that started it. So you're able to track one user action on your front end to everything that happens on your back end. Ideally you want to log all of this into a file; you could of course send it directly to Elasticsearch or to Logstash,
but at that point, what happens if your logging infrastructure goes down, or if you want to upgrade it? The worst-case scenario is that it actually impacts your production application, and that's not really acceptable. So what you want to do is use some sort of buffer, and the easiest buffer you can find, the one that's best supported everywhere, is a plain file. So log into a file, and then have Filebeat sitting there, tailing it and sending it on, either directly to Elasticsearch or to Logstash for further processing. Then you're perfectly fine if your logging system goes down, because you're just writing to a file: your application will still run, you will not lose any data, you can backfill it later, and it gives you a lot more flexibility.
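A minimal structlog setup that follows this advice, structured events rendered as JSON and written to a plain file that Filebeat can tail, might look like this; the processor chain and the file name are just one reasonable choice, not something prescribed in the talk:

    import logging
    import structlog

    # Plain stdlib logging writes the rendered JSON lines to a local file: our buffer.
    logging.basicConfig(filename="app.log.json", format="%(message)s", level=logging.INFO)

    structlog.configure(
        processors=[
            structlog.processors.TimeStamper(fmt="iso"),   # every event gets a timestamp
            structlog.processors.JSONRenderer(),           # one JSON document per line
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
    )

    log = structlog.get_logger()

    # Bind context once; every message then carries the fields needed for correlation.
    log = log.bind(session_id="abc123", user="alice", page="/checkout")
    log.info("db_queries_done", query_count=3, duration_ms=42)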
I think that's enough for an overview of what's possible, why you should do it, and what the key concepts are to keep in mind when designing a system like this. Now we have some time for questions. Thank you.

Question: is there an open-source solution for user authentication in Elasticsearch? Currently, no. Elasticsearch only speaks HTTP, so what you can do is put nginx in front of it and do HTTP auth and SSL there. It's difficult to do different levels of access that way; it's possible, but it doesn't cover all cases. Still, it will get you 80 percent of the way there and it's very easy to set up. If you need more than that, unfortunately you currently have to pay us money: we do offer commercial plug-ins for Elasticsearch, and the one for security is one of them.

Question: what do you suggest for an environment where we deploy the full stack at the client side and we don't have access to it, sometimes not even SSH; what's your solution for effectively getting at the logs? There are two different ways to do this. One is that you install Elasticsearch and Kibana with every installation at the client, but that's probably only worth it if the client gets some value out of it as well. Otherwise, just have them create a bundle of the logs and ship it over, and then have a setup of your own that is configured to take that bundle, run it through the entire pipeline, get it into Elasticsearch, and visualize it. At that point it's up to you whether you spin up a temporary cluster, for example on AWS, for each of these bundles you receive, or whether you have one big cluster and collect the data from all the clients in it. Does that make sense?

Question: you mentioned that Packetbeat understands different protocols and can listen to TCP traffic; if the application is running inside Docker, how do we monitor it? There are several ways to do it. You can install Packetbeat so that it listens on the network interface: the easiest way is to put Packetbeat inside the Docker container with the application you're trying to monitor, or you can run it in a separate container and configure the networking so that it's able to listen to that traffic. Alternatively, you don't use Packetbeat at all and just poll things directly using Metricbeat, which can live in its own container and query the other services. Or you can use Docker's own logging functionality and feed that into Logstash, so you collect the logs from all your containers, aggregate them together, and send them on for processing. So there are many different approaches; it depends on exactly what you're doing.

Comment from the audience (we only have two minutes, so not really a question): the point of Kibana is that people can see what they're doing; so please, when you log things, if something is your secret sauce I don't care about it, but do take the time to also log why something happened, not just what happened.
Question: about the flow of log messages through time, are there hooks in Elasticsearch for that, something like "after one week, do this with the data", or do you have to write scripts yourself? Thank you for the question, that's something I forgot to mention. Yes, there is a tool for it: it's called Curator, it's actually written in Python, and it allows you to do exactly this. And in the new version of Elasticsearch, version 5, which will hopefully come later this year, part of this is already built into Elasticsearch itself as an API. Curator is a command-line interface, so you just stick it into your cron and run it periodically, telling it "anything older than five days, remove it", or any other action you might want.

Metadata

Formal metadata

Title: Log all the things!
Series title: EuroPython 2016
Part: 163
Number of parts: 169
Author: Král, Honza
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use and modify the work or its content for any legal, non-commercial purpose, and reproduce, distribute, and make it publicly available in unchanged or modified form, provided that you credit the author/rights holder in the manner they specify and that you pass on the work or its content, including in modified form, only under the terms of this license.
DOI: 10.5446/21149
Publisher: EuroPython
Publication year: 2016
Language: English

Content metadata

Subject area: Computer science
Abstract: Honza Král - Log all the things! Many times these logs are thrown away or just sit uselessly somewhere on disk. I would like to show you how you can make sense of all that data, how to collect and clean them, store them in a scalable fashion and, finally, explore and search across various systems. ----- Centralized logging (and the ELK stack) is proving itself to be a very useful tool in managing a production infrastructure. When combined with other data sources (application logging, business data, ...) it can provide even more insight. This talk is an introduction to the area with an overview of the motivation, tools and techniques that can prove useful. We will show how the open source ELK (Elasticsearch, Logstash and Kibana) stack can be used to implement this. It is geared towards people familiar with the DevOps concept who are looking to improve their lives by introducing smarter tools.
