Realtime Telemetry - Powered by Docker
Formal Metadata

Title: Realtime Telemetry - Powered by Docker
Author: Michele Leroux Bustamante
Title of Series: NDC London 2016
Part: 74 of 133
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers: 10.5446/48830 (DOI)
Transcript: English (auto-generated)
00:08
Hi, everybody. How are you? After lunch? Feeling good? Okay. So I'm going to be talking about, you know, telemetry, Docker, some of the business cases around the use case
00:24
I'm going to show you, and how we might use ELK to help us fast-track our telemetry problems, hopefully, or solve them. My name is Michele Leroux Bustamante. My company is called Solliance. I do architecture consulting and, lately, a lot of microservices-related evolution, which, of course, is so new
00:43
that it's a lot of fun right now and a lot of decision-making, if you might imagine. And one of the things that I really like about this talk is that I work with customers that are on the Microsoft stack, people that are on, you know, the Amazon stack, but in general, they don't always have the best
01:01
topology or planning in advance around telemetry in the system, because no matter what cloud you go to, sometimes you really don't have the visibility you might want in one place, right? How many people have ever had a production issue and had to fight to find where is this problem coming from? Anyone? No? You're so lucky. That's awesome. You know, because I
01:24
kind of see it a lot, right? And with customers that are, you know, even if they have good topology, load-balanced failover, disaster recovery, it doesn't matter. The issue really is that sometimes code just doesn't work. And sometimes you deploy an update, especially to a monolith, and things
01:43
just don't work as expected. And depending where the logs are and how easy they are to get to or even how much practice you've had in terms of, you know, deciphering problems, you know, every application, every solution has a personality for those things, right? Like, you know, this is the pattern we've figured out after probably a few efforts that were not so
02:06
nice to where we've kind of got it. Okay, I know what I'm going to do: I'm going to go to, you know, Google Analytics first, and I'm going to see which pages are getting hit hard, and that's going to help me know where I should go look in the system, which logs I should try to find the problem in. So this problem is near and dear to me, and since I've been
02:24
playing with Docker for the past year, or maybe a little less than that, and realizing that a lot of my customers don't have time to pull together a telemetry story, I sort of evolved this idea that maybe we can just help them spin up a quick cluster and get something done. And so that's the impetus for this talk. So hopefully
02:43
that resonates with you perhaps. So let's talk first about why we log, right? We want to troubleshoot for visibility, right? Something's going on now, and it's usually not a convenient time. Hopefully it's not in the middle of the night, but it even could be, and you're under a lot of pressure.
03:01
Sometimes it's around security audits, reviews, early detection. So if you have any sort of compliance requirements in your company, you know, SOC 2 Type 2, for example, has very, very strict requirements around how you spend your time. You have to have somebody dedicated to reviewing logs every day. You have to report that you did that,
03:21
and you have to do some sort of monthly correlation of those logs in order to prove that you're following your compliance and you're looking for security issues and so on. And so that's an interesting space in itself. Post-incident forensics is another one. So something did happen. I've noticed with growing frequency that when
03:41
I'm working with customers on security planning for their life cycle, their business life cycle as well as development, you know, we end up having these questionnaires that come in from their customers. So they're a SaaS host in the cloud, and their customers are saying, you know, do you dot, dot, dot? And the list is very long.
04:01
And among that is usually things like how long do you keep your logs? Do you audit your logs? Do you put any personal information in the logs? There's a lot that goes into the thinking around this. And so in order to, and the other thing they'll ask is, you know, do you have the information you need to do post-incident forensics? And sometimes that means you have access to the VMs, which means
04:22
you have to defer to your cloud provider rules. Can I get what I need, right? And sometimes it's all about your logs and what you have control over. So there's that. Change history, I think that's more of sort of a log that's more of application-related log, but I've noticed that's also a very common
04:42
pattern these days: tracking, you know, every single change to database records. I wouldn't necessarily look at that as an Elasticsearch thing, but it's just worth collecting as the story of logs goes, because there's lots of different reasons to log. Insights into user activity, also very important, right? Like where did they go in the system?
05:00
So there's tools that do this on the front end that you might, you know, integrate into your web applications, and maybe that gives you visibility through your features, so that's not as much of an issue. But in general, you know, there are other aspects, I guess, of the app that you might want to track, like frequency of hits
05:22
to this area or frequency of PDF generation, if that's something that you're doing. So again, there's lots of ways to do that, but sometimes logs is another place that you can do it. And in general, just reporting and analysis on all the things. So why not just animate that again, because it's a really fun list.
05:40
So I wanted to sort of bring that to your attention and then talk about how easy it might have been in the past, right? Hello World was this one big thing, and you know, that was how we used to write code, just one big thing way back. Maybe I'm dating myself, of course I am. And so you'd go and compile and leave for the day, literally, and come back the next day, and hopefully the C++ app would be compiled. So anybody else ever do that, or should I duck now?
06:05
And then today, of course, we have all of this, right? Lots and lots of moving parts, web apps, web APIs, you know, devices, rich clients still, and back-end services, of course, that could be workers or just multi-tier.
06:20
So for logging, that to me means that that's also more complicated, right? Because it used to be a flat file, and it still is, you know, I mean, certainly to some extent we still have, you know, operation logs that get written to a file. You know, Linux still has this log, and you can ingest that. But certainly at some point we started using database for that, and now we've kind of got this problem, right?
06:41
We've got these client apps, and those are hitting, you know, servers in the cloud perhaps, and that's hitting data. And then we've got web browsers hitting web APIs and or web pages, and then we've got mobile apps hitting their APIs. Maybe there's API gateways, and there's these identity stories over there, and then there's other content and access,
07:00
and so on and so on and so on and so on. So it's a lot of stuff. And if you move to a cloud vendor, then, you know, there's probably logs in all of these places, right? If I'm using Azure AD, then I've got logs there for user access. And if I'm using, you know, cloud services in Azure, which was one of the early PaaS deployments,
07:22
then that would have the logs written via the horrible, epic WAD config, Windows Azure Diagnostics. If you smiled, you've probably used that and hate it as much as I do. And that's just because it's table storage, and it's almost impossible to search, right? So you can't get any information out of that.
07:42
And then, you know, if I have storage in general, access to containers and blobs and things like that, you can turn on the metrics and then take a little overhead hit. But, again, it's another area of logs. But they're all disparate. And so you think, oh, I'm in the cloud, so they're going to pull that story together for me. And the truth is, none of the cloud vendors
08:02
really do a good job of that yet. You still have to work for it, right? You still have to get a tool. Some people buy tools; Operational Insights is the new one, where Microsoft is starting to evolve a better story than the past alternatives. But, again, these are things you invest your time
08:21
learning to pull together the story, to ingest from all the places. Splunk is a great tool that a lot of people really like. Again, it's expensive, costs money, but it's super, right? Does a great job. And then there's stuff you can do on your own. So I guess in short, you know, we look at this picture.
08:40
We get these little dashboards and things that we can use to try to troubleshoot problems, but it's not always easy. And I think that we still need a way to, how do I just see what's going on right now in my system? Like, what just happened? I have great examples of that that happen all the time, where I have customers that still don't have this solution, not yet, but are trying to evolve to it.
09:06
And that is just, you know, worker role is a great example. Again, it's particular to Azure, but the logs go to a place that it's just, it might as well be in a dungeon, right? I just can't really do anything with them unless I take them out and pull them into a tool.
09:21
So where should you start then? So I guess, you know, the problem with logging is there's lots of stuff. There is the panacea, which is give me a tool that pulls it all together and gives me a beautiful story. But what's the problem with that? Anyone? Tick tock. Time.
09:40
Nobody has time. So, you know, at the basics, you have a production issue, you wanna find root cause, you need it now, you can't find the logs, it's unmanageable, and you just wanna Google your logs to solve a problem right now. That's what I'm calling it anyway. I just invented that. It's patented already or something.
10:02
So what I'm really just gonna talk about first here is the idea of what if you could just stand something up and start firing logs to it and actually have a bit of instant visibility? Would that be nice? Right? Okay. And Elk isn't new, right? This has been around for a while.
10:20
And the idea of standing up an ELK machine, a system, a solution, is not new, because there are VMs out there that do it too. But what I like about the idea of looking at a Dockerized solution is that it really will work: if it's working on your machine when you're testing, it's going to work when you put it up in the cloud. And number two, if you look at a simple solution
10:42
to start with, even if it's on Linux and you don't necessarily have Linux administrative skills, we're talking about an augmentation to what you're already doing, not replacing it. We're talking about adding value and giving you some instant results that help you do your job.
11:02
And then you can look at evolving that story to the next level, which might lead to a PaaS provider who actually hosts this for you and clusters it for you. There's a couple out there that maybe only cost $50 a day, which is not a lot, it's 18,000 a year, if you think about replacing that with a person.
11:22
And that's another option, right? There are a lot; I'm talking about a big package with billions of logs, of course, rolling logs. So you can go all that way, but you can start with: hey, let's just stand something up and get it working. So that's my goal today. Now, the E in the ELK stack is Elasticsearch, right?
11:41
So this is your Lucene index over a cluster, hopefully. But in this case, it's just a single instance to start with, with full text search capability. So the idea is you need to get your logs into there. Logstash is a way to get the logs into there. So Logstash is capable of pipelining logs
12:02
from many different sources and inputs, transforming and filtering and doing a codec on it in order to get it into the right format on its way into Elasticsearch. But it can also be used to not only write it to Elasticsearch, but write it somewhere else. So you could evolve the story and possibly consider Logstash as your pipeline,
12:24
your ingest engine, if you will. Much like you would look at event hubs in Azure, for example, if you looked at that, or others like it. Not to say that you have to, just that that's another option. I'm trying to look at this simplistically first and then just say, I just want to get some logs
12:41
into Elasticsearch; the rest I want to leave alone. I've got a SaaS system, it might be fragile, I've got existing code, I don't want to touch it, but I want those logs also to go over here. So what's the least intrusive way to just get it done so I can start searching and doing stuff? Make sense? And then Kibana would be the visual
13:00
and analytics side of things, right? So that's the way to build out your charts and things that give you some instant visibility. But again, that to me is the last step, because the first step is: I want to type a word and I want to find what just happened there, if that makes sense.
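For reference, a minimal Logstash pipeline along the lines just described might look like this; the ports and the field name here are illustrative, not the exact demo config:

    input {
      tcp  { port => 5000 codec => json }   # e.g. a TCP stream from an app-side logger
      http { port => 5033 }                 # a simple HTTP endpoint to POST JSON logs to
    }
    filter {
      mutate { remove_field => ["noisy_field"] }   # optional clean-up/transform on the way in
    }
    output {
      elasticsearch { hosts => ["localhost:9200"] }   # index everything into Elasticsearch
    }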
13:23
Okay, so just to set the stage around virtualization and containerization: I've already done a Docker talk, and I know there's a few other Docker talks here. We did a workshop on microservices and talked a little bit about Docker on Monday. Just a quick summary for those that may be new to it. The idea being, virtualization
13:40
we've been accustomed to for years, right? We've got the idea of hypervisor and virtual machines that have a guest OS, which has a lot more overhead. It's certainly an option, again, I'm not suggesting that you couldn't do an Elasticsearch deployment with this, but it also means more time to stand up, more time to recycle.
14:02
If you want to snapshot it because it's fragile, that's a lot harder, so that's one of the big draws here, in my opinion. So things like that, yeah? So this would not be a container; this would be a container. A container meaning there's no guest OS in it, it's very lightweight, it's very efficient,
14:23
and essentially I can stop it, restart it, back it up, recycle it, build a new one, deploy a replacement, keep the old one, recover, roll back, all very lightweight, easy to work with. Not that there aren't a little bit of manual steps
14:40
unless you're using a tool or a platform, but there's a lot of simplicity to it as well, right? Very hands-on, very I know what's happening here, yeah? And so, again, the promise of this is the ubiquitous meme, it works on my machine, so, and I just like this guy, so I put him there,
15:00
because he drinks beer. So how long will it take to set up a solution? So let's take a look at just what it would take to stand up a quick solution, and I think in the process just talk a little bit about what we do with Docker in these scenarios. So let's go to here, the command line again.
15:22
So, command-line demos are the most visual ever, aren't they? You get to watch text fly by and stuff. Okay, so I'm going to take a look at my machine here. I cleaned it up real quick while we were talking, but you'll see that I've got this image downloaded already, just for time and efficiency, and it's called sebp/elk.
15:42
If I were to go over here to, let's go to Docker Hub. In Docker Hub, which is a registry for pre-existing Docker images, for example, and you can create your own accounts as well, I can do a search for things like Kibana, oop, I can spell.
16:05
And you can see there's a number of different repos here, because what people will do is they'll take the base image and they'll extend it and do something with it, and then somebody who does something really cool will become popular, and that'll be the repo that everybody's going to, right?
16:21
Kibana's just one part of the ELK stack; that's the visualization tool. Obviously we have Logstash as well, so: logstash, mesoscloud/logstash. Again, these are cloud vendors saying, look, I'm going to make it easy for you to pop it into our cloud, right? DigitalOcean; this one is Wonderland, though. Okay, well, there's a lot of interesting names.
16:43
And then we've got Elasticsearch. Now the reason I'm showing you also that they have these official Elasticsearch and other related is that I can go in here, I can see the tags, I can see the versions, I can see what's the latest, right?
17:01
And the reason that matters is that if I do a search in here for ELK, and I see this sebp/elk, then the first thing I want to do is take a look at the Dockerfile that they're using, and I want to see that they're using the latest version, because otherwise this could be stale, and I might need to update it myself,
17:21
and then I could create my own copy, if that makes sense. So this Dockerfile is essentially saying: we're going to go ahead and grab these versions of Elasticsearch, Logstash, and Kibana. Turns out they are the latest, so that's already been verified. And so then the rest is all of his instructions
17:40
for setting up a single node, or a single container cluster of that, right? The cluster's the wrong word, so a single container. So it's setting up some environment variables, it's installing all the packages, so it's going and getting Kibana, getting ELK, sorry, Elasticsearch, getting Logstash, and having it properly installed in this container,
18:05
and then getting it started up at the end, start with the start instruction that he's provided. Make sense? So that's essentially job done for me. Now, I didn't bring you here to watch how to build a Dockerfile to do this, because the whole point is to leverage the community.
18:21
What is good is what we do with it after we deploy it, because there are some things that we do need to do, to turn on endpoints that we can send logs to in the first place, for example. So, let's go back to here, and I think this guy. So I have, let's see what I'm in right now.
18:42
I'm actually in here already, so let me go to, wait, let's make sure I'm in my ELK directory. Okay, so right now I'm in my local VirtualBox VM, just to show a point before I go to my cloud VM that already has this installed and running, okay, and with data in it.
19:01
So, what I'm going to do is just take a look at the Dockerfile, and it says: I want to create an image from this base image, right? And then the two things that I have turned on immediately
19:22
are the TCP and HTTP input, so that we can actually send logs from, so this is the question, how do I know what input I want to enable? If I didn't enable anything, there are some other default things already enabled, like Lumberjack, for example, but there is no HTTP input enabled,
19:40
so if I wanted to non-intrusively just add some HTTP client code to forward some logs over from my other apps, without installing a bunch of NuGet components that might disrupt my solution. Again, the whole point of this is: how can I do this in the least intrusive way, so that I can actually go to a customer, or even into my own solutions, and be confident
20:05
that I'm not gonna break the rest of the code, and you know how it goes, right, you install a NuGet package, maybe there's a side effect or a conflict with other things and now you can't deploy your whole app anymore, so this is, I guess, the point is that I kind of feel
20:20
like HTTP is a good way to go there to make it a little bit easier. The other one is the TCP input because I have some Node applications and Node has a component called Bunyan and Bunyan uses the TCP input, so basically I integrated that, so it's just an example of using one of the tools instead of HTTP, okay, and there's many more of those,
20:42
so I'll give you a list. Okay, so I guess that's the point: we have a Dockerfile that's basically saying I want to add these configurations (we'll take a look at those when I go to the cloud) and then exposing some endpoints.
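As a sketch, that extension Dockerfile looks roughly like this, assuming the sebp/elk image's /etc/logstash/conf.d convention; the config filenames match the ones shown later in the demo:

    # extend the community ELK image with custom Logstash input configs
    FROM sebp/elk

    # drop the extra pipeline configs into Logstash's conf.d folder
    ADD ./03-tcp-input.conf  /etc/logstash/conf.d/03-tcp-input.conf
    ADD ./04-http-input.conf /etc/logstash/conf.d/04-http-input.conf

    # expose the new input ports so they can be published at run time
    EXPOSE 5000 5033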
21:07
So let's do this: docker-compose. What this does is say, after you've built the image... although I could have built the image from here too; I just did it in two separate steps. The other way I could have done it is to say build instead of image, and then build from the Dockerfile right here.
21:20
So this will say: go find the dosblon/elk-test image (if it's not here, go look in Docker Hub) and then open up these ports for Kibana, Elasticsearch, and then the existing inputs that we already had, which were Beats. Beats is a way to get logs shipped; basically, you can have agents
21:40
on all of your client machines that will watch for file changes and send over logs. So you can install those types of things for Linux, for example, syslog watching, stuff like that. And then Lumberjack would be another tool that you can plug into some of your client apps. And then TCP and HTTP, which I added, so I'm basically making those ports available, right?
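A docker-compose.yml along those lines might look like this, in the v1 compose syntax of the time; the image name and port numbers are the demo's, so treat them as illustrative:

    elk:
      image: dosblon/elk-test
      ports:
        - "5601:5601"   # Kibana
        - "9200:9200"   # Elasticsearch
        - "5044:5044"   # Beats input
        - "5000:5000"   # TCP input (e.g. Bunyan)
        - "5033:5033"   # HTTP input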
22:00
Okay, so that would be that. And then docker images, I'm just going to double-check we don't have that image yet, right? So I'm going to go ahead and say docker build... actually, I probably have that, since my typing skills sometimes suffer;
22:21
we'll just go back a few, there we go. So I'm going to build elk-test just to show what happens when we say we're going to build this: it's going to go to the Dockerfile and it's going to use that instruction to produce an image, right? So there we go. Now why was it so fast?
22:41
Because I didn't do a lot on top of the original layer, which was sebp/elk. If I said no-cache, it would have had to download sebp/elk, put it on my machine if it's not there already, or the version that I'm asking for, and then it would go ahead and do this. But again, all I'm doing is setting up a couple of basic things on the actual image,
23:01
which is those folders with the conf files that you saw. And I'll show you what's in those again when we get there. So now that I've got an image, I can see that I've got sebp/elk and dosblon/elk-test, right? So dosblon is the name of my repository, if I wanted to push that up to my Docker Hub;
23:21
that's just the name I'm using for testing, so. Okay, so now we can do a Docker compose and like I said, I could have done that in one step if I made the Docker compose file do a build instead of an image. But now I've got an image so I'm looking for the image
23:42
to do the docker-compose, and that was fast. And so docker ps should show me any running containers and their ports, and you can see that I have that here, yeah? So this is running on my VirtualBox VM, which means the port, or the endpoint, is somewhere over here
24:05
and here. So this is the IP address of my VirtualBox VM on this machine, and if I just go to port 5601, it loads up Kibana. Now, I can't really do anything about configuring indexes
24:20
until I actually write a log. So I'm going to kind of fast-forward this and go to the one in my cloud. So let's do that.
24:44
Oops, there we go. I'm just going to grab an example before we go. Save some time. And we're going to go into my cloud Linux box,
25:07
mlb-docker-linux-1. And it takes a little longer here than it does other times,
25:22
so yeah, obviously not a dancer. That was funnier in my head. And why do people always laugh when I say that part? I can tell you a joke later if you like. Okay.
25:42
Okay, so NDC, let's see. Let's go to ELK. And so, a couple of things are going on in here. So first of all, let me do a docker ps, and we can see my ELK, the one that I showed you
26:00
building and running locally, right? This is the equivalent one, only with a bit more configuration already done, which I'm going to walk you through in my Linux VM. So let's just imagine that that's the one we just ran. I also have an app, a client app, that we'll go through just to illustrate the Bunyan example, right?
26:23
And then let me go to here and we'll just go to, so I have this at port 80. I actually had it at a different port, but from this conference center, there's some ports that are blocked or something, so I switched it over so that I could actually show it,
26:42
which is helpful when you have to present. So there we go. Okay, so this is basically saying how much have I logged in the last 15 minutes, which is probably not that useful right now, so I can just say this week, for example. I have some back years into 2013 in here too,
27:01
and then auto refresh if it's on. So let's see if it's on. I don't know. Let's see, auto refresh. Like every five seconds or something. Okay. So this is just basically, again,
27:24
before going into sort of how I got the logs in here and so forth, I just thought, if we were just starting fresh, before I could even see anything here, I'd have to at least curl one example. So that was the point of that story, is I just wanted to make sure that we could do an example like that.
27:42
So I'm gonna go ahead and, ooh, not quite doing what I like, yeah? Okay, I'll tell you what. Why don't we get out of that little instruction and I'll just type, now that I have my cheat sheet to get my context right.
28:07
So: a content type of application/json, and -X POST. So I could do this from Postman or Fiddler, I guess, as well.
28:23
And we're going to hit mlb-docker-linux-1.cloudapp.net, and port... which port are we hitting? Actually, I think it's the same port, 5033.
28:41
Okay. And -d, and what's the data? I'm just going to make something up: the username is mlb, and "permission denied". So I'm just creating a little JSON to check this out, basically.
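Reconstructed from the narration, the command being typed is roughly the following; the host and port are the demo's, so substitute your own:

    curl -X POST \
         -H "Content-Type: application/json" \
         -d '{"username": "mlb", "message": "permission denied"}' \
         http://mlb-docker-linux-1.cloudapp.net:5033/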
29:03
Now we should see that that loads up, and at some point we should see something come through here. Five seconds passing. And am I on the HTTP one?
29:22
Possibly not. There we go. I'll explain that in a minute. So permission denied. Okay? So once I have that, I can start setting up. I can go in and create indices and such like that. You can decide to work with timestamp.
29:44
If I switch over to web app, then I have different options, actually. So I'm gonna go through a couple of those things once we move along. So what we've done so far is, basically build a solution with not much effort, right?
30:00
So all I need is what? I need a Linux box, Azure, Amazon, pretty easy. I need a container, got it. I need to expose some endpoints I'm gonna use. So these two examples are gonna work for node, which is a more integrated approach using Bunyan. And an HTTP endpoint, which maybe I could use anywhere
30:21
just to be non-intrusive. And other than that, the rest is just, let's start getting some data over there. And then you might ask, well, how would I get some back data over? So there will be inputs that we can use for that as well. So, so far so good? Just okay, it's a container. And because it works on my machine,
30:41
because I can test it and set it up, I can get all the configurations done and save that as an image and then ship that off, which means we wouldn't have to work so hard to get it deployed to everybody or all the apps that we want to use it, or run it in test and QA and eventually production. So the single node setup is essentially a single ELK container.
31:01
It's got TCP and HTTP input that I enabled. It could have others. The couple things that I had to add, filtering. So for example, the format of the messages I was ingesting over HTTP from my blob storage or my table storage were kind of ugly and they had a couple of fields that I didn't like,
31:22
so I just did a filter as an example of stripping fields that you don't care for before it goes into Elasticsearch, so I'll show you that. Doing a transformation would also be possible in the same way. Setting destination properties, meaning things like I needed to indicate, well, I'll actually go through that.
31:41
So in TCP, I needed to specify a codec that would support JSON, because otherwise it wasn't working well from Bunyan if I sent JSON instead of just a text entry: it wouldn't be translated to key-value pairs. What that means is, when it came into Elasticsearch, all I got was one big blob message.
32:01
And I didn't get all the fields that were in the JSON, so that I could index on them, so that I could do searches on them. So it was kind of useless, right? The only thing I could do was a string search. I couldn't do a "level equals warning", for example. So in order to achieve that, I had to turn on the JSON codec for the TCP endpoint.
32:20
Likewise for HTTP, turn that on, make sure it can handle the same JSON, and then adding a filter to just remove a field to try that out. So those are a couple of things that are worth taking a quick look at. We also had to look at not having conflicts between the inputs. So for example, the TCP input from Bunyan
32:42
had a certain format. The timestamp field came in with a certain structure, and it was a different field by name. And because my fields coming in from my table storage had a different timestamp field, and at some point they weren't compatible
33:01
in terms of their arrangement, the HTTP input was crashing, essentially. So nothing was getting into Elasticsearch. And the way that I fixed that is by sending them to a different funnel, basically, by saying: this is going to go to a different index altogether; this is going to be the HTTP input; this is going to be the TCP input, called logstash.
33:23
So you can do things like that where the ingest coming from different places has a different name. So conceptually that might be possible or necessary. So let's take a look at the customizations. And then we can take a look in the UI. So, let's see. I'm gonna do a cat.
33:41
Can you guys see that okay? It was bigger before. So, okay. So let's go through a couple of things. I've got my Dockerfile, as I mentioned before, but I've added a couple of other things here. We added to the folder structure of the image
34:02
the TCP and HTTP input configuration. A WAD filter, which was for my WAD logs coming from Azure Storage, table storage. And then the other one was the output configuration. So this is for me to indicate that this came in from HTTP, call it something different,
34:21
go to a different index, basically, for timestamp, because you can only have one timestamp index, and then the rest you do off of the JSON fields, right? Or the fields. Okay, so there's that. And then let's go ahead and cat 03-tcp-input.conf.
34:42
Okay, so all that is is an input saying we're going to use codec json. So that's kind of straightforward, yeah? By default it would have just given us the port; we can add other features. Simple. So by doing that, I can now receive the JSON and parse it into actual fields I can index on; that's the point. Otherwise I wouldn't be able to.
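That file is along these lines (a sketch; the port is illustrative):

    # 03-tcp-input.conf: parse incoming TCP traffic as JSON key-value pairs
    input {
      tcp {
        port  => 5000
        codec => json
      }
    }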
35:00
Next one: cat 04-http-input.conf. There. And that is setting the type to HTTP,
35:21
so that it's clear, on the way in, that this should be in a different index path. Otherwise it was being lumped in with logstash, which is the default name for the type, and that was getting conflicts with the timestamp, for example.
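Again as a sketch, that config would be something like this; the type value is just a tag to route on later:

    # 04-http-input.conf: tag HTTP traffic so it can get its own index path
    input {
      http {
        port => 5033
        type => "http_input"
      }
    }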
35:45
And then the WAD filter config. This is an example of a filter. It says: look, if this type is the HTTP input, which we've already figured out from the previous input step, let's go ahead and remove the field. And this is an example of a mutation, right? You can transform, you can remove
36:03
the fields that are coming in, if you know they need to be set a certain way in order to normalize them into Elasticsearch. You don't need all of the fields to be the same in Elasticsearch, but what you do need is that the timestamps don't conflict. So this was actually a superfluous removal, just to illustrate the point that you can do this.
36:22
And also because it was kind of an unnecessary field. So if you're familiar with Azure Storage, table storage uses a partition key, which is a long number representing a date, but then they also have a timestamp field. So we were using the timestamp field, and therefore removing the partition key.
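So the filter being shown is roughly this; the field name follows the Azure table storage example, and the sketch assumes the http_input tag from the previous step:

    # WAD filter: strip the superfluous Azure table storage field on the way in
    filter {
      if [type] == "http_input" {
        mutate {
          remove_field => ["PartitionKey"]
        }
      }
    }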
36:41
Okay, and then cat output config. So what this is doing is setting up my index for the web input for the HTTP path
37:02
to be called webapp instead of logstash. So it's basically saying: for this path, we're not going to be collated with all the other logstash inputs. You really kind of do want to have different inputs.
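That output config would look roughly like this; the index name follows the demo, and the hosts value is illustrative:

    # output config: send the HTTP input to its own index instead of logstash-*
    output {
      if [type] == "http_input" {
        elasticsearch {
          hosts => ["localhost:9200"]
          index => "webapp-%{+YYYY.MM.dd}"
        }
      } else {
        elasticsearch {
          hosts => ["localhost:9200"]   # everything else keeps the default logstash-* index
        }
      }
    }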
37:21
Partially that's gonna come from the JSON itself, like have a category like which application is this and so on, and those are just fields. And you can add them as you go, so that's the beauty of it is you don't have to actually have all your logs look the same. The only challenge is the timestamp. So if you don't wanna do it through different inputs, which I'll show you how that looks in Kibana,
37:41
then what you would do instead is do some more transformations of a more interesting kind in the filter. Either way, the same accomplishment can be made, right? Which is make sure it all gets in there and has a timestamp for the initial index because you have to have that. Okay, so this is already running.
38:03
Those are the customizations, and I think the first and foremost example that I was looking for upon setting this up, let me just refresh so we can get back to our main page.
38:22
So the first thing that I was looking for is this. I wanted to be able, oops, I'm just like not in my right place, okay. I wanted to be able to come in here and I wanted to be able to just type something. I wanted to say, you know, level warn or something like that,
38:42
and that's not a good example because that's probably in my other index. I wanted to be able to find error, or I wanted to be able to find, right, oh, oh, I know what I'm doing wrong. It would help if I did not have this limited
39:06
to, what happened to my top bar? I lost, what? Yeah, I know, I did something that lost me my,
39:25
here we go. Okay, so let me just go back to... I'm going to go to, like, five years. So I have some data going back quite a bit, and we'll do five seconds, that's fine.
39:41
And then let me close that, and let me go back. I think this is my web, yeah, okay. And so that's what I needed to do. So I was going to look for stuff like,
40:02
I don't know, "exception", something like that. So basically, just anything that was an exception. Before I have any saved searches or queries that will filter things automatically for me, when something goes wrong, I just want to be able to find it quickly.
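For reference, those ad hoc searches in Kibana's search bar use Lucene query syntax against whatever JSON fields came in; for example, with field names like the ones in this demo:

    exception                                     # free-text search across all fields
    level:warn                                    # match on a specific JSON field
    level:error AND message:"permission denied"   # combine field queries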
40:21
Another example would be this: let's say one of my websites. We do PayPal-style work, and a credit card's going to expire, something's going on. When we have registration issues, we need to have the visibility, and we have trouble with that right now, because it's a small website; it's not really one of those 24-7, let's-throw-a-bunch-of-people-at-it
40:41
And so when stuff goes wrong, it's just equally as painful as it would be for somebody who has a 24-7 concern. But I guess the point is, the visibility is so critical when something goes wrong, it's so painful when you don't have it. So just ingesting those logs from the Node.js app all of a sudden makes it easier for me to just do a quick search on the type of error,
41:02
so that I can go in and say okay, well maybe I can find from here the record of the registration, and then I can go look it up, and I can go see what might have gone wrong. Or even better, if I'm smart enough to eventually add activity tracing to my logs, which as we all know is not gonna happen day one, but eventually if I could have one ID
41:22
that tracked a request all the way through my system, which is the panacea, then maybe I'd actually see all the things that led up to it. They were on this page, they called this method, we tried to call registration, that's when it failed, that kind of thing. So the sort of quick and dirty approach is just show me something,
41:41
let me know when something goes right, let me know when something goes wrong. And let me just see if I can find some intelligence on what went on here, right? So without even doing any indexing patterns, without any visualizations, I've already got something here that I can use essentially.
42:02
So I wanna talk about how you get the logs in here, because right now we're just talking about okay, assuming then I've turned this on, once I could start writing to the logs, then I actually can come in here and do these things. So we've got our setup, we've got two inputs that I've enabled, one for Bunyan with node, and one for HTTP with which you saw me curl,
42:22
which means I can put that in any code base as well. So let's go take a look. So let's go to, okay, so let's just talk a little bit about what to log, and I'll go through this part quickly because it's mostly just high level,
42:41
but I think important in the sense of okay, we've got these app logs, OS logs, possibly IIS or web server logs, you don't really control those, but you might wanna ingest them eventually so that you have that holistic view. I don't think that's the first and foremost for you to troubleshoot a problem though. The first and foremost is, what had happened in my app,
43:01
I probably have a log or a console, right, somewhere. So there we go. How many people create a logger interface in their apps? A reusable logger interface, most people today, okay. How many people just rely on the default console output or trace output? Okay, so then the other people just don't log.
43:22
Just kidding. So trace output would be another, and again, you can hook these things, and then there's your apps, right? For security logs, you're supposed to log every login attempt, every unauthorized access to every API, to every endpoint, to every asset, and password resets and such.
43:41
Now, some of these things should be handled by an identity server, if you're actually using such a thing, in which case you're not dealing with the password resets, you're not dealing with the login attempts. That's something that should be baked in and logged by your identity server. That's one of the values of using one, because then they can do all the compliance stuff around security things. So, like IdentityServer, Brock and Dominick,
44:02
they launched one now, also this weekend, for ASP.NET 5, right? And I guess others like it, right? You've got Azure AD, you've got Auth0, and so on. But you still have to log your app's unauthorized requests; nobody's going to do that for you. It's a good thing to know that you should do it, enough said.
44:20
Other things, things going on in the session, purchase flow, things, activities through the system, even just feature access, right? Every method, exceptions, toss them in there, those are my trace outputs. Any exception that happens, try to catch it in the middle and pop something onto the logs. And then of course there's change history,
44:40
but for me, all these things in red are things you control, meaning you're the one probably writing the log this somewhere in your code, or somebody's doing it if they have the identity server. So, and then the things for change history, I'm gonna take that out, just because that's more of a feature thing than it is, I need logs to investigate and analyze.
45:01
So everything in red here, you have control over, which means all you have to do is make logging easy. So my theory here is, and probably, again, some of you seem to already do this, is have some sort of central component that all your team can use for the .NET folks, for the node folks, for whichever platforms you use, that they don't even have to think about what to do.
45:21
They should just call Logger.Current.Log, right? Or something like that. If you do that, then people get in the habit of not using the default trace log, console log, and they use your API, hopefully. The other thing you can do, of course, is hook it somewhere central so that the developers don't even have to think about it,
45:41
like in the exception filters for the web API, or for the MVC app, if it's .NET, and in node, you know, the equivalent. So that's another catch-all that you could do. And then the other would be to go down to, hey, wherever console log goes out, let's hook it there, which is another option, because there are inputs for console log.
46:00
One of the ones that I found for ELK, for Logstash, wasn't working very well, though. So again, some of these things, they're open source, they may or may not work as well as you'd like, and if you're gonna put this in a mission-critical place, like console log output that literally is hit everywhere, I think I'd rather have control by having a central logging component
46:21
where somebody intentionally logs, and that goes to console; and then the intentional log can be overridden to say, hey, not only write to console, but now let's shoot this thing over to Elasticsearch. And, by the way, swallow the exception if it goes wrong, because we don't want to fail because we're logging. We're not trying to introduce problems. Makes sense, yeah?
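A minimal sketch of that kind of central logging component, in Node to match the demo app; the host, port, and field names are hypothetical:

    // logger.js: write locally, then best-effort forward to the ELK HTTP input
    var http = require('http');

    function log(level, message) {
      var entry = { level: level, message: message, time: new Date().toISOString() };
      console.log(JSON.stringify(entry));   // the local log always happens
      try {
        var body = JSON.stringify(entry);
        var req = http.request({
          host: 'mlb-docker-linux-1.cloudapp.net',   // hypothetical ELK host
          port: 5033,
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body)
          }
        });
        req.on('error', function () { /* swallow: never fail the app over logging */ });
        req.end(body);
      } catch (e) { /* swallow, same reason */ }
    }

    module.exports = {
      info:  function (msg) { log('info',  msg); },
      warn:  function (msg) { log('warn',  msg); },
      error: function (msg) { log('error', msg); }
    };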
46:42
And by the way, if there's one piece of code you unit test end-to-end, it would be that one. I remember introducing a logging component for one of my customers, probably going back a couple of years now, because it was such a fragile, huge application, but we really needed to get the visibility beefed up, because it was going to the cloud, going to Azure, a migration process.
47:03
So I tell you, if there's one thing I did, it was bulletproof that thing, really solid, lots of tests, which of course we should do all the time. So, okay. So encapsulating that in the logger, enough said, probably obvious, but something to remember, and then what I would do is just pop in,
47:22
by the way, go to Elasticsearch and swallow any exceptions; don't let it break you, okay? That means it can also work for client and server logging. So folks on the JavaScript side, the client side, will sometimes expose an API to do logging also, so then it centralizes everything you do. The challenge with logging APIs is: how do you secure them?
47:44
Because typically you also use them from security things, and now how do you secure the security from the security thing? So sometimes you have to look at maybe just symmetric keys and stuff like that to keep it protected, but that's a sidebar. So wrapping it all up in an API
48:01
or in a reusable component, no problem. So let's just take a look at the Bunyan implementation for the Node app and at least then get the idea of how a plugin might work, talk a little bit about those other inputs. Let's see. So I want to go to, let's see here.
48:24
Okay, node-web-app-elk. Are we awake? Is it okay? Okay, good, just making sure. So what I'm going to do is just go in here
48:41
and go to app.js. So this is a simple app, and what it does is: at the beginning, we added the require for bunyan and bunyan-logstash-tcp, and then, as part of using it throughout this app,
49:02
I just have a create logger so I have a single log component, and in the create logger what you do is you say what the default level is, and then what you do is indicate where the streams are going to go. So we've got the standard out, but we've also got the Elasticsearch stream. So this is going to go to my Elasticsearch server.
49:21
Obviously this is hard-coded to do that; it would probably need to get it from config, et cetera. This is like a 10-line app, so I didn't want to over-engineer it, if you know what I mean. So yeah, this is hitting the TCP stream for that.
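The createLogger call being described looks roughly like this, using the bunyan and bunyan-logstash-tcp packages; the host and port are hypothetical:

    // app.js (excerpt): one logger, two streams (stdout plus the Logstash TCP input)
    var bunyan = require('bunyan');
    var bunyantcp = require('bunyan-logstash-tcp');

    var log = bunyan.createLogger({
      name: 'node-web-app',
      streams: [
        { level: 'info', stream: process.stdout },   // standard out
        {
          level: 'info',
          type: 'raw',                               // the TCP plugin wants a raw stream
          stream: bunyantcp.createStream({
            host: 'mlb-docker-linux-1.cloudapp.net', // hard-coded here, as in the demo
            port: 5000
          })
        }
      ]
    });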
49:41
Again, bunyan's createLogger is their component, so I plugged this in. I found a couple of problems with it. Like, for example, when we didn't fix the JSON part, it just kind of blew up the whole app, which is part of the reason I said, if you were smirking when I said, well, just use HTTP: try not to introduce anything that could break you. I mean, anything can break you. We know that, right?
50:01
Like, anything can break you. And everything seems like, oh yeah, but that won't be a big deal. And then, well, kind of it is. All the time. It's kind of horrible, actually. So I've sort of adopted the less change the better in all philosophies at this point. The next thing is, not that I live in fear,
50:23
just saying, okay. God, you're a tough crowd. It's beer time or something. So, app.get is just me saying: every time we get a request, we're going to do something. And I added a little bit, just a query string, so that I could log different things: info, warning, et cetera.
50:41
So I just added that. And then, ultimately, it just does a log.info, a log.warn, and a log.error using the Bunyan component. So, pretty straightforward, yeah?
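The route itself is along these lines; a query-string parameter picks the level, as in the demo (a sketch, with hypothetical parameter names):

    var express = require('express');
    var app = express();

    // every request logs something; ?level=warn|error chooses what gets logged
    app.get('/', function (req, res) {
      var user = req.query.user || 'anonymous';
      if (req.query.level === 'error') {
        log.error({ user: user }, 'something went wrong');
      } else if (req.query.level === 'warn') {
        log.warn({ user: user }, 'permission denied');
      } else {
        log.info({ user: user }, 'request received');
      }
      res.send('logged for ' + user);
    });

    app.listen(8080);   // the port the demo container maps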
51:04
Anyway, this is already running; it's another container. So, docker ps: this is running on 8080, right? So that's the app. And we'll just kind of hit over here. And I come up here, and I have level warning, message; you know, this one says user XYZ, and then some fake IP address.
51:21
So I guess I can just log that. And then I can say user, I don't know, 5678 or whatever, and log that, just to show a couple of different things. And then this is a warning. So now we can do error, and we'll log that.
51:43
And then we can do like an info, and we can log that. So over here, I should be seeing output from that. And I'm not sure if I'm filtering. So I probably want to do like, let's get rid of that.
52:04
Okay? And so we see this user, da da da. Do I see my other one? Hopefully soon. Sorry, did I what?
52:30
Oh, refreshing didn't do it? Darn it. Hitting enter should though, I would think.
52:45
Am I seeing it? I must be. Info, really?
53:27
I'm not really sure what's going on there. That's weird. Okay. Well, it worked before. So there's that.
53:42
Let me see if I can do this. I'm gonna go to just, I guess, yeah. See, I've got a bunch of them. I don't know what happened there. That's so weird. I wonder if I've just done something to break it. Let's do this.
54:03
Unless I'm not running the one with the query string. Ooh, there's that possibility. Don't laugh at me. Right. Let's see.
54:22
Oh, don't be silly. It's almost like it's just not receiving anymore. That might be really silly, huh? Let's see.
54:43
I'm just gonna do this just for the sake of it. In case I blew something up with my inputs that are not well tested or something.
55:00
And then likewise. Only because I like typing command lines so much and it's really fun to watch. It's fun to watch how fast I type. Isn't it amazing? I'm just kidding.
55:21
I'm running out of jokes. That's the thing. So, let's see how that goes. Finally, UI, oh my god. Okay. So, and then we'll do that.
55:47
Now we're really cooking. I bet I know what that is.
56:16
Okay, so what I'm gonna do is I'm gonna just rebuild that guy.
56:21
Docker images. Boom. Let's see what I got. And it is node web app v4. I'm sorry? I only have two minutes?
56:40
Oh, that sucks. Thanks for keeping my time for me. That rocks. I mean that. It's okay. It's good. In other words, command line, don't do it at home. Don't try this at home. Okay, so I guess the point
57:01
that I was gonna try and go through here is essentially, let's see if we can go to my settings. And go to web app. And I'm gonna find my objects. I've got some searches here.
57:20
So for example, my view search. Warnings or whatever. It will do a search and it will grab the... or sorry, this was level error. So it'll filter on error. So I can save a bunch of searches, obviously. I'm not trying to teach everything ELK here; obviously it's an hour. I'm just trying to give the idea that, again, that whole sense of: I need visibility now,
57:41
and if I just pop this up, I don't have to do a lot. I clearly haven't even done a lot here, but I can do stuff. I can see errors. I can see warnings. I can search on just words. I can search on any of the JSON values that you see, noticing that these are some popular things that I've filtered on. But if you go to the main view,
58:01
all the fields appear, like literally everything that I've used in any of the logs so I can filter on any of those and automatically I have some value add. I think that's really the main point or takeaway. So coming back to that in terms of just some closure, there's a bunch of inputs.
58:22
This website actually illustrates quite a lot of them. I think I have that actually already here too. So just to give you an idea where that is over here. Tons of them. Even an Azure input that I haven't actually tried yet that might be able to do some automatic ingestion.
58:42
But look at all the different ones. Beats can look at files on your Linux box and actually pull them in; so you put an agent on the box and it can pull in logs as it sees them. There's stuff for other products, like Meetup. Or Twitter; I mean, S3, if you're doing Amazon.
59:01
So you can see that there's quite a long list of things that might help you automate ingestion. Even GitHub, for events that are happening there. Again, I still come back to the theme: this is really about wanting visibility into my app, which means I own my logs, I need to write them, I need to be non-intrusive, and I need to start out with something, which means a simple thing without clustering
59:23
would be a good start. And so the clustering part is really one of well let's start with what do I do if I have this single node thing? It's a container, right? And it's everything in one which means I've got Elasticsearch with Kibana with Logstash all together.
59:41
It's not going to scale well, so I might run into trouble if I really start firing at it, right? But the good thing is: I can back it up, I can snapshot it, preserve some of the data. Maybe I don't care how lossy it is; again, it's not my central store. I've still got all these other logs going somewhere else.
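One way to do that kind of backup with plain Docker commands; the container and tag names are hypothetical, and note that data kept in Docker volumes is not captured by commit:

    # freeze the container's filesystem as a new image, then export it
    docker commit elk elk-backup:snapshot1
    docker save elk-backup:snapshot1 -o elk-backup.tar

    # later, restore it on any host
    docker load -i elk-backup.tar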
01:00:00
People are looking for: give me some visibility, please, now. You know, because honestly, I see that so often that I just feel like that's really powerful. Start with that, but eventually you will probably have to look at scale-out and state. And the idea here is that Logstash is stateless, Kibana is stateless; those could easily be separated onto a web tier. But your Elasticsearch part, if you're starting
01:00:24
to get a lot of logs, would probably eventually have to be clustered, so you can decide then if you want to go to a hosting model that's cheap. If you don't have a billion records, it can be really cheap, like $5 a day or something. Did I say what it was? I think it was $5 a day, and $50 a day for like a billion, right?
01:00:42
That's pretty cheap, though, if you have a real company. Now, if you are a startup and you want everything free, just, you know, don't worry about doing that yet, you know, go with your own until you have a problem, which means you have a lot of users, which means you have the money, and you can do something about it, okay. So at the end of the day, this whole problem here, you know, you don't have to solve
01:01:04
it right away, but some of these points that we see here could be ingested. And I think, you know, you can start with just getting your logs pushed; your .NET code logger starts adding something that goes to Elasticsearch; you look at scale as you need to, but keep in mind what I just showed you about which part is stateful.
01:01:22
And actually, there's clustering with Swarm, which I showed briefly earlier on Docker, that you can deploy this to, and there's actually already a script for that, a compose file for that; I'll make sure that's in the references. So that's another thing that you can do: if you want to do clustering, you can do it on your existing
01:01:41
Docker cluster and play with that and get a clustered solution, so that's good. And so solving all of this doesn't have to be all at once, but eventually you could start ingesting from those destinations as well, that's another part of it, so you have two choices, right? When something goes wrong, you can either say it was aliens, and you have no idea, or you can log all the things, thank you for coming.
01:02:09
So, thank you. I'll make sure all the stuff, all the scripts and things, will be on my blog after I get home, so you can look for that.