Pydantic Logfire — Uncomplicated Observability
Formal Metadata

Title: Pydantic Logfire — Uncomplicated Observability
Author: Samuel Colvin
Number of Parts: 131
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/69488 (DOI)
Language: English
Transcript: English (auto-generated)
00:04
Thank you so much. I guess all of you have some idea of what Pydantic Logfire is, so I'm going to do a bit of explaining of the principle, and then I'm going to do an example of instrumenting an app live. That may go well. That may go badly. We will see.
00:21
Obviously, if you have any questions, we can take some Q&A here, but we're also at the booth, so if you have more detailed questions, or if we don't have time for your question here, we obviously have time to answer it at the booth. So what is a log? Here in Python nomenclature, it is effectively a 1D list of three things, generally three things, not even
00:44
all of these in some cases. A timestamp (when it happened), a level (often that's not available, but generally it is), and a string. If you were to write it out as Python code, it would look something like this. Now, this is all very well, but it has enormous limitations in terms of trying to work out what's happened.
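(Spelled out as Python, a record like that is just a flat tuple; the values here are made up for illustration.)

```python
# A traditional log record: three loosely related fields and nothing else.
log_record = (
    "2024-07-10 14:02:11",            # timestamp: when it happened
    "ERROR",                          # level: how important it is (sometimes missing)
    "failed to fetch user profile",   # message: a plain string
)
```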
01:03
So if we look at this particular example and we see, for example, this error happened here, which request is it associated with? Basically impossible to know. There is no link between this log message here (my laser's not working) and the request that caused it. That's only the beginning of the problem with logs.
01:26
There's also no hierarchy, really. Sure, there's a level, which tells you that the error might be more interesting than the warning or the info, but you probably want the info to get the context on the error, so you can't just look at the error. So the point is here that this is really what code looked like in about the 80s. Even Fortran 77, which predates the 80s, had subroutines, right?
01:51
Even in the 1980s, we could do some degree of putting our code into blocks and therefore make it easier to understand, both for the person writing it and for those reading it later.
02:04
So what should it look like? What about this structure for a log? So we still have a start timestamp, but we also have the end timestamp in the event, in the case that it was some kind of routine that took some amount of time. We have the message, same as before. We have attributes. We might well want to send through more data than fits into a single line of a log output.
02:28
And then most importantly of all, we have children, and that gives us effectively a tree view of logs rather than just this linear list. This is a far more powerful model through which to think about what it is
02:41
to observe an application than the linear list of logs that we're all used to. Now, some of you will be coming from having used existing observability platforms and will be like, none of this is new, why are you even talking about this? That is true, but for most Python developers, for most of us, most of the time until now, logging is basically the default.
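(As a rough Python sketch, the richer structure described above, with a duration, attributes, and nested children, might look like this; the field names are illustrative, not Logfire's actual data model.)

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional

@dataclass
class Span:
    start_timestamp: datetime
    end_timestamp: Optional[datetime]                          # set when the event took some time
    message: str
    attributes: dict[str, Any] = field(default_factory=dict)   # extra structured data
    children: list["Span"] = field(default_factory=list)       # nested events form a tree
```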
03:00
Getting the other things is expensive in terms of cost, but most of all, it's expensive in terms of faff, in terms of the mental overhead of trying to configure it. And so for most Python developers up until now, logging has been what we've been stuck with. So like I said, the innovation here in Logfire is not that this idea is completely new,
03:23
but trying to make it as easy as possible for everyone to use so everyone can get it. So what is Pydantic Logfire? What are the advantages of it? The first and most important one is it is simple to get started with. It is founded on the same principle as Pydantic that even the most powerful tools can be easy to use.
03:46
I am totally convinced that there are lots of software engineers, less so now, but definitely in the past, who tried to prove how clever they were by making things with complex APIs. And I think that the one reason really that Pydantic is successful is that it does the opposite,
04:00
to the point of breaking the rules of how you should do things to make it easy to use. And we wanted to do the same thing with Logfire. We wanted you to be able to understand it without having to go and spend days reading about the definition of observability, what a trace provider is, et cetera, et cetera. But again, it's built on top of OpenTelemetry. That means we get enormous amounts of stuff for free.
04:21
OpenTelemetry is an open standard for observability, started three or four years ago, but just kind of coming to maturity now. Lots of people are building on top of it, but its API looks very much like someone has tried to prove how clever they are by designing it. Actually what happened was they made a decision that every programming language should have the same API, which
04:42
means they have to use effectively the lowest common denominator of things that are available in all languages. So they couldn't go and build the nice abstractions that you would want. For example, you can't have a context manager because they're not available in JavaScript or in Rust or in really any other language like they are in Python. So we went and took OpenTelemetry.
05:02
We built on top of it. We effectively built some things on top of it that make Logfire particularly easy to use in Python. But of course, because we're just OpenTelemetry, you can use Logfire with any language. So you can send data to our platform from any language. You have the same shitty user experience setting it up in your app that you would
05:21
have if you were using OpenTelemetry with anyone else, but we'll get to other languages in time. Because, again, we're OpenTelemetry, we have traces as well as logs, as I'll show you in a minute, and we have metrics. We have auto tracing, which I won't demonstrate today because it's a bit trickier to set up, but it effectively allows us to insert a span around every function call.
05:43
And so you get something broadly akin to tracing, but within your application. But it will be clever enough not to instrument function calls that are very, very fast. So you have a basic threshold, and above that, we'll log it. And lastly, we have structured data. So I was saying earlier that we had attributes, which was a dictionary that contained more details.
06:05
One of the limitations of OpenTelemetry is that the attributes, effectively the extra information you can send with each event, are really primitives. They're really just strings and homogeneous lists of integers, effectively.
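(To make that concrete, here's a minimal sketch using the plain OpenTelemetry API; the span name and values are made up, and the JSON-encoding line is only a rough illustration of the kind of workaround described next, not Logfire's actual encoding.)

```python
import json
from opentelemetry import trace

tracer = trace.get_tracer("demo")
user = {"id": 123, "name": "Samuel", "tags": ["admin", "beta"]}

with tracer.start_as_current_span("handle request") as span:
    # Plain OpenTelemetry attributes must be primitives (str, bool, int, float)
    # or homogeneous sequences of them; a nested dict like `user` is not allowed.
    span.set_attribute("user_id", user["id"])
    # Conceptual workaround: JSON-encode the whole object (Logfire also records
    # metadata about the JSON so the original object can be reconstructed).
    span.set_attribute("user_json", json.dumps(user))
```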
06:20
So we do a bunch of work on top of that. Effectively, we JSON encode Python objects, and then we send metadata about that JSON so that we can reconstruct something that looks like a replica. So this is the Logfire dashboard, as I guess you will all have seen at the booth. But I'll just pull out the three things that make it particularly different.
06:42
So this, we call it nested logging. Of course, this is really trace data. So this is, as I showed you at the beginning, not just events, but also, in most cases, their duration or the duration of different tasks, nested effectively within a top level event, like in this case, an HTTP request.
07:01
Secondly, in the Logfire platform, we allow you to search all your data with SQL. So whether that be filtering on this live view, exploring your data in the Explore tab, building dashboards based on SQL, or indeed setting up alerts, again, defined by SQL. We think SQL is a very powerful way of effectively putting the complexity in a standard tool that lots of you will have already used.
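(For example, a query of roughly this shape could drive a dashboard or an alert; the table name, columns, and the numeric level encoding here are illustrative rather than the exact Logfire schema.)

```sql
SELECT start_timestamp, message, attributes
FROM records
WHERE level >= 17                                   -- e.g. "error" and above
  AND start_timestamp >= now() - interval '1 day'
ORDER BY start_timestamp DESC;
```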
07:28
And the bonus is that OpenAI, or generative AI, is very, very good at generating SQL. So if you don't want to have to go and write the SQL, you can get OpenAI to do it for you. So if you want logs from yesterday between 2 p.m. and 4 p.m. that were error or above, you don't need to work out the SQL.
07:43
You can use natural language querying to write that out and get the result. And then lastly, there's the structured data. So here we're showing, I don't know whether you can see it, but we have a pydantic model. This particular item that we're showing the details of is the auto-instrumentation of pydantic.
08:02
So we have, you can't see it all here, but we have the raw input data. We have the status of that validation, which was successful. And then we have the result, which is here a pydantic model. So again, because we're sending the metadata, we can get the name and the model and we can show that here. And obviously, we have all of its members. And again, you can go and query on that data.
08:20
It's stored in a structured form and you can go and query on it, which I don't think other tools generally make possible. So I'm going to go on now and, if I can find a way out of this, I'm going to do a demo of using Logfire. So what I have here, if it's going to work, is a very simple FastAPI app, which is called CatBacon.
08:52
And it will generate an image of whatever animal you give it in the style of Francis Bacon. So if we give it the most obvious option of a cat and we let it go and run... this is, by the way, using FastUI, which I talked about earlier.
09:05
But obviously, there's nothing special about FastUI when it comes to Logfire. We let it run and we get back this image of a cat. And of course, we can go and do something slightly more fun than a cat, like, for example, a llama.
09:23
Is that llama? I don't know how... It'll do the job. And the problem is here, right, we have no instrumentation. So all we have in terms of what happened is the standard screen that we get from...
09:43
I don't know whether you can see that, but maybe I can zoom in. What was the... We just get the standard log that we get given by Uvicorn. Not very helpful. In particular, we don't know where that slowness came from when we were waiting for our picture of a llama to load.
10:06
And we could go and look at some other things. So we have a list of the previous images that have been generated. And this endpoint seems kind of slow, given that, in theory, it's running locally against the local Postgres database on my incredibly expensive Mac.
10:21
It should be instant. What's going on to make that slow? So let's take this app and try and instrument it with FastAPI to get an idea... Sorry, with Logfire, to get an idea of what it can do. So this is the code for the app here. Hopefully, some of you can see that. If you can't, I have not uploaded it yet, but I will immediately after this upload it to the repo that I mentioned at the beginning.
10:46
So this is where our app starts, effectively. This is where we have the FastAPI declaration of the app. And the first thing we're going to do is set up... I'm going to make it bigger again.
11:00
First thing we're going to do is set up a project with Logfire. So the logfire Python package is both a package you install and a CLI. So if I run logfire whoami, I should get that I'm logged in as Samuel Colvin and there are no Logfire credentials set up for this directory. So if I do logfire projects new, I get asked what organization I want it to be attached to, defaulting to my own organization.
11:29
Fine. I'm going to call the project talk, which is good enough for now. And our project has been created. And now if I run logfire whoami, not only does it know I'm logged in as me, but it knows what project I'm linked to.
11:41
And that's just from a .logfire directory that gets created in your local directory. We have slightly different instructions if you're using Logfire in production, where you set an environment variable, but this is very useful for development. So if we open this, we get... Well, first of all, we get instructions on what to do next, but we know what we're doing.
12:02
So we can then see the Logfire UI. Nothing here yet because we haven't logged any data. So let's go and instrument our app with Logfire. So we're going to start by importing logfire. We're going to call logfire.configure. We don't need to set anything here. There's a whole bunch of keyword arguments you can use to configure how Logfire works.
12:24
We don't need any of them at this particular time, but we're also going to instrument our FastAPI app. So that's just logfire.instrument_fastapi with the app. Thank you. I'll also... I know I'm using asyncpg, so I'll instrument that.
12:41
logfire.instrument_asyncpg. And we're also obviously using OpenAI, so I'll go and instrument the OpenAI client. So if I do this... and I happen to know that instrument_openai takes the client as an argument.
13:02
So I'll do instrument_openai with the client. Thank you. And so those are, I guess, the main bits of our app. You'll see we've added one, two... I'm sure we configured that there. Three, four, five lines of code so far to our app.
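(Pulled together, the handful of lines added to the demo app look roughly like this; the app and client variable names are assumptions about the demo code, not taken from it.)

```python
import logfire
from fastapi import FastAPI
from openai import AsyncOpenAI

app = FastAPI()                 # the demo's FastAPI app (name assumed)
openai_client = AsyncOpenAI()   # the demo's OpenAI client (name assumed)

logfire.configure()                       # picks up the credentials created by the CLI
logfire.instrument_fastapi(app)           # a span for every request FastAPI handles
logfire.instrument_asyncpg()              # a span for every asyncpg query
logfire.instrument_openai(openai_client)  # spans for calls made with this OpenAI client
```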
13:20
And now if we run it again, you will see that we start seeing some different logs come out here. But more importantly, if we go look at our application, we start seeing... First of all, we see some SQL queries coming through and how long they're taking. But probably more interesting is to come and look at our app and look at, say, we've got our list of images.
13:44
And if we come back in here, you will see the Logfire statements associated with those requests that I just ran. So for example, this was me. Actually, I won't talk about that one because I've got something exciting to show you in a minute. But if I look at the API endpoint, you'll see I can see the FastAPI arguments.
14:04
Nothing very exciting in here. If I generate an image of an animal, I'm dyslexic, so I need the name of an animal that's easy to spell, please. Okay, we'll do dog. Thank you for that. And while that's running, we can come over here and you can see that request to generate the image ongoing right now.
14:25
And we can literally see what's happening, in this case, and it's finished. And we have a picture of... Sorry. Oh, yeah, sorry. Dog. There's not much more interesting...
14:48
Yeah, yeah, yeah, yeah. I'm very sorry, that was not intentionally quite as unhelpful as it was meant to be. So is that a bit easier to see?
15:02
But you can see here we have the request to generate. We can see that it took 9.3 seconds, and we can see the events that went on within it. Not particularly interesting: we spend 9.3 seconds of that making the API request to OpenAI. And you can see here a bit of information about what happened, the prompt that was actually used to make the request.
15:26
We obviously don't show the data from the image in this particular case in Logfire, although we can do that. And then you'll see the database request to store that data in the images table.
15:41
And you'll see how long that took, although in this particular request it's not very interesting. From the point of view of Logfire, what gets more interesting is if we come over here and we look at the list of images, and you see that's kind of weirdly slow for something happening locally. So let's work out why that request is slow. So we can open up our images endpoint, and we can see, whoa, we're doing loads of queries.
16:09
And the first one, which is really quick... this is weird. So the first one is getting the ID for all of the images, sorted by timestamp.
16:23
And then we're going through each of them and getting the data for each of them, I guess. Which is weird. Now, if you look at the code, it looks obviously contrived because I'm using actual SQL, so it's not easy to make the N+1 mistake. But if I was using an ORM, it would be very easy to forget to do a select_related or an only,
16:42
and then our ORM would be going and making lots of requests to get extra attributes about a given model. So you can see here that our code is actually far from ideal. We're getting just the IDs, and then we're iterating over all of the rows we want to return and getting the extra attributes. So let's replace that with, let's do this.
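(Roughly the shape of the change being made here, sketched with asyncpg; the table and column names are guesses, not the actual demo schema.)

```python
import asyncpg

async def list_images(conn: asyncpg.Connection):
    # Before (the N+1 shape): one query for the ids, then one extra query per image.
    # ids = await conn.fetch("SELECT id FROM images ORDER BY created_at DESC")
    # return [
    #     await conn.fetchrow("SELECT * FROM images WHERE id = $1", row["id"])
    #     for row in ids
    # ]

    # After: a single query that fetches all the rows in one round trip.
    return await conn.fetch("SELECT * FROM images ORDER BY created_at DESC")
```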
17:07
Oh, we want that line still. We're going to call that rows, and we're going to select star. And Uvicorn has happily reloaded. Now if I go back over here and I reload this endpoint, it still seems to be weirdly slow.
17:24
But if I look at the bottom here, at my request, you will see, sure enough, this query here is very fast, and we're only doing that one query. I was about to show you another, like, oh, this is still slow.
17:42
But what I've realized is I forgot to remove the index that I was about to be like, so you should add an index. But I've forgotten to delete the index. So bear with me while I go and delete the index and then pretend I haven't deleted the index. If I go and log into the database, you will see we have this here that, is that showing?
18:07
Maybe there is no index on it. That's weird. Well, the main thing you can see here is that, like, we've gone down to just making the one request, and we can see what's taking what time, and we can see that the count of images is still taking 200 milliseconds.
18:24
Why is that? Well, it's because if I come out of here and we run the server again, you'll see what's weird about this is we don't just have those few images I've generated when testing and when showing it to you, but we have loads of randomly generated images.
18:42
So, well, let's look at where that's happening, and let's try and instrument some of our code in the database, or in our database interactions, to make it easier to group things. For example, you'll see here when we're booting up our app, we have a bunch of SQL queries going on that are happening as top-level events within our log.
19:03
We'd like to group them into, for example, startup. So what we can go and do is look for where that's happening. Here we've got this database prepare statement. Let's just put logfire.instrument around that.
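(The change is just a decorator, something like the sketch below; the function name and signature are illustrative, not the demo's actual code.)

```python
import logfire

@logfire.instrument("db prepare")       # wraps each call to this function in a span
async def prepare_database(conn):
    # ... the existing prepare-statement / startup queries run unchanged here ...
    ...
```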
19:21
We need to import logfire in this file. You see here we're not having to do anything fancy to edit our code. We're just inserting a decorator. And now when we start up our app, you will see immediately that we have db prepare, and we can even see the arguments that were passed to it.
19:42
And then we have all of our SQL queries nested within it, and what was taking what time. I don't know how long we've taken. We're probably at about the right time. Gosh, that's amazing. So thank you very much. That was my demo. I can show you more stuff later on, but thank you. I'm five minutes too early, right?
20:08
Oh, my gosh. Well, if anyone has any questions, happy to take them. Yeah, we are now in the Q&A session, so go to the microphones and ask your questions. Please feel free. Hi, again. Can you use other collectors than Logfire?
20:21
Yeah. Yeah, so you can send data to Logfire from any standard OpenTelemetry integration, in Python or in another language. So I don't know... Is there any limitation to that? So I can show you, for example, we have a database internally that we're working on migrating to for Logfire. And that is itself instrumented with Logfire, which has its own weaknesses, as you might imagine.
20:46
But if I go back... this is not the production one. This is my local testing, so I need to go back a few days to show some data. And I'm hoping we will see... I don't know how much data there is in this app, but you'll see this data here. You'll see that the scope name is opentelemetry-otlp, which is Rust.
21:04
And you'll see that the data is not quite as pretty inside. That's weird. The data on these attributes is not quite as pretty because it's coming from the Rust OpenTelemetry integration with the tracing library in Rust.
21:21
And it doesn't... so, for example, these numeric values are encoded as strings. So we don't get all the prettiness, but you still get to see, within this particular write, what took what time. The limitations are you need to use the protobuf version of OTLP and you need to use gRPC, not whatever the other one is.
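(As a sketch, pointing a vanilla OpenTelemetry Python SDK at Logfire over OTLP/gRPC looks roughly like this; the endpoint URL and the auth header are placeholders, so check the Logfire docs for the real values.)

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://<logfire-otlp-endpoint>",               # placeholder
    headers={"authorization": "<your-logfire-write-token>"},  # placeholder
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```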
21:42
I think in time we'll support all of the different protocols and encoding formats. And similarly, if you really want, you can send data from the Logfire SDK to another OpenTelemetry sink. So if you, for some reason, don't want to use our platform, you can send that data to something else.
22:00
It'll look uglier and we think that the Logfire platform is best, but you can do it because it's just OTLP. And that's obviously the insulation from us being a startup: in the end, you could just use the Logfire SDK and you would get a better experience observing code in Python than you would with the default OpenTelemetry SDK, but you could send the data somewhere else.
22:21
But we just hope that our platform is good enough that you'll end up using it. Sorry. And there's another question on Discord. So what is the key selling point versus using regular OTel instrumentation with, for example, Kibana or Jaeger or something else? So there are a number of different things that standard OTel can't do that we can only do because we
22:43
are controlling both the SDK and the platform. And it could have been a planted question, for all the things I forgot to say. So you will see here where we were... I don't know if we have a good example in this particular case... but maybe we don't, and I can show you one here. I can go and add another Logfire call here, logfire.info.
23:04
This is like manual tracing that I didn't really bother to show you before because mostly it's not necessary, but I'm going to say like X and I can do, well, let me define X here.
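(Something like the sketch below; the variable and its value are made up for illustration, and the f-string form is the one discussed next.)

```python
import logfire

logfire.configure()

x = {"a": 1, "b": 2}        # illustrative value
logfire.info("hello")       # a plain manual log call...
logfire.info(f"x = {x}")    # ...and the f-string version: Logfire extracts `x`
                            # from the f-string and records it as a structured attribute
```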
23:21
And then I will turn that into an f-string. And you might think that using an f-string here is obviously the worst possible idea, but if you look at where that has just run, you will see that not only do we have that statement printed, but we've also been able to extract the argument from the f-string and display it. And that's displayed prettily, in this case as a dictionary, but if it was a data
23:41
class or a pydantic model or whatever else, we can show it in a pretty way. None of that is possible if you're using standard OTel. The other big difference is, as I showed you here, if I come back and I put in, like, fish, another nice easy spelling, and you'll see here this request going on: we are able to show requests while they're in progress.
24:01
You wouldn't get that with standard OTel, because it only sends information about spans and traces when they finish. That doesn't matter if your span or trace is taking like 300 milliseconds, but it does matter when it's going to take 40 minutes and you can't see anything until it's finished. And that's because we control the SDK and we basically, to an extent, and it's obviously
24:20
configurable to switch off, hack OTel to send information about traces when they start as well. This one failed for some reason, and I wonder if we can get the traceback. It would be nice if we did get the traceback, but we don't seem to be getting the traceback on why that was happening. Oh, here we are. Oh, yeah, for some reason the word fish caused OpenAI to think that we were doing something illegal.
24:41
So it does a weird thing where, when you generate images, it takes your prompt and then it gives it to normal GPT and says, give loads more context to this. And that loads more context, I presume, sometimes generates something obscene that is then considered invalid. Anyway. So there's another question. Hi. So, in a way, to respond to the previous question, but also to ask you a question, because I
25:07
don't think actually Jaeger and the like are a competition to that, but I mostly already get it from Sentry. Also, there are the breadcrumbs, or even OpenTelemetry data, automatically collected.
25:21
So why use your product instead of Sentry? I mean, I'm a big fan of Sentry, and I personally think that the greatest form of flattery is that they've suddenly announced tracing a month after we went into open beta. So the answer I had until about three weeks ago was that they don't do tracing. They don't have any view like this whatsoever.
25:42
And this model of, it feels like logging, but it's actually tracing, I find incredibly valuable, and the feedback from people has been that it's really useful. The other thing that they don't do, and from what they've said they are diametrically opposed to, is letting you query the data with SQL. I mean, it's hard. I would say just go and try and use Sentry for this kind of experience of working out what happened live in your app.
26:03
And I've never found it as useful for this. It's great for exception handling. I don't find it as useful for other things, but it's a great company, very impressive. No criticism. I don't yet have any experience with OpenTelemetry at all.
26:21
I can see a use case for the business where I'm working now, where we have a lot of Raspberry Pis running, controlling a factory. How well can it handle the network going down and coming back up? Yes. So what the Logfire SDK will do, which again is above and beyond standard OTel:
26:41
One of the things that we do is we store data in protobuf files locally when requests fail, and then we retry them sometime later, when we hope the network is back up. We have to be a bit careful about not doing that in a way that could cause stampedes. But we try very hard to basically store that data locally and, yeah, send it again when your network is back up.
27:05
And that obviously also gives some insulation. So the first thing is, the standard OTel SDK will retry up to, I don't know how many times, but over about 30 seconds. If it has still failed after that, we store it to disk and then we try and pick that file up and resend it later.
27:22
So are there any questions still in the room? Because we don't have any more on Discord. Nobody's raising their hand. OK, so thank you very much. I'm always really impressed by live coding, and I think he has deserved a cookie. Yes, thank you very much.