
Introduction to Concurrency in Ruby


Formal Metadata

Title
Introduction to Concurrency in Ruby
Part Number
14
Number of Parts
89
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
In this talk we'll learn about the options we have to let a computer running Ruby do multiple things simultaneously. We'll answer questions such as: What's the difference between how Puma and Unicorn handle serving multiple Rails HTTP requests at the same time? Why does ActionCable use EventMachine? How do these underlying mechanisms actually work if you strip away the complexity?
Transcript: English (auto-generated)
Hey, welcome, my name is Thijs. I'm doing a little demo later on, so if you wanna join that and also look at the code
I'll be talking about, please clone this repository. So today I'm going to talk about how to do concurrency in Ruby using just very simple stuff from the standard library. So, I work on a monitoring product for Ruby
called AppSignal, and we support a lot of different types of web servers and tools, which kind of forced me to learn about all the different ways you can do concurrency in Ruby. And then I realized that, in the years before, this always sort of sounded like an intimidating subject,
and I felt like I didn't really understand what was going on, but it was actually quite a bit easier than I thought. So today I'm here to share some of these insights with you. So, in general, there's a few exceptions to this, but for simplicity's sake, we're going to be talking about these three main ways of doing concurrency.
So you can either run multiple processes, you can run multiple threads, or you can run an event loop and have that kind of fake concurrency, in a sense. You're probably familiar with these three web servers; they all use one of these models, with their upsides and downsides. So we're going to discuss this subject by building a very simple chat server that's kind of like Slack. I implemented a little chat server in Ruby in these three different ways. My colleague Roy over there is already logged into it. Hopefully, well, of course it's not working.
Hi Roy, ah, it's working. So, well, here's our very minimalistic Slack. I'm afraid we won't be getting millions of VC funding for this, but at least it works.
So, we'll start with discussing the chat client. This file is called client.rb, if you checked out the repository. It uses just the basic networking stuff from the Ruby standard library to make a connection. It starts by requiring socket, which brings in all this network logic. Then it opens a TCP connection to a certain address, and it boots up a little thread that just listens for incoming data. So basically anything the server sends back will get written out to the command line.
And finally, it listens on the command line. That's the STDIN.gets call: it basically just waits for you to type something and press enter, and then it triggers that loop and puts the line on the socket. That means it gets written to the server, so the server receives this data and can write data back to the client. So the client is able to either get user input from you or write stuff that the server sent back to the command line. And basically this is a full chat client.
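The whole client fits in a few lines. Here is a minimal sketch of what the speaker describes; the `run_client` wrapper, the host, and the port are my assumptions for illustration, and the real code is client.rb in the repository:

```ruby
require "socket"

# Connect, spawn a reader thread, and forward input lines to the server.
# Host and port are assumptions; the real client lives in client.rb.
def run_client(host, port, input: STDIN, output: STDOUT)
  socket = TCPSocket.new(host, port)

  # Background thread: anything the server sends gets written out.
  reader = Thread.new do
    while line = socket.gets
      output.puts line
    end
  end

  # Main loop: every line the user types is sent to the server.
  while line = input.gets
    socket.puts line
  end

  socket.close_write # signal end-of-input to the server
  reader.join        # drain whatever the server still sends back
end
```

Running it would just be `run_client("localhost", 2000)`, which blocks reading from your terminal until you hit end-of-input.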
I'm just sorry to say it doesn't support any animated GIFs, but yeah. So, as discussed, there's three ways to do this in Ruby. And the first and most simple way in a sense is to use multiple processes. So this is how Unicorn works. What happens is that there's one master process
that gets started by your system. And whenever some work needs to be done, that forks into a child process. And that child process can do some work and then might get killed again or will live for a little bit longer. And this worker process does the actual work
and the master process kind of manages these child processes. And if you would look at this in your activity monitor on your Mac, or in top on the server, it would look something like this. So you've got a master and a few Unicorn workers. And you could actually kill one of these workers
or just let it crash. And then the master process will make sure that a new one gets spawned. So this is pretty resilient architecture. So what does this look like in Ruby code? We'll first start with actually starting the server.
So this piece of code is the same for all the examples. Basically we start a TCP server on a certain port, and from that moment on it just listens for new incoming connections. And since we're using multiple processes here, these processes each have a completely separate namespace. So if you modify some variable in one of the processes, this won't influence the other ones at all, because they're actually completely isolated by the operating system. That's why we need a way to communicate between them: if we receive a chat message on a process that's handling one connection, we need to be able to write it to all these other connections, which are actually different processes. I've simplified this a little bit; you can see the full code in the examples. What it comes down to is that we use a pipe. You might have seen pipes on the command line, where you can use them to grep through stuff, for example. A pipe is just a stream of data from one process to the other. So if you open a pipe and you write from process one, the data will be readable from process two. So we just set up this communication (the details are in the examples) and that will magically make sure that all the other processes also get all the chat messages.
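A minimal sketch of the pipe mechanism just described; the message text here is made up:

```ruby
# A pipe is a one-way stream of data between processes: whatever one
# process writes into it, the other can read out, even across a fork.
reader, writer = IO.pipe

pid = fork do
  # Child process: hand a chat message to the master through the pipe.
  reader.close
  writer.puts "hello from the child"
  writer.close
end

# Master process: close our copy of the write end, then read.
writer.close
message = reader.gets
Process.wait(pid)

puts message # => "hello from the child"
```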
And then we get to the management part of how this works. So this is very similar to what Unicorn would do. So we start a loop, and we try to accept the new connection from the server.
So server.accept just waits for somebody to connect. And whenever this happens, it's the next iteration of the loop. Then we set up a pipe so we can write between the new process and the master. And we add this pipe to the list of processes in the master
to make sure that we can also write back to the child process. And then there's a little magical word that does a lot of stuff, which is fork. What fork does is make a complete copy of the process, exactly as it is at that moment. So basically, the moment you call fork, anything that happens within the do...end block runs in the new process, and anything after it runs in the old process. The old one is still there, and there's a new one, which is just a complete clone, and it starts doing the work inside the do...end block. This is actually quite a hard concept to wrap your head around. I think you really have to try it out a few times on your own machine to get it. I know I didn't really get any of the explanations; I really had to see it for myself to be able to understand it.
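The copy-on-fork isolation is easy to see in a tiny experiment; the variable names here are made up:

```ruby
# fork makes a complete copy of the process at the moment you call it.
# The block runs in the child; the lines after fork run in the parent.
message = "set before fork"

pid = fork do
  # The child has its own copy of all memory, so this change is
  # invisible to the parent: the processes are fully isolated.
  message = "changed in the child"
end

Process.wait(pid)
puts message # the parent still sees "set before fork"
```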
So now what happened is we have a child process, and this child process is aware of which stream it's connected to. So next up, we can actually do some chat logic.
So the first thing we do is read the first line from the socket and just assume that's the nickname. That's kind of the protocol of our chat server: the client just writes your nickname as the first line. And then we write a little message back to the client. And then (line 31 is again simplified) this starts a little thread that writes incoming messages back to the client. I'm sorry to say we do actually need a thread in the multi-process example to make it all work; otherwise you wouldn't really be able to implement the whole thing. And then it basically waits for you to type something. So there's a while loop at the bottom: it tries to read a line of text from the socket, and it writes this line of text to the pipe, and the pipe carries it back to the master process.
So it's quite a few moving parts. So in this case, doing it multi-process is actually a bit more code than the other versions. Sorry, I skipped one thing. So what happens next is that the master process
can write this text message back to all the children, and the children can write it back to your terminal. And we will see how this works in the demo, like how this operates in reality in the demo at the end of the talk. So there's a few good things about multi-process concurrency.
So one thing is that you can basically forget that such a thing as concurrency exists, because anything that happens in the process is just executed in a single thread, and there's no way any thread safety issues can arise. And the next good thing is that workers can crash. So for example, GitHub really likes this model,
so both Unicorn and Resque, which were written by them, use this model because they do a lot of call-outs to the git command on the command line, and that has a tendency to use up a lot of memory and crash. So they would have issues with a threaded model because the thread could bring down the entire process,
and in this case, the master will just reboot anything that crashes. And the downside is that it uses a lot of resources. So any time you want to do anything that happens at the same time, you need multiple processes which use memory all over the place.
So it's actually a very poor choice for a chat server. But it does work, as we'll see in a bit. Which brings us to the next model, which is multi-threading, which makes a lot more sense for a chat application, actually.
What happens here is that you have a single process, and you can boot, within this process, you can boot threads that do work, but they still share all the memory. So if you mutate something in memory in one thread, it will also be different in another thread. And that looks something like this.
So again, we have exactly the same TCP server that gets opened. But then things are a bit different. So what we do here is we're basically using this messages array as a database.
So any time a new message comes in, we just put it into this array so we can store it and send it to other people. But if multiple threads were reading and writing to this array at the same time, it might actually end up in an inconsistent state. Because one could be reading stuff while, in the meantime, another one is inserting stuff,
and maybe then a message wouldn't be written to our clients, for example. And that's why we need a mutex. So a mutex is basically like a traffic light. So a thread can lock a mutex and basically tell a mutex,
I'm working with this data at the moment, and then when it releases the lock, then another thread can lock it and also work with the data. But they can't do it at the same time. So this enforces that your data stays in a consistent state. The downside, of course, is that if you have a lot of locking, then the whole thing
might actually be just as slow as a single process application. Because if all the threads end up just doing work one by one instead of concurrently, then you still don't have a concurrent system. So if you hear the word locking,
also in a database context, it's kind of like how this works too. So next up, we do the same server.accept call. So again we're waiting for somebody to connect to the server. But instead of forking, we're actually starting a thread.
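The accept-then-thread pattern can be sketched like this. The `start_chat_server` wrapper and the welcome message are my inventions so the sketch is self-contained; the real server keeps each connection open and broadcasts messages:

```ruby
require "socket"

# Threaded accept loop: every incoming connection gets its own thread,
# all inside one process sharing the same memory. Port 0 picks a free port.
def start_chat_server(port = 0)
  server = TCPServer.new("127.0.0.1", port)
  Thread.new do
    loop do
      socket = server.accept        # wait for the next client...
      Thread.new(socket) do |s|     # ...and handle it in its own thread
        # Our toy protocol: the first line a client sends is the nickname.
        nickname = s.gets.chomp
        s.puts "Welcome, #{nickname}!"
        s.close
      end
    end
  end
  server
end
```

A client can then connect with `TCPSocket.new`, send a nickname, and read the greeting back.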
So anything that happens within the do...end block runs in a separate thread within the same process, sort of independently of the other threads. So again we're reading the nickname from the socket and we're writing something back to the socket. And here it's slightly different.
So instead of having to set up all these pipes, we run a little thread again that just sends incoming messages back to the client and reads messages from the client. If you look in the examples, you will see the implementation of these two methods; for simplicity's sake, I didn't include them here. And when a new message comes in, we basically just push it onto this messages array, which is kind of like our mock database. So we call mutex.synchronize, which locks the mutex
and makes sure that only our thread is currently doing anything with this messages array. And then we just push a new message onto it. So in reality, you would probably store this in Redis or whatever to make sure it would survive a crash of the process.
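The mutex-guarded push looks roughly like this, exaggerated with several threads to show that no pushes get lost; the thread and message counts are arbitrary:

```ruby
# The messages array is our mock database; the mutex is the traffic
# light that makes sure only one thread touches it at a time.
messages = []
mutex = Mutex.new

threads = 10.times.map do |i|
  Thread.new do
    100.times do |j|
      # synchronize locks the mutex, runs the block, releases the lock.
      mutex.synchronize { messages << "message #{i}-#{j}" }
    end
  end
end
threads.each(&:join)

puts messages.size # => 1000: every push made it in
```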
And next up, we write these messages back to every client every 200 milliseconds. So again, we have to lock, and then it collects all the messages that have to be sent. So we're storing a sent-until timestamp... yeah? [Audience question, inaudible.]
Yeah, for sure, that's a good question. A mutex can deadlock. It basically means that if you lock something but never release the lock, or you lock something and then you lock something else,
and these two locks are kind of like waiting for each other, then basically your process will never continue doing any work. So this is also a risk of using a mutex. And these kind of issues are the reason why people in general say that programming using threads is pretty hard because you have to be aware
of all these risks and not do any stupid stuff. But it's very easy to do the stupid stuff, so yeah, that's kind of a problem. So: we've got the messages we want to send, and we just write them out to the socket with the socket.puts line. And we sleep for a little bit.
So this is kind of, you can already sort of see the Achilles heel of the system, maybe, because we have to lock the messages array all the time. So if we have a lot of throughput, then probably the percentage of time that the messages array is locked will get higher and higher, and maybe at some point it will be so high that we won't actually be able to send out messages
as fast as they come in. But I don't think we're going to reach that limit right now; I don't see that many people with a laptop open. And it will look something like this in your process manager. So there's a single process with a single process ID,
and it's just running a few threads. So there's another thing to think about when you use threads in Ruby, which is the global interpreter lock. The global interpreter lock is specific to Ruby. It's kind of a relic from the past that, at the moment, is still present in MRI. It means that a line of Ruby code cannot be executed in multiple threads at the same time. So for example, if you run a few threads and you store stuff in a hash, or do some calculations in different threads, then they're actually just running one by one instead of concurrently. The only exception to that is IO. So if you write to a socket, or you write to disk, or you read from disk, then the lock actually won't be active. And this is the reason why using threads in Ruby is often useful: especially in a web context, or when doing networking, most time is spent querying the database and waiting for the results, and in the meantime Ruby can do other stuff. So in reality, it's still quite useful,
even though there is this lock present. And another thing is thread safety, which we already discussed a little bit just now. There's a risk of deadlocks, and if you're not careful, you can mutate things at the same time, and then your counters are totally off, for example.
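Coming back to the interpreter lock for a second: a quick timing experiment shows how blocking calls overlap across threads. Here `sleep` stands in for a blocking database or network call, since sleep also releases the lock while waiting:

```ruby
# Four threads each "wait on IO" for 0.2 seconds. Because blocking
# calls release the global interpreter lock, the waits overlap
# instead of adding up.
start = Time.now
threads = 4.times.map { Thread.new { sleep 0.2 } }
threads.each(&:join)
elapsed = Time.now - start

puts format("%.2fs", elapsed) # roughly 0.2s, not 0.8s
```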
So it's not really for the faint of heart, in Ruby at least; you do have to know what you're doing. The positive side is that it uses a lot less memory per connection than multi-process.
And you can share data easily, so you don't need to set up all this communication between the processes; you can basically just have an array stored in a central location, and the whole thing will just work. On the downside, you do have to make sure that your code is thread safe, and it doesn't make any sense for CPU-intensive operations, since these will only run in a single thread at a time anyway. But that's rare in web land, so it's usually not that much of an issue. And I skipped one: if a thread crashes, there's a possibility that the whole process crashes, and then basically the whole thing is gone, because there's no master process making sure it gets started again. And then we get to the final way of doing concurrency in Ruby, which is an event loop.
And the funny thing about an event loop is that it's not actually concurrent. The trick about it is that it's so fine-grained that it's kind of like a magic trick. It seems to be very concurrent, but actually it's only doing one thing at the same time. And we'll get to that in a bit.
And it uses very little memory per connection, so it's a good choice for something like a chat client in general. And an event loop looks something like this. So it all starts with the operating system. So the operating system, of course,
knows what kind of network connections are open and what's going on. And you can ask the operating system to tell you when something happened. This is called registering interest. So you can, for example, tell the Linux kernel: please inform me when this connection is ready for reading. And then it can just ping back to your Ruby code and let you know. Yeah, so the operating system tells us: this thing you're interested in is ready. And that gets pushed onto an event queue. So there's this list of things that are happening, and you can just loop through them.
So basically the event loop is doing nothing more than just endlessly going over this list of events that's in the queue, and just doing stuff with every single one of them. Usually it has some kind of storage. So in the context of a chat client, you want to know the nickname of the person
that's connected to a certain stream, because when a message comes in, you want to be able to know who this person is, so you can write the nickname back out there. And the event loop can often also add an event, and it can ask the kernel to tell you
when something happens. So basically this is a single process usually, and a single thread, and it's just endlessly spinning around and waiting for stuff to happen, and it reacts to something, and it writes something to a stream, and then it just goes on to the next tick of the loop.
So you need quite tight operating system integration to make this work, and in Ruby there's a gem called EventMachine that offers this integration. So if you would actually do something like this in production, you would need something like that. Maybe it will end up in Ruby itself at some point, too; I hear they're thinking about a more actor-based concurrency model, so then stuff like this will be easier. But for now, I kind of cheated, and I made a kind of not-so-nice event loop, but at least it's simple.
So what we're doing is we're using a fiber and IO.select. IO.select is a function in the Ruby standard library: you can pass a list of sockets, of IO descriptors (like a socket or something on the file system) into it, and you can ask it: please inform me
when one of these sockets is ready for reading or writing. We'll get to that in a bit. But first, fibers. A fiber is a newer concurrency construct that was introduced in Ruby 1.9, I think. It's kind of like a thread, only much more lightweight: it has a very small stack (at the moment, 32 kilobytes per fiber), and it operates like a thread, but you can pause it and resume it at any time you like. So in this example, we have a little fiber that's looping around, and it calls Fiber.yield. Fiber.yield basically pauses until resume is called on the fiber, and whenever resume is called, the yield call continues and the fiber does whatever work. So in this case, the console output would be one, two, three, four, because we're just asking the fiber to put this back to the console. A fiber is kind of like a goroutine, except that the Go runtime schedules goroutines itself, and with Ruby fibers, you have to do this yourself. So if you don't resume, the fiber will just stay paused endlessly. So again, we have the same example, the same TCP server opening; nothing changes there.
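The yield/resume dance just described, as a runnable sketch; it collects into an array instead of printing, so the ordering is easy to check:

```ruby
# Fiber.yield pauses the fiber; resume continues it from where it
# paused. Nothing runs concurrently: we schedule it ourselves.
output = []

fiber = Fiber.new do
  output << "one"
  Fiber.yield       # pause until the next resume
  output << "three"
end

fiber.resume        # runs the fiber up to Fiber.yield
output << "two"
fiber.resume        # continues the fiber after the yield
output << "four"

puts output.join(", ") # => one, two, three, four
```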
And then we have a list of clients and a list of messages. We don't need the mutex here, because it's just a single thread; there's no actual concurrency going on, it's just faking it, basically. And in the clients hash we'll store some metadata: we'll store a fiber for every connection.
So this is basically what our chat server client representation is. So anytime a new connection gets opened, we start a fiber that's just endlessly looping, and it waits for itself to either
become readable or writable. So the event loop will tell the fiber: you're now readable, and you can do some work. And then it can actually do the work. I get the sense that this is a little bit confusing, so let me think how to maybe explain it a bit better.
We'll just see on the next slide what happens. Hopefully it will be a bit clearer then. So when a fiber is in a readable state, it again can read some data from the socket,
and push this onto the list of messages. And when it's in a writable state, it again gets the messages that have to be written from the list of messages, and writes them back to the client. And then it stores the last write timestamp, so it knows that next time around, it doesn't have to send the same messages yet again.
And then we get to the actual event loop. So this is a fully functioning event loop. It just loops endlessly, and it starts by trying to see if there are any new connections, and storing these in the list of clients.
Then it tries to ask the operating system, do you have any connections that are ready for reading or writing? And then it reads from the readable connections, and it writes from the writable connections. So we'll go over all four of these in a bit more detail now.
In this case, it's again calling server.accept, only this time we use the non-blocking version, accept_nonblock. The difference is that server.accept waits for new connections, while the non-blocking version returns immediately. It just tries to get a connection, and if there's no such thing, it raises an exception; if there's an exception here, we can just continue with the loop. Yeah, then we get to the next step. We just ask the operating system: please tell us which of these are writable or readable. In a real event loop, you would do this in a more scalable way; this is really a primitive version of how to do that. And then we have sort of the same code again as we've seen earlier.
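One tick of this primitive loop can be sketched like so. The `tick` method is a hypothetical wrapper I added so the sketch is easy to poke at; the real example just loops forever:

```ruby
require "socket"

# One tick of a primitive event loop: accept any new connection
# without blocking, then ask the OS which sockets are ready.
def tick(server, clients, timeout = 0.1)
  begin
    clients << server.accept_nonblock
  rescue IO::WaitReadable
    # Nobody is connecting right now; just carry on with the loop.
  end
  return [[], []] if clients.empty?

  # IO.select returns nil if nothing became ready before the timeout.
  readable, writable, = IO.select(clients, clients, [], timeout)
  [readable || [], writable || []]
end
```

The real loop would call `tick` endlessly, reading lines from the readable sockets and writing pending messages to the writable ones.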
It pushes a message back onto the messages list, and the writable code is also sort of the same. So the upside of an event loop system is that it has very low overhead per connection.
And it scales up to a huge number of parallel connections for this reason. Downside is that you probably already know this if you ever used JavaScript. But if you get a more complex event loop system, you often end up with something like callbacks to be able to manage everything.
And then the whole thing can get very hard to debug because everything is calling each other and there's a huge stack. And finally, and very importantly, since it's a single thread in a single process, like if it stops, then the whole thing stops. So for example, if we go back to this example,
if the reading in this case would just take a very long time for some reason, nothing would be read from any client at all. So you do need to have a workload that is suitable for being cut up into very small pieces.
So which one to use? Well, as always, the answer is it depends. So if you have stuff that can crash, then the multi-process approach is very good. Multi-threading is a nice one because it's relatively simple and you don't need to convert your whole code
to use an event-based model. And the event loop is nice if you need a lot of concurrency. So let's try it out on my laptop and see how this chat server actually works. So maybe you already checked out the example code.
I know Roy has, so at least I can chat with Roy. You can connect to my laptop by running the command below. If you checked out the code before the presentation, please pull, because I fixed a bug.
Did everybody who wants to join get that?
I should get a different first name. So there's already some people in our Slack. So I'm currently running the evented version.
So if you look at the Ruby processes running on my machine, you can see this at the left bottom side of the terminal. There's just a client which is running on the right side and the server which is running on the left side.
So if we inspect the server a bit more,
and this is a list of all the threads that are active in this process. So you guys are a bit slow. I did this talk in Belarus last week and they hacked this whole thing within five minutes. So I am a bit disappointed.
Okay, so this is the evented version. And as you can see, there's only one thread running in this process at the moment. So let's move over to the threaded version.
Sorry guys, you'll have to reconnect actually once I restart. So here again on the left side there's a list of threads
so you can see that it just boots up a thread for every incoming connection. I've tried to measure the difference in performance between the evented and the threaded version, but it's kind of negligible, and I think that's probably because I'm using IO.select instead of an actual proper event system.
So we can't really see the difference in any resource usage here unfortunately. Okay, so finally we'll start the multi-process version.
And this is the one that will break the easiest. That's why I'm doing it as the last one because if my laptop crashes I'm like, the presentation is done so who cares.
So this is the pstree command, which shows you a tree of all processes and their children. You can see here that at the top there's the master process, and then, one step nested into that, we see a bunch of child processes. And, well, somebody knows how to write a loop. Yeah. Let me just see how many processes we have right now.
Oh it's, so there are about 10 people logged into the server at the moment. Yeah, and this concludes my presentation.
So the question is how did I apply this knowledge? And so I work a lot on a gem for Ruby and Rails which is called AppSignal and it's a monitoring gem.
So basically it hooks into the web server, fetches a lot of information, processes it, and sends it back to us. So basically I've been debugging everybody's weird bugs for more than a year, which forced me to learn all this. Well, thank you.