
libuv Cross platform asynchronous i/o


Formal Metadata

Title
libuv Cross platform asynchronous i/o
Number of Parts
611
License
CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Production Year
2017

Abstract
libuv is the platform abstraction layer used in Node, Julia, NeoVim and many other projects. It features an event loop, timers, signals, as well as many cross-platform utilities including threading and name resolution. This lightning talk will introduce libuv and its current development state.
Transcript: English (auto-generated)
Hello everyone, welcome. Saul is giving the next talk about libuv, please welcome him. Thank you. Thanks. So, I'm Saul, or saghul everywhere else on the internet.
I'm one of the libuv core maintainers. Can I get a quick show of hands, who knows about libuv? Alright, so you may learn something, hopefully. For those who don't, it's a cross-platform asynchronous I/O library which does a little
bit more. So, we try to do networking stuff but also other cross-platform stuff that we need. It's relatively small, about 30,000 lines of code without tests, which is not a lot for a C library that tries to do anything and everything. We have an extensive test suite (I went and counted for this); we try
to test everything, and we have a vast CI infrastructure kind of donated by the Node.js project, which makes it robust, but of course I would say that, right? It's designed for C programs, which means the joy of callback hell, like in JavaScript I guess, and it's used by many, many projects these days.
It started off as a way to bring Windows support to Node.js, like, in a good way. So, Windows is a first-class citizen here, in case you want to support Unix and Windows with a consistent API. And we have a wiki link there with all the projects that use it. If you want to use it from programming language X, it's very possible that somebody
wrote bindings for it, because there are many bindings out there. I personally wrote the ones for Python, and that's how I got started with all these things. So, what is it we do in libuv? It's an event loop, a single-threaded event loop; it follows this model, and everything
that surrounds it on this slide is actually tied into this event loop. So we can do timers, signal handling (so there is no problem with where the hell the signals get dispatched), child process management, TTY, TCP, UDP, named pipes, file system operations. That last one is, for example, a thing that you don't typically find in a networking library,
but if you want to do a cross-platform application, you will need to access your file system, and Windows is a mess and nobody knows how it works. So we take care of that for you and you just have to use it. We have some threading utilities and the coolest logo in the open source community.
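To give a taste of that feature set, here is a minimal sketch (not from the talk; the interval, signal choice and callback names are made up) of a timer and a SIGINT handler running on the default loop:

```c
#include <signal.h>
#include <stdio.h>
#include <uv.h>

static void on_timer(uv_timer_t* timer) {
    printf("tick\n");
}

static void on_sigint(uv_signal_t* sig, int signum) {
    /* signal callbacks are dispatched on the loop thread, no surprises */
    uv_signal_stop(sig);
    uv_stop(sig->loop);
}

int main(void) {
    uv_loop_t* loop = uv_default_loop();

    uv_timer_t timer;
    uv_timer_init(loop, &timer);
    uv_timer_start(&timer, on_timer, 0, 1000);   /* fire now, then every second */

    uv_signal_t sig;
    uv_signal_init(loop, &sig);
    uv_signal_start(&sig, on_sigint, SIGINT);

    return uv_run(loop, UV_RUN_DEFAULT);         /* runs until uv_stop() is called */
}
```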
If you approach libuv, I think the best way to approach it is from the outside in. So, in a nutshell, we have three constructs. The biggest is the loop. Everything pretty much runs in the context of a loop; all operations are related to a loop. And it's where all the magic happens, if you will.
And then we have handles and requests. So a handle represents a resource, something that is there to do some job. Let's say a TCP connection. So this is our representation of something: it's a handle. And a request represents an operation that has a start and an end. So, for example, writing on a TCP connection is a request.
We use a request for it, and this way we can know when an operation ends and whether it finished successfully or not. So we have this differentiation, and requests always operate on some handle.
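To make the handle/request distinction concrete, here is a minimal sketch (not from the talk; the address, port and callback names are made up): a uv_tcp_t is the handle, while the connect and write operations on it are requests whose callbacks tell you how they ended.

```c
#include <stdlib.h>
#include <uv.h>

static void on_write(uv_write_t* req, int status) {
    /* status is 0 on success, a negative libuv error code otherwise */
    uv_close((uv_handle_t*) req->handle, NULL);  /* disposing of the handle is asynchronous too */
    free(req);
}

static void on_connect(uv_connect_t* req, int status) {
    if (status < 0) {                            /* the connect request failed */
        uv_close((uv_handle_t*) req->handle, NULL);
        return;
    }
    uv_write_t* wreq = malloc(sizeof(*wreq));
    uv_buf_t buf = uv_buf_init("hello\n", 6);
    uv_write(wreq, req->handle, &buf, 1, on_write);  /* a request always operates on some handle */
}

int main(void) {
    uv_loop_t* loop = uv_default_loop();

    uv_tcp_t handle;                             /* the handle: a TCP socket */
    uv_tcp_init(loop, &handle);

    struct sockaddr_in addr;
    uv_ip4_addr("127.0.0.1", 7000, &addr);       /* made-up address and port */

    uv_connect_t req;                            /* the request: one connect operation */
    uv_tcp_connect(&req, &handle, (const struct sockaddr*) &addr, on_connect);

    return uv_run(loop, UV_RUN_DEFAULT);         /* everything runs in the context of the loop */
}
```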
And then we also have a vast array of other utility functions. Let's look a little bit at a block diagram here. So we do a bunch of stuff, as I said: network I/O related things, file system I/O related stuff, other stuff, and then other
OS-independent stuff, because in some cases we have implemented it in such a way that it's not necessarily tied to the operating system, and then we can reuse it. So, for example, when it comes to the networking I/O part, we have TCP, pipes and TTYs, which we abstract as streams.
So they have a certain API and they behave like streams. They get read callbacks called, you can write to them, send file descriptors over them, and so on. UDP and poll handles also deal with network I/O or sockets, but they are not streams.
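As a sketch of that stream API (assuming an already set-up uv_stream_t*; the function names are made up), reading looks roughly like this, and the same code works for TCP, pipes and TTYs:

```c
#include <stdlib.h>
#include <uv.h>

static void alloc_cb(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) {
    /* libuv asks us for a buffer to read into */
    buf->base = malloc(suggested_size);
    buf->len = suggested_size;
}

static void read_cb(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf) {
    if (nread > 0) {
        /* nread bytes of data are available in buf->base */
    } else if (nread == UV_EOF) {
        uv_close((uv_handle_t*) stream, NULL);   /* the peer closed the connection */
    }                                            /* any other negative nread is an error code */
    free(buf->base);
}

void start_reading(uv_stream_t* stream) {
    uv_read_start(stream, alloc_cb, read_cb);
}
```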
And they are all backed by this internal layer, which is what abstracts us from I/O polling on different operating systems. On Windows, we don't have this because Windows works differently. But on Unix, we have this layer, and then every different Unix system sits on top of it, and we can implement them easily.
For file I/O and related utilities, we have file system requests and work requests, which allow us to take a piece of work, hand it off to a thread, do the work there, and then come back. And we have name resolution functionality as well.
So getaddrinfo blocks, but we run it on a thread and give the result back to you. Just a quick word on threads. We use threads just for file system I/O, not for network I/O. The reason why we do it is very nicely summarized in this blog post by the BitTorrent guys: there is no way to do asynchronous file I/O cross-platform in a reliable way.
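Both of those look roughly like this in code (a sketch; the callback names and host are made up): a work request hands a blocking job to the thread pool and gets the result back on the loop thread, and uv_getaddrinfo does the same for name resolution.

```c
#include <stdio.h>
#include <uv.h>

static void work_cb(uv_work_t* req) {
    /* runs on a thread-pool thread: do the blocking work here */
}

static void after_work_cb(uv_work_t* req, int status) {
    /* runs back on the loop thread; status is 0 unless the request was cancelled */
}

static void on_resolved(uv_getaddrinfo_t* req, int status, struct addrinfo* res) {
    /* also runs on the loop thread; the blocking getaddrinfo() ran on the pool */
    if (status == 0) {
        char ip[64];
        uv_ip4_name((struct sockaddr_in*) res->ai_addr, ip, sizeof(ip));  /* assumes an IPv4 result */
        printf("resolved: %s\n", ip);
    }
    uv_freeaddrinfo(res);
}

void queue_examples(uv_loop_t* loop) {
    static uv_work_t wreq;
    static uv_getaddrinfo_t greq;
    uv_queue_work(loop, &wreq, work_cb, after_work_cb);
    uv_getaddrinfo(loop, &greq, on_resolved, "example.org", "80", NULL);  /* made-up host */
}
```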
Our default thread pool size is four, and let me say it one more time: we don't use it for network I/O. The internet is often wrong, and I've seen many diagrams of people trying to explain what libuv is with incorrect diagrams, with queues and
thread pools and I don't know what the hell. We don't do that. We only use threads for file operations. So it's single-threaded. There is a thread pool, but that's for file operations, and we get the results in the loop thread anyway (see the sketch below). So to the eyes of the user, there is no thread pool. We have other stuff that you can use as well.
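The sketch referred to above: a file operation such as opening a file takes a callback, the blocking call runs on the thread pool, and the callback fires on the loop thread. The path and callback name here are made up; the pool size can be tuned with the UV_THREADPOOL_SIZE environment variable (default 4).

```c
#include <fcntl.h>
#include <uv.h>

static void on_open(uv_fs_t* req) {
    /* runs on the loop thread; req->result is the fd on success, a negative error code otherwise */
    uv_fs_req_cleanup(req);
}

void open_file(uv_loop_t* loop) {
    static uv_fs_t req;
    uv_fs_open(loop, &req, "/tmp/example.txt", O_RDONLY, 0, on_open);  /* made-up path */
}
```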
So we've got timers, some other types of handles that operate in the loop at different points of the event loop execution, and then signals and processes that are operating system dependent, for instance. So how does our event loop run? Well, so we start by thinking, do we have to do anything?
Because if we don't, then we're done. Then we run the due timers. So timers that are due right now: say we scheduled them 20 milliseconds ago and the time has come, so it's time to run their callbacks. And some pending callbacks, we also run them at the top. That is, for example, callbacks that have happened as a result of a write operation; we report the result there.
Then we run other types of handles that are loop watchers, these things that run right before polling and right after polling. When we poll for I/O, all the read and write operations run. And at the end, we run close callbacks. So when you close a handle, when you want to dispose of it, not use
it anymore, this operation is asynchronous. So you call uv_close, and then when the callback hits, you can free the memory. This is because we need to do some work in the background sometimes. Now, I mentioned that libuv came about because Node wanted to use it. So let's have a quick look at how libuv is used within Node.
So the Node event loop, in a simplified way, basically runs timers. Now, the thing is, Node.js, for performance reasons, doesn't use one libuv timer per Node timer. They coalesce them. So they have one libuv timer backing potentially multiple Node timers if they are scheduled at the same millisecond.
So they have different sorts of buckets there. Then we run some pending callbacks. The polling happens, so all the data-received callbacks are fired, and so on, and connection callbacks as well. And then there are two weird things happening.
The first one is setImmediate. So setImmediate runs on a check handle, which is after polling for I/O. So it's called setImmediate, but it doesn't run immediately. Yeah. And then there is process.nextTick, which you probably know, which is supposed to run a function on
the next tick. But what the tick is, nobody knows. And in a nutshell, it doesn't. Those callbacks actually run every single time that we call into JavaScript from the C++ code. There is a helper function called node::MakeCallback,
and it drains a little bit of the callback queue from process.nextTick. It's a little bit counterintuitive, and you should never program or architect your application with any of these in mind. It should be transparent to you. Not like, oh, I'm going to schedule this, and then because the runtime is going to do that, I'm going to,
no, don't do that. Because hopefully one day we will get this sorted out, and then your application will break. So not a good thing to do. So if we look at it from Node's perspective, we follow an onion architecture. So we've got your net sockets wrapping a TCP wrap in C++, wrapping a uv handle in C, wrapping a file descriptor on
Unix, or a handle on Windows. And the idea being that you can happily live at any of these layers; as long as it's above this one, the abstraction level should be high enough so that you don't have any problem there, and you don't need to take into account that Solaris
does I don't know what, and that macOS behaves in some other way. Of course, a good way to learn all this is to write a chat application. Of course, why not? So I wrote one. It's in this repository. I wrote it to show different usage patterns using
libuv. So it's a TCP server. It accepts multiple connections, and each user that joins the room gets a Pokémon name assigned. And the idea is that you can see how the different moving parts work together, and different patterns on how to deal, for example, with memory allocations.
Because we've got little time, I'm not planning on showing it to you here. So the idea is you go and look at it yourselves and let us know if you run into any issues or whatever. Only yesterday I learned about two other applications using libuv while I was in a different dev room.
So, what do you know? If you're already using it, please do come talk to me and let me know. Sometimes problems may happen in your event loop, so I want to give a shout out to everyone in the core contributors team. There are seven of us at the moment, five of us active, and we work on it.
The release cadence is when we feel like it's a good moment to do a release, and sometimes when Node.js asks us, hey, can you please do a release because we want these features in. It's basically demand-driven development: if you want something, you do it, and otherwise, well, things stay as they are. But we're actively working on it and hoping that maybe
this year we can get a 2.0 release, cleaning up some cruft like Windows XP support, which was like 2,000 lines or something. That was very nice to delete. And if you want to reach out, our website is just a quick way to arrive at the others.
So our API documentation is at docs.libuv.org. There's an IRC channel, libuv, also a Google group. And also we are on Stack Overflow. And I believe I have time for maybe one or two questions, if there should be any.
I have a question about comparing with Boost.Asio, because it uses a similar model. What is the difference from the Boost library? Well, so libuv is very small, very self-contained. It doesn't depend on anything.
I haven't used Boost myself other than through other projects, so my perception of it is that it's a big library with different components. libuv, from the beginning, was designed to be a small thing that you could use in any project and that would abstract as much as it can for you. But it's not a kitchen-sink solution.
We're actually thinking about creating a new project called libuv extras, where we're going to add some more stuff that doesn't belong in core but can be useful for some people. This one is also written in C, so there's also that difference. But in a way, they solve the same problem. So you want to do some cross-platform networking
operations, and also file system utilities, and abstracting all this is our job as well. So I would, of course, say libuv is easier to use, but that's what I know. And yeah, so that's pretty much the answer.
They solve the same problem, but they are different. Anything else? Would you recommend libuv for a very multithreaded application?
Well, so as I said, the event loop is single-threaded. But the event loop is the context. So you can essentially run multiple threads, with a loop on each of them, as long as you don't call across them, because our API is not thread-safe. That's fine; many projects do this.
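A minimal sketch of that pattern (one loop per thread, each thread only touching its own loop; the function names are made up):

```c
#include <uv.h>

static void thread_main(void* arg) {
    uv_loop_t loop;
    uv_loop_init(&loop);
    /* create the handles owned by this thread/loop here */
    uv_run(&loop, UV_RUN_DEFAULT);   /* returns once the loop has nothing left to do */
    uv_loop_close(&loop);
}

void start_workers(uv_thread_t* tids, int n) {
    for (int i = 0; i < n; i++)
        uv_thread_create(&tids[i], thread_main, NULL);
    for (int i = 0; i < n; i++)
        uv_thread_join(&tids[i]);
}
```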
So for example, if you want to do a model similar to what NGINX does, so multiple processes and then some event loops as well, you could do multiple event loops on multiple threads, and that's perfectly fine. So I think that's all. A very quick last one.
I don't know if the alarm clock wants to allow it. I don't want the clock to blow up on me. Since we all like swag here, I'm going to drop some things on the table right outside, FYI, after I leave, in case you're interested. Hello, yes. I would like to ask about libuvxx, whether it's official.
libuvxx? There's a C++ wrapper. Oh, so no. Basically, libuv is written in C, and that's the one and only official thing there is. However, of all the projects out there that wrap it or don't wrap it, some of them are close cousins, let's say. So for example, I co-maintain the library, and
I also wrote the Python bindings. But they're not official, let's say. We don't bless any binding. Here goes the alarm clock. Thanks a lot, Saul, for the talk.