
Writing Redis in Python with asyncio


Formal Metadata

Title
Writing Redis in Python with asyncio
Part Number
96
Number of Parts
169
Author
James Saryerwinnie
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
James Saryerwinnie - Writing Redis in Python with asyncio In this talk, I'll show you how to write Redis using asyncio. You'll see how you can create a real world application using asyncio by creating a Python port of Redis. ----- Python has been adding more and more async features to the language. Starting with asyncio in Python 3.4 and including the new async/await keywords in Python 3.5, it's difficult to understand how all these pieces fit together. More importantly, it's hard to envision how to use these new language features in a real world application. In this talk we're going to move beyond the basic examples of TCP echo servers and example servers that can add numbers together. Instead I'll show you a realistic asyncio application. This application is a port of Redis, a popular data structure server, written in Python using asyncio. In addition to basic topics such as handling simple Redis commands (GET, SET, APPEND, etc.), we'll look at notifications using pub/sub, how to implement the MONITOR command, and persistence. Come learn how to apply the asyncio library to real world applications.
Transcript (English, auto-generated)
I'm happy to introduce James Saryerwinnie. He's a software engineer working at Amazon Web Services and he will present a talk with the title, Writing Redis in Python with asyncio. Give him a hand please.
Hi everyone, welcome. Thanks for being here today. So as mentioned, my name is James Saryerwinnie and today I'm gonna be talking about writing Redis in Python with asyncio. A little bit of background, the reason I started this project in the first place was really I just wanted to learn more about asyncio. I had heard a lot about it, everyone was talking about it,
I'd read the docs, but I didn't really understand how we could use this to write something a little more realistic. And at the time, I was familiar with Redis and I thought, you know, Redis has some features that look like they would be a really good fit for asyncio, so I wanted to see how far I could take the idea. So I started exploring writing Redis in Python with asyncio.
And the reason I wanna share this today is because I think there's some useful things that everyone here can take away from it. I wanna show how you can structure a larger network server application, which wasn't very obvious to me from reading the asyncio documentation. And then I wanna show you a couple of patterns
that happen in Redis that I think apply in general to various types of network servers. So the basic request-response structure, how that looks, and then also a couple of other ones that are interesting: publish/subscribe, and then blocking queues. So if you're familiar with Redis, this would be BLPOP and BRPOP, the blocking left pop and blocking right pop. By the way, if you're not familiar with Redis,
that's okay, I'm gonna be explaining all of the specific features we're gonna be implementing, so don't worry about that. But I think that these patterns apply to, say, a chat server or to any kind of task queue or job queue of your own, and these would be generally applicable. Before we get started though, before we dive into it,
I just wanna give a little bit of a disclaimer. I'm not an asyncio expert. As I mentioned, I wanted to learn more about it. So a little bit about me: I work at AWS, and these are the libraries I primarily work on or have written a substantial portion of. The point of this, though, is that my area of expertise is in Python libraries and supporting multiple Python versions, that kind of thing.
So writing an asyncio network server that only runs on the latest version of Python is not something that I would consider myself an expert in, so I'm just a beginner trying to share what I have learned. So with that warning, as much as I'm gonna try to shoot for this, which is probably what the experts and the asyncio maintainers would want us to do,
there's just a chance you might end up with something like this or something crazy like this. So just as a warning. Okay, so the one minute introduction to Redis. It is a data structure server, so it's a network server. You run it on a machine, you connect to it via sockets and you send requests, it gives you responses. The one thing that's kind of interesting is values
can be more than strings. So you have your basic case here where you can set, you know, foo and bar and you get a value back and then I can also get a value and it'll give me that value back. But in addition to that, I can also have lists. So in this case, I can say rpush, which is a right push onto the foo list, the value of a. So this is like foo.append a in Python
and I can do that for three elements, a, b, and c. And then on the right hand side, I can say lpop, which is left pop from foo. That's gonna return me the first element, a. And then I can do lrange from foo zero to two, which is basically a slice. So that's gonna give me the elements from zero to two, which in this case is b and c because we popped a previously.
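Those list commands map closely onto plain Python operations. As a rough analogy (not server code; on the server the keyspace would be a dict of names to lists):

```python
# The Redis list commands above, in plain-Python terms.
foo = []
foo.append('a')      # RPUSH foo a
foo.append('b')      # RPUSH foo b
foo.append('c')      # RPUSH foo c

first = foo.pop(0)   # LPOP foo: pops 'a' from the left
rest = foo[0:3]      # LRANGE foo 0 2: a slice, but with an inclusive stop
```

Note that LRANGE's stop index is inclusive, so `LRANGE foo 0 2` corresponds to the Python slice `foo[0:3]`.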
So that's the first thing I wanna shoot for. And I'm gonna look at how we can do a basic request response. So at the end of this first section, I wanna be able to send a get request, interpret it, figure out how this is gonna work and then send a response back. So I did, I think, what most people would do: start with the documentation. If you haven't seen it, this is the asyncio docs.
You start at section 18.5.4.3.2, the obvious place to start, which is the TCP echo server protocol. And so I changed one line here, but this is essentially straight from the docs. And by the way, I'm gonna be showing a lot of code here. I'm gonna highlight the specific parts that I think are interesting,
but it's okay if you don't follow or understand all of this. I'll put the slides online so you can look at it in more detail later. But there's three parts to setting all this up. The first thing we're gonna do is create an event loop. We're going to then call create_server and give it the Redis server protocol. So this is the class we're going to write. And this is going to give you all of the logic for handling the Redis protocol parsing,
figuring out how to call into our database, all that kind of stuff. And then this will set up the listening sockets. The next thing we need to do is call run_forever. This will wait for network IO and then call the appropriate things in our protocol. And we'll look at that in a second. And then the last thing we're gonna do is just clean up. So once we want to exit, we close down the server and call loop.close.
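The three setup steps might look like the sketch below. The protocol body here is a placeholder (it just answers +PONG to everything) standing in for the real Redis server protocol developed later in the talk:

```python
import asyncio

class RedisServerProtocol(asyncio.Protocol):
    """Placeholder protocol so the server scaffolding runs."""
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        self.transport.write(b'+PONG\r\n')

def main(host='127.0.0.1', port=6379):
    # 1. Create an event loop and set up the listening socket.
    loop = asyncio.new_event_loop()
    server = loop.run_until_complete(
        loop.create_server(RedisServerProtocol, host, port))
    try:
        # 2. Wait for network I/O; the loop calls into our protocol.
        loop.run_forever()
    finally:
        # 3. Clean up: close the server, then the loop.
        server.close()
        loop.run_until_complete(server.wait_closed())
        loop.close()
```

This follows the pre-3.7 loop-management style the talk uses (create a loop, run_forever, close); newer Python would wrap this in asyncio.run instead.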
But I want to look at this Redis server protocol. I should mention real briefly that there's, I think, two other ways you can do this. You can use the low-level socket methods on the asyncio loop. And you can also use streams. Streams are a little bit higher level. What I found is streams, while they were really nice to use, the performance for me was substantially slower.
And I think protocols for me was a nice middle ground between still being fairly expressive but having pretty good performance. I'll show some performance numbers at the end. So here's a protocol. So the idea is you have connection_made and data_received. You get a transport in connection_made. And then when data is available, data_received gets called.
So I looked at this and I kinda understood what it was doing. I think that there's some history there from some of the other frameworks that influenced this. But I still didn't really get how protocols worked. And for me, going one level deeper into asyncio and trying to understand how this works really helped illustrate it for me. So I'm gonna go kind of briefly into asyncio code
to understand how protocols and transports work because that's really gonna drive a lot of this implementation. So first thing, start with the Redis server protocol. And in this loop, what's gonna happen is whenever a connection's made, asyncio, notice in the top right, this is asyncio/selector_events.py. So what's gonna happen is you have a callback. Whenever a connection's made,
you call _accept_connection2. And what's gonna happen here is, notice a protocol and transport are created for every connection. So every time you have a client connection, it's gonna create a protocol and a transport. And this part here with the protocol factory, that was the Redis server protocol class we passed in. And notice it was a class object, not an instance.
So it's gonna call our class and that's going to return an instance. But the main thing here is that there's one protocol and transport per connection. So if I had three clients connected, I would have three transport-protocol pairs. And the next thing now is, let's see how connection_made and data_received are gonna be called. So if I look at, this is the selector socket transport
still in the asyncio code. When you create it, there's this first line here, which is self._protocol.connection_made. connection_made is this thing that we're going to write and we can see how it's gonna be called. The loop.call_soon, I found that kind of confusing. It's really just loop, schedule this thing to be run, or loop, add this to your to-do list of callbacks. But essentially what this is asking is for this method to be called
and then the arguments to pass. So notice that the last argument is self and the class we're in is a selector transport. So we'll see that again when we go back to our protocol class. The next thing here is the _read_ready method. So again, another callback. And look at the last argument, it's _read_ready. And we'll dive into what _read_ready is, still in the asyncio code again.
Just kind of showing you some highlights here. And the main part of this is this protocol.data_received. Again, this is the method that we're going to write. So what's happening here is whenever there's data on a socket, we'll read from it via socket.recv and then call data_received. So the main thing here is that these are just callbacks. So they're not coroutines, they're not anything fancy
like that we just saw. They're just simple callbacks that get scheduled whenever a new connection is made. And so if an event loop was like this, going from left to right, if I had four connections, as data comes in, I'm going to call data_received on those protocols. So hopefully that gives you an idea of how protocols and transports work. So we can start fleshing out
the basic get and set response here. This is how a more realistic data_received might look. So the first thing we're going to do is we're going to get data over the wire. We're going to call into our parser to parse the wire protocol. I'm simplifying things a bit and I'll discuss it at the end, but we're going to parse the wire protocol. And so this will give us, say, a list where the first element is the command set,
next one is a key, the next one is a value. And then once we do that, we're going to look at what command we were provided. And then after that, we'll call into the db layer. So we'll say self.db.get and self.db.set, and we'll look at what that looks like. And then finally after that, we're going to take our response, ask our serializer to serialize it to bytes, and then use transport.write.
So a transport is really just an abstraction over a socket. It allows you to write data back to the client. So whenever we have something to send to our client, we say self.transport.write. And so this is the basic overview for the first part. And also notice in connection_made, I should mention, we're storing the transport. So that's how we're able to write back to that client. So it's one per connection.
And then just to give you some concrete data, this is, you don't have to know what this is, this is the Redis serialization protocol. So it's a text-based format. This is just how it would actually look. We'd get data over the wire that looks like this. Star is like the list type and it's three elements. And you can kind of see there's a set foo bar there. And then this is what we would return. So just to give you an idea of what we're actually sending back over the wire.
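Putting the pieces so far together, here is a minimal sketch of the protocol handling GET and SET over a simplified RESP parser and serializer. It makes the same simplifying assumption as the talk: each read contains exactly one complete command. The helper names (parse_command, serialize) are illustrative, not from the actual project:

```python
import asyncio

def parse_command(data):
    # Parse one complete RESP array of bulk strings, e.g.
    # b'*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n' -> [b'SET', b'foo', b'bar']
    lines = data.split(b'\r\n')
    num_elements = int(lines[0][1:])            # b'*3' -> 3
    # Bulk strings alternate: $<len> line, then the payload line.
    return [lines[2 + 2 * i] for i in range(num_elements)]

def serialize(value):
    # Serialize a reply as a RESP bulk string, or nil for missing keys.
    if value is None:
        return b'$-1\r\n'
    return b'$%d\r\n%s\r\n' % (len(value), value)

class RedisServerProtocol(asyncio.Protocol):
    def __init__(self, db=None):
        self.db = db if db is not None else {}

    def connection_made(self, transport):
        # One protocol/transport pair per client connection.
        self.transport = transport

    def data_received(self, data):
        parts = parse_command(data)
        command = parts[0].upper()
        if command == b'SET':
            self.db[parts[1]] = parts[2]
            response = b'OK'   # real Redis replies +OK; simplified here
        elif command == b'GET':
            response = self.db.get(parts[1])
        else:
            response = None
        self.transport.write(serialize(response))
```

A real server would buffer partial reads and feed an incremental parser, as discussed near the end of the talk.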
Okay, next thing, the db class. So this is where all the logic happens for the Redis commands like get and set and all the list manipulation. As you can see here, it's just essentially an abstraction over a dictionary. But the main thing here is that the db is a separate module, so it doesn't know anything about asyncio, and I found that really helpful. So you can imagine how you'd write a unit test
for this, right? You create a db, you call set foo bar, and then assert db.get is gonna equal bar. So very straightforward to test. Here's how some of the list commands would work. Again, just hopefully you can kind of get the gist of this, right? For rpush, we're essentially gonna manipulate a list. For the lrange we looked at, it's looking up that key and then slicing based on the start and stop.
And the same thing with lpop, or popping from the front of the list. By the way, use collections.deque so you get the constant-time pop left. But just as an example, we're gonna use a list here. And you can see how you can start to integrate that into the protocols here. So realistically, you'd probably extract this into a command handler class. But here I'm just adding if-else statements here that are gonna say, if it's an rpush,
we're gonna figure out how to construct the appropriate call into the database layer. And we keep that nice separation between the async code and our logic. Okay, so at this point we've covered the basic stuff. We know how to respond to a get and a set and a list command. You could use this to flesh out the other types. So sorted sets, hashes, all that kind of thing.
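The db module described here might be sketched as follows. It knows nothing about asyncio, which is what makes it easy to unit test, and collections.deque gives the constant-time left pop mentioned above:

```python
from collections import deque

class DB:
    """Asyncio-agnostic data layer: a dict of keys to values."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def rpush(self, key, value):
        # Like foo.append(value); create the list on first push.
        self._data.setdefault(key, deque()).append(value)

    def lpop(self, key):
        # deque gives an O(1) pop from the left end.
        return self._data[key].popleft()

    def lrange(self, key, start, stop):
        # Redis LRANGE has an inclusive stop index.
        return list(self._data[key])[start:stop + 1]
```

Testing it needs no event loop at all: create a DB, call set, assert on get.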
You can get a lot of the commands implemented this way. But now I wanna cover two more interesting cases that I think took me a while to figure out. So the first one is publish subscribe. What we wanna go after here is, we're going top down. So we have a client that connects and says they wanna subscribe to a channel called Foo. And we have another client that wants to subscribe.
And what's gonna happen is, at some later point, if another client comes along and says, I'm gonna publish on the Foo channel this message, we wanna be able to write that message to every subscribed client for that channel. And then just to give you something a little more concrete here, it's that same example that's actually using real Redis and the real redis-cli. So what you'll see here is I'm gonna subscribe
to the same channel in two clients. And then from the third client here, I'm gonna publish. And the main thing I want you to see is when we publish in the bottom thing here, you should see the top two get the message. And that's what we're shooting for. So we're gonna publish two messages. Here's the first one. You can see how both got that message and then we do it again.
And so then they both get the next message. So that's ideally what we're trying to implement here. So how do we do this? If we remember that we had one transport and protocol per connection, we need something a little bit different. We need to be able to, from a given transport protocol pair, somehow communicate to other transports that are interested and write data to them. And what we're gonna take advantage of
is this protocol factory. So I showed you some asyncio code where we were doing protocol factory and instantiating it. We're gonna take advantage of that in a second. But the actual PubSub part is pretty easy. We're gonna create a new class and whenever you call subscribe, we just have a dictionary of the channel name and a list of transports. So you give it your transport when you wanna subscribe to something.
And then the publish is equally straightforward. You look up the channel, you have a list of transports, and for every transport in that list, you just call transport.write. And hopefully that's pretty straightforward. There's not really a lot of async code there. It's actually pretty simple. And the way we integrate this, same thing, we're going back to our Redis server protocol. So in this command here,
we're just calling subscribe on our PubSub object and notice we're passing our self.transport. And then same thing with publish. So this part's pretty straightforward. The thing that was kind of tricky is how do we actually get all of this stuff wired up together? And so you remember this very first line we looked at, which is we're passing a class object here, not the instance. We're gonna change that slightly into a protocol factory.
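A sketch of the shared PubSub object as described: just a dictionary mapping channel names to lists of subscribed transports, with no asyncio-specific code in it at all:

```python
class PubSub:
    """Shared pub/sub manager: channel name -> list of transports."""
    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, transport):
        # A client hands over its transport to subscribe to a channel.
        self.channels.setdefault(channel, []).append(transport)

    def publish(self, channel, message):
        # Write the message to every transport subscribed to the channel.
        for transport in self.channels.get(channel, []):
            transport.write(message)
```

Because the PubSub object, not the transports, tracks who is interested in what, the logic stays simple and easy to test with fake transports.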
And now we're gonna pass the factory to create_server. And all this is gonna do is just store a reference to the class. And it's gonna store args and keyword args to pass. So this is also basically like functools.partial, if you're familiar with that. This is just kind of making this a direct concept in the code. So now, whenever this gets called, we're able to pass in a shared object.
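The protocol factory might be sketched like this. As noted, it's essentially functools.partial made explicit: create_server calls it once per client connection, so every protocol instance can receive shared objects such as the PubSub manager:

```python
class ProtocolFactory:
    """Stores a protocol class plus the args/kwargs to construct it with."""
    def __init__(self, protocol_cls, *args, **kwargs):
        self.protocol_cls = protocol_cls
        self.args = args
        self.kwargs = kwargs

    def __call__(self):
        # Called by create_server for each new connection: build a fresh
        # protocol instance, passing along the shared objects.
        return self.protocol_cls(*self.args, **self.kwargs)
```

Every call produces a new protocol instance, but the objects passed in (the db, the PubSub manager) are the same ones every time.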
And so the thing that was useful for me here is that instead of having transports know about other transports and having to coordinate all that, you just tell the PubSub object, and the PubSub manages which ones it needs to call out to, and so that again keeps your logic very simple, very easy to test. So that's publish/subscribe. Last thing I wanna look at, that's probably the most interesting one, is blocking list pops.
So this is what we're shooting for. I'm gonna call blpop, which is a blocking left pop on the foo key, and zero is just a timeout, which means wait forever. So if I do that with two clients, notice I don't get a response right away. But now if another client comes along and does an rpush on the foo list, I'm gonna pick that value, the bar value, and send it to whichever client's been waiting the longest.
So this is essentially like a queuing system, right? And again, just to give you something more concrete, we're gonna do the same thing here. Create two clients. We're gonna have both of them block and we're gonna publish two messages and this time you'll see one client gets the first message and another client gets the second message.
So hopefully you get what we're shooting for. We only wanna give it to one. So now we need to manage which one's been waiting the longest. So how do we do this? This one also took me a little bit of time to figure out. Wasn't intuitive to me, but we're gonna use the same idea, where we're gonna use a shared object. So this is a key blocker object.
And the way we do this, we'll start at the bottom and kinda work our way up. So remember that database object, which is not supposed to know anything about asyncio. So what I did here was the same idea: if there's something in the list, we don't have to block, we return right away. But if there's nothing in the list, then what we need to do is block. And instead of having the database object know how to block
and start integrating and coupling asyncio code, it just returns some sentinel value that says, you have to wait, right? I don't have any data available. Whoever's calling me needs to figure out how to wait. And if you go up one level of the stack, we can see in our data received, same kind of thing here. We're gonna pass in our database and our key blocker and our loop. But here we're gonna call blpop, same stuff as before.
But if we get something that indicates we need to wait, this is kind of the new part where we're gonna look at asyncio in a way here. So I'm gonna call wait_for_key on this new key blocker object. I'm gonna get this thing back, really it's a coroutine. And then we're gonna say loop.create_task with this coroutine. So remember earlier I showed you
how data_received was a callback. And one of the important things there is that you cannot block it. If you block that method, you block the entire event loop, right? And everything stops. So the best that we can do if we wanna block for something is create a new task. So if you're familiar with threads or something, conceptually, create a new thread and ask the event loop to run it. And that thing can block. So that's essentially what we're doing here.
We're creating a new coroutine and we're asking the event loop to run that in its loop. Okay. And then now that we have the corresponding blocking part, we need the push part. So whenever data comes in from an rpush, we're gonna tell the key blocker about it. That's this part here where we're gonna say create a new task with data_for_key. And so now let's look at how the key blocker looks.
And so there's a couple of new things here. There's the new async and the new await stuff. And we'll go over how this works. So the first part is the async def wait_for_key. This is new to Python 3.5. This is creating a native coroutine. And the important part here is that we're gonna use an asyncio.Queue. So if you're familiar with the queue.Queue
that's used in a multi-threaded scenario, it's kind of the same idea. And essentially what we're gonna do is block. So we're gonna say value equals await queue.get. And what this is gonna do in this coroutine is it's gonna sit here and wait until there's actually data available. And because we're using the asyncio queue, which gives us FIFO semantics, it will unblock the one
who's been waiting the longest, right? First in, first out. And so once we get our value, we can then do the transport.write for only that single transport. And then we have a corresponding thing for data_for_key. We don't technically need a coroutine there; there's a queue.put_nowait. But I just wanted to show how you could also do that.
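A sketch of the key blocker described above, using the wait_for_key and data_for_key names from the talk (the internal structure here is an assumption): one asyncio.Queue per key, so the FIFO ordering of queue.get() wakes the client that has been waiting the longest:

```python
import asyncio

class KeyBlocker:
    def __init__(self):
        self._queues = {}

    def _queue_for(self, key):
        # One FIFO queue of pending values per key.
        return self._queues.setdefault(key, asyncio.Queue())

    async def wait_for_key(self, key, transport):
        # Suspends this coroutine (not the event loop) until data arrives.
        # Scheduled from data_received via loop.create_task(...).
        value = await self._queue_for(key).get()
        transport.write(value)

    async def data_for_key(self, key, value):
        # Called when an rpush comes in and someone may be blocked waiting.
        await self._queue_for(key).put(value)
```

As the talk notes at the end, waiting on several keys at once (BLPOP foo bar) can't be done with a single queue like this; that's where futures come in.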
Okay, and I found async and await, while conceptually it's kind of easy, you just put await where you would wanna block normally, I wanted to know kind of how that worked. So here's a very, very high level overview of how async and await work. So what's gonna happen is we did the call_soon, right, for our task. So we have wait_for_key happening in the event loop. So the event loop comes along and says, all right, we're gonna call this method.
And then remember, we called queue.get. And one of the things that I realized was that whenever there is an await, somewhere deep, buried in the guts of asyncio, there is a yield. That is the only way to stop Python code from executing and essentially save the state so you can resume it with a coroutine.send, right? So there is a yield somewhere deep in asyncio.
And at this point, we go all the way back up thanks to the magic of yield from, and we have this object, a future. It doesn't really matter, that's kind of just what it's called. But what happens at this point is this is essentially frozen. So the event loop knows about this call stack and it's frozen. And then something else comes along when data actually is available for the key. And we say data_for_key, which is gonna do the queue.put.
And that queue.put is then gonna have a value associated with it. And again, I'm kind of glossing over the details here to simplify things. But what's essentially gonna happen is this is going to unlock the future. So this is gonna say that this future is done, which will then schedule it to be run in the event loop. Then this value here goes over to the yield and resumes.
And then now you get that value back from your queue.get. So at a very high level, you can kind of see how that would work with this async and await here. That's essentially what's happening here. Okay. So that is basically how you would do blocking list pops. You do the same thing for BLPOP and BRPOP, from the end of the list. There are a few additional considerations.
There are things that I didn't really have time to cover that actually changed a lot of how the internals work. So first thing, the real parsing is more complicated. So I made a big assumption that we're getting all of the data as a single request. Realistically, you would get partial data, you would feed the parser, and it would tell you when it had a complete command
for you to run. And you could potentially get more than one command if you're doing pipelining or that kind of thing. Pub/sub can handle clients disconnecting, right? So there's also another method on the protocols that we didn't look at, connection_lost. And that's how you can handle disconnecting. And there's also some advanced pattern matching, that kind of stuff. And then the last thing is that blocking queues
can actually wait for multiple keys. So I could say BLPOP foo and bar, and then when data's available on either of those, it will unblock. And so for that, I couldn't use an asyncio.Queue. We actually had to go a little more low level and use asyncio futures. Okay, last thing, performance. I was curious. So there's a redis-benchmark program you can run
that comes with Redis. These are the parameters I used, so just the basic set and get. And on my laptop, with Redis server, we got about 82,000 requests per second. For PyRedis server, I got 24,000 initially. I was at Yury's talk where he was talking about uvloop, so I thought, let's try it out, see how well it does. And just plugging in uvloop with no changes
brought it up to 38,000 requests per second, so I thought that was really cool. We're a little more than two times as slow, with very unoptimized code. When I profiled it, it was mostly in my parsing code, which can be optimized quite a bit. It's inefficient how I'm doing parsing now, but I thought that was really cool, so we're pretty close.
So just a summary of what I covered here so far. Looked at transports and protocols, hopefully showed you a little bit more in-depth how they work, how they pair to a single connection. And then we looked at request response and then some other patterns for how we can share state or how we can communicate state across various transports. So looking for publish, subscribe, and then blocking queue-like behavior.
So I'm gonna put these slides online. I don't know where they are, where they're gonna be yet, and I'll put the code online as well. I don't have links, so I guess for now, I'll tweet them out eventually. So that's probably the best place for more information. But once again, thank you.
Thank you very much, James, for this talk. We'll take a few questions. Hi, great talk. Why did you write your own parser instead of using a pre-existing parser? There's, for example, the hiredis parser, or is there a C interface to the C part of Redis?
Good question. Mostly just to see what the overhead would be, just kinda learn more about, I mean, mostly this was a project to learn more about how Redis would work and how to implement it, but I think if I was actually going for performance, that would be the next step, is either try to clean up the parser myself
or to just use hiredis, which is the C library, to do the parsing for me. And I think that would get pretty close performance to the real Redis. So yeah, that's something I'd wanna look into. Okay, some more questions.
Hi, great talk. Did you make the tests with Redis with persistence or not? Oh, are you asking about if we persist the data to disk like Redis? Yes, yes. So that's not something I'd looked at. I don't actually know the best way to do that. So with Redis, it just does the fork and exec
and then writes it in the background. That would be my first attempt at doing something similar: try just the fork-exec, figure out the RDB serialization, and see how well that works with asyncio, but I haven't actually tried it. Okay, thanks. Okay, one more, one last question.
There it is. And what do you think about asyncio? Did you like it, the experience, or what's your conclusion? So I think one of the biggest problems for me
was just that it was hard to figure out how to do things, and I think, you know, not coming from a background of writing a lot of asyncio code, I think the docs, and I think I heard this mentioned earlier, you know, the docs need a little bit of work. I could understand how the building blocks worked and how coroutines worked and tasks and all that, but I didn't really get how to fit things together, and I think just having more examples, having more documentation about that would really help.
As far as the internals, I found it a little confusing. For the people familiar with asyncio, how you have futures and tasks, which really should be called coroutine drivers, and how that schedules things, and you get futures that then have callbacks and the way that kind of works, I found that a little bit hard to understand. I've been looking at some of the other frameworks. Curio was one
that seemed a lot simpler for me to understand, but I think for me the biggest problem is just having more examples of how to do things. Otherwise it's a great framework. So thank you very much for your talk again. Give him a hand. Thank you.