PEP 683: Immortal Objects - A new approach for memory management
Formal Metadata
Number of Parts: 131
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/69486 (DOI)
Transcript: English (auto-generated)
00:04
My name is Vinicius, Vinícius Gubiani Ferreira. I work at Azion Technologies; it's a serverless edge computing company. I just became a tech manager at Azion, but before that I worked for two and a half years as a QA team lead, and before that for about three-and-something years
00:23
as a Python back-end developer with Django. That picture with the guy with the scissors in the grass says a lot about how I feel about code quality; that's why I changed into a QA team lead position. And I also like to work on some open-source projects,
00:40
the most amazing project I worked on being the translation of the Python documentation to Brazilian Portuguese. Among my hobbies, I like craft beer, Weiss being my favorite, and riding a bike around the park. I don't even own a bike myself; I just rent one. So, our schedule for today: we're going to talk about memory, of course,
01:01
like Python objects, garbage collection, the GIL, threads, processes, and lots of related things. Then we're going to discuss this PEP, why it was actually created, the problems it was proposed to solve, then the hard parts about implementing this PEP,
01:21
and after that, a bit about where this PEP was actually suggested to be used, considered, or can be used in the future. All right, so let's get started. Memory, from the very beginning. Everything we see in Python is an object, and I'm going to quickly demonstrate that by using Python, of course.
01:43
If we ask Python whether the number 42, the meaning of life, the universe, and everything, is an object, the answer is going to be yes. And if we do the same thing for other stuff, such as an empty list, a type such as float, and even a method, they are also going to be objects.
02:03
The answer is also going to be yes. We can even go further into different kinds of callables, such as anonymous functions (lambdas); even the isinstance built-in, if you pass it to isinstance itself, is also going to be an object. A minimal sketch of these checks is below.
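As a rough reconstruction of the live demo (the transcript does not include the actual REPL session, so the concrete examples here are assumptions), the checks could look like this:

```python
# Everything in Python is an object: numbers, containers, types,
# methods, lambdas, and even isinstance itself.
print(isinstance(42, object))           # True
print(isinstance([], object))           # True
print(isinstance(float, object))        # True
print(isinstance("".join, object))      # True (a bound method)
print(isinstance(lambda x: x, object))  # True (an anonymous function)
print(isinstance(isinstance, object))   # True (the built-in itself)
```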
02:20
But what exactly is an object anyway? Without diving into a ton of C code to actually explain it, it boils down to something like this. Let's say we have a piece of code such as a = 50; then we have to think of it as a name, in this case a, pointing to a value, in this case 50. And it boils down to three things,
02:40
a type, in this example an integer; a value, 50; and a reference count, how many things are actually pointing to this value of 50, in this example just one. And just in case you actually want the nightmare version of what I just explained, you can follow the link below, which is going to take you to the CPython implementation,
03:00
and you're going to have an awesome time, just in case you actually like C. And let's go a bit further than that simple code. Let's say now we actually have three assignments, such as a = 50, b = a, and c = 50. If we use the id() function to check the memory address of what we just wrote,
03:21
then we're going to notice something rather interesting: it's the very same memory address. So what exactly is going on over here? It turns out that Python didn't really create three new objects; it just reused an existing object and bound the names a, b, and c to the very same object. So now the reference count,
03:42
the number of things that are actually pointing to this value of 50, is now three. We can also use the is operator to make sure that it is indeed the very same object, as sketched below.
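A minimal sketch of that id()/is experiment (the concrete value 50 is taken from the talk; the printed addresses will of course differ on every run):

```python
a = 50
b = a
c = 50

# CPython caches small integers (-5 to 256), so all three names end up
# bound to one and the same object and id() reports the same address.
print(id(a), id(b), id(c))  # three identical numbers
print(a is b is c)          # True: one object, three references to it
```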
04:00
Now let's go the other way around. Instead of incrementing the reference count, let's try to decrement it. Let's say we set a to None and b to False, and once again check the memory addresses. Now a, b, and c are going to be three different objects, each with a reference count of one. In practice, the counts for a and b won't actually be one, and we're getting to that two or three slides ahead. And let's go a bit beyond: let's try to make the reference count reach zero. So let's say we explicitly use
04:24
the del statement to remove the reference. When we try to check the memory address for a, b, and c, the check for c is actually going to fail, because the name c no longer exists. And since nobody is pointing to the value 50 anymore, the reference count is going to be zero, and the garbage collector is going to kick in
04:42
and claim back this memory. Garbage collection is the mechanism inside Python which grabs the memory and returns it to the operating system. It kicks in in one of three common ways: when we're reassigning a variable, when we're going out of scope, such as exiting a method, or when we explicitly use the del statement
05:01
over here like we did. And here's some simple code that you can actually use to check how many references you have to your objects. You can also use the sys.getrefcount function, but I actually prefer this piece of code, because sys.getrefcount itself creates a reference
05:20
and gives you a number that is off by one; you have to decrement it by one yourself. And as I mentioned before, if you try to do this with None, False, and some other objects that we'll check later on, we're going to get interesting, unexpected values. This is because the CPython interpreter, when it is loading up, sets a lot of references to values that are commonly used, like None, False, and many others. A sketch of such a refcount helper is below.
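The exact snippet from the slide is not in the transcript, so this is only a guess at what it may have looked like: a small helper that corrects for the extra reference sys.getrefcount() creates, plus the CPython-specific trick of reading the ob_refcnt field directly through ctypes:

```python
import ctypes
import sys

def refcount(obj):
    # sys.getrefcount() counts its own temporary reference too,
    # so subtract one to get the number the caller actually cares about.
    return sys.getrefcount(obj) - 1

def refcount_raw(obj):
    # Peek at ob_refcnt, the first field of the underlying PyObject.
    # CPython-specific and for demonstration purposes only.
    return ctypes.c_ssize_t.from_address(id(obj)).value

x = 50_000                           # a value outside the small-integer cache
print(refcount(x), refcount_raw(x))  # the two helpers should agree
print(refcount(None))                # a surprisingly large (or fixed) number
```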
05:42
Great. Now, let's say we actually want to speed up our code as much as we can. To achieve that, we have mostly two options: threads and processes. Threads are very lightweight and easily share memory. Processes are a bit bulky,
06:02
have a hard time sharing memory, but are more isolated. There are other ways, such as asyncio and distributing jobs across the network, but they mostly boil down to these two: threads and processes. And what's the problem with sharing memory when creating threads and processes?
06:20
Since Python's reference counting is not inherently thread-safe, it is possible for a mistake to happen in the reference count field. And if this reference count field never reaches zero, we have a memory leak: memory that is never returned to the operating system, because the count never reached zero and the garbage collector is going to keep ignoring that object.
06:41
All right, so after this introduction, now let's talk about the PEP itself. PEP 683 was actually created by engineers at Meta, inside Instagram, and they had three very interesting reasons to create this PEP. They had issues with CPU cache invalidation
07:01
that they actually wanted to avoid. They had issues with data race conditions that they also wanted to avoid. And finally, most interesting, they had issues with copy-on-write, or just COW for those familiar with the term, that they wanted to reduce as much as possible. So let's go through each one of them individually.
07:23
So, cache invalidation. In order to improve the performance of the code that's running, the CPU likes to keep the objects it is using in the memory closest to the hardware, the L1 and L2 caches, which are faster to access.
07:41
So it goes to main memory, to the SSD, to the hard drive, fetches that data, and keeps it in the cache memory. However, when we change something in that memory, such as the reference count, the cached copy is invalidated. So you lose the cache entry, and the CPU once again needs to go back to main memory
08:01
to fetch that data again. It doesn't sound like a lot, but if you do that quite often, you pretty much end up wasting CPU, and therefore time, in the execution of your program. And for things that are supposed to be constant, whose value didn't really change, this is kind of a bummer,
08:20
because the value didn't change, only the reference count did, but it ended up invalidating the cache. If we had truly immutable objects, this wouldn't be a problem. Another thing they wanted to work on was dealing with data race conditions, which is something that can happen with threads.
08:41
To explain a data race condition, let's first discuss the happy scenario. Let's say we have an object which is being shared with two different threads; they're working with this object, and then they don't need it anymore. So what's going to happen? They're going to decrement the reference count. The first thread decrements
09:00
the reference count from two to one, and sometime later on, a few seconds, an hour later, who knows, the second thread does pretty much the same thing: it doesn't need it anymore, so it decrements the reference count from one to zero. And now that we reach zero, the garbage collector is going to kick in, claim this memory back,
09:21
and release it to the operating system. Everything is fine, everyone is happy, and life goes on. Now let's discuss what happens when this doesn't quite happen exactly as we thought it was supposed to happen. Let's say we still have the very same object, once again, shared with two different threads, working on it.
09:41
But let's say we don't have the GIL, because the GIL was actually the thing protecting us from the case we're about to discuss. So there's no GIL over here. And the first thread and the second thread once again decide to release this object, but this time this is going to happen pretty much at the same time. So the first thread is in the process
10:02
of releasing this memory, and it is going to decrement the reference count from two to one. But all of a sudden, it gets interrupted by the second thread, which is going to do pretty much the same thing: it is also going to decrement the reference count from two to one. And then it gives control back to the first thread.
10:20
And now the first thread continues right where it was before being interrupted, and it decrements the reference count, but not from one to zero: it decrements from two to one, because two was the value it had read when it was halted all of a sudden. And the thread is done. Both threads are done. So what's the problem over here?
10:41
We never reach zero, and unfortunately, we cause a memory leak. This memory is never going to be released to the operating system, making our program consume more memory than we think it should. Once again, this value did not really change, so if we had truly immutable objects, this wouldn't be a problem; we wouldn't have to worry about this data race condition. The sketch below shows the same lost-update pattern with a plain Python counter.
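Pure-Python code cannot corrupt a reference count while the GIL is in place, so this is not the refcount race itself, only a sketch of the same unprotected read-modify-write pattern on an ordinary counter; whether you actually observe lost updates depends on the CPython version and switch interval:

```python
import sys
import threading

# Make the interpreter switch threads very often so the race is easier to hit.
sys.setswitchinterval(1e-6)

counter = 0

def worker(n):
    global counter
    for _ in range(n):
        # Read-modify-write, just like an unprotected refcount decrement:
        # another thread can interleave between the read and the write,
        # and one of the two updates is lost.
        counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # may be less than 400000 if a switch lands mid-update
```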
11:02
And now let's discuss copy-on-write, which is, I think, the most interesting case in this PEP. I'm going to need to explain a bit of context over here, because this picture doesn't explain a lot on its own.
11:22
So I'm going to go part by part. Instagram actually uses Python and Django as the backend for some parts of their application; if you didn't know that, you know it now. And they have an application server which creates child processes to keep up with all the requests that are coming in
11:44
when someone requests stuff inside Instagram, such as images, information, that kind of stuff. And for each child process that is started, the parent program actually hands out a copy of the memory, of the parent's memory,
12:01
to the child process. And in this memory, they have a lot of objects that are supposed to be constant: the same thing, never changing through the whole execution of the program. So ideally, you don't need a lot of extra memory, because it's the same memory and it's just being handed out to the child process. However, there is a big catch over here.
12:23
This memory is shared with the child processes in a read-only state; this comes from the operating system. So in case you want to change this memory, you first need to make a copy of it so you can write to it. That's why it is called copy-on-write. A rough sketch of how touching "constant" objects after a fork defeats this sharing is shown below.
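This is not Instagram's actual setup, just a minimal fork-based sketch (Unix-only; the actual memory measurement is left to an external tool) of how merely reading "constant" objects in a child process used to dirty their memory pages before PEP 683, because every read still bumped the reference count:

```python
import os
import time

# A big blob of "constant" data created in the parent before forking.
data = [str(i) * 50 for i in range(1_000_000)]

pid = os.fork()
if pid == 0:
    # Child: it only *reads* the objects, but before immortal objects every
    # read still incremented and decremented ob_refcnt, dirtying the pages
    # and forcing the OS to copy them even though the values never changed.
    total = sum(len(s) for s in data)
    print(f"child touched {len(data)} strings, {total} characters in total")
    # Keep the child alive so shared vs. private memory can be inspected,
    # e.g. with `smem` or /proc/<pid>/smaps on Linux.
    time.sleep(30)
    os._exit(0)
else:
    os.waitpid(pid, 0)
```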
12:42
I didn't know the name copy-on-write before; I learned it while preparing this presentation. So you're expecting to have kind of a low memory consumption, but they monitored it, measured it, and noticed quite the opposite. As the number of requests went up, the memory usage kept increasing
13:01
and the shared memory was actually going down as the number of requests went up. So this is kind of odd, right? Since there is a lot of memory that is constant, why is this actually happening? So I'm going to have a drink of water; I'll let you guys think for five to ten seconds.
13:23
All right. So here's the catch, folks; you probably figured it out by now. The issue over here is that even though the memory was supposed to be constant, the reference count was not, because as you set variables
13:41
and remove variables during the code execution, you change the reference count field, and due to that, we end up making a copy of the memory even though the value didn't change. So I'm going to do a more practical example, just in case somebody didn't get it. Let's say we have this object with the value 50, and it is being shared with just
14:01
two simple child processes. They both have a reference to this object, so the reference count is actually two. Then one of the child processes doesn't need this object anymore and is going to dispose of it, so it changes the reference count from two to one. And now I'm going to ask you: the value of 50 in this object,
14:22
did it change? It didn't, right? It is still the very same value. So pretty much anyone who is coming into IT new today, coming from JavaScript, from Lua, from, I don't know, COBOL or any other language, is going to look at the code and see that this value didn't change.
14:41
So why am I consuming more memory? But it turns out that the operating system disagrees with us, because the operating system looks at the memory pages themselves. It considers that the memory did change, so it made a copy. And that is annoying, right? To fix all of these three issues, the engineers inside Instagram
15:01
actually wanted to achieve truly immutable objects. But before we get any further into this presentation, we actually have to get to a common ground of naming stuff. They wanted to achieve immutability, but they ended up calling it immortality. And why is that? Probably because immortal objects
15:21
actually live throughout the whole execution of the program; from start to finish, they don't change at all. Probably also to avoid any confusion with things that are said to be immutable, like strings and tuples, which, by the way, they are not. I've been lied to my whole life. Strings and tuples are mutable,
15:41
at least at the lower level of the language. At the higher level, okay, they don't change at all, but at the lower level, yes, they do change (their reference count field does). And probably because immortal objects just sounds awesome; it's an awesome name, at least I think so. So this is where we get to the C part of this presentation. There's also a bit of C code coming up, and it's fairly straightforward, I believe.
16:02
So you're going to get it all right. This is what the implementation of a Python object looks like in C. Here we can see that it is a struct, and there's the reference count field and the object type. The data field is not present over here because it would be a lot of extra information.
16:21
It wouldn't fit on the slides, so bear with me: there is a data field over here, but it is omitted. And what the engineers inside Instagram proposed was to change the way the reference count field works. If we set it to a specific constant, then we consider this object an immortal object.
16:42
And which constant exactly is that? It is this really, really, really high value on the top right over here. I'm not even going to try to pronounce it; I would embarrass myself, because English is not my native language. But you get this magic constant by setting all of the lower 32 bits of the reference count in the C implementation.
17:02
And you can also see that this value was set to a specific constant defined on line 110. They also had to change the way the Py_INCREF and Py_DECREF macros in C actually work. Because, as I mentioned before,
17:20
once the field reaches that value, this is an immortal object and the reference count field should not change anymore. These are very straightforward macros: you just increment or decrement, depending on which one you're using. And they added a small if at the top of each one: oh, is this an immortal object?
17:40
Yes, it is. Do nothing: pretty much return without incrementing or decrementing. Some examples of immortal objects that I found when creating this presentation are None, True, False, the empty string, and small integers. If you check the reference count of those on Python versions 3.12 and up, you're supposed to get just one fixed value back as the reference count field, as the sketch below shows.
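A quick way to see this from Python; the exact number printed is an implementation detail that differs between builds and versions, so treat the values in the comments as assumptions:

```python
import sys

# On CPython 3.12+ (PEP 683), immortal objects report a fixed, very large
# reference count that Py_INCREF / Py_DECREF no longer modify.
print(sys.getrefcount(None))  # a huge constant, e.g. 4294967295 on 64-bit 3.12
print(sys.getrefcount(True))
print(sys.getrefcount(""))
print(sys.getrefcount(42))    # small integers are immortal too

refs = [None] * 1_000_000     # a million new references to None...
print(sys.getrefcount(None))  # ...and the reported count does not move
```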
18:02
There are other immortal objects too, of course, specific to Instagram's case, but these are the ones you see on a daily basis. And after this implementation, what they got was something like this: they noticed a decrease in memory usage, once again as the number of requests went up,
18:23
and an increase in shared memory. But most importantly, for the very first time, we actually have truly immutable objects that don't change at all. And why exactly is that a big deal? Because these objects can bypass the garbage collector,
18:42
the GIL, threads, processes, pretty much anything you'd like in Python. You can try to change them as much as you like; you're not going to be able to. They pretty much survive anything you throw at them. So that's a big deal. Everything sounds awesome so far, right? So this is where we start to talk about the sad stuff, the problems they had implementing this PEP.
19:03
So, Houston, we have a problem; let's keep calm and carry on. The issues they had to worry about were, first, backwards compatibility. What exactly does that mean? It means ensuring that your Python application wouldn't crash without any explanation at all,
19:21
because anybody who has programmed in C knows that this is a fairly real risk: just by changing C code, your application may break without much logical explanation. And we're not dealing with a Python application here; we're dealing with a C application, CPython itself.
19:41
They also had to worry about accidental immortality, which is possible. The issue here is that when you accidentally get an immortal object, you're effectively causing a memory leak: memory that is not going to be returned to the operating system unless you actually close your application. What you have to do to achieve an accidental immortal object
20:03
is just set a reference to an object, like a variable, then set another one, and another, and another, and keep doing that for a while, a very, very long while, and eventually you will get to that really high value that we saw before. But luckily for us, it takes about four or five days of computational effort,
20:23
so we're safe; it's unlikely that you will get an accidental immortal object. They also had to worry about platform compatibility, which means ensuring this change actually works on 32-bit and 64-bit builds, on ARM, on different operating systems and hardware types, compiled with different compilers,
20:42
such as GCC, Clang, that kind of stuff. And of course, performance, because everybody complains about performance in Python. They measured it, and the change made Python slower by roughly 2%, which is acceptable: you win in memory and lose in CPU, so it is a trade-off.
21:01
And now let's talk about where this PEP can be used or is being considered for use. I'll show this graph again, the one I showed before, because I find it pretty awesome: you're going to have lower memory consumption when you're doing parallel processing on a large scale with a lot of requests. But there were at least two PEPs
21:21
that actually considered using immortal objects, PEP 684 and PEP 703, and those are, of course, related to the GIL itself. So let's talk about each one of them; I'm running out of time over here. PEP 684, a per-interpreter GIL. Who's the main focus of this PEP? It is, of course, the GIL itself. Is the GIL a bad guy?
21:42
I don't think so. I like to think of it as Batman: the hero that lived long enough to, unfortunately, become the villain. And why is that? Because it saved us from having to deal with semaphores and race conditions, but unfortunately it holds us back and doesn't allow us to do parallel processing
22:02
using multiple cores with Python. What this PEP actually did was build on the idea of subinterpreters. And what are those? They are as lightweight as threads and as isolated as processes, so it is the best of both worlds. This PEP achieved that by no longer sharing the GIL itself,
22:23
creating isolated interpreter execution, which we have had for a while: every time you open a new terminal and a new Python, it is an isolated interpreter execution. It also reduced the amount of state shared between interpreters, work that is still going on even after this PEP landed, in Python 3.12 or 3.13,
22:41
I may be getting the number wrong right now. And it reduced the number of global states, of which we had more than 4,000; I think it's roughly around 2,000 right now. And there's a section in this PEP that I find really amazing in how it teaches this. It sounds a bit complicated, but it was a joy to read. In addition to Py_NewInterpreter, you can also use Py_NewInterpreterFromConfig
23:02
to create an interpreter. I kept reading on and figured out that the config it takes is actually a struct in C, with a field called own_gil. Set it to true and you get yourself an isolated GIL to isolate your stuff, and false is pretty much the very same old boring Python from 300 years ago. A sketch of what that looks like from Python is below.
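Since the talk only shows the C API, here is a hedged Python-level sketch using CPython's private subinterpreter module; the module is internal and unstable (it is _xxsubinterpreters on 3.12 and was renamed to _interpreters in 3.13), so the exact names are assumptions:

```python
import _xxsubinterpreters as interpreters  # private CPython module

interp_id = interpreters.create()
try:
    # Each subinterpreter has its own modules and globals; whether it also
    # gets its own GIL depends on the Python version and configuration
    # (in 3.12 the per-interpreter GIL is mainly reachable from C via
    # Py_NewInterpreterFromConfig).
    interpreters.run_string(interp_id, "print('hello from a subinterpreter')")
finally:
    interpreters.destroy(interp_id)
```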
23:24
And the last one that I'm going to talk about before wrapping up is the PEP of the moment; everybody's discussing PEP 703. There was probably another presentation today, just one or two talks before this one, discussing it. So, comparing this PEP to PEP 683:
23:42
yes, it accepts the idea that we will have something called immortal objects, such as True and False, objects that are going to live through the whole execution of the program. And yes, we will have to change the way the Py_INCREF and Py_DECREF macros work so they do nothing on top of those objects.
24:01
So you're probably wondering: oh, this is actually going to use PEP 683, right? Turns out it is not. It only borrowed the ideas, because PEP 703 is going to change a lot of code in the Python implementation itself. So the author of this PEP decided to just borrow the idea and add a new field to mark objects as immortal, along with a different bit representation.
24:24
This PEP actually has three main parts. The first is immortalization, which we've been discussing for the last 20 or 30 minutes. It also has the idea of deferred reference counting, which is postponing the reference count updates, and therefore when you release the memory, for things such as methods and modules.
24:41
And also the idea of biased reference counting, which is something like saying: who is the current owner thread of this object right now? And some thoughts about this PEP, in my opinion, of course. We will have two versions of Python, one with the GIL and another one without it. This is fairly well known, a lot of people know about that, but the next three points maybe some people don't.
25:02
You're going to have worse performance depending on which architecture, which CPU you are actually using; you may reach even an 8% slowdown, so you have to measure carefully to check whether it is actually worth it to not have the GIL at all. A lot of code will need to be recompiled, and a lot of code that is actually not thread-safe
25:21
will show up. And in these last two points, I'm actually talking about libraries: there will be a lot of libraries that will need to be recompiled to make sure they actually work with the GIL-less version of Python. And just in case somebody wants to go deeper into this subject, the subject of immortal objects and memory, I always like to leave some references.
25:41
The second link is even the original pull request itself, which implemented immortal objects. There's a lot of fun in that link, so you're probably going to have an awesome time in case you want to go further. I'm going to leave my contacts just in case somebody wants to reach out to me on any social media; I mostly don't use Twitter, I use Telegram and LinkedIn.
26:01
Feel free to reach out to me any time you like. And I want to thank you so much for staying until the end of this presentation. Thank you. Obrigado. Gracias. Grazie. And that's about it; we have some time for questions.
26:26
So... Okay. Thanks so much for a fantastic talk. We have a few minutes for Q&A. If you have questions, please go to the microphone in the center.
26:41
Thanks for the talk. Thank you. In Python, you've got the multiprocessing library that gives various data structures. Do they solve the problem of genuinely being able to share the data between the different processes despite the reference count? And if so, can those just be used?
27:07
I'll have to check the library itself just to be sure; I'd be speaking off the top of my head, but I don't think so. For actually sharing safely, I would recommend checking out the subinterpreters themselves, in PEP 734.
27:23
There is an awesome presentation on PyCon US 2023 from the author of the PEP itself. It's really, really worth checking out. It is amazing what you can do. You can, using these subinterpreters, achieve multi-core parallel processing. So every time you actually use threads,
27:42
it is possible for a mistake like that, memory overlapping or a memory error, to happen. So I would always recommend either using processes or checking out the subinterpreters themselves. Any other questions?
28:00
We have about another minute or two, so if people want to know more, they can go to the microphone and ask. Okay, thanks very much. Let's give another round of applause for this great talk.