Green threads in Python
Formal Metadata
Title: Green threads in Python
Title of Series: EuroPython 2017
Number of Parts: 160
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported
Identifiers: 10.5446/33811 (DOI)
Transcript: English (auto-generated)
00:05
Thank you. First of all, this is a Star Wars talk, so I will put on my cap. Now it's better.
00:22
And I would like to send a picture to my mom; she is in Brazil. Please. Okay. Hi. Hello. Yeah. Thank you. So, I know the background is orange, but the subject is green threads.
00:45
Does someone know what green threads are? Raise your hand? Okay. So: do or do not, there is no try. Master Yoda said that. This is our agenda. First, we will understand what threads and processes are,
01:05
then threads and multiprocessing. After that, understanding green threads, applying green threads, talking about concurrency and parallelism, and why, when, and how to use all of that. Okay. This talk is based on CPython,
01:24
so the behavior in PyPy or Cython could be different. Threads and processes. In Python, threads are real. What does that mean? It means the threads live at the kernel level.
01:44
So, KLT: kernel-level threads. It means the threads are pthreads, using POSIX. They are totally controlled by the operating system. That brings us a few problems, such as the context switch happening at the operating system level,
02:03
and the priority choice about which thread will run first. So, here is our KLT picture. This is our user space.
02:22
This is our kernel space. Here is a process. Inside the process I can have threads, and I do have threads. And my process table and my thread table are at the kernel level.
02:44
So, the context switch and the thread priority are totally controlled by the operating system. You can suggest that your operating system pick a given thread first, but that does not mean the operating system will work as you think or as you want.
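As a minimal aside (not from the slides), you can see that each Python thread maps to an OS-level thread; threading.get_native_id() needs Python 3.8 or later:

```python
# Minimal sketch: Python threads are real kernel-level threads.
import threading

def worker(name):
    # get_native_id() (Python 3.8+) returns the OS thread id, showing
    # that every Python thread is backed by a kernel thread (a pthread
    # on POSIX systems).
    print(name, "native thread id:", threading.get_native_id())

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```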
03:08
And Python has a specific thing that changes this behavior: the GIL. The GIL is not so bad. It works for us in every single-threaded program.
03:23
It's good, it increases speed, and it's good for working with C libraries. It's not so bad. Many times people say, okay, the GIL is bad. No, that's not totally true. It works for us, but for multi-threaded programs,
03:44
for multi-threading, it can be a problem. This is the behavior of Python threads. Remember, when you create a few threads, say I have three threads now, the first thread starts, but the other threads are stopped.
04:03
And why? Because the GIL starts a release-and-acquire cycle. So the GIL controls the threads, and at any moment it runs just one thread, not multiple threads. That is how it works in Python 2.
04:21
After Python 3, this changed. In Python 3.2, the new GIL was implemented. And what does that mean? It means that Python, in version 3, has a different behavior. The behavior is this: at a fixed interval, normally five milliseconds,
04:44
or around that, the GIL starts a graceful switch, a new release-and-acquire cycle. It can be a solution for a few things, but it's not a complete solution.
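For reference, a minimal sketch (assuming CPython 3.2 or later): the switch interval he mentions can be inspected and tuned through the sys module:

```python
# Minimal sketch: the "new GIL" switch interval introduced in Python 3.2.
import sys

print(sys.getswitchinterval())   # 0.005 seconds (5 ms) by default
sys.setswitchinterval(0.001)     # request more frequent GIL hand-offs
```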
05:04
Threads and multiprocessing. And as I told you, if you see Yoda, there is live code. So: oh, Yoda! Great. Let's see. I have a very usual piece of code here, a Fibonacci function.
05:24
Hey, Fibonacci! So, why Fibonacci? Because it has an Italian name, and EuroPython is in Italy, and I want to show you something working. Okay. This is my code. I have the number 34, and I will run Fibonacci of 34.
05:42
But twice. And why would I do that? Because I want to. Let me see now. First, I will run it in Python 2. And why Python 2? Because even if you deny that you should work on legacy code,
06:04
you should know what happens in Python 2. fib_01. And after a few seconds, the code comes back. Five seconds.
06:20
Great. Now, what will I do? I will execute the same Fibonacci, but using threads. Okay? I will run it twice, but in two different threads. And it could be faster. It should be faster. But let's see what happens.
06:47
Doo-doo-doo. Oh, so slow. Okay, it's not fast at all. And why? Because the release-and-acquire process creates a gap
07:02
while switching: which thread is running now? So the two threads don't really run together, and that's a problem for us. But I can run the same code using multiprocessing, as another option. python fib_3.
07:28
And using multiprocessing, it is faster. But what is the problem when you try to scale your code or your application with processes? The memory consumption can be really big.
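For reference, a minimal sketch of the three variants in this demo; the value 34 follows the talk, while the structure and file layout are assumptions:

```python
# Minimal sketch: the same fib(34) computed twice, sequentially, with
# threads, and with processes. Absolute timings will vary by machine.
import multiprocessing
import threading
import time

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def timed(label, fn):
    start = time.time()
    fn()
    print(label, round(time.time() - start, 2), "seconds")

def sequential():
    fib(34)
    fib(34)

def with_threads():
    threads = [threading.Thread(target=fib, args=(34,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def with_processes():
    procs = [multiprocessing.Process(target=fib, args=(34,)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    timed("sequential:", sequential)
    timed("threads:   ", with_threads)    # GIL: no speed-up for CPU-bound work
    timed("processes: ", with_processes)  # faster, but heavier on memory
```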
07:42
Threads are not that heavy, but they are not free either, and with multiprocessing it can be much worse. So, let's go back to the slides.
08:01
Understanding green threads. What is a green thread? A green thread is a ULT, a user-level thread. User-level threads are controlled by the runtime or by your VM. The name green threads comes from Java. And why Java?
08:20
The Java developers who worked on this were on a team named the Green Team; not green team as in basketball. They started to implement threads at the runtime level, and that is why the name is green threads.
08:42
So it's a lightweight thread. Here we have the same environment: the user space, again the kernel space, the process, the threads. But now the threads are controlled by the runtime.
09:03
So we can control the switching, we can control the priority, and the thread table lives there too. This is the normal behavior of green threads.
09:23
Green threads run on top of just one real thread; they can start together and compete for the resource. And many times a green thread will wait, or not, or will finish earlier than another one.
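To make that concrete, here is a minimal sketch (assuming the greenlet package, which gevent builds on) of user-level threads whose switching happens entirely in user space:

```python
# Minimal sketch: two user-level threads hand control to each other
# explicitly; the operating system scheduler is never involved.
from greenlet import greenlet

def task_a():
    print("A: step 1")
    gr_b.switch()      # hand control to B
    print("A: step 2")

def task_b():
    print("B: step 1")
    gr_a.switch()      # hand control back to A

gr_a = greenlet(task_a)
gr_b = greenlet(task_b)
gr_a.switch()          # prints: A: step 1, B: step 1, A: step 2
```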
09:41
But that is the behavior, the green threads behavior. Applying green threads. Oh, I think we have live code again. Here I have a little server that I wrote with
10:04
Japronto. I like Japronto because it is a Portuguese name. It's a Flask-like library, it works with a pico HTTP parser, and it's very fast. So, this is my service, okay.
10:24
And I will consume this service, at first in synchronous mode, okay. I will make a request, parse the JSON, get the variables, show the variables, and I will do this 10 times.
10:45
Okay, first, let me run it. Python server. And I start a few workers
11:02
to send me the information. And now let me see... 005. Python 005. Okay, I will consume that.
11:21
One, two, three, four, five, six, seven, eight, nine, ten. Oh great, I can count. So, why is my service so slow? Because I put a time.sleep in it, so it's a bad service.
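A minimal sketch of the synchronous client side; the URL, port, and payload handling are assumptions, not the exact demo code:

```python
# Minimal sketch: consuming a slow endpoint ten times, synchronously.
# Each call blocks until the server's time.sleep() has finished.
import time
import requests

URL = "http://localhost:8080/"   # hypothetical address of the slow service

def fetch(url):
    response = requests.get(url)
    print(response.json())       # parse the JSON and show the variables

start = time.time()
for _ in range(10):
    fetch(URL)
print("total:", round(time.time() - start, 2), "seconds")
```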
11:43
Okay, and this can happen in real life; some APIs can be very slow. But now, I can consume the same application using an asynchronous client.
12:01
For this case, I will use gevent. And why? Gevent is, in my opinion, the best option for Python 2 users, and you can use the same library with Python 3 too. In this case, I will get my information
12:23
with the same fetch method, but now I will create green threads here, the greenlets. And after that, join them all. Okay, now... oh, that is fast.
12:45
They start to compete for the resource and spend two seconds to get the information. Maybe, on another run, it could spend just one second. No, two seconds, okay.
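A minimal sketch of the gevent version, assuming the same hypothetical URL and fetch as above:

```python
# Minimal sketch with gevent: spawn one greenlet per request and join them.
# monkey.patch_all() makes the blocking socket calls cooperative, so the
# ten requests overlap instead of running one after another.
from gevent import monkey
monkey.patch_all()

import time

import gevent
import requests

URL = "http://localhost:8080/"   # hypothetical slow service

def fetch(url):
    print(requests.get(url).json())

start = time.time()
jobs = [gevent.spawn(fetch, URL) for _ in range(10)]
gevent.joinall(jobs)
print("total:", round(time.time() - start, 2), "seconds")
```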
13:01
But in Python 3, a good option is to use asyncio. Oh, but is asyncio really green threads? Aren't those coroutines? Yeah, but green threads, coroutines,
13:23
lightweight threads, greenlets, all these things work like green threads, with a few differences. And first, I will show one difference between asyncio and gevent, and I think it's the big difference
13:41
between asyncio and gevent. To run this code: python... which example is it? 007, oh, 007. And it starts, and runs, and it was slower than gevent. And why?
14:04
Because by default, asyncio protects against race conditions, and that makes it spend more time than gevent. Gevent doesn't prevent race conditions when getting the information.
14:21
But I want to see asyncio fast again; how can I do that? You can use an executor to split your processing. Okay, now I'm going to run the event loop,
14:42
but I run the work in an executor, and it should be example 008. It was fast, like gevent, because now I dropped
15:02
the race-condition prevention. So, if you want to assume that risk, you can use executors to do these tasks.
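A minimal sketch of that idea: asyncio driving a blocking fetch through an executor (my reconstruction with the hypothetical URL, not the exact example 008):

```python
# Minimal sketch: asyncio dispatches the blocking fetch to an executor,
# so the ten requests overlap instead of running one after another.
import asyncio
import time

import requests

URL = "http://localhost:8080/"   # hypothetical slow service

def fetch(url):
    return requests.get(url).json()

async def fetch_all(loop):
    # None means asyncio's default ThreadPoolExecutor
    tasks = [loop.run_in_executor(None, fetch, URL) for _ in range(10)]
    return await asyncio.gather(*tasks)

loop = asyncio.new_event_loop()
start = time.time()
results = loop.run_until_complete(fetch_all(loop))
loop.close()
print(len(results), "responses in", round(time.time() - start, 2), "seconds")
```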
15:21
But, Vinicius, can you show the GIL behavior in Python 2 and in Python 3, how it works for CPU-bound code, for example? We can run our Fibonacci again here. And we see the time: it spends five seconds.
15:43
Okay, that is the sequential code. Now, with threads. It spends basically the same time.
16:01
In this case, because of the new implementation in Python 3.2, the GIL is not so heavy in the release-and-acquire process. And this time is about the same, as far as I recall.
16:25
But, okay, I can use asyncio for CPU-bound work. Normally we use asyncio for IO-bound work, but with executors you can use the same API
16:41
for CPU-bound work as well. Okay, show me that, I don't believe you. So, it is the same code with 34, the Fibonacci code. And now, instead of the synchronous version, I have two green threads.
17:04
And I set a ProcessPoolExecutor. By default, asyncio uses a ThreadPoolExecutor, and that's good for IO. But for CPU-bound work, you should use a ProcessPoolExecutor and set it in your event loop.
17:25
So, let me see. Okay, fast. So, we can use asyncio for CPU-bound work too,
17:41
and not just for IO-bound work. It's not so usual, and I don't recommend it, but if you want to play with it, it can be a good idea to play, just to play. "I want to use this all the time." No, just to play, please.
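A minimal sketch of what he describes; the talk sets the ProcessPoolExecutor as the loop's default executor, while this reconstruction passes it explicitly to run_in_executor, which behaves the same on current Python versions:

```python
# Minimal sketch: CPU-bound work through the asyncio API by handing the
# calls to a ProcessPoolExecutor instead of the default thread pool.
import asyncio
from concurrent.futures import ProcessPoolExecutor

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

async def run_twice(loop, executor):
    tasks = [loop.run_in_executor(executor, fib, 34) for _ in range(2)]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    with ProcessPoolExecutor() as executor:   # real parallelism, no shared GIL
        print(loop.run_until_complete(run_twice(loop, executor)))
    loop.close()
```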
18:01
And here I have one more thing that I'd like to show. There is a Flask app, okay, a normal Flask hello world. Okay, it's nice. Let me see what happens with my Flask code.
18:23
I will start it here. Let me see where it is: flask, flapp.py. So, Flask is running. And now I will use wrk
18:41
to get a few metrics. Let me see if the port is okay. Yeah, I will get a few metrics, creating a concurrency of 62 connections with two threads, for 10 seconds.
19:02
Running for 10 seconds. And okay, close to 1,000 requests per second. But if I run it a second time, a few problems can happen. Let me see.
19:21
Jo, jo, jo. Ten seconds is a lot of time. Look, now it's just 400 requests. And why? Because, in fact, Flask did not finish the work from before, from the first wrk run.
19:42
It keeps trying to answer my requests. Okay, thanks. And now, I have a small change here: I use gevent to monkey-patch the server.
20:01
And where is gevent used? For example, Gunicorn can use gevent, and then Gunicorn works with green threads. This is my code with gevent, just to play with Flask. And here I start the server: the same Flask app, the same Flask method,
20:22
and now I will put it on gevent. Okay, I will stop the app and run the gevent version. Let me see... it starts. I can't believe it started.
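A minimal sketch of the gevent-served Flask app, assuming gevent's pywsgi server and port 5000 (not the exact demo file):

```python
# Minimal sketch: the same Flask hello world, but served by gevent's
# WSGIServer after monkey-patching, so each request runs in a greenlet.
from gevent import monkey
monkey.patch_all()

from flask import Flask
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, world"

if __name__ == "__main__":
    # Instead of app.run(), hand the WSGI app to gevent.
    WSGIServer(("0.0.0.0", 5000), app).serve_forever()
```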
20:42
And I use Opera. Okay, localhost. Yeah, that works. Now, the same test, 10 seconds.
21:02
Okay, 2,500 requests per second. And if I run it again, it's the same. And why? Because with green threads, my requests
21:24
are caught by the green threads, which start working on them. So it's not just one thread, or just a few workers, or anything like that. This is why Sanic is so fast, and Japronto is so fast too: they use green threads to work.
21:41
Okay, I will continue. Concurrency and parallelism. Are concurrency and parallelism the same thing?
22:03
Ah, "you are not my daddy." Yeah, exactly that. No, no, it's not the same thing. And to explain what concurrency and parallelism are, I will use the attack against the Death Star, because I think it's pretty good for explaining this.
22:21
Now, I have a few tasks, my X-wings, okay. And these X-wings will start together to try to destroy the Death Star. But if you remember Episode IV, the Death Star has just one flight path to throw yourself against the target.
22:46
And the tasks start together, but they compete for the same resource: that is concurrency. So Star Wars teaches concurrency. Star Wars is a pretty good movie.
23:01
And now I want to have parallelism. To do that, I need a number of resources proportional to the number of tasks. And now it's not a Death Star anymore but a Death Constellation, because there are two Death Stars now.
23:22
That is parallelism. What is our conclusion? Multiple green threads could provide parallelism, and will always provide concurrency. And parallelism doesn't depend just on our side,
23:41
on our number of tasks, but also on the resource that will be used. Many times you think, okay, it will be parallel, but it will only be concurrent, because the resource is not able to answer all of you. And why, when, and how to use this?
24:01
Why? To control the communication between tasks: when you try to do that with processes or threads, it can be harder than with green threads, because with green threads everything is kept together and divided inside the same event loop. Thank you.
24:20
It's easy to control failures: for example, the event loop informs you of an exception, what kind of exception it was, and whether the message is still processing or already processed. And it reduces the complexity of concurrent applications. When? To provide asynchronous solutions, obviously, and every time concurrency can be applied,
24:41
for example for IO-bound work. And how? In Python 2 and Python 3 you can use gevent, eventlet, or greenlets, and I think these three are the best options for doing this in Python. But in Python 3, I strongly recommend Curio or asyncio.
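A minimal sketch of the failure control he mentions, assuming asyncio (asyncio.run needs Python 3.7+): gather the tasks and inspect which ones raised, without one failure cancelling the others:

```python
# Minimal sketch: collecting results and exceptions from concurrent tasks.
import asyncio

async def job(i):
    if i == 2:
        raise ValueError(f"job {i} failed")
    return f"job {i} done"

async def main():
    # return_exceptions=True hands exceptions back as values instead of
    # aborting the whole gather.
    results = await asyncio.gather(*(job(i) for i in range(4)),
                                   return_exceptions=True)
    for result in results:
        if isinstance(result, Exception):
            print("failure:", result)
        else:
            print("success:", result)

asyncio.run(main())
```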
25:03
Why do I recommend asyncio more strongly than Curio? Because you can change the asyncio engine, for example. We can change here from the normal event loop
25:20
to uvloop, and the change is pretty easy. Obviously, this example is not the best one to show the performance of uvloop, but uvloop is an asyncio event loop that is faster than the standard asyncio event loop.
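A minimal sketch of that swap, assuming the uvloop package:

```python
# Minimal sketch: swapping the standard asyncio event loop for uvloop.
# The rest of the asyncio code stays exactly the same.
import asyncio

import uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

async def main():
    await asyncio.sleep(0.1)
    print("running on uvloop")

loop = asyncio.new_event_loop()   # now a uvloop-backed loop
loop.run_until_complete(main())
loop.close()
```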
25:41
And now, may the green threads be with you. Vinicius Pacheco Kenobi — it's me; I like "Pacheco Kenobi". Questions? I should remove my cap, because I can't hear.
26:19
Some question?
26:20
Any questions? No, okay. Thank you. Thank you.