Communication Break Down | Coroutines
Formal Metadata
Title | Communication Break Down | Coroutines
Number of Parts | 490
License | CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers | 10.5446/47482 (DOI)
Language | English
Transcript: English (auto-generated)
00:05
Now it's time to introduce Bob, who is going to talk about communication breakdown coroutines. So please, welcome him. Thank you. So first, mic check. Can you hear me?
00:21
Yeah? Okay. I'll try to entertain you during this lunch break. I'm going to talk about communication on coroutines. I'm Bob, and I'm a mobile lead developer at Quik, with headquarters in Finland, and I'm based in Stockholm, Sweden.
00:41
So, coroutines. We had a lot of talks about them already, so I'm not going to go into the basics. I'm going to talk about communication, and basically one of the main problems I see is the advice to think of them as lightweight threads. We're going to see why we might not want to take that too literally.
01:04
And we're also going to see what we mean by lightweight. Should we treat them as threads? Because they might be. And also, let's see if this works.
01:21
Whoa, it's fast. So, this function main = runBlocking is going to be in every slide, but it's going to be invisible. So stare at it now, so it's burned into your eyes for two more seconds, and now it's gone. But it's always going to be there, so we always have the main scope to run all the coroutines on.
01:45
And by lightweight, we've seen this example before. Here we create 100,000 coroutines with the launch coroutine builder, and it takes around 150 milliseconds to run on my machine.
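A minimal sketch of that slide, assuming kotlinx-coroutines-core on the classpath (the timing is machine-dependent):

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    val elapsed = measureTimeMillis {
        // Launch 100,000 coroutines; each is just a small heap object,
        // not an OS thread, so this is cheap.
        val jobs = List(100_000) { launch { /* trivial work */ } }
        jobs.joinAll()
    }
    println("done in ${elapsed}ms")
}
```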
02:02
And now we switch to threads, and it takes up to five seconds to run this. So yeah, coroutines are lightweight. Let's go back to coroutines. And now we're using a dispatcher, the default dispatcher, which on my machine comes with eight threads.
02:23
It's usually the number of cores on your machine times two. So it's very effective. And this takes around 800 milliseconds to run. And if we switch to IO, that doesn't really have a limit. I cranked it up to about 84 threads on this machine.
02:43
And the more threads you're going to use, the more time it's going to take. And how about thread safety? Dispatchers, you can think of it sort of like thread pools, but not really. There are rules deciding on where you can run your coroutines.
03:06
And thread safety, just a quick show of hands: is this thread safe? Yes? Show of hands. No, it's not, because we run on possibly eight threads and we have shared mutable state.
03:25
And the same goes for IO. It's just another thread, pool-ish. How about now? Remember that function main, run-blocking, we're running on the main thread. We launch 100,000 coroutines.
03:41
But this is actually thread safe, because we all run on main, because coroutines inherit dispatchers from their parents. But you might have a coworker who does this, and now it's not thread safe anymore. So, we have to be careful.
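A sketch of that main-confined version, assuming kotlinx-coroutines-core: counter is a plain var, yet the final count is exact, because every child inherits runBlocking's single-threaded dispatcher:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    var counter = 0
    // All children inherit the runBlocking dispatcher (one thread),
    // so these increments never race. Passing Dispatchers.Default
    // to launch would break that guarantee.
    val jobs = List(100_000) { launch { counter++ } }
    jobs.joinAll()
    println(counter)
}
```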
04:04
And just to make sure: when I started learning Java a couple of years ago, and threading a bunch of years ago, I actually thought it would help to add the volatile keyword or annotation. It doesn't. Volatile only makes sure that you don't read a cached value.
04:23
You read it fresh every time. But it doesn't prevent anyone else from reading the same value and writing it back. So, volatile doesn't help us. So, can we treat them as threads? Unconfined.
04:42
Someone mentioned the unconfined dispatcher as well. And I think it should be renamed to "whatever", because all it does is follow along. Here we have our function main, so we're running on the main thread. And when you get into a coroutine with the unconfined dispatcher, it says,
05:01
OK, you're on the main thread, I'm just going to tag along. I don't care. So, the first statement, print A1, is going to be on the main thread. And then we do a delay, which has a different dispatcher. And when it comes back, the unconfined coroutine is going to say, OK, you're on a new thread, new context.
05:21
I'll tag along with you. It doesn't preserve your context. So, the A2 is going to run on a different thread, possibly, and in practice often. And unconfined is for corner cases; I haven't seen it used in production code yet.
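A minimal sketch of that behaviour; the exact thread names are runtime details, but the point is that the name printed after the delay is generally not the one printed before it:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    launch(Dispatchers.Unconfined) {
        // Starts on the caller's thread: here, main.
        println("A1 on ${Thread.currentThread().name}")
        delay(100)
        // Resumes on whatever thread completed the delay;
        // Unconfined does not restore the original context.
        println("A2 on ${Thread.currentThread().name}")
    }.join()
}
```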
05:43
But you can get a similar result with this example. We launch on the IO dispatcher. We print A1 on thread worker 1, say. Then we call a suspending function, which declares a withContext,
06:02
and a different dispatcher. And here you can possibly end up on a different thread. You're going to end up in a different context; you might switch threads, or you might stay on the same one. And when you come back to the coroutine from the suspension,
06:22
and print A2, you just follow along on that other thread. So, you might get this, and it's perfectly safe. But you can also get this. The thing to be aware of is that after the context switch, you're going to continue on that thread instead of coming back.
06:44
So, threads and coroutines aren't exactly a one-to-one match. And just to make it even clearer why it's confusing to think of them literally as threads: if you have a thread local, it contains a separate value per thread.
07:06
So, if you switch threads, you're going to get that thread's own value. So, given the same example, we set the local to IO first, then we read it, we call the context switch, we read it again,
07:25
we set it to default, and we come back and read it a third time. So, we can get IO, IO, default, meaning we ran on the same thread the whole time; the thread local stays the same during the entire operation,
07:40
but we can also switch threads. And now, thread locals are thread safe; they're going to stick to their thread. And this is proof of that, because we switch threads, and when we come back to print A2, we're on the new thread with a new thread-local value. But it reads wrong in your head when one coroutine sees different thread-local values.
08:05
So, I would suggest, just for your own sanity, not to combine those, because code where a coroutine's thread-local value changes under it is simply not readable.
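For completeness: if you do need a thread-local value to travel with a coroutine, kotlinx.coroutines provides ThreadLocal.asContextElement, which re-installs the chosen value on every resume, whatever thread the coroutine lands on. A sketch, with an illustrative requestId:

```kotlin
import kotlinx.coroutines.*

val requestId: ThreadLocal<String> = ThreadLocal.withInitial { "none" }

fun main() = runBlocking {
    launch(Dispatchers.Default + requestId.asContextElement(value = "req-42")) {
        println(requestId.get()) // req-42
        delay(50) // may resume on a different worker thread
        println(requestId.get()) // still req-42
    }.join()
    println(requestId.get()) // none: the caller's thread is untouched
}
```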
08:20
So, we should treat them as coroutines. And another example, from Dan Lew, is if you use synchronized. When we have plain threads, we create two threads and call a synchronized function, and we're going to get starting, ending, starting, ending.
08:45
The @Synchronized annotation is going to help us synchronize the code, so only one thread is allowed in it at any given time. And if we change to coroutines, we launch two coroutines and call the same function, but now it's a suspending function, and we do a delay.
09:05
This is actually going to print out starting, starting, ending, ending. And to understand that, we have to understand how suspending mechanism works in coroutines. What it actually does on a high level is when you call from a coroutine,
09:22
when you enter the critical section, it's going to acquire the lock. So, it's going to do the same thing. It's going to lock the function. It's going to print starting, and then it hits the suspending function, and going to put the state into a continuation, and suspend it,
09:40
and then it releases the lock. So, it actually divides this function into two, and when the suspension is done, it's going to acquire the lock again, and print the ending, and then release the lock. So, that's why we can have this order,
10:00
because during the suspension, the other thread comes in and takes over. So, never use suspend with the @Synchronized annotation. You can get away with it if you don't call any suspending functions inside, but it's not safe, because someone will put a suspending function in there eventually.
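The coroutine-friendly fix, which the talk comes back to later, is the Mutex from kotlinx.coroutines: unlike a monitor, withLock stays held across suspension points, so the critical section is not split in two. A sketch:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

val mutex = Mutex()
val events = java.util.Collections.synchronizedList(mutableListOf<String>())

suspend fun critical() = mutex.withLock {
    events.add("starting")
    delay(100) // suspension point: the mutex is NOT released here
    events.add("ending")
}

fun main() = runBlocking {
    List(2) { launch(Dispatchers.Default) { critical() } }.joinAll()
    println(events) // starting and ending stay paired
}
```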
10:25
So, let's do the communication like we should in coroutines. Deferred, we mentioned it and talked a little bit about it. It's often used with the async builder, which launches a coroutine,
10:44
and the last value of the coroutine, or whatever you return explicitly, is going to be the deferred value. It's kind of like a future, and when you need the value, you call await on it,
11:02
and then it's going to wait until the async block is finished, and return the value for you. And the async block is executed eagerly, right away. Just to prove that, we have this code: this entire block actually takes two seconds.
11:23
The second async block is going to finish before the first one, but when we await the results, the first await suspends, so the second await won't be called until the first two seconds are done. But that second await resumes directly, so it's going to be like a regular call.
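A sketch of that timing argument; the two delays (two seconds and one second) are assumptions standing in for the slide's values:

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    val elapsed = measureTimeMillis {
        // Both async blocks start executing immediately.
        val first = async { delay(2000); "first" }
        val second = async { delay(1000); "second" }
        // second finishes earlier, but we only see both results once
        // first.await() resumes; second.await() then returns at once.
        println(first.await() + " " + second.await())
    }
    println(elapsed < 3000) // ~2s total, not 3s: the blocks ran concurrently
}
```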
11:45
We could also do this more manually with a CompletableDeferred that you control yourself, so you don't have to use the async builder. You can use an actor or just a regular launch on a coroutine,
12:02
and what you have to do then is call complete on it to say: this deferred is completed, it's done, you don't have to await anymore. This example just shows that even though a deferred,
12:21
or a CompletableDeferred, is a safe way of communicating, we still can't share state, because whoever still holds a reference to the object can alter it after it has been passed to complete,
12:40
so it's not safe to assume that whatever we complete is going to stay the same forever; we should use a val instead. We don't want to do it like this. But one thing that's good with a CompletableDeferred is that if you call complete multiple times,
13:04
it's only the first one that's actually going to complete it. So in this code, we're always going to have Bob sent, never Charlie. You can still call it; you can call complete as many times as you want.
13:20
The first one is going to return true, meaning it completed the deferred. All the others are going to return false, because they don't do anything. And just to be explicit: this is communication between coroutines, and it's perfectly safe.
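A minimal sketch of first-complete-wins with a CompletableDeferred:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val deferred = CompletableDeferred<String>()
    println(deferred.complete("Bob"))     // true: this completion wins
    println(deferred.complete("Charlie")) // false: already completed, ignored
    println(deferred.await())             // Bob
}
```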
13:41
The second one we're going to talk about is channels, and they provide a way to transfer a stream of values: deferreds are for one value, channels are for multiple values. So let's get familiar with them. Here we launch on the default dispatcher, and we send two values on a channel,
14:01
and the send function is a suspending function, so it's going to suspend until someone calls receive on the same channel, and here we send two times, and we receive two times. Just to be clear, this code will never terminate, because we call send, and then it suspends,
14:24
it will never get to the receive line of code, so we have to do it on different coroutines to be able to complete the code, or we can alter the channel by adding a buffer.
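A sketch of that fix; here the capacity is two, so that both sends return immediately even though everything runs in a single coroutine (the slide uses a buffer of one):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    // With a rendezvous channel, Channel<Int>(), this would deadlock:
    // send suspends until a receiver arrives, and we never reach receive.
    val channel = Channel<Int>(capacity = 2)
    channel.send(1) // buffered, returns immediately
    channel.send(2) // buffered, returns immediately
    println(channel.receive())
    println(channel.receive())
}
```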
14:42
So now I have a buffer of one, so you can send to the channel; it's going to buffer the value and release the suspension. So there are a couple of different types of channels. There's the buffered one, where you decide how big the buffer gets before send suspends,
15:03
and then we have unlimited, where send never suspends; you can just keep on hitting it, and it's essentially an unbounded queue. Conflated, we talked about a little bit in earlier talks,
15:21
that's actually going to store only the most recent value, so send will never suspend on this one either; if it already has a value, it's going to replace it, so it always keeps just the latest one. And if someone receives that value, it's going to be empty, so receive can still suspend until there's a value there,
15:44
and rendezvous is the default one: one send and one receive have to meet to transfer the value. Yeah, and there are also terminal operators,
16:02
or functions, on channels, like toList here. That's actually going to wait for the entire channel to complete and then turn it into a list. And this won't terminate either, because it doesn't know whether the channel is done sending or not,
16:22
we have to close the channel to be able to use terminal functions on it. So let's see where they excel. There's a pattern called fan-in, where you have many producers and only one consumer,
16:42
so here we launch two coroutines, and we have them send via a race suspending function, which is just going to randomly release them within 0 to 5 seconds,
17:00
and then we have one coroutine that's listening, receiving on the channel, and we can either get Charlie, Bob or Bob, Charlie; it all depends on the random function.
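A sketch of the fan-in slide; the race helper is a guess at the talk's suspending function (its name and the delay range are assumptions):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel
import kotlin.random.Random

// Delays a random amount, then sends: the arrival order is unpredictable.
suspend fun race(channel: Channel<String>, name: String) {
    delay(Random.nextLong(100))
    channel.send(name)
}

fun main() = runBlocking {
    val channel = Channel<String>()
    launch { race(channel, "Bob") }     // producer 1
    launch { race(channel, "Charlie") } // producer 2
    repeat(2) { println(channel.receive()) } // single consumer, either order
}
```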
17:21
The other way around is called fan-out, where we have one producer and many consumers. Like, if you want to do concurrent work on a huge list, you can push it out through one channel and have multiple coroutines work on it at the same time. Here we just loop 30 times and send 30 items,
17:44
and then we close the channel, and then we can actually iterate over the channel with a for loop. So we create three coroutines that all have a for loop, and we're going to get a result something like this. And what I've noticed is,
18:02
it isn't always ordered. This one is, and I don't know why; if you put a delay in the suspending function, it's always ordered, so I have to look into that. There are also builders for these kinds of things,
18:21
like the produce builder, which actually creates the channel for us; you can call send instead of channel.send, and it closes the channel once the closure is complete. And you can also call consumeEach on the channel
18:43
instead of having a for loop or multiple receives, so it's going to keep on consuming and receiving until the channel closes. Next up is the mutex, a mutual exclusion primitive for Kotlin,
19:01
so if you remember this code, it's not thread safe, but we can make it thread safe with a mutex. It's kind of like a reentrant lock: you can lock a mutex, you can unlock a mutex. And this function, withLock, actually first locks it,
19:21
then it runs a try block, and inside the try block is your code, and in the finally block it's going to unlock it. So it's a safe way of using these locks, and now this code is thread safe. It's fine-grained and it's custom, but it's safe. The same thing goes for the synchronized example:
19:44
we can actually make that work as well by using a mutex instead of synchronized. So this actually works, and it prints starting, ending, starting, ending every time. We still have some time to go over flow,
20:02
which is the Kotlin take on reactive streams, and it's kind of similar to channels, but not really. You have an emit function instead of send, and you have collect, or other terminal operators,
20:21
but the most common one is collect, instead of consumeEach. They're basically doing the same thing; the big difference is that a channel is hot, meaning you have a coroutine behind it, or multiple coroutines, feeding data, active all the time, and a flow is actually cold
20:41
until you call a terminal function on it. So it's not going to do anything until you collect, and you can collect it multiple times and hopefully get the same result; you get the same execution, but the result is up to you.
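A minimal cold flow, collected twice to show that every collection re-runs the builder; the 1-to-10 values echo the next slide:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Cold: this block does not run until someone collects.
val numbers = flow {
    for (i in 1..10) emit(i)
}

fun main() = runBlocking {
    val evens = numbers.filter { it % 2 == 0 }.map { it * it }
    println(evens.toList()) // [4, 16, 36, 64, 100]
    println(evens.toList()) // same again: each collection re-executes the flow
}
```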
21:02
So here we get values 1 to 10, and we also have operators coming from the stream world, like filter and map, and also extension functions: here we have a range asFlow, and you can have a list asFlow,
21:22
but the more important thing about communication with flows and threads is that the collect, or the terminal operator, always determines in what context we're going to run, on which dispatcher. So now we still have that function main = runBlocking,
21:43
so we're running on main, both the flow on main and the collect on main. But usually we don't want to do that; we want to have like a background job, so we can add flowOn, which decides that all the preceding operators
22:03
that don't have their own context are going to use this one instead. So in this case we flow on worker 1, but we still collect on the main thread, or the main dispatcher. And just to show that it only affects preceding operators,
22:23
we add a map with a println as well, so the map is on main, and the filter is also on main. If we want to change that, we have to move the flowOn; we can move it down under the map, and now everything above it is on the default dispatcher,
22:42
and in this case on worker 1. We can also just jump around if we want to; we can have multiple flowOns. And one final thing: withContext inside a flow,
23:02
don't use it. You can get away with it if you're lucky, if your collect or terminal operator happens to be on the same dispatcher, but otherwise you will get a runtime exception. So use flowOn,
23:20
it's to preserve context. Thank you. Questions?
23:47
Are the slides going to be available? What? Are the slides going to be available? Yeah, they're available now on the FOSDEM site. Thank you very much.