
Async / Await - Break the chain asynchronously


Formal Metadata

Title
Async / Await - Break the chain asynchronously
Number of Parts
96
Author
Daniel Marbach
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.

Content Metadata

Abstract
Different variations of the Chain of Responsibility pattern can be found in middleware like OWIN and ASP.NET MVC. They all share a common approach: nesting functions inside functions, also known as functional composition. In this talk, we’ll build the Chain of Responsibility from scratch, apply it to a message pump for a service bus library and combine it with Async / Await to unleash the full power of asynchronous I/O. Join this talk to learn how async and recursion play together. Discover how composition allows changing behavior at runtime. Finally, float state into your Chain of Responsibility so that you don’t sacrifice thread safety. Break the Chain of Responsibility with me.
Transcript: English (auto-generated)
Okay. Cool. Can everybody hear me? Cool. I'm going to do a lot of coding, so if the font size is not big enough for the last row, please tell me. I'm going to increase it. It's currently 22. It should be enough, but we'll see.
Okay. A warm welcome from my side to my talk about the chain of responsibility pattern combined with async await in IO-bound domains. Who has been in my talk on Wednesday? Okay. A few. Okay. Cool. So my name is Daniel Marbach. I'm a solution architect and engineer at Particular Software. I live in central Switzerland, in Lucerne, not Lausanne. That's
the French-speaking part. I'm in the German-speaking part of Switzerland. If you want to know more about me and what I do in my free time, I suggest you listen to Dave Rael's podcast, Developer on Fire, episode 77. You can reach me on
Twitter under @danielmarbach, and I blog on planetgeek.ch and also on the Particular blog, particular.net/blog. I hope you subscribe to these two blogs. This talk is divided into three sections. First, I'm going to show the pattern,
the chain of responsibility itself. Then I'm going to build it live on stage. First, a synchronous version of the chain of responsibility and then an asynchronous version of the chain of responsibility. And then we do a little wrap-up at the end. And, of course, questions if you have any. I think since we are a pretty small group, if you have any questions
and you feel it's blocking you from understanding what's happening on the screen, please feel free to shout, and I'm going to answer the question. And if other questions arise, then just please ask them at the end. So who has ever used NancyFX? Hands up. Okay. Who has ever used NancyFX before and after module hooks? Okay. Just a few. Okay. Who has used FubuMVC from Jeremy Miller before? Okay. Who has used behaviors in FubuMVC? Just one. Okay. Who has used OWIN before? Hands up. A few. Okay. Who has written that type of code I'm showing here on the slides? Okay. A few. So what this code does, it registers a function on the OWIN middleware, which is an async lambda function, and you get passed in a context and a next delegate. And if you want to continue the chain, you call await next. And if you want to do something before all the other things in the chain, before this function is called, you just put your code where the "do something" is here, and if you want to do something at the end of the chain, then you add your code after the next call. So who has used Web API or MVC action filters before? The async versions? Hands up. Okay. Only a few. So this is a bunch of code from Web API action filters. It's also pretty similar. You inherit, you implement the IActionFilter interface, and then you implement a method which is called ExecuteActionFilterAsync. You get in an HttpActionContext, you get in a cancellation token, and you get in a function, a Func of Task which returns an HttpResponseMessage, which is called continuation. It's not called next like in the OWIN middleware, but it's called continuation. And then you do whatever you need to do in your action filter, and when you feel it's time to call the continuation function, then you call await continuation, and you can, for example, wrap it in a try catch or whatever. So this in the end is going to invoke your API controllers and the other stuff that is currently in the HTTP pipeline with the Web API stuff.
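For reference, the two slide snippets being described are roughly of this shape. This is a hedged sketch based on the public OWIN/Katana and ASP.NET Web API APIs, not the exact code shown on the slides:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;
using Owin;

public static class MiddlewareExamples
{
    public static void Configuration(IAppBuilder app)
    {
        // OWIN middleware: do something, continue the chain, do something afterwards.
        app.Use(async (context, next) =>
        {
            // do something before the rest of the chain
            await next();
            // do something after the rest of the chain
        });
    }
}

public class MyActionFilter : IActionFilter
{
    public bool AllowMultiple => false;

    public async Task<HttpResponseMessage> ExecuteActionFilterAsync(
        HttpActionContext actionContext,
        CancellationToken cancellationToken,
        Func<Task<HttpResponseMessage>> continuation)
    {
        // "continuation" plays the role of "next" and ends up invoking the controller.
        return await continuation();
    }
}
```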
So all these variations we just discussed, NancyFX before and after module hooks, the OWIN pipeline and OWIN middleware, and of course also the action filters, these are variations and implementations of the chain of responsibility pattern. So the goal of this talk today is I want to show you how the chain of responsibility pattern works. I want to show you how you can build it yourself. This sounds a bit weird. Why do we want to go that deep? I'm
a firm believer that if you're using something like action filters or OWIN middleware, it's crucial to understand the patterns behind it, because if it's going to fail in production and you don't know how it actually works under the hood, you're going to be pretty much screwed. That's my opinion. Of course, you can make up your own opinion, but that's
why I'm currently standing here and trying to explain this pattern to you guys. I hope you find it useful, and if not, tell me at the end. The chain of responsibility pattern is a nice pattern. It allows you to extend behavior during run time, for example. Like the action filters, you can plug in your own action
filters into the whole HTTP pipeline and do whatever you need to do for your own business logic, like authentication and so on. So the pattern itself is defined like this: you have a client which calls the chain, and then the chain has a few different link elements
in that chain. What's important in this pattern is that each link element has an abstract notion of the next element in the chain. And we saw that it's the next delegate or the continuation delegate that is passed into the action filter
or the next delegate which is passed into the OWIN pipeline. So they usually all share a common approach. They nest functions inside functions. So it's kind of a functional design pattern. A brief explanation of how you can find the chain
of responsibility in your own life, or at least in my life. So when I empty the dishwasher, I sometimes play a little game with my son and with my wife. My son is about this tall, maybe a little bit more than a meter. And then my wife is 163ish,
or maybe a bit more, and I'm one meter and 92 centimeters. So we can build a chain of responsibility in our kitchen. So my son is the first element, the first link in the chain, then my wife is the next element, and then I'm the last element. And what we want to do is we want to empty the dishwasher. And since we have different cupboards
in the kitchen on different heights, so when my son takes something out of the dishwasher, he can look at it and he knows that, oh, it's something in the cupboard which he can reach, and then he puts it into the cupboard. If he can't reach a cupboard, he can hand it over to my wife. And if my wife can reach the cupboard, then she puts it into
the cupboard, and if she can't reach the cupboard, she hands it over to me. For example, our wine glasses, because we don't use them that often, we have it in the highest cupboards in the kitchen, which is only reachable by me. So if she finds wine glasses in the dishwasher, then she hands it over to me and I put it into
the cupboard. So that's the chain of responsibility pattern. What's important is that each person in this chain of responsibility in our kitchen fulfills the single responsibility principle. So my son has a responsibility, like I said, the lowest cupboards, and I have a responsibility, maybe the
highest cupboards in the kitchen. And like I said, each element, my son knows that he has to hand it over to my wife, and my wife knows that she has to hand it over to me. So if we would translate this to code,
it would basically look like this. We have a method static void person, which gets an action delegate here called next, and then we have an implementation, and then whenever we are done, we call the next delegate. So we're calling the next link in the chain.
So the dishwasher unloading process in our kitchen at home would probably look like this code here. So my son would declare a delegate to call my wife. My wife would declare a delegate to call me, the husband, and at the end, the chain is done. So I
would call the done delegate. So let's build this thing in Visual Studio. Can you all see it? It's font size good
enough? Okay. So I have here prepared son, wife, and husband. So, like I said, if my son wants to call the next, we declare an action delegate, and then we call it next or whatever. And at some point, when my son is done executing
the chain of responsibility, he's going to call next. The next delegate. Okay? And, of course, the same applies for my wife. Passing an action delegate, and she calls the next action delegate next. And then the same applies for
me, the husband. So I'll call next or next action. It doesn't really matter how we call it. And I call next again. So now all the individual elements are prepared to actually be hooked together. So let's call this thing. So, like I said, we need to call son. And since
it's a method which accepts a method again, which fulfills the signature of returning void and getting in an action delegate, we can then declare an action delegate of type Action. And call the next one, which is wife. And then again an action delegate, and we call husband.
And now we need to have a done condition. Right? So we can do this by, for example, declaring an anonymous action delegate. I call it here done. And let's implement it pretty quickly. And let's just output done. Output is an
extension method which does nothing more than just calling console.WriteLine in a simpler way. And then I just pass in the done delegate. And, of course, I need to execute it. So this is simple synchronous chain of responsibility. Let's execute it in Visual Studio.
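For readers following along without the video, here is a minimal sketch of the hand-wired synchronous version. The names (Son, Wife, Husband, Output) follow the talk; the exact code on screen may differ slightly:

```csharp
using System;
using NUnit.Framework; // assuming NUnit as the test framework used in the talk

public static class OutputExtensions
{
    // Output() just calls Console.WriteLine in a simpler way.
    public static void Output(this string value) => Console.WriteLine(value);
}

public class SynchronousChain
{
    // Each link has an abstract notion of the next link and decides when to call it.
    static void Son(Action next)     { "son".Output();     next(); }
    static void Wife(Action next)    { "wife".Output();    next(); }
    static void Husband(Action next) { "husband".Output(); next(); }

    [Test]
    public void ManualDishwasherUnloading()
    {
        // Hand-wired chain: Son wraps Wife, Wife wraps Husband, Husband calls done.
        Action done = () => "done".Output();
        Son(() => Wife(() => Husband(done)));
    }
}
```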
So what we see now, we get son, wife, husband done. Really simple. So let's go back to the slides and do a brief recap. So what's going to happen during runtime is
that we get a pretty deep call stack. So we get here a depth of three. Three elements. Well, done as well. But I omitted this on this drawing. But basically, we go down. My son is the outer element. And the outer element
wraps all the inner elements. And then calls wife next and husband. So what happens is during runtime, we basically go down the call stack and then we go up again. So the benefit of this pattern is because each link element has access to the next one, each link element can wrap the execution of the entire pipeline which is coming after
that element. So, for example, we can wrap it in try catches. We can use using statements around the next execution and so on. But, of course, if you would need to write the code I just wrote in the previous code examples by hand, each and every time we build the
chain of responsibility, this would be really, really cumbersome. And in a real production scenario, we want to be able to compose this by some maybe abstract notion, by maybe applying some reflection stuff to load all the link
elements and compose it dynamically together. Because we want to benefit from the open-close principle or the extensibility of this pattern. So, how do we write this pattern in a more generic way? I'm sorry. I'm missing
something. I'm going to fix this later. So, okay. So, let's build this one in code. So the more generic version. So the goal is we want to call, we want to get the same output. We want to see on the screen son, wife, husband done. So how do we do that? So what we need to do, we need to
have a container for these actions. What would be a good container for these actions so that we can call them in a more generic way? Any ideas from the audience? A list? Yeah. Cool. So I call it actions. Yep. Yes. Thanks.
You're my hero. Here it is. Okay. So I call a list of actions. So, but what kind of types would we pass in here? Any ideas? Probably action. Is this enough? No, it's not.
Because we want to put into this list methods which accept a method of type Action. So what we can do is we declare this one as Action of type Action. So we have a
list which is of type action of action. So what does that mean? That means we have delegates in there, which return void, which accept the parameter of type action. Okay? Everyone still here? Good. So then, of course,
we can add to this list our stuff. So we can enter here. We can add, for example, my son. And then we can add my wife. And then we can add myself. And, of course, also
we can declare a done delegate. So I'm going to copy paste this one. Forgive me for copy pasting code. And I'm going to add this one as well. Here. So of type done. Of course, unfortunately, this doesn't really compile,
right? Because this method doesn't have the signature of action of action. So what we need to do is we need to basically insert here a lambda delegate, which then calls the action. And then it should work. It doesn't.
Sorry? Yes. Exactly. Because what we're doing is this delegate actually gets in again, for example, the next one. But here we don't want to call the next one. We
just ignore it and we just want to call the done method. Correct. So now, how do we execute this? Well, I have here a method invoke. And I pass in here the list of actions of actions. So I call this actions. And the
simplest way to do this is I declare a current index. And assign it to zero. Sorry. Okay. And then I fix my spelling mistakes. Thanks. And now, what else do I need?
Well, I have now a list of actions of actions. And I have a current index. So I'm going to apply here recursion. So of course, for the recursion, what I need is an exit condition, a done condition, to abort the recursion. So one way to do it is we could
say, for example, if current index equals, equals actions dot count. A really simple one. Then I'm just returning from my list. I'm returning from my method. And then the next thing, what I want to do is I want to get the first thing I need to execute. So I do here actions. I get
over the indexer. I get the element of the current index. Now I have the action. So what do I execute right now to basically spin off the chain of responsibility? Any ideas? What do I need to execute? Current with the
next. So current, like you said here, is action, right? So I'm executing this one. And action again accepts an action, right? So what we can do is we can basically declare an inner delegate again. And then we invoke ourselves. So we apply the recursion inside the lambda
expression here. So we invoke the invoke method, pass in again the action lists, and then we pass in current index and increase the current index. Okay? Any questions so far? Is that clear for everyone? Because we're
going to build on this pattern. Cool. So now what we have to do is we have here this chain of responsibility. And now we can call this invoke method. We pass in the action list. And since current index is already set to zero by default, we don't need to pass
in anything else. But when we call it internally, we increase the current index by one. So if I didn't do anything wrong, that should work. Let's try it out. Let's execute this test. And as we can see, surprise, surprise, we have the same result as before. And now we have
a generic pattern to basically extend it. So assuming my wife and I get more kids, more sons, or wishful thinking, another wife, let's see. As you can see here,
we now have a much deeper call stack. Oh, it's not ending. It goes forever until we have a stack overflow exception. Hmm. Any ideas why? So, but where? Like
this? Just fix the bug. Luckily we have unit tests.
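A sketch of the generic version just assembled, including the index increment whose absence caused the stack overflow a moment ago (reusing the Output() helper from the earlier sketch; names are assumptions):

```csharp
using System;
using System.Collections.Generic;

public class GenericSynchronousChain
{
    // Son, Wife and Husband have the same shape as in the previous sketch.
    static void Son(Action next)     { "son".Output();     next(); }
    static void Wife(Action next)    { "wife".Output();    next(); }
    static void Husband(Action next) { "husband".Output(); next(); }

    static void Invoke(List<Action<Action>> actions, int currentIndex = 0)
    {
        if (currentIndex == actions.Count)
        {
            return; // exit condition that stops the recursion
        }

        var action = actions[currentIndex];
        // The lambda we pass in is the "next" delegate; forgetting the +1 here
        // is exactly the bug that blew up the call stack a moment ago.
        action(() => Invoke(actions, currentIndex + 1));
    }

    public void GenericDishwasherUnloading()
    {
        var actions = new List<Action<Action>>
        {
            Son,
            Wife,
            Husband,
            next => "done".Output(), // ignore next: this link ends the chain
        };

        Invoke(actions);
    }
}
```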
Okay. So, but what does this allow us to do? Well, what we can do is we can now, for example, introduce filters into that one. I just showed you how we added more sons and, wishful thinking, another wife. Let's
reduce this a little bit again. So we could also, for example, say, okay, if anything happens in there, we don't really want to abort the whole pipeline. We just want to, for example, catch exceptions. So we can do this by declaring another method. Let's call it static void filter exceptions. And we accept an action
delegate of next. And then what we can do is we can do a try catch, and we can call the next delegate. And, well, instead of rethrowing, we can just basically, whatever we want to do, but let's see,
we just output the exception message into the console window. But we don't want to raise the exception to the caller. We just abort it. So what we can do is assuming we have somewhere, let's call it static void evil method. And again, this one gets an
action delegate of next. And this evil method just basically throws a new invalid operation exception. And we plug it in somewhere. Let's say we plug it
in here. Just before the done. What we can now do is when we execute this code again, well, of course, we will see the invalid operation being raised to the caller, ergo the unit test. But if we enter at any place here in the pipeline, we add this
filter exception. What's going to happen is that the test now is green. We just log out the message "Operation is not valid due to the current state of the object" because we filtered it out, but the exception will not be raised to the caller. Okay.
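The two extra links just described might look roughly like this, added next to the earlier sketch (the exact logging in the catch block is an assumption):

```csharp
// Swallows any exception thrown further down the chain instead of letting it bubble up.
static void FilterExceptions(Action next)
{
    try
    {
        next();
    }
    catch (Exception ex)
    {
        ex.Message.Output(); // log it, but don't raise it to the caller
    }
}

static void EvilMethod(Action next)
{
    throw new InvalidOperationException();
}

// FilterExceptions is added somewhere near the top of the list,
// EvilMethod is plugged in just before the done link.
```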
So let's see what happens during runtime in this piece of code. So I'll set a break point in this thing here. And let's see if I execute it in the debugger. And we step into the whole process. So as we can see, we have now six items in that list and the current index is zero. And by the way, this is OzCode. There's a booth down there. I'm using that as well. So now I'm going into the son one. And then I'm going into next. Going back again. Recursion. Picking the next element. And going to the next one and the next one. Let's see.
I'm diving here. The call stack. Sorry. And deeper and deeper into the call stack. So if you would look into the call stack window, let's see. It's a bit small. But what you're going to see is the
call stack, the more elements we have in that list, the deeper the call stack gets. So at some point, this pattern has a limitation, because we're going to explode the call stack. And of course, if something deep down in the chain of responsibility happens, that exception is going to be raised up the call stack, and therefore each link element in that call stack can influence the behavior but also the call stack of that exception. Because, well, that's the stack trace we're going to see. So you should try to avoid putting too many things into the chain
of responsibility. It's hard to say how many things you should actually put in. But, for example, in NServiceBus, in an incoming pipeline and an outgoing pipeline together (we're going to talk about that a bit later), we probably have roughly 40 elements or so, not really much more. Because each individual element in that chain of responsibility will also influence the
throughput of a message handling pipeline, of course. Okay? So let me abort that one, and let's briefly go back to the slides. So I showed you that we can essentially just add filter elements like
exception catching and so on. But since you know me, and I've been talking a lot about async await, and I'm also going to talk in this talk about message handling, well, the best thing to
actually use in an IO-bound domain is async await and task-returning APIs. So what I'm going to show you is how you use this synchronous version and refactor it to an async-enabled chain of responsibility pattern. So we're going to build this together. So let me switch again to
Visual Studio. So let's do the generic, no, so let's do the step-by-step approach first. So we have again here, son, wife, and husband. So in order to not clash too many things, what I'm going to
do is I'm going to copy-paste these methods here, and we're going to call them SonAsync and WifeAsync and HusbandAsync. So in order to have a fully async chain of responsibility, what we need to do is we need to change the return type of this method to what? Task, exactly. So we return a Task. So, in order that we can float the asynchronous execution context through everything, we need to be
async all the way, remember? So in order to do that, we also need to change the next delegate. So the next delegate in this example needs to become a Func of Task. So next needs to return a task. And what we can do now is we can call await next. So let's do that. We call await next. And of course, because we now have an await statement here, we need to mark the method as async. So now what we can do is, and that's really beneficial if you have deep chains of responsibility,
is if you only do something before the await call, and you have only one single await call in a method, the best thing what you can do is return the next instead of awaiting it, and then you need to remove the async prefix from the method. So what's going to happen here is the compiler will not
generate the state machines under the hood, and therefore it will not generate the necessary classes which do the allocations and the other for the state machine, so you will just return the task here. And it's a bit more efficient during runtime. But of
course, you can no longer wrap the next inside a try catch or a using statement. That only works if we have only one next and we only do something before it. Okay. But now let's change this on all the things. So here again, we return a Task, and for wife we return the next. And then here again, the same
thing. Okay. And also task returning, and then return next. So that's it. So how do we call this one? Well, first thing, await, right? So again, we call son, and then we call wife, but of course
async. And wife async. And then again, husband async. And then, of course, we also need a done async. So let's copy paste and let's briefly change
the signature of this done thing to be async compatible. So we make again a Func of Task here. And we call it done. And now we need to return a Task. So since we're not doing anything async in here, what we can do is, instead of marking this one as async, we can just return, if you're on, for example, .NET 4.6, Task.CompletedTask, which is a cached, already completed task you can return. Or if you have a lower .NET version, you can use Task.FromResult with zero, one, false, true, whatever you want to use, right? As long as you make it consistent. And then, again, we call that done delegate inside this pipeline. And that's it. As you can see here,
of course, again, we could mark these delegates as async as well, and we could, again, await inside these lambda expressions. Again, if we do that, we would also increase the call stack unnecessarily, because we have a lot of await statements in there. We get a lot of compiler-generated code, which then during runtime bloats the allocations. So this
optimization is really important if you do high-throughput systems. But if not, it doesn't really matter for most of the cases. So now we have son, wife, husband, done. So the same thing is now async. So what we can do is, well, this took maybe 60 milliseconds, so let's do a quick demonstration so that you actually believe that the thing is async. So let's do, for example, await next, and then let's mark this one as async. And we can do an await Task.Delay, let's say of one second, 1,000 milliseconds, and now if we execute this one, again, what you're going to see at some point is it's going to take more than one second. But it's not blocking. We're not blocking any threads. A completely async-enabled pipeline.
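A sketch of the async variant just built, including the "return the task instead of awaiting it" optimization and the Task.Delay used for the demonstration (names and details are assumptions):

```csharp
using System;
using System.Threading.Tasks;

public class AsynchronousChain
{
    static async Task SonAsync(Func<Task> next)
    {
        "son".Output();
        await Task.Delay(1000); // only for the demo, to show nothing is blocked
        await next();
    }

    // Only one call to next and nothing after it: return the task instead of awaiting,
    // so the compiler doesn't generate a state machine for this method.
    static Task WifeAsync(Func<Task> next)
    {
        "wife".Output();
        return next();
    }

    static Task HusbandAsync(Func<Task> next)
    {
        "husband".Output();
        return next();
    }

    public Task ManualDishwasherUnloadingAsync()
    {
        Func<Task> done = () =>
        {
            "done".Output();
            return Task.CompletedTask; // Task.FromResult(0) on older frameworks
        };

        return SonAsync(() => WifeAsync(() => HusbandAsync(done)));
    }
}
```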
But again, we want a generic version of this one, because we don't want to write this by hand every time. So let us copy-paste the previous synchronous version, because, surprise, surprise, it's not really different. So let us do this. So what do we need to change? Let me copy-paste parts of this test for the infrastructure up here.
And let's call it async automatic dishwasher unloading. Of course, we need this done delegate again. So any ideas what we need to change? Again, we have a list of action of action. What do we
have now for the async version? Sorry? I need to await my invoke. Yes. Correct. So you're saying we need to return here a task. First step. Cool. Thanks.
A list of what? List of action of func of action. Okay. Let me read that to you. I have a list of methods which return void, which accepts a function which
returns a function which returns void. So what is the signature of this method? Remember? Have a look at it again. So the signature is it returns a task. So it's a
func of task, which accepts a func of task. Okay. But, okay. So let's change it. So we call this one invoke async, because we are now returning a task. And now we're saying, okay, we have a function which gets a
function, which returns a task, and now what we return is a task. Okay? That's the signature we want here. And again, the same current index and stuff, but now we can use the same trick, because we want to avoid awaiting unnecessarily. So what we do here is we have two exit points in this method. One is the exit condition for the recursion. So we return Task.FromResult there. We grab the first one. I'm still calling this one action. It's not entirely accurate, but please bear with me. And I'm passing in, again, the list of funcs, but, of course, now I need to call the asynchronous version. And now it's important that I return the task. Okay? Remember, we avoid async void; a Task-returning method becomes the new
void. So now we are able to entirely float the asynchronous execution through everything. So now we mark this one as async Task. And, of course, now we need to change this list here as well to a list of Func of Task. And
then what we need to do, we need to pass in the asynchronous versions of the methods. So SonAsync, and, of course, also WifeAsync, and, of course, also HusbandAsync. Of course, the filter methods, evil methods, and so
on would also need to change in my example. But I'm not doing this for this specific example. So now execute this. And as we can see here, we have son, wife, husband, and it also takes a bit more than a second because we still have the delay inside the SonAsync method. Okay. So that's an asynchronous chain of responsibility pattern implemented live on stage at NDC. Are there any questions here so far? None. Okay.
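The generic asynchronous version could look like this sketch (again with assumed names; Task.CompletedTask stands in for the Task.FromResult variant mentioned for older frameworks):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class GenericAsynchronousChain
{
    // Each link returns a Task and accepts the next link as a Func<Task>.
    static Task InvokeAsync(List<Func<Func<Task>, Task>> actions, int currentIndex = 0)
    {
        if (currentIndex == actions.Count)
        {
            return Task.CompletedTask; // exit condition for the recursion
        }

        var action = actions[currentIndex];
        // Return the task instead of awaiting it; the recursion floats the
        // asynchronous execution all the way through.
        return action(() => InvokeAsync(actions, currentIndex + 1));
    }

    public Task GenericDishwasherUnloadingAsync()
    {
        var actions = new List<Func<Func<Task>, Task>>
        {
            SonAsync,
            WifeAsync,
            HusbandAsync,
            next => { "done".Output(); return Task.CompletedTask; },
        };

        return InvokeAsync(actions);
    }

    static Task SonAsync(Func<Task> next)     { "son".Output();     return next(); }
    static Task WifeAsync(Func<Task> next)    { "wife".Output();    return next(); }
    static Task HusbandAsync(Func<Task> next) { "husband".Output(); return next(); }
}
```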
So let me briefly go back to explain what happens in a message-based architecture. So, for example, with NServiceBus or maybe also other message-based systems like MassTransit, Rhino ESB, whatever, you usually have an architecture like this. You have a queue which contains your messages. And then you have something that we call, for example, a message pump. Well, if you don't have a reactive transport which pushes messages towards your code, then you have to poll the
queue for messages. For example, in Azure storage queues, you need to poll. On Azure service bus, you have a more reactive style of communication. Azure service bus pushes the messages to your code. But this is a generic approach which works for both designs. So we have a message pump. And then this message pump calls the chain of responsibility. The chain of responsibility
is these lines here. Sorry. The lines here. And in this chain of responsibility, you can have multiple elements. For example, retry on failure. So if anything happens during the message handling, like, for example, you have a lock on your database, you might think, okay, this
is a transient failure. Let us retry the execution of this code because potentially the next time or the third time it's going to be successful. So this could be a filter in this pipeline or a link which then does the retry. Then we pick something, some additional information maybe from the queuing system. Then we deserialize
the message payload. And then we determine the code we need to execute. For example, we call that message handlers in NServiceBus. Then we call this code. And this code, of course, contains the customer code and is executed. So everything inside this chain of responsibility,
including the customer code, is completely wrapped in this chain of responsibility. So any exception that is happening inside that chain is going to ripple up. And each individual element has the possibility to intercept the exception, do some logging, do retry logic, and so on. So we can build, for example, a pretty simple retryer in this chain of responsibility. So let me do that together with you again. So how would we do retries? Well, since we are already in an asynchronous context, we can, for example, say let us do a retry and
let's wait for maybe a second or so before we retry again. So I'll have a method which returns a Task again. I call it Retryer. And this method gets, again, a Func of Task. And we call it next. And, well, the simple thing we need is potentially a try catch, right? Because we're saying if anything happens, like an exception, we want to do retries. Let's say we want to try it three times. And if we still see an exception, we're going to rethrow. Sorry? Yeah, for example, so I'm doing a pretty simple, I was planning to do a really, really simple implementation, just a for loop or something, right? I'm saying, okay, let's hard-code it to three times. And what I need to do
is I need to do a try catch here. And then, of course, I need to call here the next delegate. So I'm wrapping the next delegate with an await inside the try catch. The compiler-generated code will make sure that any exception that happens is caught inside the compiler-generated code, wrapped in an ExceptionDispatchInfo, and rethrown at this exact line so that we can wrap a try catch around it. That's all done by the infrastructure. We don't need to worry about that. So we have an exception. So what we can, for example,
do is we can create an ExceptionDispatchInfo ourselves. Let's call it info. Because I want to capture the exception. So when an exception happens, what I'm going to say is ExceptionDispatchInfo.Capture. And I capture the exception that was happening here. So what this is going to do, it's going to basically freeze the stack trace for us. Because my idea is I don't want the infrastructure to modify the stack trace unnecessarily. So I'm freezing the stack trace so that later I can rethrow it without influencing the stack trace. And then, for example, I am saying since I want to retry, and at some point I want to rethrow, but only if there wasn't an exception anymore. So what I'm doing is right after the await call, I'm basically nulling this again. Because I know that when I reach this line, no exception happened, right? And the next thing I need to do, because I don't want to retry unnecessarily, I'm going to break out of the loop. A bit nasty code, but let's keep it simple. I'm breaking out of the loop so that I don't do unnecessary retries. And at the end, what I'm saying is, well, if I still have an exception, then I'm going to rethrow. And if not, I do nothing. So what I do is here: info, the Elvis operator, and then I do a Throw. So Capture returns the dispatch info, so I can assign it to this info local when I capture the exception, and of course I need to remove this
throw here. And then I have now a retry function. So let's try this. So let me code again an evil method, but a bit more sophisticated than just throwing an exception. I'll add here an evil method, again, which returns a Task, gets a Func of Task passed in, and it just throws an exception. Well, since we want to see if our retryer actually works, let's do a static int counter equals zero, and then we just say, okay, if counter is less than two, let's throw, and let's increment the counter briefly. And then of course now, because I modified the code flow, and because I'm not doing anything in here which is async, I need to return Task.CompletedTask. So let's plug in this evil method somewhere in our call stack. Let's call the evil method EvilMethodAsync so that we don't get confused. So I'll plug it in here. So let's execute that code.
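A sketch of the retryer and the evil method built in this part; the hard-coded retry count and the one-second delay between attempts follow the talk, the rest of the details are assumptions:

```csharp
using System;
using System.Runtime.ExceptionServices;
using System.Threading.Tasks;

public class RetryLinks
{
    // Retries the rest of the chain up to three times, then rethrows
    // the captured exception with its original stack trace.
    static async Task Retryer(Func<Task> next)
    {
        ExceptionDispatchInfo info = null;
        for (var i = 0; i < 3; i++)
        {
            try
            {
                await next();
                info = null;  // reaching this line means no exception happened
                break;        // don't retry unnecessarily
            }
            catch (Exception ex)
            {
                info = ExceptionDispatchInfo.Capture(ex); // freeze the stack trace
                await Task.Delay(1000); // wait a bit before the next attempt
            }
        }

        info?.Throw(); // rethrow only if the last attempt still failed
    }

    static int counter;

    // Fails the first two times it is called, then just completes.
    static Task EvilMethodAsync(Func<Task> next)
    {
        if (counter++ < 2)
        {
            throw new InvalidOperationException();
        }

        return Task.CompletedTask; // nothing async in here
    }
}
```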
Okay. Well, I forgot to introduce the retryer. So let's introduce the retryer on the top level. So I just, because the pattern is so extensible, I just add the retryer to the list and execute the thing again.
So what we're seeing now is we see son, wife, husband, wife, husband, and then it's done. And if we remove the if statement and the counter, let's do this. And execute it again. What we're going to
see is an invalid operation again raised to the caller. Okay. So we now have a fully async chain of responsibility. We have a retryer, we have exception filters. So we basically have everything to build robust service buses or robust HTTP / Web API things, or OWIN pipelines. We all understand that. So, but, usually, I mean, I did all of that with methods, right? With action
delegates, with function delegates. And since we are coming usually from an object-oriented world, we might want something a bit more object-oriented. So I have here on GitHub an example. So what we can do is we can declare our own link elements. So
an implementation of a link element would, for example, look like this: an interface called a link element, which accepts something like a message or whatever, which gets passed into that method. So that's the message which is coming from the queuing system. And then again, we're just calling a Func of Task. And then we just need an infrastructure which
manages these implementations of these linked element interfaces, and we can basically almost copy-paste the code we just wrote together on stage into what I call here now, chain. And this chain, as you can see, is just holding an enumerable of linked elements, and we
have an invoke method which applies exactly the same code, which we just wrote in the unit test, here to a more object-oriented design. And we just pick the first link element, call on that interface the invoke method, pass the necessary transport message in, and
then call the inner invoke.
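A condensed sketch of the object-oriented variant described here; the interface and class names are assumptions based on the talk, not the exact GitHub code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Placeholder for the raw message coming from the queuing system.
public class TransportMessage { }

public interface ILinkElement
{
    Task Invoke(TransportMessage message, Func<Task> next);
}

public class Chain
{
    readonly List<ILinkElement> elements;

    public Chain(IEnumerable<ILinkElement> elements)
    {
        this.elements = elements.ToList();
    }

    public Task Invoke(TransportMessage message)
    {
        return InnerInvoke(message, currentIndex: 0);
    }

    // Same recursion as in the unit test, applied to the interface.
    Task InnerInvoke(TransportMessage message, int currentIndex)
    {
        if (currentIndex == elements.Count)
        {
            return Task.CompletedTask;
        }

        var element = elements[currentIndex];
        return element.Invoke(message, () => InnerInvoke(message, currentIndex + 1));
    }
}
```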
So what you've just seen here is that the transport message itself, that's the state, right? The state we're passing into this chain of responsibility. And that state is really important because, I mean, a message handling pipeline without a transport message wouldn't make sense, right? But in a more generic context like a Web API or OWIN, we actually want more than just the raw HTTP call. We want, for example, access to headers. We also want that in a service bus, for example. So what we can
do is we can introduce what most frameworks out there call a context. A context is basically a generic container which holds all the necessary states. And I'm showing you a brief implementation of such a thing,
what it could look like. So here is a full implementation of such a chain of responsibility which also contains state. So this looks now like the following. We have these elements, these link elements, and they implement an interface called a link element. And this interface now gets an input context and an output context. So we have something that is going into the element, and we have something that is going out of the element. And what we say here is that this
thing needs to inherit from an interface or an abstract base class. Here I have just the base class, which I call context. And what we, for example, in NServiceBus do is we have a dictionary of string to object. It's not the most efficient implementation in
terms of allocations, but it allows us to basically, in a really easy and simple way, to compose anything our users need to float into the pipeline into that dictionary, and this is then flowing through the whole asynchronous context without us needing to use things like thread
static, which don't work with async await, or async locals, or even per-thread scope or other weird scopes on the users' containers. So that's a really simple approach.
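A sketch of such a context container, roughly as described: a dictionary of string to object with typed accessors and parent-state merging (the exact API is an assumption):

```csharp
using System.Collections.Generic;

// Generic state container flowing through the whole chain; no ThreadStatic needed.
public abstract class Context
{
    readonly Dictionary<string, object> stash = new Dictionary<string, object>();

    protected Context(Context parent = null)
    {
        // Merge the parent's state so inherited contexts see everything upstream.
        if (parent != null)
        {
            foreach (var pair in parent.stash)
            {
                stash[pair.Key] = pair.Value;
            }
        }
    }

    public void Set<T>(T value) => stash[typeof(T).FullName] = value;

    public T Get<T>() => (T)stash[typeof(T).FullName];
}
```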
And here, for example, in this link element, what we do then is we create things that inherit from the context. So, for example, we have here what we call an incoming physical context. So this incoming physical context will then contain the transport message and everything else which is important for the context. So I'm briefly showing you this implementation. We have now here a property of type
transport message, but users can then add all the necessary stuff to this incoming context. But the thing is, we have a little problem here in our pipeline, because when we start to generically extend this
chain of responsibility with linked elements, we have basically something, the context, which contains the state, and all these elements are going to basically mutate that state, right? So that's, well, not really a pure design, but we decided, well, we were going
to use this. But still we need a way for the end users to know in which part of the pipeline the link element is placed. Because when we want to access things on the message, when the message is deserialized, well, how does the user know where to put the element in? The user would need to have a lot of knowledge about where to place himself relative to the framework. So what we decided to introduce is, we're saying here on a high level, we have actually two phases. We
call them stages. We're saying, okay, we have the physical phase. The physical phase is when the transport message comes from the wire or the HTTP payload, and then we have everything executed in that physical phase which gets only access to the raw message information, like headers and like the raw stream payload, but not the
deserialized stuff. And then we have the logical phase, and the logical phase is when the message is deserialized. And like I said, we call these stages. And now we have a strongly typed way for the end users to extend this pipeline. And what we're going to do,
for example, we detect based on reflection what type of context you have, either it's a physical context or a logical context, and then we're doing a topological sort of the linked elements we have in the pipeline, and we put the user automatically into the right stage of the
pipeline without the user needing to worry about the exact placement in the pipeline. But in order to do that, we need a thing that we call a stage connector. A stage connector bridges from one stage, the physical stage, to the logical stage. And we can do that pretty simply. I already showed it a little bit: in a generic sense, we just declare a link element of type TInContext to TOutContext. So now a normal element, which is placed, for example, in a physical stage, will have a TInContext of type physical context and a TOutContext of type physical context. In a logical stage, you will have a TInContext of type logical context and a TOutContext of type logical context. And
the connector then has physical context to logical context. And we can see this here in my code example. Hold on a second. So, as you can see here, the physical to logical connector now inherits from element
connector incoming physical to incoming logical. Normally, we would, for example, here call our JSON deserializer or whatever. Then we would deserialize the payload. And then we create an incoming logical context. And we pass in the logical message, so the deserialized payload. And we pass in the parent context. And because this one is a dictionary of string to object, we can essentially merge together all the state in the pipeline into that context. And then we have an inheritance hierarchy of context information, which then automatically floats through the whole pipeline.
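In code, the stage connector just described might look roughly like this, reusing the Context and TransportMessage sketches above; the type names are modeled on the talk, and the real NServiceBus types differ in detail:

```csharp
using System;
using System.Threading.Tasks;

// A link element bridges from an incoming context type to an outgoing context type.
public interface ILinkElement<TInContext, TOutContext>
    where TInContext : Context
    where TOutContext : Context
{
    Task Invoke(TInContext context, Func<TOutContext, Task> next);
}

public class IncomingPhysicalContext : Context
{
    public TransportMessage Message { get; set; }
}

public class IncomingLogicalContext : Context
{
    public IncomingLogicalContext(object message, Context parent) : base(parent)
    {
        LogicalMessage = message;
    }

    public object LogicalMessage { get; }
}

// Connects the physical stage to the logical stage by deserializing the payload.
public class PhysicalToLogicalConnector
    : ILinkElement<IncomingPhysicalContext, IncomingLogicalContext>
{
    public Task Invoke(IncomingPhysicalContext context, Func<IncomingLogicalContext, Task> next)
    {
        object logicalMessage = Deserialize(context.Message); // e.g. a JSON deserializer
        var logicalContext = new IncomingLogicalContext(logicalMessage, context);
        return next(logicalContext);
    }

    static object Deserialize(TransportMessage message) =>
        new object(); // placeholder for the real deserialization
}
```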
And, of course, all the other elements, they just need to say, okay, I'm a link element of incoming logical context, or I'm a link element of incoming physical context, and then you will be automatically placed in the right part of the pipeline. The complexity of this thing
becomes a bit apparent when you look at what the behavior chain now becomes. Because we now have a lot of generics, and we don't have that much flexibility like in F#, we need to do a little trickery with reflection. So what we are going to do is we have here now an abstract notion of
a linked element, which I call here in this code example element instance. And what it does is it does a little bit of reflection. It creates a generic thing called step invoker, which then contains the T in context and T out
context. And this one automatically basically casts the step or the link element to the right generic type during runtime. This is one of the drawbacks of this pattern and the generics. But I think it's worth it so that our users can have a better code flow. And, of course, it also
needs to cast the context. And, well, that's it. We now have a completely asynchronous pipeline, in which our users can use await statements. We can call into persistence. We never block the code. But also, we are strongly typed so that our users don't need to care that
much about where they're getting placed. Yeah. And, of course, the NServiceBus pipeline, for example, is even a bit more complex. Because, for example, we do a number of retries. And if it still fails after, let's say, five retries, what we're going to do is we're going to move the message into the error queue so that that message doesn't block your other messages in the queue. And for that, we, for example, need to fork off another pipeline or another chain of responsibility, which we call the forward or the error pipeline. And for that, we have a thing that we call a fork connector. And fork connectors, from a generic perspective, are not different from all the other elements. But the implementation internally is a bit different. But the code is completely the same. We invoke it with this generic step invoker. But this then has the possibility to say, okay, I'm bridging from a context to the same context, so the stage remains the same. But I'm forking off into a subchain of responsibility. And for even more complex scenarios, we have what we call stage fork connectors, which bridge from one context to two contexts and fork off into another
pipeline. So in the end, what we have is that NServiceBus is not a chain of responsibility. It's actually a tree of responsibilities during runtime. And, yeah, well, that can, of course, lead to head explosions. But like I said, that's all
abstracted away in a pretty simple way from the end users. And we take care, by using topological sorts and so on before we actually execute messages, that everything is in the right place. And it's going to work during runtime in the most efficient way possible. A little piece of information: NServiceBus V6 implements the pattern I just showed you. That's the next major version that's coming out soon. It's async all the way. It uses the chain of responsibility pattern heavily. If you're curious, you can go to docs.particular.net and you'll see how this
pattern is implemented. We call them behaviors, not link elements. And, of course, the code is also on GitHub. And a brief recap: I think the chain of responsibility pattern, or, as some others also call it, Russian dolls, coming from JavaScript, is a really flexible pattern, ideally suited to build robust pipelines in IO-bound domains. The pattern is used, as I showed, in many open source projects and infrastructure things. And I'll say: know it, learn it, love it. I hope that's not copyrighted by the .NET Rocks guys, but maybe it is. So, yeah, my slides and links are available on
github.com. You can have a look at it. I have also other implementations of a potential approach to reduce, for example, the nesting in the call stack. I call that partial dolls, or partial chain of responsibility, if you're curious how that's implemented. And a few other
attempts to make it, for example, recursion-free and so on. So if you want to deep dive into that stuff, feel free to have a look at the code. And if you want to do a brief recap on async await and how, for example, a message pump works, which I haven't shown here, there's my webinar on go.particular.net. I implement a complete message pump live in that webinar with TPL and async. You can just register on this link. And yeah, if you have any questions, feel free to shoot me any questions. I will stay here until roughly 6 p.m. at the Particular booth. If you have questions
afterwards, just approach me. Or right now. Any questions? Sorry, which one? This one? Any other questions? Why do we move it out? Okay. Well, so, for
example, well, per endpoint, which is basically what we call a thing that consumes from a queue, we have one message pump, which consumes messages. So if you
have a thousand messages, let's say you consume five, and then there is one message in there which is going to fail, and we retry and retry, and let's say you have concurrency set to just one, right? You're going to retry this one indefinitely, right? So what we do instead is, at some point we say, okay, this one has now failed n number of times.
It doesn't really make sense to retry it again, because it's more of a permanent failure. So we move it out of the queue so that the processing can continue, and all the other messages in the queue are continued. And then we basically raise an event to the operators of the system, saying, hey, there is a message in there which you need to have a look at and decide for yourself what
you want to do with that message. Yes, you can. Yeah, of course. Yeah. I have it inside. Yes, yeah. But that's an implementation detail, yeah. Any other questions?
Yeah. How do you get the concurrency? In parallel. OK. So what I like to say, it's concurrent. So it's not parallel, because we are doing async await.
It can be parallel, but it doesn't necessarily have to be. But what you can do is, inside, for example, if you have a tree of responsibilities, each fork connector can, for example, decide to create multiple forks. And instead of awaiting each fork,
you can basically spawn off all the forks and then do a Task.WhenAll on these forks. And then you have concurrent execution of forks. But what, for example, we also do, let me briefly show you this, here in this picture. So what the message pump does is it has a concurrency setting. For example, it can be 100, right? So what we do is we basically peek 100 messages, and then we have 100 concurrent chains of responsibility with potentially multiple forks inside the chain of responsibility running on one or multiple threads.
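A fork connector that runs its sub-chains concurrently, as just described, might do something like this sketch (hypothetical names, reusing the Chain and context sketches from earlier):

```csharp
using System.Linq;
using System.Threading.Tasks;

public class ConcurrentForkConnector
{
    // Spawn all fork pipelines without awaiting them one by one,
    // then await them together; the forks run concurrently, not necessarily in parallel.
    public Task Invoke(IncomingPhysicalContext context, Chain[] forks)
    {
        var tasks = forks.Select(fork => fork.Invoke(context.Message));
        return Task.WhenAll(tasks);
    }
}
```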
Does that answer your question? Cool. Any other questions? Cool. Then, thanks.