Concurrency in .NET, 2013 version
Formal Metadata
Title: Concurrency in .NET
Number of Parts: 150
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers: 10.5446/51474 (DOI)
NDC Oslo 2013, Part 67 / 150
Transcript: English(auto-generated)
00:05
Anyway, my name is Michael Heydt. I'm glad to be here, and that everyone showed up for my talk. I work for SunGard Consulting Services. It says Global Services up there; we changed our name again on June 1st, back to Consulting Services.
00:23
And what I want to talk about today is a number of different things that I've been doing with a lot of the frameworks for concurrency and parallel programming in .NET.
00:40
This first opening slide has a little bit of an intro to some of the things we're going to look at, but I'll give a little more detail here. I'm not running this full screen because, of all things, it turns out that you can't swipe between a PowerPoint presentation and another desktop with Mountain Lion.
01:01
I got the new one, Mavericks, yesterday, and I was tempted to install it, but it said virtual machines don't run on it yet. So I was like, I can't do that. So anyway, what I do for SunGard is I run a capital markets advanced technology user experience group out of New York City. So I do a lot of work with a lot of investment banks,
01:26
sitting on trading desks, building responsive trading applications for traders in these environments. What I have today is a lot of the stuff that I've built over the years,
01:40
I've kind of been consolidating into some presentations and kind of giving these and revising it. And to be honest, this one turned out to be completely rewritten from when I gave it three weeks ago at another user's group. So we'll see how it goes. But it does a lot more than what we were doing before. But I built trading desktops for these banks. I also do a lot of stuff with seamless applications and natural user interface and different types of things.
02:06
But my bread and butter is building these desktops that can support a lot of transactions coming in and updating the screen very frequently. And over the years, a lot of nice things have been added to .NET to help you out with these problems.
02:27
So what I'm specifically going to talk about are the concepts of concurrency and responsive user interface in .NET. Various combinations of task parallel library, data flow, RX, and async await.
02:42
I'm going to try not to dive too much into the theory of any of these. It's not a how-did-we-implement-this-in-the-framework kind of talk, which it was a couple of weeks ago, and then everybody gets mad at me: well, we didn't really get to the good stuff. So this is primarily going to be demo driven, with some slides and some talk about how some of this stuff works,
03:03
but shows you how you can use these different things to get things done in your application. And there's some patterns for each of these. I was talking about patterns and various capabilities. But we'll look at some of these common patterns for each library and how each library supports concurrency and responsiveness in your applications.
03:30
There's also one question I always got over the years, too: data flow versus Rx, where do you use one or the other? Well, I finally figured that out like two weeks ago. It was part of getting the presentation retooled, because it changed the way I thought about everything,
03:45
which is kind of nice when you have an epiphany like that. And I'll show you this, and then anything else that you want to talk about. So let me ask: who's done much programming with the Task Parallel Library? Can you raise hands?
04:03
Okay, it's a few people, not a lot. Okay, which surprises me because this stuff's so neat, especially if you're a guy trying to do threads over the years and years and years trying to deal with this. You know, it's a small amount, so I'll cover some of the concepts in task parallel library to show how this works because these things build up on each other.
04:22
Anybody use data flow? Good. That's kind of the bread and butter of what I do and a big part of this demonstration. The end demonstration, which we'll be looking at, is mostly Rx and data flow. Reactive Extensions?
04:42
Okay, those of you who raised your hand: have you used it for anything other than tracking the mouse and stuff like that? Okay, just one that I could tell. So it's got some really neat things in it that can help you out with these types of apps. Let's see, what else is there?
05:03
Async await. Anybody programming regularly using that? Getting there, yeah. I always get confused whether it's .NET 4.5 or C# 5. I can't use it where I'm working right now at an investment bank, because we just went to .NET 4. So I can't use that, and I'm finding out the limitations of TPL.
05:23
You know, I've been doing 4.5 programming for a long time. So these are the gist of the things that you'll use in a concurrency toolkit. These are the things that are built into the framework. I checked over the last couple of weeks to see how many NuGet extensions there are for every one of these libraries
05:42
that you can go and get and help yourself out with. So the task parallel library, I've got some bullets here. It abstracts units of work for concurrency. I like to think in that unit of work type of model because I'll get a little more depth on this, but it provides you basic flow of data through your application.
06:04
A lot of times when you're building applications and you want to do something in the background, it's simplistic to think it's done in the background, I'm just going to put it on the UI. Well, a lot of times you want to take it from a background task to another background task to another background task, do a whole bunch of different things, and then put it up on the UI.
06:22
So it starts to give you this type of capability, which threads do not. And it uses the concepts of promises and continuations; I guess the .NET world calls them futures, and promises is kind of the JavaScript term. Who's familiar with promises?
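To make the promise idea concrete, here is a minimal sketch in modern C# (not code from the talk; `ExpensiveLookup` is an illustrative stand-in) of a `Task<T>` as a promise, consumed with async/await:

```csharp
using System;
using System.Threading.Tasks;

class PromiseSketch
{
    static async Task Main()
    {
        // A Task<int> is a promise: work that will produce a result later.
        Task<int> promise = Task.Run(() => ExpensiveLookup(42));

        // await resumes here when the promise is fulfilled,
        // without blocking the calling thread.
        int result = await promise;
        Console.WriteLine($"Result: {result}");
    }

    // Stand-in for real background work; the name is made up.
    static int ExpensiveLookup(int id) => id * 2;
}
```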
06:41
Okay, we'll talk about that. Because to me, tasks are just promises. So async await turned out to be compiler-based support for some of the semantics in TPL. You can go to Lucian's talks, and he'll give you gory details on state machines
07:03
and all kinds of different things on how async await works, and it does a lot of different things. But there's some of it in some of my examples. We'll see how it comes in. Data flow gets into agent-based programming, where you start thinking of your system in terms of messages
07:20
and then pieces of code that the messages can be routed to. Each message contains data. And then they get routed to these agents where you can say: run this function on this data, run it on this many threads, and then when you're done, maybe pass it down to another agent, send it to another agent. And this is, if you ask me, the ultimate
07:42
a way of getting concurrency in your .NET applications, whether they're UI or server-side and stuff like that. It just makes a lot of things very easy. The reactive extensions started out years ago with, you know, being an implementation of the observable pattern in the .NET framework.
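The agent model just described can be sketched with data flow's simplest block, an `ActionBlock` (a hypothetical logging agent, not the talk's demo; the Dataflow types live in the `System.Threading.Tasks.Dataflow` NuGet package):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet: System.Threading.Tasks.Dataflow

class AgentSketch
{
    static async Task Main()
    {
        // An ActionBlock is the simplest agent: messages queue up to it,
        // and the delegate runs on up to MaxDegreeOfParallelism threads.
        var agent = new ActionBlock<string>(
            msg => Console.WriteLine($"processing {msg}"),
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

        for (int i = 0; i < 10; i++)
            agent.Post($"message {i}");

        agent.Complete();       // signal: no more messages
        await agent.Completion; // wait until every queued message is handled
    }
}
```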
08:01
So basically what RX ends up doing is, you know, you can take any IEnumerable, make it an IObservable, and then from that, as new events come into the IEnumerable, you can call a function and have code executed on it. And that's the very, very basic part of that RX world.
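That very basic Rx flow might look roughly like this (a sketch using the System.Reactive NuGet package; the names are illustrative):

```csharp
using System;
using System.Linq;
using System.Reactive.Linq; // NuGet: System.Reactive

class RxSketch
{
    static void Main()
    {
        // Any IEnumerable<T> can become an IObservable<T>...
        IObservable<int> ticks = Enumerable.Range(1, 5).ToObservable();

        // ...and Subscribe runs a function as each element arrives.
        ticks.Where(n => n % 2 == 1)
             .Subscribe(n => Console.WriteLine($"odd tick: {n}"));
    }
}
```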
08:23
There's a lot to it. I'll show you some practical things in the context of, you know, getting to use it. Parallel extensions, those have been around for a while, you know, parallel for, parallel link. Anybody done parallel link work? A couple.
08:40
Yeah, it's pretty neat. I'm not going to focus a lot on that in this talk, but, you know, we'll see it in some demos. It's great for parallelizing for loops, and there's some problems with parallel for loops and stuff with synchronization and pulling data back in order, which it provides some nice constructs for. We're not going to get into those in here.
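For reference, a minimal parallel LINQ query, with `AsOrdered` as one of those constructs for pulling results back in order:

```csharp
using System;
using System.Linq;

class PlinqSketch
{
    static void Main()
    {
        var numbers = Enumerable.Range(1, 1_000_000);

        // AsParallel fans the query out across cores; AsOrdered
        // preserves the source ordering in the results.
        var squares = numbers.AsParallel()
                             .AsOrdered()
                             .Select(n => (long)n * n)
                             .Take(5)
                             .ToArray();

        Console.WriteLine(string.Join(", ", squares)); // 1, 4, 9, 16, 25
    }
}
```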
09:03
And to be honest, I'd like to summarize all this. All this tech that's in there, it's just a means to an end, the user's experience with the application. How responsive is the application? Does the application get data to the screen for the user to see in the time that it's needed? You know, I always see these metrics that,
09:22
like, you know, these trading desktops need to support thousands of messages per second being put up on the screen for the trader to be able to see. One, I've never seen it that fast, and two, no human can process the information that fast anyway. But is it a challenge? A nice technical challenge to try to solve that problem?
09:41
Yes, but there's some practical problems, practical ways to solve it, which I'm going to show you. The main demo stepping through all this is a practical way of using all this stuff to solve that and provide a good user interface. And it's to keep the UI responsive.
10:01
So let's go over. I'm going to start some demos here. The demos I have today are one that I show for the reference example, bad UI performance, okay? Everybody's kind of seen one like this, but we're going to evolve and fix it through the program. Some brief demonstrations on TPL continuation, doing some task scheduling, what that's about.
10:23
Data flow is going to end up being the bulk of this. Some Rx for generating events and doing buffering. And then the main demo is a simulated trading application that pulls all this together. Well, actually, I have a flow chart here. Why don't we get that out?
10:42
Each one of these blocks is a piece of functionality that needs to happen in a trading application, if you ask me, kind of as a reference architecture. You're going to have a lot of streams coming in, and you have to capture all those. Then we're going to batch them together into blocks instead of processing every single event
11:02
all the way to the screen every time. We'll capture them into little blocks. Once we have blocks, we'll collapse the data into what's known as conflation into just the changes that are needed. Then we'll flow it down the network to a distribution point where it's going to say, we're going to take those changes that came in from the market exchanges, let's say,
11:21
send the data this way because we're going to go update the screen right away with that. But a lot of times in these types of applications, I got data in from the market or back-end systems. I now have to go look up some more data because they might be representative codes. I need to look up data. If it's not in the system, bring it in.
11:41
So there's other work to be done. But when that's done, you hop on over and follow the enrichment path over here, go down through flattening that all back out, then rebunching for the user interface, aggregation for buffering to the display, and then getting it up onto a WPF or Win8 sample application,
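One way the batch-and-conflate stages of that flow chart might be sketched with data flow blocks (the `Tick` record, sizes, and wiring are made up for illustration, not the talk's demo code; requires modern C# and the `System.Threading.Tasks.Dataflow` package):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class PipelineSketch
{
    record Tick(string Symbol, decimal Price);

    static async Task Main()
    {
        // Stage 1: capture incoming events into batches of 100.
        var batcher = new BatchBlock<Tick>(100);

        // Stage 2: conflation — within a batch, keep only the
        // latest price per symbol, since earlier ones are stale.
        var conflater = new TransformBlock<Tick[], Tick[]>(batch =>
            batch.GroupBy(t => t.Symbol)
                 .Select(g => g.Last())
                 .ToArray());

        // Stage 3: push just the surviving changes toward the display.
        var display = new ActionBlock<Tick[]>(changes =>
            Console.WriteLine($"updating {changes.Length} cells"));

        var link = new DataflowLinkOptions { PropagateCompletion = true };
        batcher.LinkTo(conflater, link);
        conflater.LinkTo(display, link);

        for (int i = 0; i < 1000; i++)
            batcher.Post(new Tick("SYM" + i % 10, i));

        batcher.Complete();        // flush and shut the pipeline down
        await display.Completion;  // wait for the last batch to drain
    }
}
```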
12:00
which I've got a Win8 sample in this. So let's look at my try at a bad trading desktop. It's not really a trading desktop. It's just a program that I've run, I wrote over the last month to try to demonstrate, you know, different ways of going and working with performance in your UI.
12:23
And this actually does not do any threads or concurrency. Just trying to figure out how to change what's on the display effectively with like your XAML. So this has 1,000 squares pretending to be cells on a grid.
12:42
You know, in theory, we think of these maybe as securities and their attributes — you know, price, all this kind of stuff. So this grid over here has different ways of running this demo. The typical one, what this demo does, is it's going to generate, I forget how many numbers
13:01
of thousands — actually, it's going to loop 100 times and change either the text or the color of the cell each time. And it's going to tell you how long it took to do it. And it provides the ability to do it either using data binding or a more direct model: changing, like, the color property
13:23
directly on the brush that's in the UI element, without data binding, which is the preferred way. Whether you want to change the text and color — there are all kinds of different options. But let's just say we're going to change every one of these thousand cells' text and color using data binding on a high-priority background thread.
13:44
You can probably guess what's going to happen with this. It looks like nothing's happening. As I hover over any other buttons, you don't see it highlighting or anything like that. Then you wait around for seven seconds and it went to 99. You didn't see, it just displayed,
14:01
it told the display to show zero, one, two, three, four, and every one of those cells, but just jumped right to the end. Well, that wasn't a very satisfactory thing to do. So if we change this around a little bit, and I've been needing to refresh this here. And let's go down and change the text and color, but let's remove data binding and see how long this takes.
14:22
And I'm going to use a normal priority thread. So I can see that now this is going, and a lot better of experience kind of going through that. And while that's running, the other buttons highlight. You can move the window around and different things like that instead of being completely blocked.
14:41
So this is kind of the end state of that application, is let's not use data binding. Because you see here, this took three seconds, and it's technically, it's a lot faster than that, at least compared to the data binding. I've seen the data binding on this take like 17 seconds or something like that, but this stuff usually runs pretty fast.
15:00
So that's generally one of the problems. I've walked into clients where I've been asked to redesign their trading grid because it takes eight minutes to start with no responsiveness in the UI. Thus, they're just sitting there binding things for 20,000 rows and not paying any attention to what's going on. So, you know, get rid of the binding,
15:21
know how many things you're updating on the display; only change what you need to change on the screen. So the first way is how not to do it, and the last part is part of the right way of doing it. So let's go over to the next demo here. It's all part of this one application, and it
15:41
is a better trading desktop. This is actually running the code of the example that we'll walk through this entire time through this. And I'm going to start this. And what this one's going to do, I forget the count. I think right now it's going to generate 5,000 events over five seconds
16:00
and then update the display based upon what's coming in. Random number for the value of the cell and different colors depending on what's going on, which we'll see. So you see, this is going pretty good. So this should come up, get them all done. Five seconds. I might have it on 10,000.
16:22
Or maybe we're getting the bug here again. But it's updating pretty well. So let me, I noticed this dude out once last night. The demo will always get you at some point. So let's start that again. And it should stop in five seconds. But this is handling this pretty well.
16:40
And it's updating all these cells. And to be honest, it's still more than any real person can track. I'm working with different metaphors with these desktops now: just show what the trader's currently tracking, and only show them the updates on that, and different niceties like that. So this is going through. We're going to work on this,
17:00
because this is going slow right now. We're going to crank the speed up on this as we go through the talk. So I'm going to turn that off. If there's anything, any questions, please feel free to ask. So the TPL overview in a nutshell, okay.
17:24
Tasks to me are promises. They're basically, it's work that's going to be done in the future. It may be on a thread. It may not be on a thread. It's going to be done in the background or in the foreground when you're not noticing it happening.
17:40
They have results. That's what's kind of neat about it. That's what makes it a promise. I'm going to go do something, and at some point in the future, I'm going to have a result for you. It could be void, but usually, most of the time, usefully, it's a piece of data, okay. And that's returned through a result property. And the tasks have state, like waiting for activation, running, completed.
18:03
If you tried to get a result from a task that isn't completed yet, like through the result property, it blocks until it's done. That's one of the semantic things you have to worry about when you're doing this. And one of the big tricks becomes, especially with threads, but tasks make this very easy, is once that work's done,
18:24
since you're doing it and I don't know when you're starting it, I don't know when you're done with it, how can you let me know that it's done so I can get the result? Well, in threads, you were programming wait events or shared buffers to signal or a callback, and you got in all these kind of race conditions.
18:41
Well, you don't have to do that with tasks. Tasks have a continue with method. So you call a delegate when it's done. But in general, the concepts, and we're going to jump into a TPL example here in a second. It makes this a lot simpler. Tasks represent the unit of work.
19:00
They have continuations, as I like to call them. It's fairly common, you know, continue with, continue when, all different types of constructs around these, let you basically go and build compositions of work. How to do this. When that's done, do something else. When that's done, do something else.
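That "when that's done, do something else" composition can be sketched as a chain of continuations (illustrative values, not the talk's code):

```csharp
using System;
using System.Threading.Tasks;

class ContinuationSketch
{
    static void Main()
    {
        // Each ContinueWith runs its delegate when the previous task finishes,
        // composing the work into a small pipeline.
        Task<string> pipeline =
            Task.Run(() => 21)                              // fetch
                .ContinueWith(t => t.Result * 2)            // transform
                .ContinueWith(t => $"answer = {t.Result}"); // format

        // Reading Result on an unfinished task blocks until it completes.
        Console.WriteLine(pipeline.Result); // answer = 42
    }
}
```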
19:20
And then when eventually it ever meets the final end, give me the result of all that. And keep me asynchronous and responding on the UI and all that kind of stuff while doing it. There are schedulers, which we'll look at that, which basically the task, any task created, run, always use a scheduler, which basically defines the scheduling of those tasks relative to each other
19:43
on a thread pool or whatever. And I have some examples of how to change that around. Synchronization contexts. You should all be familiar with the UI context. That's the big thing, get it back on the UI. But you may also want to keep it in some other context. So there's full support for this.
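Getting a continuation back onto the UI context is a one-liner with a scheduler built from the current synchronization context. This fragment is a sketch: it assumes it runs inside a WPF/WinForms handler, and `priceLabel` and `LoadPricesFromServer` are made-up names.

```csharp
// Must be called on the UI thread so the UI's context is captured.
var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

Task.Run(() => LoadPricesFromServer())                // background work
    .ContinueWith(t => priceLabel.Content = t.Result, // safe UI update:
                  uiScheduler);                       // runs on the UI thread
```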
20:01
Cancellation — big topic, won't get into it too much here, but it ends up being: I've created 100 tasks, they're all out running, and now I need to stop them all because I'm shutting the app down. How do I do that? With threads, it was a nightmare, right? Thread.Abort, set global flags, wait for things to finish, signal that to finish.
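With tasks, that shutdown story reduces to a shared `CancellationTokenSource` — a minimal sketch (not the talk's code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationSketch
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        // Hand the same token to 100 tasks...
        var tasks = new Task[100];
        for (int i = 0; i < tasks.Length; i++)
            tasks[i] = Task.Run(() =>
            {
                while (true)
                    cts.Token.ThrowIfCancellationRequested(); // cooperative stop
            }, cts.Token);

        // ...and stop them all with one call at shutdown.
        cts.Cancel();
        try { Task.WaitAll(tasks); }
        catch (AggregateException) { /* expected: the tasks were cancelled */ }

        Console.WriteLine("all tasks stopped");
    }
}
```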
20:21
You can do it real easy with this stuff. And then aggregate exceptions. It always was a problem. If I started 10 threads and three of them threw exceptions, how do I get that figured out? So there's nice structure concepts in this for that. So the TPL, what, .NET 4 and on,
20:43
I don't know, maybe it was 3.5, I forget. I think it's just 4. It's in these namespaces. It's more declarative than threads. And to be honest, like WinRT, Windows 8, doesn't even have threads. Anybody done Windows 8 programming, Windows RT? Yeah, this was an experiment I was doing with porting this over to this Windows 8 app.
21:03
And I tried to do a Thread.Sleep at one point and it was like: there are no threads. They don't even expose threads to you. You have to do everything through tasks. It's all different. But this is a good thing, to be honest. It took me a couple of minutes of, well, how do I do that?
21:21
Well, there's good content out there. And tasks can wrap a whole bunch of other things. The gist is, they provide concurrency, and if you've used the Asynchronous Programming Model (IAsyncResult) or the event-based asynchronous pattern, you can actually convert those to tasks. Because the problem with those
21:40
is that they're not very orchestratable. I started doing a lot of async programming with Silverlight, where every web service call is async, and it goes: I need to make that call; when that call is done, I need to call another web service, because I need the data out of the first one to pass to the second. Next thing you know, you're going insane with IAsyncResults and trying to handle errors.
22:01
Well, you can just wrap those up nicely in tasks and orchestrate the whole thing. And tasks have status, like I was saying: waiting to run, running, completed. That's very important with all these libraries: TPL, async/await.
22:22
Async/await not so much, but for Dataflow, and Rx to a lesser extent, it matters that you understand the concept of completions. If you haven't done TPL, then you probably haven't really gotten into completions. But completion is a construct built into tasks that is,
22:42
I guess, similar to having a wait handle. There's a flag on a task called IsCompleted, which basically says whether the task has run to completion yet. And you can use that IsCompleted flag to tell if you're done, to wait,
23:02
or to signal to move on and do some more work at the end. And, like I said, it's not threads. I could be wrapping an EAP call; the data came back, the framework sets the completed state, and you can go and continue on with things. And there are sync contexts.
23:23
If you're familiar with synchronization contexts and using them directly, which is really kind of the way to do that stuff, you'll see that you use the word post a lot: SynchronizationContext.Post. If you're on a dispatcher UI, that's actually putting something on the dispatcher. The Dataflow stuff completely uses post semantics.
23:41
So if you're used to that stuff, Dataflow will look a little familiar to you. And there are different synchronization contexts. The primary ones are the dispatcher sync context and the thread pool. So if you're running a task in the thread pool and you need to get something onto the UI, you can just pass it via
24:02
the dispatcher sync context and it'll run on the UI automatically. So, we've got task schedulers. They kind of define the, quote unquote, thread pool semantics. Basically, tasks in .NET right now,
24:22
by default, go to the thread pool if they're considered normal tasks. Long-running tasks are not put in the thread pool. But you can run your own schedulers. The default task scheduler in .NET puts normal tasks in the thread pool.
24:41
But you can write your own. I wrote my own in one of these examples to show mediation, which we'll look at. So we're going to look at a couple of quick TPL examples here. The one I like to demonstrate is scatter-gather, which is a common pattern, like in a trading app. A lot of times your app starts up and
25:01
I've got to go get data from, say, Reuters, from Bloomberg, from all these different sources. And once they're all in, do some more work. But in the meantime, I want to do some other stuff on the UI. So this type of contract is pretty straightforward to set up with the TPL, versus doing it with multiple threads.
25:21
So let's go over and bring up our GUI again here. I'm going to go to scatter-gather, and there are five different examples of this. So I'm going to show some code here real quick. Let me bring this up. Can everybody read that code okay, big enough?
25:43
Yeah, I thought so. This first example, ExecuteNaive, creates two tasks. Task.Delay creates another task that doesn't complete until the time
26:01
in milliseconds that you specify has elapsed. So it's a neat way, kind of like Thread.Sleep, except it's not blocking the current thread; it's starting another task which won't complete for that amount of time, and meanwhile you can go do other things. So this puts them in an array:
26:20
I've got two tasks, task one and task two. One won't complete for 2,500 milliseconds, the other for 5,000. Then I use the static Task.WaitAll function, passing in that array. What Task.WaitAll does is wait for the IsCompleted flag on each task in the array
26:41
to be true, and then it continues on. So without writing a bunch of AutoResetEvents, I just did this very nicely right here. So if we go over here and run it, you'll see why I call this naive: it's blocking. You'll see the button doesn't change back
27:02
until five seconds in, and we're done. That blocked, because Task.WaitAll blocks the thread coming through here and executing this. So that's probably not something you really want to do. So there's a better example of this: the ExecuteBetter function.
27:25
And this shows a little bit more. Same thing as before, two tasks with the same delays, but you can see now I added a ContinueWith on this one. So when it starts, after 2,500 milliseconds,
27:41
and completes, it calls this function. So this is going to show task one completed, task two completed, as they finish. But also, we do Task.WhenAll, and WhenAll is different. In the previous one, WaitAll blocks.
28:00
WhenAll doesn't block. It says: when all those tasks are completed, then run another function. So execution falls straight through. When each task completes, we get some output, and then when everything completes, we get some output.
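A hedged reconstruction of the two variants being described, with the blocking WaitAll call left as a comment (the delays match the talk; the console writes stand in for the demo's UI output):

```csharp
using System;
using System.Threading.Tasks;

class ScatterGatherDemo
{
    static async Task Main()
    {
        Task t1 = Task.Delay(2500).ContinueWith(_ => Console.WriteLine("task one completed"));
        Task t2 = Task.Delay(5000).ContinueWith(_ => Console.WriteLine("task two completed"));

        // The "naive" version: Task.WaitAll(t1, t2) would block this thread for 5 seconds.
        // WhenAll falls straight through and runs the rest only when both have finished.
        await Task.WhenAll(t1, t2);
        Console.WriteLine("everything completed");
    }
}
```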
28:22
see, the UI's responsive. One second, come on. Oh, sorry, I ran the wrong one again. Wrong button. Task one completed. Task two completed. Everything's all completed.
28:45
And we remained responsive all the way through this. So by not using blocking constructs, by using ContinueWiths and such, we kept everything going nicely in the application. Yeah.
29:06
Well, it used to be just the TPL. It's now part of .NET, whether or not it's considered "framework". I think of it more as an extension, but from 4.0 on it's in there for you to use. And what's cool with this stuff,
29:21
and maybe you see a little bit of this, I've got to watch the time, but: you can create your own task that just wraps another piece of code, and orchestrate it in with continuations. You use these constructs called TaskCompletionSource. You just create one of these,
29:41
and then you can call SetResult, which, when you set it properly, sets the completed flag. So it could just be old code, legacy code, that you wrapped with a task really easily, and now you can orchestrate it with other tasks, the inputs and outputs of those, which is really neat. That's what I'm saying: tasks don't mean threads.
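A minimal sketch of wrapping callback-style legacy code with TaskCompletionSource (the legacy callback here is hypothetical, just standing in for an old event-based API):

```csharp
using System;
using System.Threading.Tasks;

class TcsDemo
{
    // Wrap a pretend callback-style API in a Task you can orchestrate.
    static Task<string> LoadLegacyAsync()
    {
        var tcs = new TaskCompletionSource<string>();
        // Imagine this callback is fired later by old event-based code:
        Action<string> legacyCallback = data => tcs.SetResult(data); // marks the task completed
        legacyCallback("loaded");
        return tcs.Task; // now it composes with ContinueWith, WhenAll, etc.
    }

    static void Main() => Console.WriteLine(LoadLegacyAsync().Result);
}
```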
30:00
They usually do mean that, but a task can be almost any piece of code you write that runs in the future or in the background, where you need a result, need to do something with the result, or need to know when it's done. And you just use these constructs instead of all that low-level
30:20
threading stuff. I'm going to look at one last example here, ExecuteWithTimeout, where I'm using the scheduler. This is kind of doing the same thing, except
30:40
I'm just playing with the scheduler here; it doesn't really do much in the demo, to be honest. So, same thing, but one thing you might want to do here is go out and try to get something from a couple of places, and the first result that comes in, that's fine,
31:01
I don't care about the other ones. But if a certain amount of time expires, let's stop looking for things and go do something else. So, like in the one before, we have the tasks array with those two tasks, but I made another array where I took
31:21
a WhenAll of those and also put in a Task.Delay as a timeout. I forget what I pass in here; it's a few seconds. Then right here I'm doing Task.WhenAny on that array, the timed tasks. When either of those completes, so either the WhenAll
31:42
of the other ones going out and getting stuff completes, or the timeout one completes, run some code. So what we'll see here is this will go and run. Task one completed; timeout; status waiting for task two completed.
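The WhenAll-plus-timeout race being described can be sketched roughly like this (delays are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class TimeoutDemo
{
    static async Task Main()
    {
        // All the real work, gathered into one task.
        Task work = Task.WhenAll(Task.Delay(2500), Task.Delay(5000));
        // The timeout is just one more task in the race.
        Task timeout = Task.Delay(3000);

        Task first = await Task.WhenAny(work, timeout);
        if (first == timeout)
            Console.WriteLine("timed out; the other tasks are still running");
        else
            Console.WriteLine("all work completed in time");
    }
}
```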
32:01
This should be scrolling, but it's not. Task two completed, and unfortunately that's not showing everything. What's going on here is that when we got through that timeout and got in here, we can check the IsCompleted flag on the task and say: if it's completed
32:21
on the first one in this array, then it was a timeout. So this was then showing that, hey, task one's still running, task two's still running. You might want to go and terminate those, because you don't need them anymore. And that's the purpose of the next demo, cancellation,
32:41
and being able to do that. But there's a lot of Dataflow stuff I want to get to, so if we have some time at the end, we'll come back to these. Let's go back over. Mediation.
33:00
Okay. One more thing with the TPL. This shows using schedulers. Mediation is a pattern where you create an object that decides something like a priority on actions relative to the other actions you give it. This uses a custom task scheduler, so let me bring this up here.
33:21
And the window. I didn't open that one ahead of time; let's open it up quick.
33:42
Mediator. This is using custom tasks. Okay. I've created two subclasses of Task in this example.
34:00
Where is that one? We can see it in here: PeriodicTask. Right here. These are very simple
34:21
subclasses of Task. You can subclass a task; in a lot of ways you're expected to. What this one can be passed is some work to do and then a budget. All this task adds is some extra properties for a scheduler to look at. I also have
34:42
a SporadicTask. Go to declaration; it looks very much like it, almost exactly the same thing. But the way they're going to be handled is different. So, periodic tasks: in this case, the mediator I wrote says, if there are
35:01
any tasks in the queue to be executed that need to finish within the budget, in this case 50 milliseconds, they should be executed before all the other ones. The sporadic tasks run whenever nothing else is being processed. So in this case, this can be used to
35:21
push events from something like a market exchange through to the UI while superseding all the UI events, but still interleaving them to an extent to keep some efficiency. So this mediator comes through here using a custom task, its own type of thing. So when we come in here,
35:41
we'll see the demo here: Mediation, Mediator. I have two simple examples in here. I'm creating an observable that generates a range of numbers. This is actually Rx; the observable here says generate 50 numbers
36:01
starting at zero, and I'm going to subscribe to it. Then for every one that comes through, I say: hey, mediator, I want to schedule this piece of data on you to handle. And this mediator does some fancy things, like looking up which methods in the application to call. It's kind of like a pub/sub bus in the app
36:21
for anything you pass to it. But it says here: for anything that's periodic, run this method; anything that's sporadic, run this method. So when we go and run this, let's go to Mediation and do the dispatcher-only one, which is that one. We see you get
36:41
this sequence of zero, one, two, three, all the way through 50. This threw zero to 50 at the mediator, and we said do all the work on the UI context. And so what happens with this is the broadcast function
37:01
looks up the target by topic, sees that it's sporadic, and goes to schedule this sporadic task on the task scheduler. And you see here
37:22
we create a SporadicTask, pass in what we want to invoke and the budget properties, and then we say task.Start. And you can pass a scheduler to any task. The typical way to start a task in .NET is Task.Factory.StartNew, but you can create a new Task and give it any scheduler. So the scheduler
37:41
is actually passed the task; it can put it in a queue, start its execution whenever it wants, compare it to other tasks, figure out who goes next, and do all that kind of work. Because what happens with this one then: if I come back over and look at this example, we're going to see
38:03
in the view model here, the second example. I'm only going through 25 right now, because I'm going to broadcast two: a periodic task and a sporadic task. And
38:21
we're going to see what happens when that runs. When I run this with alternating priority, we see all the odds come out first, because the first statement in there took the odd number and scheduled it to run on a periodic task, which is scheduled
38:41
and executed by the task scheduler sooner than the other ones. So all the odds pop out first and all the evens pop out second. So we can control that with just creating a task scheduler.
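A toy priority scheduler in that spirit might look like this. This is a sketch, not the talk's mediator: the convention of tagging a task via its AsyncState is an assumption made up for the example, and a production scheduler would need more care around inlining and shutdown.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Tasks tagged "periodic" (a hypothetical convention: AsyncState == "periodic")
// jump the queue ahead of "sporadic" ones.
class MediatorScheduler : TaskScheduler
{
    private readonly LinkedList<Task> _queue = new LinkedList<Task>();

    protected override void QueueTask(Task task)
    {
        lock (_queue)
        {
            if (Equals(task.AsyncState, "periodic"))
                _queue.AddFirst(task);  // budgeted work goes first
            else
                _queue.AddLast(task);   // sporadic work waits its turn
        }
        // One pool work item per queued task; each pops whatever is at the front.
        ThreadPool.UnsafeQueueUserWorkItem(_ =>
        {
            Task next;
            lock (_queue) { next = _queue.First.Value; _queue.RemoveFirst(); }
            TryExecuteTask(next);
        }, null);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
        => TryExecuteTask(task);

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        lock (_queue) return new List<Task>(_queue);
    }
}
```

You would start work on it with something like `new Task(s => DoWork(), "periodic").Start(new MediatorScheduler());`, which is the "create a new Task, then give it any scheduler" route mentioned above.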
39:02
If you look around a lot you'll see the concept of pipelines. Anybody familiar with pipelines in the TPL? You come across this; it's basically dataflow, and there's some good documentation that shows you how to do it. But I found a quote by Stephen Toub the other day
39:22
that said: don't do those, use Dataflow. So guess what, the next topic is Dataflow. This is where stuff gets really fun. Dataflow is data oriented, message oriented. You think of data that has to be processed in your app, and you schedule code to run against the data.
39:43
There are some very elegant problems that can be solved with this very easily, which I'll show a few of. It's similar to message passing and actor-based programming. It's got constructs for passing data messages between blocks, handling ordering, and all kinds
40:01
of things, like changing the path through the network based on the value of the data. It's very similar to ContinueWith in the TPL, except it's a lot more powerful. ContinueWith has a problem: when you say this task continues with another task, you're only ever going to go to that one task.
40:22
You can't say: when you've done this, spread the data coming out of that first task across eight different instances of the continuation to load-balance the work. Tasks really form pipelines from here to here to here, and you get this straight-through
40:40
construct that's pretty much one task, one task, one task all the way through, unless you write task schedulers to try to handle this stuff separately. Dataflow breaks through all that. I have some slides on what's involved with this, but basically the concept is you work with blocks.
41:01
The blocks all implement IDataflowBlock. You get source blocks, target blocks, and propagator blocks. When you want data processed in a dataflow network, you post it into a block. A source block can pass data to a target block, another block. A propagator block can both receive and send data.
41:23
It's just a merging of those two. There are some built-in implementations of this, and I'll show a couple. The BufferBlock doesn't do much of anything except queue up data and send it downstream when it feels
41:40
it's convenient. It's a way of building a thread-safe queue of data for processing. A WriteOnceBlock will only ever accept one value, ensuring that a piece of data only gets processed once in a multi-threaded environment. A BroadcastBlock, the data you send in there,
42:00
everything that's connected to it gets it; it sends it to every one of them. So you can fan the same piece of data out to everything. Then there are execution blocks. With these, you basically assign code to the block.
42:22
You put a piece of data in and say: run this function on it. In the case of an ActionBlock, that's all it does: receive the piece of data, run the code. A TransformBlock receives a piece of data, runs your code, and expects a result returned from it. You return something new, you've transformed the data,
42:42
and it passes down the network. Then you have TransformManyBlock; there are all kinds of things. There's stuff for joining. A common problem is that data is coming from here and from here, and when I get one from each, I want to put them together and send them down. You can do that automatically with these join blocks.
43:03
There are batched join blocks and join blocks and all kinds of different things. I think the best way to demonstrate dataflow is just to show it running. So we're going to go over; I have a number of demos built up here.
43:21
Let's bring up Dataflow and look at some of these examples. The basic construct is the ActionBlock. In the first demo, we're going to create an action block.
43:41
You say new ActionBlock with the type of the data you want to process. I'm just using integers; it can be any type. Then you give it a lambda function. (Oh, okay; I get that about 40 times a day, usually in the XAML designer.)
44:02
Here's something new. This static function I have here just writes the task ID and the value of the data out, so you can see it in operation. Then on the action block, you can call Post. I'm going to post the integers one, two, and three into it, and we'll watch it run.
44:20
Post three items, no waiting. You can see some things going on here; I'm not using mediation here, that "ran action message via mediator" is just some diagnostic stuff. Then you'll see here task ID 7, 7, 7 with values 1, 2, 3.
44:40
That delegate I have in there got scheduled to run on a task, was passed all those pieces of data one by one, in sequence, and generated the output. Action blocks have semantics such that they have a queue that can accept as many items
45:01
as you want, but by default they will only ever create one task to execute the data at any time. That's why we see task ID 7 three times; this only allocated one task. It's guaranteed that the order you post in is the order they come through, and they'll be run through one task.
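A minimal sketch of that first ActionBlock demo (console flavored; the Dataflow types live in the System.Threading.Tasks.Dataflow package):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet package: System.Threading.Tasks.Dataflow

class ActionBlockDemo
{
    static void Main()
    {
        // One task at a time by default; items are processed in posted order.
        var block = new ActionBlock<int>(n =>
            Console.WriteLine($"task {Task.CurrentId}: {n}"));

        block.Post(1);
        block.Post(2);
        block.Post(3);

        block.Complete();        // signal no more input
        block.Completion.Wait(); // wait for the queue to drain
    }
}
```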
45:20
There's not really any real concurrency in here, except that it runs in the background. I'm going to skip ahead, but there are different ways of telling when work is done in Dataflow; I've got to watch the time. Basically, there's an InputCount telling you how many things are queued in an action block.
45:42
Then, getting back to completions here: you can see this is the same action block, posting 20 items in, just using a helper to generate them now, and I'm going to call actionBlock.Complete(). Blocks have a completion status like a task. If you throw a bunch of items
46:01
into an action block to be processed, and they take a while, which I'll show, and you want to know when all the items are done, you can set the complete flag and then wait for the block's completion, or do a ContinueWith on it to do more work when it's done. We'll see this one. This is actually kind of bad,
46:21
because this comes in and blocks while it's running through, and it's still on one task. You see, this message came out at the end here; it's actually higher up, in the calling function, but because I waited here, we were really kind of blocked until all these messages came out on the main thread.
46:44
The trick is, I want to start getting a little more concurrent with this. This is the same as the first example, an action block, and I'm passing in the delegate. Every one of these blocks always takes these
47:00
dataflow block options; for an execution block, ExecutionDataflowBlockOptions. In this one, I'm going to say MaxDegreeOfParallelism = 2; by default, it's one. When we run this, you can see now task IDs 9, 10, 9, 9,
47:20
9, 10, and so on. It's now created two tasks, and they're actually on different threads. The data, you can see, is nicely still being processed in order, 0, 1, 2, 3, but it's being run across multiple tasks. All I had to do was change MaxDegreeOfParallelism.
47:42
We can bump this up. Well, then I'd have to recompile and I don't want to break any demos, but you can bump this up; the limit is as high as Int32.MaxValue. Whether that does you any good, I don't know. There are other constructs in here.
48:01
If you look at the options in here, you have MaxMessagesPerTask, BoundedCapacity. Like I said, this block has a queue of items; it can hold any number of items but only apply MaxDegreeOfParallelism instances of that
48:21
delegate against those messages at one time. With BoundedCapacity I can say I only want a max of 100 incoming messages. In a way, with ActionBlock you can start implementing the Disruptor pattern, if you're familiar with that: bounded input queues, ring-style queues, controlling how much is coming
48:41
in and how many things are working on one item at one time. On any of these, you can also set a TaskScheduler; I can run a block on any task scheduler. I could say run this block just on the UI thread. You probably don't want to do that most of the time; by default, they're on the thread pool.
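The two options just discussed can be sketched like this (the capacity and parallelism numbers are the ones mentioned in the talk, applied to a toy block):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class ParallelBlockDemo
{
    static void Main()
    {
        var options = new ExecutionDataflowBlockOptions
        {
            MaxDegreeOfParallelism = 2, // up to two tasks run the delegate at once
            BoundedCapacity = 100       // at most 100 items queued; Post returns false beyond that
        };

        var block = new ActionBlock<int>(
            n => Console.WriteLine($"task {Task.CurrentId}: {n}"),
            options);

        for (int i = 0; i < 20; i++) block.Post(i);
        block.Complete();
        block.Completion.Wait();
    }
}
```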
49:01
But if there's a block down near the end of the dataflow that's going to update the screen, I can run that one on the UI thread. I can run everything on the background until that last block and then say: boom, up on the screen. Let's jump ahead and look at some of that stuff, because that's starting to get into
49:20
buffer blocks. Bring up BufferBlock. That's probably the only other block I'll show here today, for time constraints; I really want to get to that next demo and some more. The buffer block:
49:41
all it does is act as a queue to hold things; you don't even give it a delegate to execute. You can see here I'm creating a buffer block and an action block, the same as before, and I'm going to generate 10 items. You'll see a trick here: I'm saying bufferBlock.LinkTo
50:01
(actionBlock). You post things into the buffer block; it receives them, queues them, and then does a two-phase handshake with the action block, because of that link, to pass items to it downstream.
50:23
This one will be very much the same as the first example. We can start doing more advanced things with it, but if I just run it: boom. It looks exactly the same as the action block one, because in a way it kind of is.
50:41
Then, if you want to get a little more complicated, you can see here I've said on the action block now: let's go to two degrees of parallelism on the action block, with still one buffer block linking downstream. When I run this one, you can see the two degrees, task ID 12,
51:01
and I'm only writing the values out on the one. Here's where we can get fancy with this. I'm now creating the buffer block with two action blocks. The first one is going to
51:20
write out what data it gets. Then on the buffer block here I'm doing a LinkTo to both of those blocks, but with a predicate: if the value being passed through is odd, go to the first action block; if it's even, go to the other action block. I'm basically doing content-
51:40
based routing through this. As the data comes through, I'm making a decision on where it goes through the blocks in the data network. If we run this odd/even one, you'll see that block two gets 0, 2, 4, while block one gets all the odds, because of that distribution
52:02
pattern. Then there are some other things I won't look at right now because they get into detail, but basically the action block has that queue waiting for it. They run greedy by default: if the buffer block has 100 items, it goes to that block and says,
52:21
I've got 100 items, can you take them? I can take any amount; so it gives all 100 to that one action block. If you cut the action block down to a bounded capacity of one, it'll say, I've got 100; the action block says, I only take one; okay, I'll give you one. Then it goes to the next action block it's linked to and tries to pass to the next guy,
52:41
so it can start rotating, round-robining, load-distributing work that way, from a simple concept like that. Broadcast blocks we'll see in another example, because we're running low on time, but basically they fan everything out to everybody.
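The odd/even content-based routing just shown can be sketched like this (the predicate overload of LinkTo decides which link gets each item; propagating completion is an addition here so the program can wait for both targets):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class RoutingDemo
{
    static void Main()
    {
        var odds  = new ActionBlock<int>(n => Console.WriteLine($"odd block:  {n}"));
        var evens = new ActionBlock<int>(n => Console.WriteLine($"even block: {n}"));
        var buffer = new BufferBlock<int>();

        var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };

        // Content-based routing: the predicate decides which link gets each item.
        buffer.LinkTo(odds,  linkOptions, n => n % 2 == 1);
        buffer.LinkTo(evens, linkOptions, n => n % 2 == 0);

        for (int i = 0; i < 10; i++) buffer.Post(i);
        buffer.Complete();
        Task.WaitAll(odds.Completion, evens.Completion);
    }
}
```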
53:05
So, Rx, and then we jump into the other demo. It's a big topic; I've been trying to get my head around Rx for about three or four years. But basically it sets up an observer pattern where, instead
53:22
of iterating through things like an enumerable, doing a foreach and acting on every item in the group, Rx sets up, with the observable, a function for it to call any time some new data appears in the stream, which is the primary thing. But it also gives you a lot of capability for
53:41
working with the data in those streams: should that delegate you give it run on the UI, should it run on a background thread, different things like that. Mostly what you do is convert some LINQ queries to observables and
54:00
then start processing those. There's an almost endless amount of things you can do with this. So let's go. We're going to do two demos over here of pure Rx. Okay. Let's go over to
54:22
the Rx examples view model. I don't have it open. Right there. Examples view model. Okay.
54:42
The first function here does periodic generation of items. I use it in the other demo, so I wanted to show it here by itself. You declare an observable; Observable is a class with a lot of static methods, and you can say .Interval. So on an interval
55:01
that you specify, here every quarter of a second, it generates a sequence: basically every 250 milliseconds it's going to go zero, one, two, three, four, forever if you want. But
55:20
I'm going to say Take(20). So instead of going forever, when you hit 20, stop. Okay. Then I say ObserveOn the dispatcher, so when I get to this function here, I'm running on the UI dispatcher,
55:40
because I'm going to drop the value into an ObservableCollection that's bound to the UI. And the Subscribe method basically says: okay, you got an item, call this function, but call it on the dispatcher thread. Generate 20 and go through them. So let's
56:01
go to Rx, and generate periodically. So this is going, every quarter of a second, cranking one out, for 20 items. It's real cool. That's step one. The next thing that I like a lot with this is
56:21
Observable.Buffer. We're doing the same thing, but we're going to say Buffer. The trading example I'll get to in two or three minutes does a lot of buffering. So this says: instead of giving me every item as it shows up, collect them all for this amount of time, which is
56:40
one second, and then give me the group of them that was collected. So this should, in theory, give me four at a time. Sometimes it's three, never five. So we go run this. Let's see here, we do batch. The second one always comes up four, five, six; I don't know why it's peculiar
57:01
to this example. So it's only given to me every second instead. And this one continues; it goes on forever. Notice I didn't have a Take(20) in there. So to stop this from going: every one of these Subscribe calls returns an IDisposable, so to stop things,
57:21
you just call Dispose on it and it stops them. So if you want to clean them up, that's good. So those are the two most useful things that I've found in Rx for the kind of work that I do these days. I'm going to show you how that all pulls together, because we'll be back to this guy. And let's run this again.
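The two Rx pieces just shown, interval generation and time-based buffering, can be sketched like this (console flavored; in the WPF demo you would add an ObserveOn dispatcher call before the Subscribe):

```csharp
using System;
using System.Reactive.Linq; // NuGet package: System.Reactive

class RxDemo
{
    static void Main()
    {
        // Every 250 ms generate 0, 1, 2, ...; Take(20) stops the stream at 20 items.
        IDisposable periodic = Observable
            .Interval(TimeSpan.FromMilliseconds(250))
            .Take(20)
            .Subscribe(n => Console.WriteLine($"item: {n}"));

        // Same source, but collect one second's worth at a time into a batch.
        // No Take here, so this runs until you dispose the subscription.
        IDisposable batched = Observable
            .Interval(TimeSpan.FromMilliseconds(250))
            .Buffer(TimeSpan.FromSeconds(1))
            .Subscribe(batch => Console.WriteLine("batch: " + string.Join(", ", batch)));

        Console.ReadLine();
        batched.Dispose(); // stops the endless one
    }
}
```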
57:44
And we'll see what's going on here. Let me make sure; I want to run through the code here real quick.
58:01
The end-to-end examples view model. Okay. Here's the coup de grâce of all this. When I press that start button, this method gets called. There are some things it's keeping track of, but I'm basically saying I want,
58:22
on an interval of some number of milliseconds, which up here is defined as one millisecond, and that's about as fast as you can get things scheduled on the thread pool, to take 5,000 items. So I say I want to take the first 5,000 items.
58:42
So basically 1,000 items a second for 5 seconds. Okay, so that created an observable. Then from that observable I'm actually doing a LINQ statement, because that observable is just going to give me 0, 1, 2, 3, 4, 5,
59:01
and so on. This is actually generating some mock data flying into the system. What's the security? Instead of a stock or bond name, I'm just using a number. And I'm picking which field to use and a value for it. So for every one of those events I'm selecting
59:21
an update item, and this creates another observable, the generator observable. So from that generator I'm then going to say I want a Buffer based on this time stream.
59:41
That interval is at 100 milliseconds right now. So instead of one item every millisecond, give me 100 every tenth of a second. We're slowing down that flood of things coming in from
01:00:00
other systems right away, right here. And this is doing some work called conflation, which I don't have the time to get into right now, but then it passes things down. So then I start doing some dataflow; that was all Rx up to now. So now I'm going to create a TransformBlock which really doesn't
01:00:20
do anything; it passes data downstream, but it's going to allow me to specify higher degrees of parallelism in here.