
Playing Games In The Clouds


Formal Metadata

Title
Playing Games In The Clouds
Title of Series
Part Number
48
Number of Parts
94
Author
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
What does haggling at a garage sale have to do with load balancing in distributed systems? How does bidding in an art auction relate to cloud service orchestration? Familiarity with the ideas and technologies involved in cloud computing is becoming ever more important for developers. This talk will demonstrate how you can use game theory — the study of strategic decision making — to design more efficient, and innovative, distributed systems.
Transcript: English (auto-generated)
Two university students had their final chemistry exam coming up. They were smart, they'd received A's in chemistry all semester, but on the night before the final they weren't so responsible, and they were off partying in another state. They didn't get back until the exam was over, so they had to think of an ingenious excuse. They went to their professor and said: we're really sorry, we would have made it back in time for the test, but we were coming back from visiting a friend and we had a flat tire. We fixed it, but we missed the exam. The professor thought about it, eventually agreed, and sent them to separate rooms to take a test she'd prepared for them. So they sat down and turned the page to the first question. And then they turned the page to the second question: which tire was it? So what would you write in that situation? It was stories like this that got me into game theory.
I'm a software engineer working at Pivotal Labs in London, mainly on their Cloud Foundry platform, a platform as a service. The whole cloud domain is changing quickly, and when you don't understand something, it's really useful to be able to put it into concepts that you're familiar with. For me, that was game theory. I studied economics at university, and game theory was one of my favourite modules. Through my reading, I discovered how game theory can provide some interesting clarity in the distributed systems space, and so I wanted to share some of my findings with you today. So we're going to start with a basic introduction to game theory.
Then we've got two games: one looks at bargaining and one looks at auctions. Then we're going to look at how these simple examples have been extended to real-life case studies, before we wrap up. And by the end, I hope to show you that not only is game theory a useful tool for providing simplicity and clarity in distributed systems, it can also help you discover how to efficiently allocate jobs in the systems that you go on to build or work with. So let's begin with a bit more about game theory. Game theory is the study of strategic interactions between rational agents.
Applications include economics, politics, biology, and computer science. It was developed by John von Neumann in the 1940s, with many other scholars developing it extensively from the 1950s onwards. It applies to a wide range of games and interactions, and you typically see it applied to two-player scenarios like the opening story. Despite its breadth, all games, no matter how complex, have some basic components. So what makes up a game? There are always at least two players. Each player has a set of strategies, which is based on the information they have and the actions they can take.
And there are payoffs to each player for every possible outcome. So we're going to look at a game that helps us solve the load balancing problem. We've got a distributed system with two computers, and a scenario where jobs are being sent into a task distributor, which then has to decide how many jobs are sent to each computer.
So let's look at the first game: bargaining. One afternoon, you're walking along the street with your friend Gemma, and you come across a bag of sweets on the floor. You grab all the sweets, but now you have to negotiate how you're going to split them up. Bargaining theory attempts to give an answer as to how you might mutually divide up those sweets between you. Another way of saying that is that bargaining theory asks: how will surplus be split between agents? So in the case of the sweets, there is surplus to be gained by acquiring some sweets, and the maximum surplus is equivalent to you getting all of the sweets and Gemma getting none.
So one way of answering this is to look at something called the Nash bargaining solution. In relation to the previous example, we can put up a chart that looks like this. It shows all the possible payoffs you can get by dividing the sweets. The outer edge is the line where all of the sweets are taken, and any point inside the purple area means that after you've done your negotiating, there are some sweets left behind. On the axis representing Gemma's payoff, you've got the case where she gets all the surplus and all of the sweets, and on the axis representing your payoff, you've got the case where you get all of the sweets. And we find the Nash bargaining solution somewhere along this frontier, where you both have a higher utility than if you didn't take any of the sweets, and where there's nowhere else in the payoff set where you can split the sweets between you without decreasing the happiness of one of you. In this example, I'm assuming that your and Gemma's payoffs are linear and the same with respect to the sweets. This means that an additional sweet, or two or three, in your pockets is responded to with the same amount of extra happiness. But often in game theory, as we'll see later on in examples of the Nash bargaining solution, you tend to see curved frontiers, representing the fact that agents typically respond differently to marginal increases or decreases in whatever is being bargained over. But if we assume linear payoffs for now, then the sweets will be split in half between you. So this is all well and good, but you're wondering: what has this got to do with distributed systems? Well, we can use bargaining theory to think about how resources should be allocated within a distributed system. When we've got a system where some computers are heavily loaded whilst others have a lighter load, we can have cases of poor system performance, and we want to mitigate against that. So let's return to our distributed system, where the two computers, which work at different rates, need to share the incoming jobs. How do we model this? So the players are the computers.
Each computer has a different processing rate, and we know these rates at the beginning. Each computer's strategy is to accept an agreed number of the incoming jobs, and the payoff to each computer is related to its final load; we assume that computers with lighter loads are better off. So let's say we're looking at a computer A that can handle five incoming jobs every second, a computer B that can handle two incoming jobs every second, and together they need to complete four jobs every second. So here's what's happening: four jobs per second are being funnelled into the task distributor, and some of those are sent to computer A and some are sent to computer B. And we want to work out how many jobs each computer should agree to accept in this game. We want to distribute the jobs in a way that optimally minimises execution time. So what do I mean by optimally? What I'm saying is that once we've decided the job allocation, there's no way we can redistribute the jobs to make at least one of the computers better off without the time taken to complete all of the jobs increasing. So we assume that each computer has a payoff function, which we can think of as the computer's happiness, and it's going to look something like log(x − y), where x is the computer's processing rate and y is the rate of jobs arriving at the computer. Don't worry too much about the log function. What it's essentially saying is that as more and more jobs arrive at the computer, the time taken to complete those jobs increases dramatically. And here's what that might look like for a computer that can handle five jobs every second. If I'm a computer, then from my point of view I'm happiest if I don't have to do any work. I know I have to do some, but I want that to be as little as possible. As the amount of work you give me increases, I get unhappier and unhappier.
But the fewer jobs I have to do, the shorter the time it takes me to complete my work, and the happier I am. We want to maximise a function that looks like this: it takes that payoff function for each computer and adds them together, substituting in the processing rates of five and two, and we have the constraint that the total rate of jobs arriving at the two computers equals four, which is the number of jobs per second being sent to the system. If you put that function on a graph, you get something like this, and we're trying to find the point up here where happiness in the system is greatest. When you solve for that point, you get that the number of jobs per second sent to computer A is three and a half, meaning that half a job per second should be sent to computer B.
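As an aside, here's a minimal sketch of that optimisation in Python. The code isn't from the talk; the rates of five and two and the total of four jobs per second are the numbers from the example, and the brute-force search is just one simple way to find the maximum:

```python
import math

RATE_A, RATE_B = 5.0, 2.0   # jobs per second each computer can process
TOTAL = 4.0                  # jobs per second arriving at the system

def happiness(load_a):
    """Combined payoff: log(rate - load) for each computer, added together."""
    load_b = TOTAL - load_a
    return math.log(RATE_A - load_a) + math.log(RATE_B - load_b)

# Search over feasible splits: computer B can only keep up with less than 2 jobs/s,
# so computer A's share has to stay between 2 and 4 jobs per second.
candidates = [2.001 + i * 0.001 for i in range(1998)]
best = max(candidates, key=happiness)
print(f"jobs/s to A: {best:.2f}, jobs/s to B: {TOTAL - best:.2f}")
# -> roughly 3.50 to A and 0.50 to B, as in the talk
```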
So what does this look like on a bargaining chart? On the next slide, you can see the whole payoff set for the system, with a curved frontier out here. This line is where all four jobs per second are being handled, and it's curved because the different processing rates of the computers mean that as jobs are reallocated between them, the payoffs respond differently depending on which computer the jobs end up at. With the allocation of three and a half jobs per second to computer A, if you calculate the response time in the system you get 1.33 seconds. So what if we try and shift the jobs around a bit? If we look at this point and give computer A a bit more work, the response time goes up to 1.39 seconds. What about if we give computer B a bit more work? Then the response time goes up in just the same way. And so what I'm trying to show here is that, given this Nash bargaining solution, there's no way we can redistribute the jobs without increasing the overall response time. So by using game theoretic principles, we can find efficient distributions of resources within our systems. We've looked at an example with only two computers, but this case can be extended to any number of computers, and it enables us to start thinking about algorithms that can efficiently distribute jobs across a system. We'll look at that a bit more later.
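For the curious, those response-time figures can be reproduced with a small sketch. I'm assuming here that each computer's expected response time is 1/(rate − load), which lines up with the log(rate − load) payoff above and gives back the 1.33-second figure; the 0.3 job/s shift is just an illustrative amount, not a number from the talk:

```python
def system_response_time(load_a, rate_a=5.0, rate_b=2.0, total=4.0):
    """Sum of each computer's expected response time, assumed to be 1/(rate - load)."""
    load_b = total - load_a
    return 1 / (rate_a - load_a) + 1 / (rate_b - load_b)

print(round(system_response_time(3.5), 2))  # 1.33 s at the bargaining solution
print(round(system_response_time(3.8), 2))  # 1.39 s if computer A takes a bit more
print(round(system_response_time(3.2), 2))  # 1.39 s if computer B takes a bit more
```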
But for now, let's look at another game. So in the end, you and Gemma decide that instead of splitting the sweets between you, you'd be better off taking them to the school playground and auctioning them off amongst your friends. So how does this auction work? We're going to look at something called the second price auction. Participants in this auction simultaneously submit their bids to you as a number. The person who makes the highest bid wins the item, but they only have to pay the value of the second-highest bid. This might seem like a bit of a strange setup, but it's how things like eBay work, and it brings out an interesting result. So let's look at an example with three participants. Lucy submits a bid of $25, Mark submits a bid of $5, and Helen submits a bid of $10. Given a second price auction, Lucy will win, but she'll only pay $10. So the question often asked is: given how differently agents value the items they're bidding for, how should they bid? I want to interject at this point and introduce something called a dominant strategy. A strategy is dominant if it gives the player their highest possible payoff regardless of what the other players do. Note that a dominant strategy is not a winning strategy: it doesn't guarantee that you win, and dominant strategies don't always exist. But if one is there, it means that out of all of your options, that strategy will give you the highest payoff you can get. And the reason I introduce that concept now is because in a second price auction, the dominant strategy is to always bid truthfully. When I say bid truthfully, it means, for example in the case of the sweets, that it's in your best interest to bid the maximum value that you hold for those sweets, no more and no less. So let's flesh that example out a bit more, and let's assume that each of these three participants has bid the true value that they hold for those sweets.
And these are their respective payoffs. If they've lost the auction, they don't get anything, so their payoff is zero. But the payoff for the winner is their true value minus the second-highest bid. So for Lucy, she values those sweets at $25 and she takes off the value that she pays, which is $10, hence her payoff of $15. Now, the way a second price auction is set up, there's no way that Lucy, Helen or Mark could have re-bid such that their payoff improved. Let's focus on Helen to understand what this means. She's come second, and there's no point in her bidding higher: in fact, anything up to $25 doesn't change her outcome. What if she were to bid $26, so that she won the auction? Well then, her payoff becomes minus $15, because she only values the sweets at $10 but she now has to pay the second-highest bid of $25 for them. So there's no bid other than her true value of $10 that makes sense for Helen, and this result is true for any participant in a second price auction. So is this interesting? Well, we can use auction theory to reveal the resource capabilities of machines within a distributed system. So let's go back to our model.
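As a quick sanity check before we do, here's a minimal sketch of the second price rule with the bids from the sweets example; the code and the overbid scenario are mine, not from the talk:

```python
def second_price_outcome(bids):
    """Winner of a sealed-bid second price auction, and the price they pay."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, price_paid = ranked[0][0], ranked[1][1]
    return winner, price_paid

true_values = {"Lucy": 25, "Mark": 5, "Helen": 10}

# Everyone bids truthfully: Lucy wins and pays the second-highest bid, $10.
winner, price = second_price_outcome(true_values)
print(winner, price, true_values[winner] - price)   # Lucy 10 15

# If Helen overbids at $26 she wins, but pays Lucy's $25 and is worse off.
winner, price = second_price_outcome({**true_values, "Helen": 26})
print(winner, price, true_values[winner] - price)   # Helen 25 -15
```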
Again, the players are the computers. Each computer has different capabilities, but this time they're only known to the computers themselves. Each computer's strategy is to announce a bid, and in this case a bid is going to be a statement about how much memory the machine has. So if the computer has 10 gigabytes of memory, then an honest bid would be represented by the number 10. And the payoff for each computer is based on the result of the auction. So again we've got two computers: the first one has 15 gigabytes of memory, and the second one has 10. And they're going to enter an auction for the privilege of winning some jobs to run on their machine. Computer A submits a bid a, and computer B submits a bid b, and ideally we want the jobs to run on the machine that has the most memory, which is computer A in this case. So here's our diagram. We've now got a new component called the auctioneer. It's going to announce an auction to run the jobs in question, and then each computer is going to submit a bid. It will select the winner and communicate that to the task distributor, which will then send the jobs off to the winning machine. There's also something else going on behind the scenes here. Each computer is programmed with a payoff function which mimics the second price auction that I described earlier. So let's look at what that means with respect to computer A. If computer A doesn't enter the auction, the payoff is 0. If it enters but loses to computer B, again the payoff is 0. But if it wins, then its payoff is going to be 15 minus b, that is, its memory capacity minus computer B's bid. This is a rather abstract example, and we're going to ignore the burden on the machine of any jobs that it eventually runs.
So given this, similar to the sweets example, I want to show why it's always in computer A's best interest to bid a value of 15 and nothing else. Let's look first at why it's best if computer A doesn't bid anything higher. In this first graph, we see that computer A is winning: it's bid 15 and computer B has bid 8. If it bids higher, say 20, that doesn't change the payoff, because of the way the auction is set up, so there's no incentive to overbid when it's winning. What about the case where computer A is losing? Say it's bid 15 and computer B has bid 17. Computer A might want to bid higher to win the auction, but in that case the payoff becomes negative: it becomes minus 2, because the payoff is 15 minus 17. So again, similar to the sweets, overbidding its true value means that in order to win the auction, the payoff will always be negative. So there's no case where the computer wants to overbid its memory capabilities. What about bidding lower? Here we have a scenario where computer A is losing the auction, and if it were to bid any lower, that makes no difference: the payoff still remains zero. And here we have a case where computer A is winning at 15 and computer B is at 8, and anything between 8 and 15 again makes no difference to the payoff, until computer A reduces its bid so far that it goes below computer B's and moves into a losing state. So once again, the incentive is to stick with an honest bid. With this setup, each computer will submit an honest bid, and since it has more memory, computer A will run those jobs. So what does this show us?
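In code, the payoff rule each computer is programmed with might look something like this minimal sketch. It's only an illustration of the second-price idea above, with the 15 gigabytes and the example bids of 8 and 17 taken from the talk; the brute-force check over bids is my own addition:

```python
TRUE_MEMORY_A = 15  # computer A really has 15 GB of memory

def payoff_a(bid_a, bid_b):
    """Computer A's payoff: zero if it loses, true memory minus B's bid if it wins."""
    return TRUE_MEMORY_A - bid_b if bid_a > bid_b else 0

# Whatever computer B bids, no bid beats simply reporting the honest value of 15.
for bid_b in (8, 17):
    honest = payoff_a(TRUE_MEMORY_A, bid_b)
    best_possible = max(payoff_a(b, bid_b) for b in range(0, 31))
    print(f"B bids {bid_b}: honest payoff {honest}, best possible {best_possible}")
# -> B bids 8: honest payoff 7, best possible 7
# -> B bids 17: honest payoff 0, best possible 0
```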
Using concepts like the second price auction, we can design systems that incentivise machines to truthfully report their capabilities, meaning that the distribution of jobs remains efficient even as resources and demands change. So far we've looked at a couple of games, and we've made quite a few assumptions. For example, that all the jobs being handled are identical. But what about needing to distinguish between one-off tasks and long-running processes? And what about things such as deadlines, or needing to run jobs on the same machine? And in the auction example, we ignored the burden on the machine of actually running those jobs; we just focused on the second price auction payoff. So there are many more assumptions in the examples I've discussed with you today. But the simple starting points are still key, because not only do they help us start to think about these problems and build these models, these simple starting points also lead to some interesting applications, which I'm going to talk about now. So let's return to the bargaining example that we've just done. In a paper on load balancing in distributed systems by Daniel Grosu, he showed how you can develop an algorithm based on the Nash bargaining solution. Say we had a distributed system of three machines with their respective processing rates, and they needed to handle 7 jobs every second. After doing the initial calculation, we work out how many jobs are sent to each computer, and computer C returns a negative value. In this framework, that's interpreted as the computer being too slow to be useful, so we remove it from the system before redoing the optimisation. And we repeat the calculation until all remaining computers in the system have a positive value for the number of jobs they're sent. This kind of outcome represents that Nash bargaining solution point, where you can't redistribute the jobs in a way that lowers the time taken to complete all of those jobs. There was an experiment set up where this algorithm was pitted against others that optimised for different things, such as trying to minimise the overall expected time for each job, or dishing out the jobs in proportion to the processing rates of the computers. And what was found was that not only was this algorithm simpler to understand and simpler to program, but slow computers weren't overloaded: they were simply left out of the allocation, whereas the other mechanisms overloaded those computers.
And every single job, no matter which machine it ran on, had the same average completion time, whereas for the other schemes there was large variation in the time taken to finish the jobs.
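Here's a rough sketch of the kind of iterative procedure just described, using the log(rate − load) payoff from earlier. The closed-form step (give each machine its rate minus an equal share of the spare capacity) follows from maximising the summed log payoffs, and the three processing rates are made up for illustration; this isn't the exact algorithm from the paper:

```python
def allocate(rates, total_jobs):
    """Split total_jobs across machines by maximising the sum of log(rate - load):
    each machine gets its rate minus an equal share of the spare capacity, and
    machines that would get a negative share are dropped and the rest re-solved."""
    machines = dict(rates)
    while machines:
        spare = (sum(machines.values()) - total_jobs) / len(machines)
        loads = {name: rate - spare for name, rate in machines.items()}
        too_slow = [name for name, load in loads.items() if load < 0]
        if not too_slow:
            return loads
        for name in too_slow:        # not worth sending jobs to these machines
            del machines[name]
    return {}

# Hypothetical rates in jobs per second, with 7 jobs/s arriving at the system.
print(allocate({"A": 10, "B": 5, "C": 1}, 7))
# -> machine C is dropped, then A gets 6 jobs/s and B gets 1 job/s
```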
And what about auctions? Well, Cloud Foundry, the platform as a service I work on, incorporates some of these auction ideas when it comes to orchestrating where applications are run. On a very basic level, what happens is that the user says "I want this many application instances", the auctioneer asks the machines in the system "what have you got, what space have you got?", and the bids come back. In another write-up, on app placement in Cloud Foundry's Diego, Amit Gupta describes how these bids are constructed based on different criteria, such as available memory and available disk, and given certain constraints, such as the fact that the available memory on the resource has to be at least as great as the requirement of the app instance, the auctioneer then decides where to place these apps. On the surface this is a rather straightforward setup, but the way Cloud Foundry used to place applications was far more complicated, and the game theoretic context helped the engineers find a simpler and more effective solution.
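To make the shape of that placement auction concrete, here's a small hypothetical sketch. The cell names, the fields, and the "most free memory wins" rule are all my own simplifications for illustration, not the actual Diego scheduling logic:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    free_memory_mb: int
    free_disk_mb: int

def place(app_memory_mb, app_disk_mb, cells):
    """Only cells that satisfy the resource constraints may bid; the auctioneer
    here simply favours the cell with the most free memory."""
    eligible = [c for c in cells
                if c.free_memory_mb >= app_memory_mb and c.free_disk_mb >= app_disk_mb]
    if not eligible:
        raise RuntimeError("no cell can fit this app instance")
    return max(eligible, key=lambda c: c.free_memory_mb)

cells = [Cell("cell-1", 4096, 20_000), Cell("cell-2", 1024, 50_000)]
print(place(512, 1_000, cells).name)   # cell-1, the cell with the most headroom
```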
But sometimes you have systems where the resources are owned by self-interested agents, and any node added to the system might misreport its information, which can lead to performance inefficiencies. To help uncover that information, frameworks have been built around the ideas used in second price auctions. In one such framework, computers are programmed with a profit function: a payment minus a cost. The payment is paid out by the auctioneer after it receives the bids, and the cost is based on the burden on the machine of running whichever jobs it's sent. The payment is worked out such that the computer maximises its profit when it tells the truth about its resource capabilities. And so in these systems the computers, even though they may have other incentives, to gain more work or things like that, will find themselves reporting their true capabilities. Again, there's more on this in the paper I mentioned.
But what do game theoretic approaches mean more broadly for the type of systems that we build? Nassim Nicholas Taleb, in his book, discusses the potential of antifragility: in the face of failures, systems learn from feedback and get stronger. And I think systems built from game theoretic concepts have the ability to show some of these characteristics, because in the face of failures such as network partitions, the machines can efficiently and dynamically reallocate resources. So that's a really interesting concept to think about in relation to this stuff. Now it's time to wrap up. What are the specific results we've looked at today? When we know the capabilities of machines in a distributed system, we can use things such as the Nash bargaining solution to help us think about how to allocate resources. And in cases where we don't know the capabilities and we want to find them out, frameworks built around the second price auction can be really helpful in revealing those capabilities as well.
Game theory is often criticised when it's applied to humans in real-world situations, because it doesn't take into account things such as emotions. And you could say those concerns don't matter when you're looking at computers, and in one sense they don't. But in some ways computers can be like humans, because they behave in ways you don't expect, and when you're dealing with uncertainty you need ways to clear away the noise, and that's when game theory becomes useful. You don't even have to use game theory as a formal tool to understand these issues. As a starting point, you can just think about how you might model part of your system as a game, and think about what contracts or competition need to occur between components. And if you want to add the mathematics, there's a whole world of really interesting, simple yet powerful algorithms to explore. I recommend you go and read this book if you're interested in finding out more.
It doesn't look like it, but it's essentially just an introduction to game theory through a range of stories, going all the way up to war strategies, and as you read you can think about how some of these examples relate to the computer systems that we work with. And there are many other applications that I didn't touch on. If you're interested in cloud security, non-cooperative game theory provides some useful parallels. Maybe you're working with a database like Cassandra and you're interested in clustering; then evolutionary game theory is worth a look. If you want to know more about the companies in the platform-as-a-service landscape, then coalition theory can produce some interesting results. If you want to dig deeper into some of the things I've mentioned today, these are the posts and books that were really useful in helping me put together this talk. And one last question remains: would you like to play a game? Thank you.