How to conquer the world
Automated media analysis
The TIB AV-Portal uses these automated video analyses:
Scene recognition — shot boundary detection segments the video based on image features. A visual table of contents generated from this gives a quick overview of the video's content and offers precise access.
Text recognition — intelligent character recognition captures, indexes and makes written language (for example, text on slides) searchable.
Speech recognition — speech-to-text records the spoken language in the video as a searchable transcript.
Image recognition — visual concept detection indexes the moving image with subject-specific and interdisciplinary visual concepts (for example landscape, facade detail, technical drawing, computer animation or lecture).
Keywording — named entity recognition describes the individual video segments with semantically linked subject terms. Synonyms or narrower terms of entered search terms can thereby automatically be included in the search, which broadens the result set.
Recognized entities
Speech transcript
00:00
Hi everyone. Our next speaker studied philosophy of science and now works with data-driven methods in Amsterdam, and he will give us some nice insights into how you can conquer the world with simulations, so please welcome him. — Thank you. "How to conquer the world": that is the title of my talk, and I will show you how you can use a genetic algorithm to improve your Risk skills. In other words, we will be doing some risk analysis. A quick overview of this talk: first I'll introduce the board game Risk, for the people who haven't played it, or not recently. Then, if you want to analyze the game, you need a Risk framework to actually play it, so I'll show you how I implemented Risk in Python. Then I'll explain what a genetic algorithm actually is, and finally we'll see how we can use it to play Risk. So let's start with the game. A little history: it was invented in 1957,
01:25
so it's over 50 years old, under the name "The Conquest of the World", but since two years later, in 1959, it has been called Risk. According to the
01:33
publisher you can play it with anywhere between 2 and 6 players, but I'd say you need at least 3, because with 2 the rules are different. The playing time is anywhere between 1 and 8 hours, depending on how long you take to decide what you're going to do — I can tell you, in Python it goes a little quicker. Everything revolves around this game board.
01:57
What you see here is just a map of the world, divided into 6 continents, just like in reality, and a total of 42 territories. The territories are divided by borders, so they are neighbouring territories if they share a border, or if they are
02:16
connected by a red line that goes over sea. When you prepare the game, you first divide all the territories over the players you start with. Imagine we have 4 players: we divide all 42 territories among those players — here we have red, blue, yellow and green. First we put 1 army on each territory (the numbers in the circles show that there is 1 army on the territory), and then every player, one by one, gets to place 1 more army on one of their territories, wherever they want, until they each have 30 armies on the board. That looks like this.
03:05
After this, the game is ready to be played. Each turn consists of 3 stages: first there is the reinforcement stage, then the combat stage, and then comes the fortification stage. So what happens during these stages? First,
03:21
during reinforcement, a player gets to place some additional armies, and how many really depends on the status of the game: how many territories the player controls, and which ones. First of all, you get 1 army for every 3 territories you control, plus you get some bonus armies for owning a full continent — for example, if you own the entire continent of Europe, you get 5 more armies. Then there is a game mechanic that I won't explain in detail, called reinforcement cards: you earn these by conquering a territory, and they can get you a few additional armies. After reinforcement you can attack other players; this is the combat stage. You take some armies from one territory and attack a neighbouring territory. Combat is decided by throwing dice, so there is some chance involved, but basically you only have a good chance to win if you have more armies — really, it is all about having a lot of armies. If you conquer at least 1 territory during this stage, you get one of these reinforcement cards. You can keep attacking during your turn until you have had enough, or until you run out of armies. The last stage of the turn, which is a little less interesting, is fortification, where a player moves some armies: you pick one territory and a neighbouring territory and move armies between them. You can only pick one combination of territories, they have to be neighbours, and you can move as many armies as you have. So that summarizes the game — well, not really, because there is also the goal of the game.
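The speaker only says that combat is "decided by dice". As a sketch, the standard Risk dice rules (which the talk's framework may or may not follow exactly) look like this: the attacker rolls up to 3 dice, the defender up to 2, the highest dice are compared pairwise, and ties go to the defender.

```python
import random

def battle_round(attacking_armies, defending_armies, rng=random):
    """One round of standard Risk combat dice.

    The attacker rolls up to 3 dice (one army must stay behind),
    the defender rolls up to 2; the highest dice are compared
    pairwise and each lost comparison costs one army.  Ties favour
    the defender.  Returns (attacker_losses, defender_losses).
    """
    a_dice = sorted((rng.randint(1, 6) for _ in range(min(3, attacking_armies - 1))),
                    reverse=True)
    d_dice = sorted((rng.randint(1, 6) for _ in range(min(2, defending_armies))),
                    reverse=True)
    a_loss = d_loss = 0
    for a, d in zip(a_dice, d_dice):
        if a > d:
            d_loss += 1
        else:
            a_loss += 1
    return a_loss, d_loss
```

This also illustrates why "it's all about having a lot of armies": rolling 3 dice against 2 lets the attacker win both comparisons roughly 37% of the time, and a large stack can simply keep rolling.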
05:19
Each player gets a mission, and by completing the mission the player wins the game. Of course, if you just eliminate all other players, you also win, but there are also the missions, like the ones listed here — there are quite a few of them. For example, there are missions like "conquer Africa and North America": missions to conquer certain continents, to destroy another player, or to conquer a number of territories. So now let's go on: we want to play Risk in Python, so I built a Risk framework in Python that consists of the classes you see listed here. First of all there is Board, which handles all the armies on the game board and all the territories and how they connect to each other. Then there is Cards, which handles the reinforcement cards of a single player. Then there is Mission, which describes the mission of a single player and also checks whether that mission has been achieved. Then there is Player — the actual players that need to make the game decisions about what they are going to do to win. And finally there is Game, which keeps track of all the other game mechanics: the turns and all
06:24
the stages. OK, so let's have a look at the first object: the Board.
06:33
We can just import Board from the package and create one. When we do this, we have to specify how many players we are going to use, and it randomly distributes the territories amongst them. Then we can call the plot method on the board, and it will actually show which territory belongs to whom; every territory is initialized with 1 single army. The board has some methods that allow for very easy manipulation of the armies on it, so you can easily change the owner of a territory. We can even give a territory to an owner that is not yet in the game: for example, here we set the owner of territory 0 — which happens to be this one right here — to player 5, which is the black colour, and place 50 armies on it. If we plot the board again, we can see that this is an easy way to change the board. The board also knows the layout of the territories. Let's go back to this picture: territory 0, which now has 50 black armies, has 5 neighbours, of which 3 are owned by the red player and 2 by the blue player. So we can ask the board for the neighbours of territory 0, and it will tell us that there are 5 neighbours, of which 3 are owned by player 0 and 2 by player 1, and all of them have 1 army.
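A minimal sketch of the board interface just described — the method names and the tiny 4-territory map are assumptions for illustration, not the actual framework's API:

```python
import random

class Board:
    """Sketch of the board object described in the talk.

    A tiny 4-territory map stands in for the real 42-territory
    world map; the adjacency dict plays the role of the borders
    and sea connections.
    """
    ADJACENCY = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

    def __init__(self, n_players, rng=random):
        territories = list(self.ADJACENCY)
        rng.shuffle(territories)
        # Deal territories round-robin and put one army on each.
        self.owner = {t: i % n_players for i, t in enumerate(territories)}
        self.armies = {t: 1 for t in territories}

    def set_owner(self, territory, player, armies):
        """Hand a territory (and an army count) to any player,
        even one that is not actually in the game."""
        self.owner[territory] = player
        self.armies[territory] = armies

    def neighbors(self, territory):
        """(neighbour, owner, armies) for each adjacent territory."""
        return [(n, self.owner[n], self.armies[n])
                for n in self.ADJACENCY[territory]]

    def hostile_neighbors(self, territory):
        me = self.owner[territory]
        return [n for n, own, _ in self.neighbors(territory) if own != me]
```

With `board = Board(2)` followed by `board.set_owner(0, 5, 50)`, the queries behave like the example in the talk: `board.neighbors(0)` lists the adjacent territories with their owners and army counts, which is exactly the information a player needs to pick an attack.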
08:15
This makes it easy to decide, for example, which territories we are going to attack, because you can just ask the board. The board can also handle attack moves: you just call the attack method
08:28
from a certain territory to another territory — of course it has to be a territory owned by someone else — with a certain number of armies, and it will throw the dice for you.
08:37
And if you are lucky, like here, you can take over
08:42
the territory — like here, where we moved into the Middle East with the 50 black armies we just placed. Then there is also the Mission. We can get all the missions with the missions function; here too we have to specify the number of players, because the missions depend on the number of players. Here is a list of all the missions; each mission has a description. There are quite a few that ask you to conquer a certain set of continents; some of them ask you to conquer 2 specific continents plus an extra continent of your choice. Others ask you to conquer a certain number of territories, or sometimes
09:29
even to occupy each of your territories with 2 armies. And then there are the last missions, which ask you to eliminate another player — of course these depend on the number of players; if we had 5 players, there would have been 1 more "eliminate another player" mission. We can just take a mission and print it, and it will give you a
09:50
representation that tells you what the description of the mission is and to whom it is assigned. By default it is not assigned; we can assign it to a player — for example, here we assign it to player 0 — and then it will tell you it is assigned to the red player. Then we can ask whether the player has won the game already: we can have the mission evaluate the board and check whether the player has actually completed it, which in this case it has not. There is one special case, and that is when an elimination mission is assigned to its own target. For example, if we take the last mission here, "eliminate the yellow player", which is so far unassigned, we can assign it to the yellow player, which is player 3, and then it will tell you that the mission now is to conquer at
10:38
least 24 territories instead. OK, so that all works. Then there is also the Player. I am not going to describe the Player object in detail for now, because I made several versions and we will encounter some of them later, but the players have to implement four methods: one for placing the initial armies, and one for each of the three game stages — reinforcement, attack and fortification. And then, of course, there is the Game object. Instead of just explaining what it does, let's just go through the rest of the program. I have also implemented a RandomPlayer object, which just plays the game randomly — so if you think one of the players makes a stupid decision: no problem, because it is just random. So we can import the Game;
11:43
we also need to provide it with a few players, so let's make a game with four random players, and we can plot the game like we did before. You can also see that it now plots some statistics, which are probably not easy to read from where you are sitting. The game board has been divided amongst the 4 players. Now the first stage
12:09
is the initial placement. We can tell the game to initialize a single army: this simply asks a player to place 1 army on the board, and you can see that, for example,
12:23
here there is an additional blue army, here an additional green army, and a yellow one. We could keep calling this method until all the players have all their armies on the board, but that would be boring, so we can just call initialize_armies, and that does exactly that until we are done. Now you can see — you will have to take my word for it — that each player has exactly 30 armies. So we are ready for the first turn, the first turn of the red player, and the first thing red may do is place some armies on the board. We
13:03
can do that by calling the reinforce method and giving it the player object, which decides what it is going to do. If you have a closer look, you will see that indeed there are a few more armies on the board. Let's go back to the previous situation for a moment: you can see, for example, here
13:24
3 armies, and after reinforcing a few more in total. OK, so now comes the attack phase; in the same way, we call the attack method, and if you take a very close look — let's not go into all the details — you can see that red did actually attack something: a few armies have moved between territories. Then there is also the fortification phase, where again a few armies move. So then,
13:58
instead of going through all these steps separately, we can just play turn after turn, each turn consisting of the reinforcement, then the attack, and then the fortification, and we can keep doing this until the game has ended — or maybe just watch it for a bit. You can see that red is doing pretty well here:
14:19
it has taken over South America, and green is almost eliminated, but not quite. We can keep doing this until at some point the game ends. We can also ask the game whether it has ended, which it checks by simply going through all the missions to see if one of
14:37
them has been completed. And then, of course, after a certain number of turns — in this case it is 54 — one player wins, and it is red. That said, this is not a very good strategy that the red player is using right now, because there are always
14:54
armies stuck in Australia here. That is because it is moving armies randomly, and the chance of moving armies away from there is small, so that is not going to resolve itself. But what I have demonstrated is that we can actually play Risk. OK, now let's move on to genetic algorithms. What is a genetic algorithm? It is a machine learning algorithm based on evolution and natural selection — basically on how nature evolves. Why have I chosen a genetic algorithm? Well, it is easy to use even with very little knowledge of the problem, and it is very robust against noise and many dimensions, and that will come in very handy. OK, let's look at an example. Imagine we are trying to solve a puzzle whose solution is a string of 16 bits, zeros and ones — for example, this string of 16 bits here. Now imagine we do not know the answer to the puzzle; the only thing we can do is provide a possible solution, and then there is a function that will tell us how many of the bits are correct. For example, if we tried this string, it would yield 7, because the first four bits are all correct, the second block is partially correct, the third block is partially correct, and the last one has one correct bit, which gives a total of 7. Of course, this problem is simple enough to just solve with brute force, and usually you would do that, but let's do it genetically. What we can do is start by generating a few random solutions. We can evaluate all of these, and
16:47
then we get a score for each solution. The first one from the top and
16:57
the second one have a score of 9; the third one scores worse, and the last one much worse. Now let's only keep the best solutions, because we are not interested in the bad ones. We take the top 2 solutions, and what we can do is split them up and recombine them — we call these two the parents; I have coloured them here, a red one and a blue one. We can split them up and paste them together to form children. If we take the first bits of the blue solution and the last bits of the red solution, we end up with a solution with a score of 9, but if we do it the other way around — the first bits of the red one and the last bits of the blue one — we find a solution that has a score of 12. There is another
17:54
way to improve a solution a little, and that is by just randomly mutating it. For example, we take the solution we just found with a score
18:05
of 12, and we randomly mutate it: say we take the second bit and change it from 1 to 0, and we see that the score drops to 11. But if we do this a few times, with some luck we will find a solution that is a little better than the one we had before. We can keep doing this: take two good solutions, combine them, and hopefully find better solutions; then mutate them a little and combine them some more, and keep doing
18:33
this until we have found a satisfactory solution. In this case we know that the optimal solution has a score of 16, but in many cases we do not actually know how good the optimal solution is. So, in short, a genetic algorithm requires you to have a (random, or maybe not random) initial pool of solutions, an evaluation function, a combine method and a mutate method, and then you can just keep iterating and watch your solutions improve. Now let's think about how we would implement a genetic algorithm for Risk. There is one problem: when we want to evaluate different Risk players, we do not have a function that evaluates a score like this.
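The bit-string example above can be sketched in a few lines of Python; the hidden target string and the population parameters below are made up for illustration:

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # the hidden answer

def score(candidate):
    # The only feedback we get: how many bits are correct.
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b, rng):
    cut = rng.randrange(1, len(a))   # split the two parents...
    return a[:cut] + b[cut:]         # ...and paste them together

def mutate(candidate, rng):
    child = candidate[:]
    child[rng.randrange(len(child))] ^= 1   # flip one random bit
    return child

def solve(pop_size=20, generations=60, seed=42):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[:pop_size // 4]        # keep only the best solutions
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=score)

best = solve()
```

Keeping the parents in the pool means the best score never decreases, and with such a short bit string the population converges on the target within a few dozen generations.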
19:19
When we take the strings of bits, we can easily evaluate them and get a score of 9, 6 or maybe 12, but once we have four Risk players, which one is the best? There is no function that tells us how good a player is. We could of course have them play a game — let's play a game between the four players, and let's say player 3 wins. Is player 3 the best? We do not know, because there is a lot of chance involved; it could be that another player is actually better. But we can repeat it: say we play a hundred games, and then we see that player 1 actually wins the most games, so player 1 is the best. Like this we can play
20:07
hundreds of games until we are satisfied with the precision. But of course, this does not scale. What if we have, say, eight players, while only four fit in one game? We could have players 1 to 4 play a game, and player 1 wins; then players 5 to 8 play a game, and player 5 wins. Does this now mean that player 5 is better than player 1? Could be, but we do not really have a way to know, because the players from the first game never played against those from the second. The best solution would be to play games with all combinations of players, which in this case would be about 70, but remember: one game for every combination is not enough, so we end up playing a couple of thousands of games — and this is only for eight players. Imagine we have 100 players: then even a single game for every combination would require millions and millions of games. This does not scale, so we cannot do it. That is why I decided to use TrueSkill.
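The combinatorics behind this argument are easy to check (assuming, as the numbers in the talk suggest, 8 candidate players and 4 seats per game):

```python
from math import comb

# 8 candidate players, 4 seats per game: distinct line-ups to test
print(comb(8, 4))     # 70 line-ups

# Each line-up needs many games to beat the noise, and with
# 100 candidate players even one game per line-up is hopeless:
print(comb(100, 4))   # 3921225 line-ups
```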
21:28
TrueSkill is a Bayesian ranking algorithm developed by Microsoft Research and used to rank the players on Xbox Live, and it is very interesting. Let me give you an example. Imagine we have two players that have never played a game; then we do not know their skills at all. We can plot our belief about the skill of these players on an axis: on the x axis we see the skill of a player — to the right they are very good, to the left very bad. We do not know how good the players are, so there is going to be some broad distribution: probably not
22:09
extremely good, probably not extremely bad, just somewhere in the middle; that is all we know. How these distributions are updated was covered, for instance, by the talks of my colleagues here, if you want to play around with these distributions. So, as I said, these players have never played a game, so we do not know anything. But once they do play a game against each other and player 1 wins, then of course we are still not sure how good they are, but at least we think it is likely that player 1 is a little better. If, on the other hand, player 2 wins, then it is likely that player 2 is a little better than player 1. This is very nice and exactly what we expect. But a very nice property of this algorithm is the following. Imagine that we have one very good player — player 1 is very good; we already know that
23:13
it has played many games, and our belief about the skill of this player is narrow and far on the right side. Now imagine that player 2 is new, so we do not know anything about them, and they just have a broad distribution somewhere in the middle, like before. Now
23:32
imagine they play a game against each other and player 1 wins. Well, we expected that: player 1 is a very good player, and player 2 probably
23:41
was not as good, most likely. So not much changes: player 1 gets a little better because it won, and player 2 does not change much.
23:50
But if, on the other hand, the new player 2 wins the game against player 1, then suddenly we realize this player must be very
23:58
good, and you can see that the distribution of our belief moves all the way to the right. This means that, in order to get an idea of the skill of a player — or the skill of a
24:13
Risk bot — you do not need all players to play against each other: you can just use TrueSkill to maintain an estimated belief about the skill of each player. You could implement TrueSkill in Python yourself, but of course, as for most things in Python, there is already a package for this, called
24:36
trueskill, and you can just use it like this. By default, every player gets a rating with a mu of 25. We just make two ratings, a and b, and you can see that each rating has a mu of 25 and a sigma, which is the width of the distribution. After a game, we can recalculate the ratings. For example, we have a and b, each with a rating of 25, and we have them play a game with two groups — player a and player b each in their own group — where the first group came in first and the second group lost. We can see that the mu of a increases to about 29 and the mu of b decreases to about 21. If after this a wins again, you can see that the score hardly increases anymore, because we already knew that a was better than b. And if b then wins twice, the scores go back fairly close to 25 again, because apparently the players are about equal.
26:02
A very nice thing about TrueSkill is that it can also handle larger groups and more than two players. The typical case for Risk is that there is only one winner, so we can say that player a won, and b,
26:19
c and d all lost. So now we can use TrueSkill to rank players and evaluate them. All we still need to do is implement a genetic Risk player. As I said before, a player needs to be able to reinforce, attack, fortify and place its initial armies. So how did I do this? The easiest way is to just create a list of all possible moves. For example, when reinforcing, I have to place an army, so I
26:53
just make a list of all possible territories to place it on, then rank all these moves based on some criteria, and then I just pick the top-ranked one. What kind of criteria would I use? For example, for placing a reinforcement, let's define a metric called the territory ratio, which is the number of hostile neighbours around the territory divided by the total number of neighbours. This gives me a measure of how far this territory sticks into enemy lines. And of course we can have some metric for how important this territory is for my mission: if my mission is to conquer Africa and the territory is in Africa, then I need it. You can define many more, but let's take these two as an example. Imagine now I have 6 territories where I could place an army; I can calculate this territory ratio for all of these,
27:54
and I can also figure out whether my mission is applicable to each of them. Imagine these are the values; all these 6 territories have an index. Territory 10 has a territory ratio of 0, which means that it has no hostile neighbours — so you probably do not want to put reinforcements there. Now how do we create a rank out of these numbers? We can just define a reinforcement rank, which is the territory ratio times some weight plus the mission value times some weight. Let's say the territory ratio weight is 1 and the mission weight is 2; then we can calculate the rank like this. For example, for the first territory the territory ratio is 0.1, so that is 0.1, plus 1 times 2, which gives 2.1. We do this for all territories, and the top-ranked one is territory 18, so we place the army there.
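The ranking rule can be written down directly; the candidate territories and their values below are hypothetical, chosen only to mirror the shape of the talk's example table:

```python
W_RATIO, W_MISSION = 1.0, 2.0   # the weights the genetic algorithm will later tune

def reinforcement_rank(territory_ratio, mission_value):
    """Linear rank as described: ratio * w1 + mission * w2."""
    return territory_ratio * W_RATIO + mission_value * W_MISSION

# Hypothetical candidates: index -> (territory_ratio, relevant_to_mission)
candidates = {
    10: (0.0, 0),   # no hostile neighbours: safe, but useless to reinforce
    12: (0.1, 1),   # rank 0.1 * 1 + 1 * 2 = 2.1
    18: (0.6, 1),   # rank 2.6 -> the top-ranked move
    23: (0.8, 0),   # deep in enemy lines, but irrelevant to the mission
}

best = max(candidates, key=lambda t: reinforcement_rank(*candidates[t]))
print(best)   # 18
```

The same pattern — enumerate all legal moves, score each with a weighted sum of features, pick the maximum — carries over to the attack and fortification stages with their own feature sets.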
28:59
But now, how do we figure out which weights we should use? Well, for this we use the genetic approach. We initialize lots of players with random weights, play many, many games, drop the bad players, combine and mutate the good players to create new, maybe even better players, and repeat this. So what does that look like? For example, take the territory ratio weight and initialize it randomly with a uniform distribution between minus 25 and plus 25 — why not — and do the
29:40
same for the mission weight. Then we play lots of games, drop the bad players, create new players by combining and mutating these weights, and look at what we see. This was the initial phase; after 1 iteration of the genetic algorithm you already see that the players with a higher territory ratio weight do better than those with a low or negative one — and that is after only 1 iteration. Of course, 1 iteration is not much, but after 10 iterations almost all the players with a negative territory ratio weight are gone, and after 40 iterations it is nicely centred around 15. The mission weight is normalized a little differently: I fixed it such that it could never be larger than 1 — actually, I allowed it to run between minus 1 and plus 1 — and you can see that this one converges a little more slowly, but after 40 iterations it sits nicely around 1. This means that, when you are deciding where to put your reinforcements, the number of hostile armies around your territories is about 15 times more important than whether or not the territory is
31:11
relevant for your mission. In total I implemented 5 such factors for reinforcement: among others, whether or not placing here helps me get the bonus armies for completing a continent — the direct bonus, if I already almost own the continent, and a similar factor for continents I do not own yet — and whether my mission applies to this territory. For the attack and fortification moves you can do the same with their own factors. I can show you more plots. For example, the weight for the ratio of armies on neighbouring territories converged to a nicely
32:01
negative value. This means that you should never place reinforcements next to a very large enemy army close to your borders, which also makes sense. I did the same for the attack and fortification moves, with all the weights used in their rank functions. So this is the result after running 40 iterations. Of course I could run more, but this already took maybe 80 hours of computing time, so I decided to stop.
32:32
So this is the result: if you want to be a very good Risk player, study this. By the way, I do not claim to be a very good player myself now — definitely not. I mean, look at this: lots of these
32:50
distributions have not converged to a single peak, and maybe they never will — it does not really matter so much — but in the end we do expect them to converge at least a little more, so we could spend some more computing time on this. Also, by using these weights we only allow linear combinations; maybe it is much better to use something like the territory ratio squared plus the mission value times something — I do not know. We could implement that, but it would give
33:20
an even larger dimensionality and would take more time to run. And then, of course, all these players do not look ahead: they do not plan their turn, and they definitely do not consider the opponents' plans. So these are definitely not the most advanced players you can make, but I thought it would be a nice demonstration. I am nearly out of time, I am really sorry, so: the conclusion.
33:56
I have made a generic implementation of Risk in Python and then used a genetic algorithm to find better players. I am planning to make it open source — I just need a little time to clean it up — and I will put a post on my blog with a link to the repository. Until
34:19
then any questions yes questions I have a question myself we're able to intervene and then define like severest strategies no 1 so I I I don't think so come back next year and a half of thank you very much for everything and I would prefer if you would use apology correct term of library in the world and the other thing is which we caution that you take to make sure not to develop Skynet overhead which precaution did you take to to make sure not to develop this kind of well if the you can standard content and corporate world that's good enough right but might be the best strategy many questions thanks for 1st of all and we like this so in your best players you see them in have you tried to play against best hi how would and most of the the interface for for playing games yourself as a lower coverage the of the actually call the method of played a single game that took me an hour so I lost the 1st is isn't really good statistics to tell whether or not the way the better than and you move on guess that would take too much time to to actually figure out can and you what's the worst such introducing um yes so actually at the beginning I had really trouble running the 1st iteration of the the genetic algorithm turns out and depending on your rates some of the players do not detectable the and so if you have 4 these players in a single game and then you have lower tax at all in your game which turns out to be a fairly the song hi how much trouble was to visualize the data with the map and the number of so how much time did you have to spend on that but I was actually 1 of the easiest things to people taking their bigger and then placed in the adults and the numbers on it took me half an hour a little more general question because we're designing something like this you take the object or to approach and so of the the this is going to give the people through so you have a problem like this like how you decide how to call what objects and where the people would layer of 
abstraction sort of a given organized we design is for the 1st time the best advice I can give you is just think about it before you start implementing so I give you implemented there so I started with an implementation of the game which then in the end the lower nuclear to interact with that but that's not so useful and just think about it before you do it and make logical choices have what you represent an object and what you I'm thinking about it I think and I don't think there's a general rule what is the best solution just think about it before you we have some sense of what I have just a question about the because about TrueSkill metric is in it is kind of similar to the and a reading of the the but just so instead of a single memory just use distribution on the dual the express their lives relative scale so these these if story is there is there anything more I'm actually not aware that the other from moment that you do remember what was it except OK I don't know what it could be the same and I know that images of patented this 1 so there must be something different at least but it could be very similar will place into effect 3 non joking the detainee like this is possible with this kind of algorithms which was among the something out of it because you have to play thousands of please my money in you can take advantage of this kind of architecture are you asking whether you can run a Monte Carlo EM well analyzed and only in India and it this for the amount just that a where you you're playing these games and and use number of random numbers as input that means that the 1st distribution of the characters is random and the probabilities for combat so in in a way this is the amount of and on the other hand of force it's not clear and these are these genetic layers actually to make decisions on what to do and but I don't think you can Major take an easier approach from this 1 and this 1 yes we have a common sense so that's probably time for like 3 formal 
questions. One over here.
Q: Have you considered a neural-network approach for this?
A: To be honest, I wanted to implement a genetic algorithm for once, and I thought this would be a very good problem to do it on. Neural networks are of course usually very good in other areas, board games among them.
Q: In the Risk game there are two different phases: first there is the initialization phase, and then there is the main game. You could imagine splitting that up. What I am trying to say is: have you tried to find the best initialization strategy by fixing the strategy for the rest of the game, to see what effect the initialization has, and the other way around, fixing the way the players initialize and only optimizing the second half?
A: I haven't looked at that. At the beginning, when I still only had the random player, I worked out the importance of the distribution of the territories at the start of the game, and that random initial distribution has a very large impact on who wins. For the genetic players, on the other hand, I just used a single reinforcement method to place all the armies, both in the initialization phase and in the main game.
Q: I used to play Risk a lot myself, and I remember that making alliances with the other players was a very important part of winning the game. Did you take that into account?
A: No. These players are very simple: they just look at all the possibilities and take the best one. There is no interaction between the players whatsoever. But of course that would be a good first step: if you realize, for example, that the yellow player is about to eliminate another player, then you might want to
defend that player, to be able to prevent the yellow player from winning and to win yourself. None of these things are taken into account; this is all very simple.
Moderator: OK, thank you very much. Probably we will see more next year: improved strategies, alliances, and things like that. Thank you all.
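Several of the answers above (the first-iteration bug where players never attacked, the Monte Carlo question, and the closing remarks about the simple weight-driven players) revolve around the same structure: each player is a vector of weights, and each new generation of players is bred from the best of the previous one. The sketch below is only an illustration of that idea under assumed names, features, and parameters; it is not the talk's actual implementation.

```python
import random

def attack_score(weights, features):
    """Linear score for one candidate attack (the features are hypothetical)."""
    return sum(w * f for w, f in zip(weights, features))

def wants_to_attack(weights, candidates):
    """A weight-driven player attacks only if some candidate scores above zero.
    With all-negative weights, which random initialization can produce, it never
    attacks: the pathology mentioned in the Q&A."""
    return any(attack_score(weights, f) > 0 for f in candidates)

def crossover(a, b, rng):
    """Uniform crossover: each weight comes from either parent."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(weights, rng, rate=0.1, scale=0.5):
    """Perturb each weight with a small Gaussian, with probability `rate`."""
    return [w + rng.gauss(0.0, scale) if rng.random() < rate else w
            for w in weights]

def next_generation(ranked, rng, n_elite=2):
    """Breed a new population from `ranked` weight vectors (best first,
    e.g. ordered by a TrueSkill-style rating over many games)."""
    population = [list(w) for w in ranked[:n_elite]]   # elitism: keep the best
    parents = ranked[:max(2, len(ranked) // 2)]        # only the top half breeds
    while len(population) < len(ranked):
        mother, father = rng.sample(parents, 2)
        population.append(mutate(crossover(mother, father, rng), rng))
    return population

rng = random.Random(0)
# Eight random players with five attack-decision weights each.
generation = [[rng.uniform(-1.0, 1.0) for _ in range(5)] for _ in range(8)]
print(len(next_generation(generation, rng)))  # the population size stays fixed
```

In this setup the ranking step comes from playing many games per generation, which is why the underlying game simulation has to be fast.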
Metadata
Formal Metadata
Title: How to conquer the world
Series Title: EuroPython 2016
Part: 98
Number of Parts: 169
Author: Geer, Rogier van der
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You may use, adapt, and copy, distribute and transmit the work or its contents in unchanged or adapted form for any legal and non-commercial purpose, as long as you attribute the work in the manner specified by the author or licensor and, if adapted, distribute the resulting work only under the same license as this one.
DOI: 10.5446/21215
Publisher: EuroPython
Publication Year: 2016
Language: English
Content Metadata
Subject Area: Computer Science
Abstract: Rogier van der Geer - How to conquer the world. The popular board game of Risk has many fans around the world. Using a Python-based simulation of the game, we use a genetic algorithm to train a Risk-playing algorithm. During this talk we'll explain what genetic algorithms are and present an entertaining use case: how to win at popular board games. We'll demo how object-oriented patterns help with the design and implementation of these algorithms. We'll also demonstrate a library that allows users to push their own Risk bots into a game and battle it out.