Designing a scalable and distributed application


Speech transcript
Hi everyone. You can find me on the web under my nickname; I am a Gentoo Linux developer, part of the Python and clustering teams, and I maintain various packages related to SQL, key-value stores such as Redis, and message queuing technologies. In my professional life, I work for a company where we do programmatic, data-driven marketing and advertising for clients.
Well, I am here to talk to you about designing scalable and distributed applications. As you may have seen, this year we have quite a lot of DevOps talks, and this is actually one of them. I will not put a lot of Python code on the slides themselves; instead, I will demonstrate the full effect of running distributed Python at scale. Just as a disclaimer: there is no one true way of doing this kind of thing, so I will just share my experiences and give some guidelines that I found interesting to address this kind of design. And I will do something very risky: showcase a real application in a live demo; that's what I promised, so that's the contract.

So what are we going to do? We are going to design a geo-distributed hit counter web application. Let me just explain quickly the steps we will follow. I will start by explaining to you, and agreeing with you on, the application's contract, which defines the goals and functionality we expect our application to provide. Then I will continue with some philosophical guidelines that I use to design this kind of application. Then I will present the stack I have chosen, and explain why. Then we will talk about service discovery, and then we will see how this all comes together in our application. We will end with the live demo, and maybe an open discussion. So let's start with the first point of our
contract. Geo-distributed means multiple data centers, and we expect our application to provide the same level of functionality around the world. Note that in this talk I will not cover how you use DNS to direct your users to the closest data center; instead, I will focus on the application itself. The second point of our application's contract is that the web application will display the sum of the hits from all our data centers around the world, and the counter displayed should be the same for all users, wherever they come from. The third point of our contract is that scaling out or down the application should take no manual operation or reconfiguration on the application side, even when we add a whole data center to our topology. That means that the application will be able to reshape itself automatically, and this will obviously provide some kind of fault tolerance. The last point of our application's contract is that we want the background color of the web service to be configurable, and when this configuration changes, it should be made available to all the web services immediately, in all the data centers. So we configure it once, and it is taken into account immediately everywhere.
OK, so that's our contract. What I think is that solving this kind of complicated problem usually depends more on pragmatic technological choices than on pure coding skills, so let me first share some guidelines that I have found interesting over time to address this kind of design. The first one is
actually that your stack is what makes your code run, and what makes your application scale. So, for this matter, I really favor choosing tools offering a maximum of features that developers and operations people can both benefit from. That's an important point: this will allow all the involved parties to use and reuse robust functionality, instead of having to re-implement or code it over and over. The
second point you may all know already: it is the Zen of Python, and I think it is a good philosophy that can help you choose the right technologies and implementations for your architecture. I usually avoid, as much as I can, using any black-box or magical technology; this usually means that I tend to avoid technologies that I wouldn't be able to explain to my mom or dad in less than five minutes. The other one
is that there is a good story that you may have read already. One day, there were those guys who had to build tools to manipulate text files. They could have written one big program that could do everything on its own, with a lot of options and stuff like that. Instead, they created tools like grep, cut and sed, each of which specializes in one class of file manipulation, and then they created pipes, so we could combine them all at will. Well, this proved to be a great design over the years, I guess, and we can absolutely apply this kind of design to distributed applications.
The idea is to break our application down into smaller components, where each component provides a small and simple service. Isolation also means that these components should be truly autonomous, so we really have to resist the temptation of sharing any kind of state between them. You can relate this to REST APIs, which are a good example, because we use and reuse them in our own applications, and they can be seen as isolated components of our application. But now, how small can a component be? Well, that's
when you start thinking microservices, right? That's the trendy word today, and it is getting more and more popular, and that's a good thing. But actually, microservices are nothing new: they are just the extreme version of componentization. They are, by essence, a distributed architecture style, and they actually suffer from the same trade-offs, plus one more: when you talk about microservices, you talk about micro-management. You have to work hard on automation, and this can take time, so I recommend finding a good balance between your needs and the added orchestration complexity that this implies. You have to ask yourself: do I really need to split this up? Because the more you split up components, the more you rely on their ability to communicate with each other.
This is true for every distributed architecture and every distributed design, actually: remote communication is slower and generally unreliable, with network failures and latency that you cannot control, and this gets even worse when you start using an Internet connection to connect your components together. I find that message queuing technologies are a good choice to address this kind of issue, because they provide, by themselves, some kind of network fault tolerance mechanism. But what happens when components cannot communicate with each other? That's when applications can become eventually consistent. I think that this is a major point, because it has a great impact on how you design your application: we have to decide where we can accept that kind of state in our code or architecture, and make compromises. So now let's talk about our application's stack. OK.
In my case, I chose nginx and uWSGI. I chose nginx because it is very fast and offers a lot of interesting features at the HTTP level; we will use it as the main interface of our web services. uWSGI, on the other hand, is a fast and pluggable application server; it will run our code. It is written in C and was designed from the start with Python as its primary supported language, but it doesn't only support Python. It offers out of the box some strong and proven features which we can use natively with Python as well, such as asynchronous loops, scheduling, and metrics support. And what's good about these two is that they integrate with each other: there are configuration options in nginx to speak natively with uWSGI. So now let's review our application's components.
The first one is what we agreed to call the collector. The collector gets the incoming web requests, and for each request we generate a hit job for our backend processor, then we query the total hit count, which we display back to the user. It's pretty straightforward on paper. Then we have the processor: the processor consumes jobs and increments a counter for each job it consumes. Again, pretty straightforward. But now let's see how we use this stack for those components. We end up with the collector component on the left; this actually represents a whole server, and the processor runs on its own server as well, so there is a clear separation between them. nginx is at the front of our collector web service and passes the requests through uWSGI down to our Python code, which in this case is fast, using the asynchronous loop of uWSGI. The processor component is the backend responsible for calculating the total hit sum; uWSGI runs this single piece of Python code as a mule, that's uWSGI's term, but you can see it as just pure Python execution, nothing fancy. Now we have these two separated components running, each on its own server, and we need a way to exchange jobs between them.
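The collector/processor split just described can be sketched with plain Python objects. This is an in-memory sketch of my own, not the talk's actual source code: the class names are illustrative, and a deque stands in for the local beanstalkd queue while a dictionary stands in for the shared counter store.

```python
from collections import deque

class Collector:
    """Takes a web hit: queues one job per hit, returns the current total."""
    def __init__(self, queue, counters):
        self.queue = queue          # stands in for the local beanstalkd
        self.counters = counters    # stands in for the shared key-value store

    def handle_hit(self):
        self.queue.append({"type": "hit"})   # enqueue a job for the backend
        return sum(self.counters.values())   # total displayed to the user

class Processor:
    """Consumes jobs and increments the counter for its data center."""
    def __init__(self, queue, counters, dc="eu-west"):
        self.queue = queue
        self.counters = counters
        self.dc = dc

    def run_once(self):
        while self.queue:                    # drain all pending jobs
            self.queue.popleft()
            key = "count/" + self.dc
            self.counters[key] = self.counters.get(key, 0) + 1

queue, counters = deque(), {}
collector = Collector(queue, counters)
processor = Processor(queue, counters)

for _ in range(5):
    collector.handle_hit()      # 5 hits queued, nothing processed yet
processor.run_once()            # processor drains the queue
print(collector.handle_hit())   # prints 5: the 5 processed hits
```

The point of the split is visible here: the collector never touches the counter directly; it only queues work and reads the current total.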
That's what beanstalkd was exactly designed to do, in a blazingly fast and reliable way. I chose beanstalkd over other queuing technologies such as RabbitMQ or ZeroMQ because its core design is just like memcached's; those of you who have already tried it will see what I mean. beanstalkd does just one thing, and does it simply and fast. It is dead simple to set up, very easy to operate, there is almost nothing to configure, and it offers persistence for fault tolerance through its binlog. We are also using uWSGI to monitor beanstalkd: in case of a sudden crash, uWSGI will respawn the beanstalkd server on the fly for us. It is really simple: it's just the command that you want run, added to your configuration. But one last question remains: where do I run this beanstalkd service? Do I set it up on its own server, the microservices way, or embed it within one of our components, and if so, which one?
In this case, I want to make sure that I never lose any hit. That means that I need strong locality between the collector and its queue: I don't want to have communication problems between them, so I run them on the same server, in the same component. I accept the compromise that comes with this, which is that my processors can become eventually consistent: in case of a network failure here, in between, the processor cannot get the jobs from the collector, and then my counter will not be incremented. That means my application becomes eventually consistent. Now let's see how this plays out in one data center.
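The "never lose a hit" guarantee rests on the queue's acknowledgement cycle, which comes up again in the Q&A: a job is reserved with a time-to-run (TTR), and if the worker dies before deleting it, the job goes back to the ready list for another worker. Here is a toy in-memory model of that behaviour; it is my own sketch of the semantics, not beanstalkd's real protocol or wire API.

```python
import time

class ToyQueue:
    """Toy model of a reserve/delete cycle: a reserved job that is not
    deleted before its time-to-run (TTR) expires returns to the ready
    list, so another worker can pick it up."""

    def __init__(self):
        self.ready = []       # list of (job id, body), FIFO
        self.reserved = {}    # job id -> (body, deadline)
        self._next_id = 0

    def put(self, body):
        self._next_id += 1
        self.ready.append((self._next_id, body))
        return self._next_id

    def reserve(self, ttr=120.0, now=None):
        now = time.monotonic() if now is None else now
        # First, return expired reservations to the ready list.
        for jid, (body, deadline) in list(self.reserved.items()):
            if now >= deadline:
                del self.reserved[jid]
                self.ready.append((jid, body))
        if not self.ready:
            return None
        jid, body = self.ready.pop(0)
        self.reserved[jid] = (body, now + ttr)
        return jid, body

    def delete(self, jid):
        self.reserved.pop(jid, None)   # acknowledge: job fully processed

q = ToyQueue()
q.put("hit")
jid, _ = q.reserve(ttr=2.0, now=0.0)   # a worker takes the job...
# ...and crashes without calling delete(); after the TTR the job is back:
again = q.reserve(ttr=2.0, now=5.0)
print(again[0] == jid)   # prints True: the same job is handed out again
```

A worker that finishes normally calls `delete(jid)` within the TTR, and the job is gone for good; a crashed worker simply never acknowledges, and the queue re-dispatches.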
Duplicating collectors has an impact on the processor: every time I add a collector, I need to reconfigure each of my processors by hand, so that it connects to every beanstalkd instance from all the collectors and pulls jobs from them. Duplicating our processors means that we need some kind of external database around here, so that they can share a single counter: if the two processors each incremented their own counter, it would not account for what the other has counted, so instead they both increment the shared one, and then the collector can access the same counter and display it to the user. So how does this work when we start expanding over multiple data centers? Well, we can actually keep one local counter per data center, but then the collectors would need a way to access the counters from all the data centers, sum them up, in this case that would be 500, and display the result to the user. We need a system that will allow our components to detect each other automatically. That is
where service discovery comes in. You can see service discovery as a sort of dynamic implementation of DNS: with DNS, you request a domain name and you get the address of the server to connect to. Service discovery, on the other hand, is dynamic: you query the catalog for a given service, and you get a list of all the available instances providing that service. If one of them becomes unavailable, in case of a shutdown, a scale-down or a failure, it is removed from the catalog immediately, and your application stops connecting to it. It's as simple as that. There are a few service discovery servers available, providing different kinds of features, such as ZooKeeper, etcd and
Consul. Once again, I chose Consul because it provides all the features I need to address the limitations we just talked about. It is written in Go, and it is very easy to use and deploy. There are several Python libraries for Consul, and I use one of them in this application. So now let's take a dive into Consul and see how it works. In each data center, we have a Consul cluster, which is usually made of three Consul servers; one of them is elected the local data center's Consul leader, and each Consul cluster hosts its own key-value store. I said counter earlier, but it is really a generic local key-value store: you can put anything in it. Then you use agents to interact with the Consul cluster: your services will register and deregister themselves through the local Consul agent, and your clients will be able to look up a service in the catalog, or query the key-value store, through the agent as well. The Consul cluster can also be queried using standard DNS or an HTTP API, if you want to do it yourself. And finally, we connect the different Consul clusters to each other using the WAN gossip pool, over the Internet here. It is a simple configuration that you add to your Consul servers so they know where to connect; then they join and communicate with each other. Now, there is one great thing about Consul and uWSGI, which is that there is a Consul plugin for uWSGI: it will automatically register your application in the catalog when it has started successfully. Then uWSGI will handle the health checking for you: it will periodically send a health check saying, hey, the application is still alive, you can keep it in the catalog, and it will do that for you, so you don't have to code it yourself. And if your application happens to die, or even if the whole uWSGI instance dies, then the service will be removed from the Consul catalog automatically. It is
very easy to use: it's just one line you add to the uWSGI configuration file I showed you earlier, and there is pretty much nothing more to say about it. OK, so we have all these bricks together, and we are finally ready to put all the pieces together. So let's do this step by step.
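Before the recap, here is a minimal in-memory sketch of the service discovery behaviour described above. This is a stand-in of my own, not Consul's real API: the idea it illustrates is that instances register and deregister themselves, and clients always ask the live catalog instead of a static configuration file.

```python
class Catalog:
    """Toy service catalog: register/deregister instances, look them up."""
    def __init__(self):
        self._services = {}   # service name -> set of addresses

    def register(self, name, address):
        self._services.setdefault(name, set()).add(address)

    def deregister(self, name, address):
        # e.g. a failed health check removes the instance immediately
        self._services.get(name, set()).discard(address)

    def lookup(self, name):
        # clients re-query this on every (re)configuration, no static file
        return sorted(self._services.get(name, set()))

catalog = Catalog()
catalog.register("beanstalkd", "10.0.0.1:11300")   # addresses are made up
catalog.register("beanstalkd", "10.0.0.2:11300")

# A processor reconfigures itself from the catalog, no manual step:
print(catalog.lookup("beanstalkd"))   # prints ['10.0.0.1:11300', '10.0.0.2:11300']

catalog.deregister("beanstalkd", "10.0.0.1:11300")  # instance died
print(catalog.lookup("beanstalkd"))   # prints ['10.0.0.2:11300']
```

In the real application, Consul's catalog plays the role of this class, and the health checks sent by uWSGI drive the register/deregister calls.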
We had our collector and processor components; then we had communication between them using the embedded beanstalkd; then we had the shared counter problem, and we could then start scaling out our collectors and scaling out our processors; then we used the key-value store from the Consul cluster to hold the counter for us, and we used service discovery to allow our processors to detect every available beanstalkd service in our topology and get jobs from them. And
now we can add another data center to our topology, connected by joining the Consul clusters together over the Internet, and all our components are aware of the presence of the two data centers. And we're done! Well, there is one last note before that: our collectors should display the sum of the counters of each data center, but how do you implement this kind of thing? They could connect to every available data center and query the counter there; that means the collectors would connect to the distant Consul clusters and get the distant counters: each one would auto-detect that there is another data center, connect to it over the Internet, and get the counter. Makes sense. But what happens if there is a communication problem between our data centers? That doesn't scale, right? So instead, each time the processor increments its local counter, it will also detect that there is another data center available in our topology, connect to it, and write its own value of the counter, here the EU one, into the key-value store of the opposite data center. So now, when the collectors need to get the sum of all the counters from all the data centers, they can just query the local counters from their own Consul cluster. That's locality, and we avoid any kind of latency problem or inconsistency in our web application between our data centers. At the end of the demo I will showcase an Internet problem, so you will be able to actually see this happen. OK.
I know, that sounds like a lot, mind-blowing even; I think it's time to relax now. OK, so here I take the big risk of the day, and we go for the live demo. Let's go.

On the right side you have the European stuff; on the left side you will have the US stuff. I will start by showing you the Consul UI, which comes directly with Consul. We start with this in the European data center, EU West, and later we will have US West. Right now it's empty on purpose: we only see the Consul leader, which is available, there is only one node, which is the server itself, and there is nothing in our key-value store. Fine. So that is the first brick in this demo, the Consul UI itself.

Now I will start a collector. So I'm starting my web service, and what you can see here is that uWSGI already detected my collector and registered it on the Consul cluster. We can see now that in the services we indeed have a collector service available in my catalog; there is a new node that appeared, but the key-value store is still empty. I will now query my web service. OK, so it responded correctly, and the sum is, for now, zero. I just did one hit; let me do two, three, four, five. What's happening? Well, I still don't have a processor service available, so for now my collector is just putting the jobs in its beanstalkd, but they are staying there: they need to be pulled from there and processed by my processor service, so that they are inserted in the key-value store, which will then be displayed over here.

OK, so let's do it: I'm going to connect to the other machine and start my processor service. OK, so my processor service started, and what we can see... OK, now some of you have actually connected to the web service, because we have nine jobs around here. So, my processor service is starting in the EU data center, and that's what we see here: once again, uWSGI registered it on the Consul cluster, and then our service discovery allowed our processor to detect that there was one collector service available in this data center, and it connected to it, the machine I mentioned. It discovered that there were nine jobs on it, and then it went and incremented the counter at count/eu-west, which equals nine, which is the number of hits that we had at this time. And now, yeah, OK, some of you are hammering it in the meantime; anyway, it's perfect, it proves this is live, this is not a joke. So actually, if I reload my application here, the web service, I can see that it discovered that there is actually an EU West counter available, that the count was at the time 247, and it displays the sum.

I can also see now that in my services the processor is here, working as expected; we see our three nodes now, and in the key-value store I indeed have count/eu-west, which keeps growing. OK, so this is working as expected. And right, just for the audience: if you could try not to hammer it too much, because now I'm going to start a new collector, still in EU West. It is really interesting to see that, as soon as it is started, it will be picked up by the processor, and the processor will say: now there are two beanstalkd services available, and I will connect to both of them. OK, you see that immediately our processor said: it's OK, I detected the new one; so now I have two beanstalkd services, and the load is distributed between my collectors. So it scales really easily.

And I can see as well that in my Consul cluster I now have four nodes, and the key value keeps on growing, right. OK, I promised another thing in my contract, which is that the background color of my web service is configurable. So I will now set the color in my EU data center to green. OK, and this will actually put a job on the beanstalkd from the collector, and then it will be picked up by the processor, which will in turn detect all the data centers available and set the right color in the key-value store, which is then picked up by the collector web service and displayed to the user. This is what just happened before your eyes, and indeed in the key-value store I can see that the color is green. OK, so this is
working, actually, which is pretty amazing, no? And I can change it whenever I want, and it is immediately picked up by our web services, wherever they are. OK, this was done in one data center; how does it work with another data center? So here, this is Consul: there are lots of things on the Europe side, and I will start by adding the new Consul cluster in the US. OK, they picked each other up, so now you can see that in Europe they see that there is a US West data center available, and you can see in the US that they picked up the EU West DC. Fine.

This time I will start my processor in the US first. What happened here is that I implemented some kind of synchronization at the start of my US processor: when it started, it registered as usual via uWSGI in the US Consul, but it also picked up that there is one European data center available. So it went there, discovered that there was a key value for the color, and synchronized it on the US side. Then it did the same thing for the counter available in the EU, which was at the time one thousand. And finally, it saw that there is no beanstalkd service available in the US yet; this is normal and expected, we didn't start any collector in the US yet.

So let's go to the US Consul cluster. We can see that we only have one service besides the Consul leader, which is the processor we just started; there are two nodes; and in the key-value store I have the configuration, the color, already available, I already have the EU West counter here, and I also have count/us-west at zero. OK. So what you can see, actually, if I keep on hitting here, is that it is growing. Yes, because the processor on the European side picked up that there was a US data center available, so now it is not only copying its own counter to Europe, it is also copying it to the US.

OK, but I still don't have a web service available in the US, so that's what I'm going to do now. And here, what do you expect? Well, we expect that our application just starts and works by itself, with zero configuration, nothing. OK, for now it shows zero; that's normal, because we didn't have any hits counted from the US side yet. So I will just go and query my US web service, and count/us-west appeared in Europe as soon as the counter got incremented in the US. And now you can see that they are pretty much doing what our application's contract asked for, which is displaying the sum of all the hit counts from every data center. And I can still play a bit with my color and change it, and it gets picked up everywhere around the world.

All right, just to finish up: I promised that I would break something for you. I am going to cut the communication between the two data centers, and you will fully understand why this copying of the counters is efficient to address inconsistency problems. So here, I just stopped the Consul server on the US side. OK, so now in the US, indeed, the sum is zero if I query the local key-value store, which is down together with the US Consul cluster. But what you can see here is that my sum in Europe remains consistent, and that is because this counter was synchronized from the US to the European Consul key-value store. That was the point I was trying to make earlier; I was not very clear about it, but now you can see it live. And then I can just start my Consul server again on the US side, and everything gets synced and picked up. Once again, everything reconnected to the cluster by itself and started doing the job as usual. OK, well, thank you.
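The trick the demo just showed, each processor writing a copy of its counter into the other data centers' key-value stores so collectors only ever read locally, can be modelled in a few lines. Plain dictionaries of my own stand in here for the per-data-center Consul KV stores; this is an illustration of the scheme, not the talk's actual source code.

```python
# One key-value store per data center, keyed by data center name.
kv = {"eu-west": {}, "us-west": {}}

def record_hit(local_dc):
    """A processor increments its LOCAL counter, then replicates that
    counter's value into every other data center's KV store."""
    key = "count/" + local_dc
    store = kv[local_dc]
    store[key] = store.get(key, 0) + 1
    for dc, remote in kv.items():
        if dc != local_dc:
            remote[key] = store[key]   # push our counter to the remote DC

def local_sum(dc):
    """A collector sums every count/* key from its LOCAL store only."""
    return sum(v for k, v in kv[dc].items() if k.startswith("count/"))

for _ in range(3):
    record_hit("eu-west")
record_hit("us-west")

print(local_sum("eu-west"), local_sum("us-west"))   # prints: 4 4
```

If us-west then goes dark, eu-west still holds the last replicated us-west value locally, so the sum it displays stays consistent; that is exactly what the failure part of the demo showed.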
and this was soft good
is available on the 10th so I encourage you to check it out and maybe now that we have some time ahead of us we can discuss this and it's not about the question like I said it's a open discussion because this is the way happened to implement it but I am sure that you may have implemented it may be other ways and you can how yeah there is 1 good thing about the source code is is it is not just about source code it I also provided all the and civil playbooks to actually orchestrate and automate the installation of hold stack you just seen and so you can play with it I did it on Amazon Web Services so I guess it's the common standard for the most people and but you can definitely and that the 2 containers or whatever you want it's not a problem at all well be without without 1st of all thank you very much for this awesome told I hope question and was there any reason not to use the radius for Hezekiah when store because it provides a lot of queries functionality working with when that would have meant and other components in my topology which I really didn't need for these kind of application so it really depends on what you are designing actually but in in this case all they need when they need it is a key value store and I could just be very easy to implement the replication of these things to and service discovery offered also by councils it's always only once technology I could address everything at once so I didn't really need some kind of added complexity are features from reduced to achieve anything else from you but yes you could definitely use it for your own needs and use it any questions things we have wonderful talk the and intelligence stuff gold so all that means imagine a case for have assessing mark apology but a bit different task for example implemented the case I have to time thinking and fire a bullet right this slides and fire bullets another clients and are collected data and on the the transform problems of transitive like and I have to 
calculate some of also a client server perhaps what that suggests is there to population latency of on the client word with more logic to the server it's just a couple of things that's a tricky question which is about latency management well and if they if you need 400 milliseconds to go from 1 part of the work to the other whether you implement this on the server on client or a server is almost the same the advantages of on the client side is that usually used is to make sure that your user is close to the data center serving your game so In this case I would try as much as I can do to mitigate is an but some energy also on the client side but if the user from the US fire 1st and some from Europe fired whatever you do on the client side in the US it will take 400 ms for this information to reach the other 7 so maybe you were looking for peer-to-peer type of connections instead of having a start acknowledging a start apology you know maybe I will try in this kind of field for you for example thank you so it all I want to have some points about he's but the it's it's example for this talk because of the lingering you so it's implements all the of the center because the client can be broken and you will see what is this idea and so on in this case the of the but some ways there implements some of logic and the center yeah if you there is no definite you where circuits also a good thing to use evidence-based and direction so you can benefit also from the client side to understanding of things if you wear in and 2 thousand and 13 and Europe item that was very good at a demonstration of the game which was not distributed but with the client side and and then and then fight inside with using you was and by the creator you with about and so I maybe you should check it out just to to see just how we he played with the server about and the different parts of the yeah it's even event-driven even based on on on on the on the from and you have just finished on games actually 
when you see animal and the most games such as evil mind that I played a lot of the time and there is no miracle solution all the clients connected to 1 and only 1 that center that's that's a point so yes there is no big American so can mitigate but not for field for it for 1st of all things for the class I have a question what happens if this case is simple 1 it's the content but what happens if we have something in the year to process and of the processes by that states of the only information you so basically we we have an inconsistent state and how would OK and so the failure of process are widely used for assessing a job which can be intended to congestion take time sorry yeah and well actually it's already implemented in the you have these and reserve and Gillette mechanism so that's an acknowledgement protocol that means that when you take a job you reserve it and you have are traditionally so 2 minutes to process it before being but back in the cube so if you're process dies in between those 2 minutes and that's configurable you can choose and well then it will re-enter the Q and people by another life process are in process Indian so that's persistence and the delay between them is this the time to leave you you the nineties so the site and it's called time to run the value you put there what about old so if have something we call and we need to have a consistent state across data centers may be how can we do about that they would have them all something very important come and when someone checks the company has yet at 100 per cent then nothing on and that it's 1 0 it's a single it's a simple and so then you can the correct part and the process of but you have to make them to stick together in the same component in the and in the sensor and in the same component duplicates component over multiple data centers and then have them be aware of all the data was available and you just have to do everything at once that's you don't need to study actually almost 
yeah you don't need it at all not yet been so the is indeed here for the i asynchronicity in our case OK but it's still at the level of service discovery will allow you to do it understanding and where experiences with that using this kind of thing for someone reliable things that the may be many and here is that if the sending an e-mail to have a Q of e-mails is not some of that and so when you know when person's handling something that you're going to find from source or a high of it and if your process there is handling a job that is unreliable like sending an e-mail or SMS of the how did how do you handle that's how do you handle that resiliently and effectively well I think it's the same than before and if you have to see a job like a representation of maybe use some of you use celery and so it's a task actually you can see it as task it's just like the same thing so the difference here is implemented by myself I didn't even next for library which comes with all the dependencies in this case would be a rabbit and you so it's my job in being stored the represents an e-mail and that my once again phase between before it that happened to send it efficiently and effectively sorry and then it will re-enter the the same Q and it would be picked up by another process of which we're trying to that question so I can add on to it may be a little bit which is you know so in the case of an e-mail you send it out the prices like yes this is my e-mail that's also and then you know 7 hours later it comes back and balanced and say you euro in that thing of trying to link to link of jobs back together yeah you will have to to implement some kind of bonds passes so depending on your anti-AIDS maybe something is fairly easy to do or not and then detect this kind of event and I guess you would need a sort of database between them to store I will do that you all the HTML and or the the source of your e-mail that you were sending and just generate another job from its and to to to 
But honestly — and we send a lot of e-mails at our company — you shouldn't do this kind of thing in real life. Detecting bounces and trying to resend the same e-mail to the same address would be considered abuse by most e-mail service providers. But yes, you could end up with something like this.

Q: Let's assume one of your datacenters is being DDoSed by someone. Do you have the ability to spread the computation across many datacenters?

A: If you take the US datacenter, for example, only the US processes will compute the additions for the hits coming into the US; you can't spread that computation out all over the world.

Q: So it can't run anywhere?

A: I guess a DDoSed datacenter gives you two problems. The first one is that it kills your Internet connection, so you end up like when I stopped the consul leader on the US side — you can compare it to a network failure. And if you have a failure such as this, your hit counts and your regular user requests don't come in either, so there's nothing much to process anyway. Meanwhile, the communication between the processes and the collectors of a datacenter is local to that datacenter, so that part won't be affected at all.

Q: But you could still have your queue fill up with pending jobs, and that could kill you quite easily.

A: Yes, that's possible. Then you have to do some kind of scaling of the nodes, or filtering, or rate control.

Q: Couldn't every datacenter run a processor that also processes the US queue?

A: Yes, you could do that. The first implementation I did was actually like that: processes getting jobs from all the queues all around the world, regardless of the datacenter. But then you still face the problem that when the US datacenter is down, the processes from Europe wouldn't be able to reach it anyway.
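The per-datacenter isolation argued for in that answer — each datacenter's processors consume only their local queue, so a partition or DDoS in one datacenter cannot stall the others — can be sketched as follows. The datacenter names and queue wiring are invented for illustration:

```python
import queue

class Datacenter:
    """One datacenter: a local job queue consumed only by local processors."""
    def __init__(self, name):
        self.name = name
        self.jobs = queue.Queue()
        self.processed = 0

    def enqueue_hit(self, hit):
        self.jobs.put(hit)          # hits always land in the local queue

    def process_all(self):
        """Local processors drain only their own datacenter's queue."""
        while not self.jobs.empty():
            self.jobs.get()
            self.processed += 1

dcs = {"us": Datacenter("us"), "eu": Datacenter("eu")}
for _ in range(3):
    dcs["us"].enqueue_hit({"path": "/"})
dcs["eu"].enqueue_hit({"path": "/"})

# Simulate the US datacenter being cut off: its queue simply sits untouched,
# while the EU datacenter keeps draining its own local queue.
dcs["eu"].process_all()
print(dcs["us"].processed, dcs["eu"].processed)
```

The design trade-off matches the speaker's point: a down datacenter leaves its own backlog unprocessed, but no cross-datacenter consumer ever blocks on an unreachable remote queue.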
I also found that topology a bit complex to explain, so I ended up with this one, which I hope is easier to understand. Thanks. Any more questions, comments or suggestions?

Q: Isn't there a constant in consul for the maximum size of the data it can hold?

A: I don't remember the actual maximum size of a key and of the value associated with a key, but it's a couple of megabytes at most overall. I'll try to answer what I think the question is really about: keeping big blobs in consul, hundreds of megabytes. I know people who tried to put maybe 500 megabytes into consul, and it was a big problem to synchronize it between datacenters and between the consul agents. So consul is really designed for small amounts of data — it's meant mostly for configuration, for configuration distribution.

Q: And for sharing actual data across datacenters?

A: Then you would need a database that has cross-datacenter replication support; if you use one of those, I guess that would be perfect. There's still a problem — we are bound to Internet speeds — but it works in real life. There are lots of people doing cross-datacenter replication of databases, and some databases have pretty nice and neat implementations of it. I think one of the best at this is Couchbase, if you've ever heard of them — they've been doing this kind of replication under very high workloads for years now, so maybe check it out. OK, thanks.
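The point about consul being for small configuration data has a concrete backing: consul's documented default limit is 512 KiB per K/V value. A hypothetical client-side guard (the helper name and in-memory store are illustrative, not part of the talk or of consul's API) might look like this:

```python
# Consul's documented default per-value limit for the K/V store is 512 KiB
# (the kv_max_value_size setting). This guard refuses oversized values before
# they ever reach the cluster.
CONSUL_KV_MAX_VALUE = 512 * 1024

def kv_put_checked(store, key, value: bytes):
    """Put a value into a K/V store only if it fits consul's default limit."""
    if len(value) > CONSUL_KV_MAX_VALUE:
        raise ValueError(
            f"value for {key!r} is {len(value)} bytes; "
            "consul K/V is meant for small configuration data"
        )
    store[key] = value

store = {}                                              # stand-in for consul
kv_put_checked(store, "app/config/feature_flag", b"on") # fine: tiny config
try:
    kv_put_checked(store, "app/blob", b"x" * (600 * 1024))  # too big
except ValueError as exc:
    print(exc)
```

Anything larger than that belongs in a real database or object store, with consul holding only the pointer or configuration for it.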
Q: You mentioned consul — how will it realize that some process has died? Do you have a timeout between health checks, for when it doesn't hear back from a process?

A: Yes, it's heartbeat based: if consul doesn't hear back from a process in time, the process is removed from the catalog.

Q: How exactly is the heartbeat sending implemented? From an architecture perspective, your collector process is implemented in Python, right? Do you have some kind of thread which is sending heartbeats?

A: No, I use the uWSGI consul plugin, which does all of that for me over HTTP. It looks like this — I just connect to consul to get the count for every key; you can see it right here, let me switch, sorry, this is my live demo. I get the counter from consul with a default value, iterate over every key and its count in the key/value store, and then send them. So in my collector, all I have to do is connect to consul, nothing else. The health checks and the service registration and deregistration are done for me by the uWSGI consul plugin — that was the one line I added in the collector initialization file. When uWSGI was sure this code was running, it registered it in the consul cluster catalog, and then the collector became available.

Q: And who is doing the health checking itself?

A: It's the consul cluster only. You can check out the source code of the consul plugin for uWSGI, which is written in C — it's really simple, and you don't have to code any of it yourself; the status is there for you.

OK, we're almost out of time, so I think we can have the discussion afterwards — that would be great. Thank the speaker again!
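The collector loop described in that last answer — read every per-node counter under a key prefix, defaulting missing or empty values to zero, and sum them — can be sketched against a plain dict standing in for a recursive consul K/V listing (such as python-consul's `kv.get(prefix, recurse=True)`). The `hits/` key layout and all names below are invented for illustration:

```python
def aggregate_counts(kv, prefix="hits/", default=0):
    """Sum per-node hit counters stored under a common K/V prefix,
    treating missing or empty values as the default."""
    totals = {}
    for key, raw in kv.items():
        if not key.startswith(prefix):
            continue
        # e.g. "hits/fr/node-2" -> group by the "fr" segment
        group = key[len(prefix):].split("/")[0]
        count = int(raw) if raw else default
        totals[group] = totals.get(group, 0) + count
    return totals

# Stand-in for a recursive K/V listing fetched from consul.
kv = {
    "hits/fr/node-1": "10",
    "hits/fr/node-2": "5",
    "hits/us/node-1": "7",
    "hits/us/node-2": "",   # node registered, nothing counted yet
}
print(aggregate_counts(kv))
```

Against a live cluster, the dict would be built from the decoded values returned by the consul HTTP K/V API, but the aggregation logic is the same.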
Computer animation


Formal Metadata

Title Designing a scalable and distributed application
Title of Series EuroPython 2015
Part Number 155
Number of Parts 173
Author Jacob, Alexys
License CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
DOI 10.5446/20070
Publisher EuroPython
Release Date 2015
Language English
Production Place Bilbao, Euskadi, Spain

Content Metadata

Subject Area Information technology
Abstract Alexys Jacob - Designing a scalable and distributed application One of the key aspects to keep in mind when developing a scalable application is its ability to grow easily. But while we're used to taking advantage of scalable backend technologies such as mongodb or couchbase, **scaling our own application core** automatically is usually another story. In this talk I will **explain and showcase** a distributed web application design based on **consul** and **uWSGI** and its consul plugin. This design will cover the key components of a distributed and scalable application: - **Automatic service registration and discovery** will allow your application to grow itself automatically. - **Health checking and service unregistration** will allow your application to be fault tolerant, highly available and to shrink itself automatically. - A **distributed Key/Value storage** will allow you to (re)configure your distributed application nodes at once. - **Multi-Datacenter awareness** will allow your application to scale around the world easily.
Keywords EuroPython Conference
EP 2015
EuroPython 2015
