
Service Discovery for .NET developers


Formal Metadata

Title
Service Discovery for .NET developers
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
So you built your shiny new microservice, and now you want to deploy it and have other code talk to it. But how will other code find it, or even know that it exists? And if you recognize that one of anything is a risk to your availability, how can you enable clients to find an instance of your service that is running? For many people today the answer is a wiki that documents machines and instances, and config files that bind one service to another, with failover involving an operator manually changing the config and rebooting the service.
Transcript: English (auto-generated)
OK, hopefully you are all here to see me talking about service discovery for .NET developers. Excellent. You are in the right room then. OK, so this is me. It basically doesn't say much apart from the fact
that I'm old, but this is the point I always call out, which is I'm not standing here claiming I'm some kind of smart geek. I just happen to have done some stuff that I want to share with you. I understand some stuff. That's where I work. The only real message is it has nothing to do with what I'm talking about, so I'm not pushing anything on you.
That's an open source project I work on called Brighter. It's a CQRS framework. That's the Twitter handle for it. If you want to find out more about that, please follow @BrighterCommand, and you can then find out more about that and other things we do that come out of Huddle. OK, what's today's agenda?
So we're going to talk a little bit about something we all know to begin with, nice and easy. We'll talk about point-to-point services, client server. What we want to do is talk about why this is a problem, the problem of availability that relates to point-to-point. And we'll talk about very simply how we solved that. Now, all that stuff is fairly straightforward.
And we'll talk about then really why service discovery comes into play once you start to think about the solution to basically the availability problems on point-to-point communication. Then we'll talk about how that all works. We'll talk about the different patterns we can use: local registry, dynamic self-registration.
Then we'll go to health checks. Then we'll talk a little bit, there's two forms, client and server side. We'll focus mostly on client because it's the more interesting for you guys. We'll talk a little bit about server side discovery. And then if we have some time towards the end, talk a little bit about this notion of zero configuration. Hopefully that is roughly the agenda you were hoping to see.
That includes things like DNS-SD. Okay, what do I expect you to get from the session? I expect you to find out what service discovery is, why you might need it, and what are the patterns that you'll need to understand in order to use service discovery. I expect you to, by the end of this, be able to understand when people start talking about things like ZooKeeper, etcd, Consul,
what they're talking about, and why they might have differences, what those differences mean, and how you can use them to implement these given patterns. One reason to come to this talk is simply to say, I have no idea what ZooKeeper is, and I'd like to understand what ZooKeeper is by the end of this talk, and that'll happen.
The other thing I want to try and point out is that this talk is essentially very straightforward. There's nothing here that's actually that complicated. It's just a bit of a mystery sometimes for folks because you've not had to go and find out about it. So take this as really a way of short-circuiting the process of you having to do all the reading and finding out about stuff, but nothing here is challenging.
So this is a nice, gentle, easy talk. None of our brains should be stretched too much, and by the end of it, we can walk away with a warm, positive glow that we know something new, and we can lord it over our colleagues by saying, so you don't understand ZooKeeper then? Right, great. That's the slide, just in case anyone didn't have a chance
to take a picture of it earlier. You can get the slides, and you can go and find the demos there. I'll just give it a second for a couple of guys to pick it up on their phones. Good, okay. Point to point, right, so everyone kind of knows how we do this, right? So this is a diagram essentially
of a client-server interaction, and we have essentially a server. We're offering some kind of service with some kind of API, and a client wants to talk to us and get the service from us. They want to ask some questions. How many orders did we make last week? Can you log this person in for me basically to their server?
Can you go and get hold of my document for me that you've stored, okay? And there may be, if we're doing something like HTTP, there may also be something like a proxy involved in the interaction, right? And the proxy may say, well, I can cache a certain amount of these requests, and I can have some kind of indicator whether or not they're fresh or stale,
and maybe I call you back and say, hey, is the version that I have still actually stale, or is it fresh? Do I need to get a new one? Can I serve my copy to the client? And that's still very much a point-to-point interaction. We may have a level of pass-through, but it's what we call point-to-point. The other thing to bear in mind is that we talk about client and server here.
It's easy to think by client, we always mean something running on the user's desktop or something running in the browser, but actually it's just a role. The client is the person asking for service, and the server is the person giving you the service. So this could equally be a model where we have two services running on our back end, and those two services talk to each other
in order to fulfill some request for the customer. Right, so this will be a quick demo, and what we're gonna do is just demonstrate something to you that you hopefully will understand already, which is how we do click the client server interaction, let me find.
So these are all kind of recorded in Camtasia to save me not being able to talk and type at the same time. Other people have mastered that particular mystery, but I find it's almost impossible, so okay, let's see. So here we've got just a self-hosted web service.
Okay, so this is basically hosted on a given host, just on a port. We have a configuration file in the self-hosted web service just running a Web API, and essentially we have somewhere we say the URI we're actually gonna run on, and we've got a default value. In our config file we're saying, okay, we're gonna run on this address.
So our API runs on the given address. Here we've got a standard load of setup for a self-hosted Web API service. We're configuring tracing, formatting, and we're mapping some routes on the controller. It's all boilerplate stuff that you guys write every day, even if you don't actually use self-hosting and use IIS instead.
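For reference, here is a minimal sketch of the kind of self-hosted orders service being described, using OWIN self-hosting. This is not the talk's actual demo code: the controller, the Order shape, the config key and the port are placeholders, and it assumes the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package (tracing and formatter setup are omitted).

    // Sketch of a self-hosted Web API orders service (placeholder names throughout).
    using System;
    using System.Collections.Generic;
    using System.Configuration;
    using System.Web.Http;
    using Microsoft.Owin.Hosting;
    using Owin;

    public class Order
    {
        public int Id { get; set; }
        public string Item { get; set; }
    }

    // The two-method orders API: a GET and a POST.
    public class OrdersController : ApiController
    {
        static readonly List<Order> Orders = new List<Order>();

        public IEnumerable<Order> Get() { return Orders; }

        public IHttpActionResult Post(Order order)
        {
            Orders.Add(order);
            return Ok(Orders);
        }
    }

    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();

            // The boilerplate: map routes onto the controller.
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            app.UseWebApi(config);
        }
    }

    class Program
    {
        static void Main()
        {
            // Read the address we're going to run on from the config file, with a default.
            var baseAddress = ConfigurationManager.AppSettings["serviceUri"]
                              ?? "http://localhost:4672/";

            using (WebApp.Start<Startup>(baseAddress))
            {
                Console.WriteLine("Orders API listening on {0}", baseAddress);
                Console.ReadLine();   // sit there waiting for requests
            }
        }
    }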
And here we're exposing two methods on an orders API, a GET method and a POST method. And from the client, let me go back a second for you there, it's a bit fast, let me scrub back to that bit. Sorry, guys.
Okay, we're just gonna make a standard call, basically across to the server, to say can I add this order model in, and when I want to get the content back, get basically a success code
and display out to the console the current set of orders. So I'm just ordering something very basic. So the detail's not really that important here. You've seen this a thousand times. I'm just really showing you this because what we're gonna go through is to go and edit this one later on. It's just on replay, so that's why it's doing that.
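The client side of that call, sketched very roughly (again not the demo's actual code; the URL and the Order payload are placeholders, and PostAsJsonAsync assumes the Microsoft.AspNet.WebApi.Client package):

    // Point-to-point client: post an order, then display the current set of orders.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PointToPointClient
    {
        static void Main()
        {
            RunAsync().GetAwaiter().GetResult();
        }

        static async Task RunAsync()
        {
            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:4672/") })
            {
                // Add an order and expect a success code back.
                var response = await client.PostAsJsonAsync("api/orders", new Order { Id = 1, Item = "Widget" });
                response.EnsureSuccessStatusCode();

                // Display the current set of orders to the console.
                Console.WriteLine(await client.GetStringAsync("api/orders"));
            }
        }

        class Order
        {
            public int Id { get; set; }
            public string Item { get; set; }
        }
    }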
Oh, actually, no, I don't want to. Sorry, let me just, towards the end of this thing.
So, we're gonna run the server here. Just gonna run, sit there waiting for requests. We're gonna run the client through a batch file and all it will do is make the request to the server. You'll see the server do some tracing and we'll actually get a response back which basically is an order. Don't worry about the actual details. It's just essentially saying
these are the orders on the server and we're doing that quite fast deliberately because you guys have all seen this, hopefully, and done this a thousand times. The basic problem with that model is this.
If that server goes offline for whatever reason, my client is now somewhat out of luck.
It's possible that if we've got a proxy here and we've cached the responses it sent us earlier, and we don't require revalidation, or we haven't got to do a Last-Modified or an ETag to say, actually, the version I have of my data is still correct, it's still in time, then I may be able to grab hold
of that kind of orders list and get a stale version of what's going on. But in most cases, I'm gonna find that this is simply a problem for me. I can no longer communicate with the other service and so I stop providing my functionality. So, I have a dependency in A on the uptime of C.
C goes down, problem. So, how do we solve that? Well, we solve this problem every day. In a sense, when we have a browser and a website and we have a web farm. So, we just introduce more servers and we say, well, okay, it doesn't matter if this one's down. I've got two other servers and they can service the request for me.
Somewhere on the backend, they've got a database they're all talking to so the data is consistent. But A is now gonna go to C or D and we're actually gonna then provide service to the user. So, we create availability by introducing redundancy. This is becoming more important nowadays
because many people are taking their monolithic architectures and breaking them up into a microservice architecture. And there are two kinds of schools that you can think of as microservices. One is all the people like me who used to do SOA, and you may have done something like Guerrilla SOA, where we've essentially dropped the service bus and gone for smaller endpoints and dumb pipes.
But it's essentially an evolution of SOA. It's all that stuff like people talking about explicit boundaries and bounded context and business capabilities and that kind of thing. The other model is kind of the Unix pipes and filters model of microservices where they say, well, we want lots of tiny little services
that we can compose to create new pieces of functionality. Both those schools are equally valid microservices ideas but both of them, when you break up your monolith result in you having a large number of services that now will talk to each other. So one of the problems is that anyone moving to microservices now has to consider the fact that they have lots of dependencies between services
and if those services, the service I depend on goes down, is not available then it has a cascade effect and much of the service disappears on me. This is one of the reasons that people talk about microservices being more operationally complex because they have this web of dependencies. So my service A depends upon service B.
If I don't make service B redundant then what's gonna happen is that A is not gonna be able to provide its service when B goes down. And quite often, B may be down for quite transient network reasons or maybe I need to just actually go away and patch B and install something new on it or do a release of my service to B. It's one of the things we want to do in microservices
is be able to release independently. So maybe B is something I want to actually upgrade with a new version, but I don't want to take down A just to basically create a new version of B; that defeats the whole object of microservices. So I'm gonna need a redundant version of B. So this redundancy problem has become something people have to deal with a lot more.
Now there are other ways around this problem. You can use something like a lightweight broker like RabbitMQ and that can solve the problem for you of some of the dependencies between your various services because you just rely on queues instead. That's called decoupled invocation but you still have a slight problem in that your client has to talk to essentially your RabbitMQ server
so it still needs to know the address of something and that needs to be redundant because otherwise you've only got one of it and you're still in the same problem. So the trick is to remember you can't have one of anything. You have to design for failure and in a modern microservices rather than monolith-driven world,
you really need to focus on this a lot more. Before, by the way, nearly everybody's always been distributed for a long time because your database is a separate process to your web server, to your code running in your browser so you're already distributed. There's no way around being distributed nowadays but here we're talking about a situation
where you explicitly rely on other processes to provide part of your functionality and that provides a problem. How does discovery play into this? Well, the real question becomes if I can solve the problem essentially of E not being available by having C and D running at the same time in other instances of that service
but the problem is how does A know to talk to C or D and not E? Now, when we looked at the point-to-point scenario earlier the client simply had the IP address of the server. It was, you know, local host 4672 or whatever it was. The problem with that is that referred to E.
C and D, if it's on my same box, are probably running on different ports but if they're running on different machines they're probably running on different IP addresses. So the client doesn't know where these are and probably doesn't know that this one is down. And service discovery is simply how do we solve the problem of telling the client what servers are available and what's their status?
It's nothing more than that, okay. So I've done this to introduce the question of who do we talk to? Instead of a single server we now probably have a pool of servers and discovery tells us basically how do we find the pool of servers so we can load balance between them effectively to avoid problems of failure. Does that all make sense?
This stuff is really quite straightforward. Okay, so there are two major kind of classes of patterns we kind of need to be aware of when we start thinking about service discovery. We can either do our service discovery essentially on the client or we can kind of do it on the server, right? Look at client and server models of service discovery. We're talking a lot more about client-side discovery.
Server-side discovery tends to mean essentially we're talking about a load balancer, okay. Let's talk about the client-side model instead, okay. The simplest way for us to understand what we mean by client-side discovery is to do something without using any kind of technology like ZooKeeper or Consul in the mix for the time being
and just imagine to ourself well one solution would be for our, let's go back. One solution would be for A to have in its own config, in a local config file that we ship with A a list of all the servers with their addresses, right? So I simply give A a list saying
well there's C and there's D and there's E and these are the addresses they live at and you can talk to any of them. So a quick demo of that. But again this is code that you guys should be hopefully already pretty intimately familiar with
as a way of working.
So we're just gonna look at something that's a variation of the code we showed you earlier directly in point to point. This is our controller and you can see there in the controller what we're doing before we start the video is essentially we have a list of servers. We have a retry policy we'll talk about in a second. What we're gonna do is go to our configuration file
read in that list of servers by simply taking the config values creating a list of server items which will have a URI and a timeout value to hit that server. And it's that list we will then use to make our requests. Okay so this is list basically saying
go away to the config file, basically read from the config file this should give us a server list for each one of those items create a new server item object and add it in. So the config file we're just using a standard configuration section and we've just got a whole collection with a load of elements inside it.
And basically the key part of the elements is the URI. We have a timeout, you tend to have timeouts when you're doing any kind of cross-network communication. Well it looks like in the config file it's just a list of servers. So we've actually got two key addresses there. What we've done with our orders API in fact we've added the second config file
so we can actually have two configurations. This is a retry policy and essentially this is using a library called Polly and it says if we get an exception what we want to do is take an action which is essentially to log an error and try the next server in sequence. All that try next server does for us is simply we just iterate to increment a counter
and move on to the next server. You'd want to use something slightly more sophisticated like a round-robin algorithm, but for the purpose of the demo that works fine. Essentially we can just do the same thing we've done before, which essentially is add an order and try and post it to the server. So we're gonna run two servers, run a first instance here and a second instance here.
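A hedged sketch of that client-side pattern: a server list read from config, and a Polly policy that logs and moves on to the next server when a call fails. For brevity this uses a comma-separated appSetting rather than the custom config section in the demo; the key names and the ServerItem type are placeholders.

    using System;
    using System.Configuration;
    using System.Linq;
    using System.Net.Http;
    using Polly;

    class ServerItem
    {
        public Uri Uri { get; set; }
        public TimeSpan Timeout { get; set; }
    }

    class ConfigListOrdersClient
    {
        readonly ServerItem[] servers;
        readonly Policy retryPolicy;
        int current;   // index of the server we're currently using

        public ConfigListOrdersClient()
        {
            // Simplification: "orderServers" is a comma-separated list of base URIs.
            servers = ConfigurationManager.AppSettings["orderServers"]
                .Split(',')
                .Select(uri => new ServerItem { Uri = new Uri(uri), Timeout = TimeSpan.FromSeconds(2) })
                .ToArray();

            // On failure or timeout, log an error and try the next server in the list.
            retryPolicy = Policy
                .Handle<HttpRequestException>()
                .Or<TaskCanceledException>()           // HttpClient timeouts surface as cancellations
                .Retry(servers.Length, (exception, attempt) =>
                {
                    Console.WriteLine("Server failed: {0}; trying the next one", exception.Message);
                    TryNextServer();
                });
        }

        void TryNextServer()
        {
            current = (current + 1) % servers.Length;   // naive wrap-around round robin
        }

        public string GetOrders()
        {
            return retryPolicy.Execute(() =>
            {
                var server = servers[current];
                using (var http = new HttpClient { BaseAddress = server.Uri, Timeout = server.Timeout })
                {
                    return http.GetStringAsync("api/orders").GetAwaiter().GetResult();
                }
            });
        }
    }

The demo's version also posts an order, but the shape is the same: read the list once, then let the policy walk along it on failure.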
We're gonna run our client. This was before and we'll see what happens is the first server you can see a reaction but that's just a trace of it reacting when we get our information back. If I stop the first service what you'll see is we time out in the first call and then we'll switch over and call the second service instead.
So we run, we time out and then we call the second service and we get our response back. Okay, fairly straightforward. Everyone kind of with me? So that's, you know, if you can do that which most of you I'm sure feel pretty comfortable with you can do client side service discovery. So we can all go now and nothing else to talk about. But it's straightforward.
Sorry, my phone appears to be telling me about infrastructure alerts which someone else can respond to but I apologize for it beeping.
Okay, so the advantage of this particular pattern is it's very straightforward. Most of us understand how to write config files, how to read through a list, how to use a list. I mean, using a library like Polly, none of this is particularly challenging, and it's fairly easy to do some kind of localized round-robin algorithm to load balance amongst our servers.
We can even use this to actually do proper load balancing not simply just coping with failure. We can actually balance potentially our clients calling out across all these servers. So it's an easy way and cheap way to get load balancing solved as a problem. Okay, there are obviously a few issues here. So the first is that this is quite difficult to manage
in that, you know, I may be actually operationally deploying new servers, particularly if I'm in a cloud environment, with new addresses on a quite regular basis, and if I have to ship a new config file out to all my clients when I do that, that can be quite a hard synchronization problem. It's bad enough if we've got a microservices environment and those are all internal servers that are running,
where I have to update all their configuration files now to basically take account of the new servers, but it's kind of even harder if these things are actually off-site applications running at my clients' sites, where they may choose not to actually update those clients as often as I want to. They may not get the new list of servers.
So the problem is, with a sufficient number of services server side, this is gonna break down for you, and if you've got genuine clients outside your firewall, that's gonna be a problem. The other issue is there's no health checks in here. So I'm still talking potentially to servers that are dead, and the only way I'm gonna find out about that problem
is essentially my error on my call, and I iterate over to the next server. I put this slide in because you may think, well, this is just a kind of toy demo solution that Ian's put in, we're gonna move on to the real solutions later, no one would use this kind of model in production. And the answer is, that's not true.
So how many of you use Redis today? A number of you. And how many of you use, say, the ServiceStack Redis client for doing that? Okay, so the ServiceStack Redis client has the ability to cope with failure. Redis has a leader-follower model, and essentially you can talk to the pool of followers of a given leader to provide availability of your Redis server, particularly if you're under load.
And what it does essentially, if you go and look at the documentation for it, is you provide it a list of hosts, the read-only ones which you can read from and the write ones you write to, so that it has redundancy across those read-only hosts. And that's done by you putting that information in the config file in your service. So that's a genuine model that lots of people
are using in production today to provide availability. It's not just a toy. However, we want to do something better. So what we're gonna look at is using a registration service, and the simplest idea here is you have something that is essentially a database that we talk to,
and from that database we are gonna get information about where our services live. At a really simple level, you could go away and use any kind of key-value store for this service registry. So you could just use Redis to hold this. You could even use your SQL Server database. But what you're concerned about here is that this service registry has two things it needs.
The first is you don't want this to be another point of failure for you, so you need to have redundancy. So the data needs to be across multiple nodes. And you need to have some way of solving the problem of how do I find my service registry? Because it's a chicken-and-egg problem: it's okay, I can talk to my service registry,
and it will give me the answer, but well, where's my service registry? So those kinds of problems are what service registry toolkits tend to solve. Okay, here's a list of the ones you may have heard of. ZooKeeper: ZooKeeper's sort of the oldest and kind of the most famous. We'll talk about CP versus AP in a second. Generally services register on startup
and they deregister themselves when they're done. It's very usable from Java; otherwise you have to work with the C library. There are some .NET wrappers, but generally it can be kind of a bit of hard work to work with. Airbnb's SmartStack, that tends to work basically with HAProxy, using ZooKeeper to manage the cluster. Netflix's Eureka, that's pretty good for REST and Java
and designed for use on AWS. If you're not in that scenario, it's probably not gonna be that helpful to you. etcd is a simple distributed key-value store with an HTTP JSON API and Raft consensus. It's kind of the granddaddy of the more simple ones. And then there's really SkyDNS and Consul, which kind of come further down,
which give you DNS support as well as an HTTP JSON API. Right, what do these things like CP and Raft consensus mean? Anyone know about the CAP theorem in the audience? Most of you have an idea? Okay, so we'll just recap for those of you that don't. Consistency, availability, and partition tolerance. You essentially can't get all three.
So with distributed systems, when they have multiple nodes, essentially I've got a problem. When I ask multiple nodes a question, I want to get either a consistent answer, they all agree who won yesterday's Everton vs Arsenal game, or an available answer, what was the last score that you actually had for that game, and then there's whether they can deal with partition tolerance,
which essentially says the network is divided and the nodes can't talk to each other. But you tend to either have to choose CA, or CP rather, which is basically I will be consistent when I give you a response, or AP, which means I'll be available when I give you a response. Either I'm gonna give you the right answer or nothing at all, or I'll give you an answer that may be stale. You can use CA, some people say you can't, you can,
but CA essentially says, in the event of our network partition, my data center is probably on fire, and the last thing I care about is whether I'm either consistent or available. So you can see nearly all of these are CP, apart from Netflix, Eureka, which is AP. So Netflix cares about availability
more than it does about consistency in its answer. But with most of them, the CP ones, their APIs actually allow you to say, I'm prepared to have stale results, in other words, be AP. Yes, there's a question?
There is still an entry. We'll talk about that later. So it's a good question. So the question was, services de-registering from a service registry, isn't that a problem if essentially the service goes down without being able to run de-registration? It would have an entry and therefore be failing, saying it's available when it's not. We'll cover health checks. We'll actually cover what really happens.
This is a bit of a short slide. The real version comes later, okay. So the examples, we're gonna do some examples, and then we're gonna use Consul to do our examples when we talk about code today. I am not pushing Consul particularly over any of the others for any reason. I have no skin in any of these games.
We like Consul. It's got a nice straightforward HTTP JSON API. There are some reasonable .NET clients available for it and it seems to work reasonably well, okay. It's free open source. It's created by HashiCorp. People know who HashiCorp are? They create Vagrant, Terraform. So they're generally in this,
in the distributed systems management space; they have quite a number of open source tools, and Vagrant is more about the lifecycle of virtual machines. So it's a useful tool if you want to run VMs locally a lot. You just do vagrant up and you get given a VM. Okay, Consul provides service discovery by either DNS or HTTP plus JSON.
We'll show you most of the examples using HTTP and JSON, but we'll talk a little bit about DNS, and why you might not want to use it as a .NET dev, towards the end. It also supports health checks, so you can essentially determine whether or not any service entries are actually currently alive, okay.
You can download it from HashiCorp. Consul.io is the site effectively. And there's useful documentation; it's quite well documented if you need to use that. Generally there are a couple of .NET clients that wrap the HTTP JSON API. So you could just talk raw JSON and HTTP
if you wanted to. It's actually quite useful though sometimes to use someone else's abstraction built on that. The PlayFab one is pretty useful. They have some documentation that's not great; there are some examples in the project that are probably a better way of finding out what it does. You'll see some code here as well. And my code's available, so you can go and have a look at my code and how it works as well.
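To give a flavour of the raw HTTP plus JSON option: registering with the local agent is a single PUT. A hedged sketch against Consul's v1 agent API (endpoint paths per the Consul documentation; the service name, ID, address, tag and port are placeholders):

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class RawConsulRegistration
    {
        static void Main()
        {
            RunAsync().GetAwaiter().GetResult();
        }

        // Register a service with the Consul agent running on the same machine (default port 8500).
        static async Task RunAsync()
        {
            var registration = @"{
                ""ID"":      ""orders-api-1"",
                ""Name"":    ""orders-api"",
                ""Tags"":    [""orders""],
                ""Address"": ""10.0.0.5"",
                ""Port"":    4672
            }";

            using (var http = new HttpClient { BaseAddress = new Uri("http://localhost:8500/") })
            {
                // PUT /v1/agent/service/register is the agent endpoint for registration.
                var response = await http.PutAsync(
                    "v1/agent/service/register",
                    new StringContent(registration, Encoding.UTF8, "application/json"));
                response.EnsureSuccessStatusCode();

                // GET /v1/agent/services lists what this agent has registered.
                Console.WriteLine(await http.GetStringAsync("v1/agent/services"));
            }
        }
    }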
Okay, so what Consul consists of, what we download from Consul, is what's called an agent, and we run the Consul agent on all the nodes in the cluster. What we mean is that we run a Consul agent on the same machine as the service we want to essentially register. And then we run a number of other Consul agents
to essentially provide the service catalog. The idea is basically when we run our agent, we run it in either client or server mode. Now server mode essentially says I am the catalog of all the services. And you need to run either three or five of those. It needs to be an odd number so that you can essentially have voting, because the servers use something called the Raft algorithm.
And the Raft algorithm basically is a distributed systems piece of kit which essentially says, I want to make sure that the data across all of my servers is consistent. So I'll use voting to make sure that if I try and write to the leader, all the followers are updated at the same time.
The client essentially sits with the service on the server and it forwards nearly all of its requests to the servers. So it's a way of you saying I don't have to worry about where the servers live because all I need to know is essentially where the agent that's on the same machine as I am is and it knows where the servers are.
So how does it do its service discovery? Well, the answer is basically it uses a protocol called gossip. And essentially gossip tells it where the servers are currently running. So when you talk to the agent, it should know where the servers are that it has to talk to to get the service. The local agent also does the health checks; it calls you,
saying I'm gonna do the health checks, because it's more efficient, and then it gives the results back to the centralized service catalog. So it's making a local call usually for that health check. And a typical Consul deployment looks something like this. So you've got, in one data center, three nodes. One of those nodes is the leader.
All the requests are forwarded to the leader, and you have two additional servers which are there for redundancy and can take over as the leader. In the event that the leader disappears, they'll get promoted to become the leader. With three you can survive the failure of one node; with five you can survive the failure of more. Okay, and you can also basically have
more than one data center. And the data centers can use basically a wide area network version of gossip to communicate information about the service catalog. Generally Consul has a number of HTTP JSON endpoints, but the one you care about most is agent.
Some of those go direct to the server, but what you will tend to want to do is call the agent that's local to you, and the local agent itself will forward the request to a number of these other endpoints. All right. So the first registration pattern we'll talk about for use with the service registry is called sidecar.
I don't love that term sidecar; I prefer the term sidekick. My problem with sidecar is that, in my memory, sidecars are kind of attached to the motorcycle, whereas sidekicks can work alongside the superhero. And so actually what tends to happen is you tend to have another process running whose job is to do the registration of your services
with the registry, rather than your service registering itself. The reason why that's potentially a significant win over the service registering itself is you may not control the service or be able to modify its code, but you still need to register its existence and the number of nodes that you have.
So we'll do a quick demo. And we'll do a demo of quite how that actually works in practice. Everyone following along? No burning questions at this point.
But most people, as I've seen, tend to run their server cluster on Unix and the agents locally on Windows.
So what we're doing here is we've added a new Windows service into the mix. I use Topshelf for Windows services, by the way, in most cases. Oh, is this actually just a normal program, actually,
to be fair? This is just a console application. The console application is just going to say I'm going to register essentially a set of services. Now all this console application we're going to look at does is previously we had in our configuration file a list of servers. And we actually had that in the client. All we're going to do in this case is just move that list of servers
into the configuration file of our sidekick. And it will register them instead. Now we're going to do this on one machine with different ports. But the reality is you would probably want to be doing this with actual separate sidecars running on individual machines alongside the services they're registering. But we'll just show you this way of doing it.
So you can see the simple transition: the client has the list, then the sidecar has the list, and it gives it to the service registrar. So here we've got basically our console application, which does the registration. The registration essentially reads the configuration file.
We create an agent service registration class, and we call Consul and actually register it. This is just the same configuration file we had earlier. So PlayFab's code, which we take in basically by NuGet, does most of the hard work for us of actually doing this, figuring out how to call the actual JSON API.
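A hedged sketch of what that sidekick's registration loop looks like using the Consul NuGet package (the PlayFab client mentioned above); the service name, IDs, addresses and ports are placeholders, and in the demo the list comes from the sidekick's own config file rather than being hard-coded:

    using System;
    using System.Threading.Tasks;
    using Consul;

    class Sidekick
    {
        static void Main()
        {
            RunAsync().GetAwaiter().GetResult();
        }

        static async Task RunAsync()
        {
            // Talks to the local Consul agent on its default address (http://localhost:8500).
            using (var consul = new ConsulClient())
            {
                // Placeholder for the server list read from the sidekick's config file.
                var servers = new[]
                {
                    new { Id = "orders-api-1", Address = "localhost", Port = 4672 },
                    new { Id = "orders-api-2", Address = "localhost", Port = 4673 },
                };

                foreach (var server in servers)
                {
                    // Clear out any stale registration with the same ID, then register.
                    await consul.Agent.ServiceDeregister(server.Id);
                    await consul.Agent.ServiceRegister(new AgentServiceRegistration
                    {
                        ID      = server.Id,
                        Name    = "orders-api",
                        Tags    = new[] { "orders" },
                        Address = server.Address,
                        Port    = server.Port
                    });
                    Console.WriteLine("Registered {0} at {1}:{2}", server.Id, server.Address, server.Port);
                }
            }
        }
    }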
We deregister first just to clear any old ones out before we register again afterwards. In our consumer, now what we have to do essentially is ask the server for a list of services. We ask Consul for a list of services. Anything tagged with orders, so we know what services we're interested in, we get hold of, and essentially add to our list of servers.
Same as we had before, but rather than getting them from a config file, we're getting them off a registry, which is basically doing the job the config file used to. First thing we'll do is spin up Consul locally. We're only going to have one agent because it's a demo, but essentially a real infrastructure would use a number running your catalog and a local agent. We can get away, for the purpose of a demo, with just running one. It has its own config file. You can register stuff in the config file.
I wouldn't tend to use that that much. It's less dynamic. Then we will run a couple of servers. I'm going to run the sidekick first. The sidekick will register our servers. You can see the service registration happening there.
We'll run the first service and run the second service. Then what we'll do is run the client. We'll see it behave the same way as it did before. We'll basically just talk to the first service from the list the client gets from the service registration, and get the response back. Then we'll do the same as we did with the localized config. We'll stop that first server.
We'll run this. We'll time out. We'll start talking to the second server. It's just the same solution we had before, but the config file has now essentially been turned into registration on a centralized service, and we get our response back. Okay, does that make sense? It's all fairly straightforward.
You're essentially saying, I have a list of servers. I'm either going to put them in my config file locally with my actual client, and it's going to know about where the services are, and it's going to talk to them in a load-balanced way. Or you're going to say, that's too difficult for me to manage. I don't want to manage all of those pieces. So what I'm going to do instead is go for a model where I take those servers
and actually register them in the service registry. I can essentially pretty much use a similar technique of putting things in a config file and then actually registering those with my service registration. Then my client simply gets hold of that list from the registration service. The agent is actually what I talk to, and it talks to the actual servers.
I get hold of the list of the servers that meet my criteria; they've got a tag on them saying orders. Then I make that call directly to get hold of my list of servers, and I just start iterating through them with a round-robin algorithm, same as before. So my client takes responsibility, which is why we call it client-side service discovery, for the work of doing the load balancing
amongst the servers on the server side. Okay, people like this model
because we don't take a dependency within our service on the service registration that we're using. So my orders API service doesn't have to know anything about Consul or how I'm using it. Only my sidecar service needs to understand that. So if I want to switch from Consul to SkyDNS, then I can just simply replace my sidecars.
I never have to go and actually impact my service directly itself. It's also great because if I don't control the orders API service, someone else wrote it, or it's very hard for me to synchronize changing it and get its deployments out, I can simply use service registration still
without actually impacting the code, which may belong to somebody else. And you've got some things like Registrator if you're using things like Docker. Registrator essentially listens for new Docker containers coming into an environment, looks at exposed ports, and registers them. So Registrator is a way of getting Consul
to work with Docker in a straightforward fashion. Obviously, if we're not using some kind of automated process like Registrator, one of the issues here is that I have to maintain that list in my sidecar saying, what am I actually registering? And I have to remember to go away and do that, so if I deploy to some new location,
that may be an issue for me. If I change the port, I've gotta remember to go away and change the sidecar registration of the port at the same time. So you have to sync up. We also have a dependency now, which we have to be cautious of, between our service and the sidecar service, which has to exist and run in order for our service to be registered
with the service registry. So we have to make sure those dependencies are managed. We can rely on things like Consul itself to manage basically the availability of the registry, but we have to be concerned about whether our registration service is gonna run: do we remember to run it as part of our deployment script, that kind of thing. Self-registration says, well, if I have control of the service,
I can solve the problem a little bit, of forgetting to actually do my registration and needing this dependency on an additional service, by just saying let the orders API service register itself when it starts up. So I'm gonna talk to the registrar and do the work. And then from the client point of view there is no difference.
The client just simply still calls the Consul agent, gets the list of the services, essentially stores them locally, and then does some kind of round-robin algorithm to load balance between them. Okay, so see that? Hopefully you guys are kind of picking up that this stuff is not exactly rocket science, right?
And someone actually, last time I gave this demo, live tweeted it to his team in the office, who had implemented it by the time that we'd finished talking. So it's fairly straightforward. Okay, so here basically in our controller we've got the code that essentially reads
from Consul: it goes to the Consul agent's API, says give me the list of services, and in the response I iterate over that list of services and find any of them tagged as being orders. Obviously you have to have some way of distinguishing all the registered services, and tags are how you do that. And I create an entry in my own internal service list
and register that. And the only difference now is gonna be how we look at basically doing the registration of the API itself. So, we'll flick over in a second. Actually, we've seen this before. There's a Polly policy essentially for doing the retry.
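Putting that consumer side together, a hedged sketch with the same Consul client: ask the catalog for anything carrying the orders tag and turn the results into the server list the Polly retry loop iterates over. The service name and tag are placeholders, and the property names follow the Consul NuGet package's API shape.

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Consul;

    class ServiceListBuilder
    {
        // Build the list of candidate servers from the registry instead of a config file.
        public static async Task<List<Uri>> GetOrderServers()
        {
            var servers = new List<Uri>();
            using (var consul = new ConsulClient())
            {
                // Ask the catalog for instances of the orders-api service tagged "orders".
                var result = await consul.Catalog.Service("orders-api", "orders");
                foreach (var service in result.Response)
                {
                    // Fall back to the node address if the service didn't register its own.
                    var host = string.IsNullOrEmpty(service.ServiceAddress)
                        ? service.Address
                        : service.ServiceAddress;
                    servers.Add(new Uri(string.Format("http://{0}:{1}/", host, service.ServicePort)));
                }
            }
            return servers;   // fed into the same Polly retry / round-robin logic as before
        }
    }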
There's our client. Okay, so on our self-hosted server we're now adding a register-service step along with everything else we do with a self-hosted service. And then here we simply say, well, let's talk basically to Consul and do an agent registration. We store the value for our URI in our config file.
So it's the old config file we saw earlier that essentially says somewhere we have to store our URI and our port, so that when we run we can identify where we are. And here we're just saying what our timeout is. We've got two defined so we can actually show you switching between the two. We've got an ID as well, essentially. So we just store the information to register with Consul.
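A hedged sketch of that self-registration step: the same AgentServiceRegistration call, but made by the orders service itself at startup, with a matching deregister on the shutdown path. The config keys, the ID and the service name are placeholders.

    using System;
    using System.Configuration;
    using System.Threading.Tasks;
    using Consul;

    class SelfRegistration
    {
        // Called once at service startup, after the self-host is listening.
        public static async Task RegisterSelf()
        {
            var uri = new Uri(ConfigurationManager.AppSettings["serviceUri"]);   // where we run
            var id  = ConfigurationManager.AppSettings["serviceId"];             // e.g. "orders-api-1"

            using (var consul = new ConsulClient())
            {
                await consul.Agent.ServiceRegister(new AgentServiceRegistration
                {
                    ID      = id,
                    Name    = "orders-api",
                    Tags    = new[] { "orders" },
                    Address = uri.Host,
                    Port    = uri.Port
                });
            }
        }

        // Called from the shutdown path so the entry doesn't linger in the catalog.
        public static async Task DeregisterSelf(string id)
        {
            using (var consul = new ConsulClient())
            {
                await consul.Agent.ServiceDeregister(id);
            }
        }
    }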
So, we'll start up Consul again. And then we'll basically run a server. And we'll see, as we run the server, it's probably very hard for you to read as it scrolls past, but you can actually see that essentially it's registered this service.
The second one will come up, and again, we'll register the service. And now we're really back to the same model as we had before. Run the client. The client talks to the first service, says, okay, I've got my response back. We kill the first server.
Run it again. We'll time out talking to the first and we'll talk to the second, because we have the list. So essentially you have availability to survive the failure of one of your services. And you provided that information
out of a centralized service registry, which essentially has managed to get you away from the problem of the localized config file, in that you've got this one location that's much easier to manage. And the service itself is registering when it starts up. So there's really no issue now about these additional sidecar services needing to be running when you basically deploy your service
and start it running. It's gonna register itself and say, hey I'm here, I'm alive. When you stop it, it can kind of deregister. Okay, talk a little bit now about this registration, deregistration. If I can't deregister, what happens? Just on a loop, that's why they keep playing again so you can look at it while we talk over it again.
Okay. From the current slide: okay, the advantage of self-registration is it's dynamic. As we saw, services basically register themselves.
The disadvantages are essentially that our services now depend on Consul directly. So if we have to switch basically our registration service, we have to go and edit all of our particular services and say we're no longer using Consul, we've switched to SkyDNS, and everyone has to rewrite everything. And it doesn't work for apps that we don't own. And again, we still get the problem of no health checks.
Let's talk about health checks. Everyone with me? No burning questions you need to ask at this point. I'm determined by the end of this you will all be experts on service discovery. Health checks. So the problem we had before was essentially what we're doing is this weird thing kind of when we look at it that we're saying, okay, stop the server and then we'll call the first one. I keep saying to you, we'll call the first one.
It'll time out because you can't talk to the server, and then it will call our second server, and that will be correct. But that call to the first server is pretty wasteful, right? That server's not up while we're calling it. So what we really want to do is make sure that list is managed such that we know the items in the list are currently passing a health check
that says the service registry could talk to them, and when it talked to them, or when they talked to it, it knew that they were alive as of a certain time ago. Now, we still can't get away without retry. The problem is this: let's imagine that I use a model where my service
goes and talks to the service registry and says, I'm alive, every so often. That's one model, right? That's what we call a lease. I go to the service registry and I say, I'm alive. And a few minutes later I go to it and I say, I'm alive. And I have to say I'm alive with a certain frequency to avoid it saying, well, I don't think he's alive, because I haven't talked to him in a while.
But the problem is, I could get a list from the registry and say, I'm gonna refresh my list every so often to see which servers I need to use, while the service is only renewing its lease every so often. And in those time windows, I could ask the registry for data, I could get data back saying a service is alive, and I could then use that stale data to make a call
and that service could actually be down, because I haven't got fresh information from my service registry saying it's down. Or it could go down during that lease window, and the registry says to me, it's alive, but actually it's not; it was alive, and it's still operating on its lease. So you still need to retry in the event of failure
because you still may fail. But hopefully, if we get the frequencies right, we have a better handle on it: fewer instances of that, and more instances of us being able to say, okay, I know this service is unhealthy, I'm not talking to it. And it's very useful for things like deployments, where you wanna take servers down, and you're probably taking them down for long enough that it would make sense
to use a health check. Okay, so generally there are two models of health checks: push versus pull, essentially. Either the service has to say to the registry, I'm alive, and it says, okay, you've got a lease for five minutes, come back to me and talk to me again in five minutes,
I'm alive. The advantage of that model, the lease model, is that if the service doesn't talk to you because it's gone down, you know it's dead; it doesn't have to deregister with you anyway. The disadvantage is you have to have some kind of timer in your service saying, I need to talk to the registration service every five minutes, please. So you require some resources on your service
in terms of a thread and a timer, et cetera, working. The other model is pull, and that's essentially that you register a health endpoint. So you say, I've got an endpoint, it's an HTTP API endpoint; if you hit it, it's gonna give you 200 OK back and say, I am alive. And what happens is the registration service can call you and say, okay, are you alive?
And you say, yeah, I'm alive, right? And that's a very easy model to implement; it just requires a call across to you. And that's probably the model most people go for the first time around, because just creating a simple HTTP API endpoint that the other guy can call is a very straightforward model,
and that's the model we'll actually kind of show you here, okay?
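For contrast with the pull model in the demo, here is a hedged sketch of what the push, or lease, model could look like using Consul's TTL checks through the .NET client; the check ID, name and intervals are illustrative only.

using System;
using System.Threading;
using System.Threading.Tasks;
using Consul;

class TtlHeartbeat
{
    // Register a TTL check and keep renewing the lease until cancelled.
    public static async Task RunAsync(string serviceId, CancellationToken token)
    {
        using (var consul = new ConsulClient())
        {
            var checkId = serviceId + "-ttl";

            // "I promise to report in before 30 seconds elapse."
            await consul.Agent.CheckRegister(new AgentCheckRegistration
            {
                ID = checkId,
                Name = "orders heartbeat",
                ServiceID = serviceId,
                TTL = TimeSpan.FromSeconds(30)
            });

            // The timer/thread mentioned above: renew the lease periodically.
            while (!token.IsCancellationRequested)
            {
                await consul.Agent.PassTTL(checkId, "alive");
                await Task.Delay(TimeSpan.FromSeconds(10), token);
            }
        }
    }
}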
So what happens here is we've registered a health endpoint. We're just gonna send 200 OK back, saying healthy is true. And we can change it in the config file and actually have one of them say healthy is false; I'm just cheating so the server looks unwell. And essentially what happens is we create an agent service check, and the agent service check says, go to this health status endpoint, do it every so often, every five seconds, and time out after three seconds if you can't talk to it. The agent does the service check, because that's a local call, and then uses gossip to communicate the result back to the service registry. When I get my list, I then check the status of each service in the list of services, right? And I want to know that it's passing. So I still get the full list back,
but I want to know whether they're actually passing. So what we're gonna do is, again, we'll run Consul same as before, and then we'll run two agents, two services rather. And you'll notice, because I've done it before, that it's basically got some health status in there. For those of you who can see it, it's actually got some information saying, hey, I went looking for the health endpoint and I can't find it. This one's gonna run, it's gonna say here's my health status, and it's actually gonna start saying, oh, I'm a bit in trouble, because this is the one returning false, and it's saying I'm critical, right?
So just returning false for healthy is telling the service registry that I'm not working. On the second one, this one says, okay, I'm in sync, and it's passing, right? So you can see it's passing. One is critical, and two is passing.
So essentially, Consul now knows that this one is dead and this one is alive. So we'll run a client. When we run the client, what we want to see this time is that we don't try talking to this server, we go straight to this one. So there we go, we ran it,
and we just talk straight to the second server. Normally we'd go to the first one first, but we didn't. Again, we'll try it again. So we're just always going straight to two, because we know that one is not healthy. Try a third time, straight to two. So we lose that initial call that says, hey, go and talk to the first service, oh, it's not healthy, then I'll try the next one. The health check has told us the first one wasn't healthy
and we avoided calling it in the first place. That makes sense? Remember, that's the agent service check again, and that's basically us registering the agent service check via PlayFab's Consul library, which is just making the HTTP JSON call for us.
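Putting that together, a minimal sketch of registering the HTTP check (every five seconds, three-second timeout) and then asking the registry only for passing instances; the /healthstatus path is an assumption standing in for the demo's endpoint.

using System;
using System.Linq;
using System.Threading.Tasks;
using Consul;

class HealthCheckedDiscovery
{
    // Register this instance along with an HTTP health check the agent will poll.
    public static async Task RegisterWithHttpCheckAsync(string serviceId, string address, int port)
    {
        using (var consul = new ConsulClient())
        {
            await consul.Agent.ServiceRegister(new AgentServiceRegistration
            {
                ID = serviceId,
                Name = "orders",
                Address = address,
                Port = port,
                Check = new AgentServiceCheck
                {
                    HTTP = $"http://{address}:{port}/healthstatus", // must answer 200 OK
                    Interval = TimeSpan.FromSeconds(5),
                    Timeout = TimeSpan.FromSeconds(3)
                }
            });
        }
    }

    // Ask for "orders" instances whose checks are currently passing.
    public static async Task<Uri[]> GetPassingOrdersEndpointsAsync()
    {
        using (var consul = new ConsulClient())
        {
            var result = await consul.Health.Service("orders", "", true); // passingOnly = true
            return result.Response
                .Select(e => new Uri($"http://{e.Service.Address}:{e.Service.Port}/"))
                .ToArray();
        }
    }
}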
Shut that one down. So we'll talk a little bit about zero configuration next.
I forgot to do server-side discovery first. Okay, server-side discovery. Everyone kind of happy that you get client-side discovery, and happy that you get how you work with things like Consul and, by extension, etcd, SkyDNS, Zookeeper, those kinds of things, right? Feel good? Then you can go to your colleagues and say, I know everything now. I know what Zookeeper is, I know what it's for.
Trust me. Okay, the other model, which we use all the time, is essentially what we call server-side discovery, which is essentially some kind of hardware or software load balancer. This is what you use in front of your web farm: you stick a load balancer there. The load balancer's doing essentially exactly the same job. The load balancer has a pool of servers registered with it, which says, essentially,
I know that all these service instances exist. It can do health checks, so it can say, hey, I'm gonna talk to these services and I'll direct traffic to them if they're healthy. If they don't respond because they're unhealthy, I won't direct traffic to them. I'll direct it elsewhere. So when your service goes down behind a load balancer, in general, the traffic gets directed to other live sites. So a load balancer can do all this for you
and you don't need to have a service registration tool like Consul or Zookeeper. So why would you want Consul or Zookeeper? Well, a couple of things. First of all, generally, your load balancer is external-facing and essentially sits at your firewall. So in order to use the load balancer, you would have to expose your internal services
as ports on your load balancer and therefore make them available to anybody outside. So if these are internal services that you don't want to be client-facing across the firewall, you don't really want to use your load balancer to do that. And you could say, well, that's just fine. I have an internal load balancer
and that internal load balancer will handle all my microservices, as opposed to my external load balancer that faces everybody who lives outside. Well, I hope you have a lot of money. Load balancers are expensive, so you may not want to do that. The other thing about those internal load balancers is that even people who can afford one in production don't tend to be able to afford to run one in all of their environments.
Okay, so what tends to happen is that they want to use something else in their staging and other environments. And actually, if you want the same thing in all your environments, you may be better off using something like Consul for your internal microservices, rather than having an internal load balancer in production but using Consul everywhere else, right?
That kind of becomes a bit pointless: you've gone to all the effort of using Consul as well. But load balancers are certainly useful for solving the problem when you're externally facing. Whoops-a-daisy. Okay.
All right, the advantages of load balancers: well, they basically come for free. You don't have to do any development for them; they're pretty well understood and done. The disadvantages: a load balancer can be a single point of failure, so you have to deploy them in pairs, either active-active or active-passive. Generally you can't have just one load balancer, and that basically means it's quite costly, and, as we said, there's the external versus internal facing issue.
You can combine service registration and load balancing, though. Particularly if you use a software load balancer, one like HAProxy or Nginx, what you tend to find is that they can actually ask a service registry where the services are and then load balance amongst those services for you. Consul, for instance, has Consul Template.
Consul Template is essentially this: you spin up Consul Template and you say, monitor my service registry, take this template, and go and update this file. So, commonly used with Nginx, what happens is that you say, okay, monitor my service registry; when new services appear in there or services drop out,
overwrite the Nginx config file, give it a new config file, it will restart, and it will now load balance amongst my new set of services. So Consul Template lets you marry these two worlds of software load balancers and service registries. Okay, zero configuration. We've not got long, but I'll run through a little bit of zero configuration.
Zero configuration is this notion that, hey, configuration is messy and painful, and we don't do that with consumer items, right? I take my MacBook home and I plug in a printer at home and my MacBook discovers my printer; how does it do that? This idea is called DNS service discovery. It's a use of DNS that says, I don't want to just look up the names of websites
to go and look at pictures of cats, I actually want something more useful. So there's this notion of DNS SRV and DNS TXT records: the SRV records essentially tell you where a named service lives, and the TXT records carry extra configuration. And the idea is that you query DNS for some pointer records
and they come back as SRV and TXT record pairs, and from the SRV/TXT pair you can essentially get the IP address and port of the service. You can get an A record, which also helps you do that. Right, pretty complicated, but it actually works out of the box with something like Consul, because Consul will expose DNS as well as HTTP JSON.
So rather than using the HTTP JSON API to ask Consul about your services, you could use DNS. Okay. But is that actually useful from your own code? The problem is that .NET's built-in DNS classes take the machine's configured DNS server and say, I'm gonna route our requests to that,
and you can't point them at a different, arbitrary DNS server. So you have to get a third-party DNS library that says, okay, I can point at some arbitrary DNS server and ask it questions from my code. Once you've done that, you might as well have pulled down PlayFab's HTTP JSON client and used that to ask Consul where your services were.
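For completeness, a hedged sketch of querying Consul's DNS interface (port 8600 by default) for SRV records, assuming the third-party DnsClient NuGet package, since the built-in System.Net.Dns class can't be pointed at an arbitrary server; the "orders.service.consul" name follows Consul's DNS naming scheme.

using System;
using System.Net;
using System.Threading.Tasks;
using DnsClient;
using DnsClient.Protocol;

class DnsDiscovery
{
    // Ask the local Consul agent's DNS endpoint for the "orders" service.
    public static async Task PrintOrdersSrvRecordsAsync()
    {
        var lookup = new LookupClient(IPAddress.Loopback, 8600);

        var result = await lookup.QueryAsync("orders.service.consul", QueryType.SRV);

        foreach (SrvRecord srv in result.Answers.SrvRecords())
        {
            // Target is the host; Port is where that instance listens.
            Console.WriteLine($"{srv.Target}:{srv.Port}");
        }
    }
}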
But there can be some value to you, in some circumstances, in having DNS entries exposed. Generally you probably don't want to, but you can do this model where you get something like Consul to be the DNS server for your machine and then get it to forward any requests it can't answer on to your main DNS, but that's really only worth it if you get a lot of value
in having DNS over HTTP JSON. Whoops, too far. The other interesting thing in terms of zero configuration, though, is that nearly all, I think all, of the service registration tools are essentially just key value stores. And it isn't just that we can store
our service discovery information in there; we can store practically anything in there that we can express as a key value pair in terms of configuration. When we think about configuration for a service, there tend to be three kinds of things we care about. One: there are things we can configure at design time; generally we should do those in code.
Two: there are things that we need to configure at runtime. These are generally things that are affected by our environment, or by the need operationally to adjust the characteristics of the system. And generally the things we need to configure operationally are the things we can put inside a service registry, inside the key value store,
to say, when we're running in staging, these are the addresses you need to use; when we're running in QA, these are the addresses you need to use; when we're running in production, these are the addresses and values you need to use. The same goes for feature switches: they're different in production to staging, so I'm just gonna read them straight out of my key value store instead of having them somewhere else.
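A minimal sketch of reading that kind of per-environment value out of Consul's key/value store with the same .NET client; the key layout is an assumption, and the environment-variable helper anticipates the machine-dependent case described next.

using System;
using System.Text;
using System.Threading.Tasks;
using Consul;

class RuntimeConfig
{
    // Look up a per-environment value, e.g. "staging/orders/baseAddress".
    public static async Task<string> GetOrdersBaseAddressAsync(string environment)
    {
        using (var consul = new ConsulClient())
        {
            var result = await consul.KV.Get($"{environment}/orders/baseAddress");
            return result.Response == null
                ? null
                : Encoding.UTF8.GetString(result.Response.Value);
        }
    }

    // Machine-dependent values come from environment variables, not the registry.
    public static string GetMachineDependentSetting(string name)
    {
        return Environment.GetEnvironmentVariable(name);
    }
}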
Three: if they're machine dependent, in other words they depend on the machine you're running on rather than something you can get out of a registration service, you put them in environment variables. And that means you can look to a world where you don't have config files anymore: you don't have to manage config files, you don't have to deal with that templatized config merging when you deploy and go through all that hell.
You can just say, I can offload config: either I put it in environment variables on the machine, or I put it in a centralized registration service, or I configure it in code. That becomes particularly useful with things like Docker, where you tend to have a service per Docker instance, so essentially the environment variables
you are configuring really belong to that service rather than to anything else. And that's the dream, which is essentially death to config files. I hate config files. We seem to spend endless amounts of time dealing with config files, template merging strategies, where the global version lives for each individual server, and who controls it, ops or development, et cetera.
This stuff just makes that problem go away. Cool. Conclusion. What should you have learned? Hopefully you have learned the following things; if not, I've failed.
You basically know that service discovery is runtime discovery of service locations, and that if you have an environment with multiple services, you're gonna need to provide availability; you provide availability by providing redundancy, and if you have redundancy, you need to know where to find these servers, because you've got more than one. The way you do that is essentially service discovery. It's particularly important in cloud environments
where things may change their addresses on a regular basis. Zookeeper, Consul, et cetera, are service registries; they're just simple key value stores. Most of them have some kind of HTTP JSON API, some support things like DNS, et cetera, and you query them at runtime for configuration information. And all this stuff is fairly straightforward,
and most of you should be able to go back to your colleagues and say, hey, I know all about Zookeeper and Consul. Because that's pretty much all there is to it. Cool? Smiling faces rather than puzzled faces? We have, I think, literally probably a couple of minutes. So if anyone has any burning questions, they can stand at the microphone
in the middle, which is quite intimidating, or ask me their question and I will repeat it for the purposes of the mic. Feel free. Or you can always catch me later. This stuff is really quite straightforward. If you have any questions you don't feel comfortable asking in front of an audience, just come and find me and I will try and explain. Okay, any questions?
Everyone good? Everyone happy? You know what service discovery is? You can cross that, you can tick that box. Cool. Thanks so much, guys.