TIB AV-Portal

The Great Cloud Migration with Network Automation & Service Mesh


Formal Metadata

The Great Cloud Migration with Network Automation & Service Mesh
Title of Series
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date

Content Metadata

Subject Area
You need to migrate some workloads from a private datacenter to public cloud. The result? The unavoidable hybrid environment. How do you observe traffic and mitigate risky changes to each system? In this talk, I’ll discuss how you can supercharge your cloud migrations with a combination of network automation for your datacenter and a service mesh across environments.

How to Migrate to Cloud, Supercharged:

Step 1: Re-platform the application on public cloud.
Step 2: Deploy a service mesh in public cloud and at the edge of the datacenter.
Step 3: Configure datacenter routing with network automation.
Step 4: Send traffic using a blue/green deployment to the new application.
Step 5: Continue to run in both the datacenter and public cloud.

I’ll demonstrate how to add observability and failure tolerance to this workflow. By combining HashiCorp Consul service mesh and Consul Terraform Sync, I can manage, automate, and shape traffic in my hybrid environment from a single pane of glass.
Welcome to this session on the great cloud migration. We're going to talk a little bit about doing that with network automation and the ever popular service mesh.
The journey to migrate to cloud usually starts with some kind of statement such as: we want fifty percent of applications on cloud in about two years. This is a pretty typical statement, right? Some applications don't necessarily re-platform easily; other applications might need to be completely refactored. You might have a monolith that you break down into microservices. There are a number of patterns when you're given this directive to move to public cloud, or any cloud for that matter, but when you put a time frame on it, like two years, it's really difficult to accomplish that perfectly. The more accurate statement is that for the next five years, you plan on running across two or more platforms, whether these be container orchestrators, public clouds, or data centers. Basically, you're going to have a number of platforms to run and support for an indefinite period of time. It's really difficult to say whether or not you're going to commit to moving a hundred percent to one platform, and it's even more difficult to say that your workloads will all be able to run in containers, or all run on one kind of orchestrator. So from a very practical standpoint, you need to be able to support a combination for a very long period of time, and often in a hybrid environment as well. The solution to this, then, is to draw a lot of diagrams, and I've drawn a lot of diagrams just to articulate hybrid environments and multiple platforms over the years. I think that in every cloud migration I've done, I've probably drawn about ten to fifteen diagrams each time an application had to move, whether on whiteboards or trying to get an architecture diagram together to get buy-in to re-platform it. Now I don't necessarily draw as many diagrams doing cloud migrations; instead, I take some of the patterns that I learned, and today I'm going to hopefully articulate some of them to you. If you want to get in touch with me, I've recently
been on LinkedIn and many other platforms. You can also find me at joatmon08 on Twitter as well as GitHub, and if you would like a copy of the slides, or you would like the code that's demonstrated as part of this particular session, you can go to my website as well. So before we talk about cloud migration and the patterns associated with it, let's begin with a premise. It's easier to start with an example application and talk about how you might migrate that example application; it will make the patterns a little bit clearer. So let's assume that we've got an internal application we want to move to the
cloud. We've already done a pretty thorough analysis, and we think that this is worth re-platforming. It runs on a dedicated server and has multiple instances for availability, so you can divide them across geographic regions, maybe you have them across three data centers; any kind of combination is pretty much covered. It's used by something called a user interface service. The user interface service doesn't have the ability to move, at least not yet. It will stay in the data center for a while, because maybe it's a client you can't just re-platform that easily into some kind of platform or container, so we're not going to touch it for now; it's going to stay where it is. And finally, we've decided that this is the application that we're going to migrate. So we've got this hybrid environment of a UI service that connects to public cloud for probably a couple of years, or at least maybe a year. When we take this approach, there are a couple of steps involved in actually doing the migration. The first one is refactoring the application. There are a number of talks that cover how you refactor your application so it's cloud-native, or maybe you have a monolith that you break down into microservices; there are a number of different permutations and approaches. When you do this, the key part of this middle journey is to configure everything else with infrastructure as code. If you're moving to cloud, you usually want to configure with infrastructure as code, so all of your resources are managed in version control, you're able to make changes very quickly, and you can keep track of your inventory more accurately. This is another step that people talk about quite a bit; usually there's a whole other talk about why you should use infrastructure as code, as well as testing, pipelines, et cetera. But something important to focus on is that there's usually a
direct connect involved. If you have some kind of connection between public cloud and data center, this is where you would set it up. You can have some kind of VPN or a direct link; either way, you do need to do some configuration, and it's better to do that with infrastructure as code so you can track how you've made that connection. Then there are steps that few people talk about, and not very thoroughly in some ways: the splitting of traffic between data center and cloud. It's pretty trivial in some regards. Say fifty percent of traffic goes to the data center and fifty percent goes to cloud; you run this way for a little bit of time, you canary in a couple more requests to the cloud over time, and eventually you take away your connection to the data center and everything pretty much goes to the cloud instance of your application. But it turns out this traffic splitting problem can be pretty complicated. It seems trivial at first, but when you've been running this for five years, it's difficult to maintain. How do you get the visibility and the security of knowing that a certain percentage of traffic is going to one type of platform versus another? It's really difficult to say. It's also very difficult to get that visibility across multiple applications, especially if you're migrating or using something like the strangler pattern to re-platform an application from data center to cloud.
So today we're not going to talk about those first steps, because those in themselves are completely separate discussions, and also very, very difficult depending on the type of pattern you use. Today we're going to talk about how to supercharge the migration, and just about traffic splitting: how do you get the visibility across multiple platforms, and make sure that when you're splitting traffic you gain the benefit and efficiency of the automation that you already have? There are a couple of patterns for traffic splitting that I've seen.
The first is what I call the singleton, just because I'm borrowing from software development patterns. Basically, in this kind of scenario, a global application DNS controls traffic broadly. So if you've got something like data center east and data center west, you have one global DNS that controls traffic between data center west and data center east. This is something you would probably configure as part of some active-active, or potentially active-passive, configuration, if you're familiar with data center disaster recovery scenarios. You've also got data center and cloud, right? You can take a similar approach in which you have one application DNS and you point it at either a data center load balancer or a cloud load balancer. The caveat to this is that you have to have routing available, so that when your UI service, for example, resolves to the cloud load balancer, it is able to route to it; that's why there's a direct connect. Eventually, maybe you decide that you want everything managed in cloud, so you move the DNS over, and the DNS provider becomes sort of the main authority in your cloud environment and not necessarily in your data center. This is a completely optional step, and not always the case. The next pattern that I come across is the reverse proxy. Usually, when you have a number of services in a data center and you don't want to have a load balancer per service, or you don't want to configure a number of DNS entries for applications, because DNS and load balancing can be difficult at times to configure, what you'll do is use a reverse proxy. Based on path and header, it will forward the request to the specific service that you're looking for. So in this case, if I have my application behind the reverse proxy, it has a rule to route to my application instances; a pretty trivial task to get set up in the data center. For
the cloud, you basically point the reverse proxy at the DNS. The final pattern that I see is a composite DNS, or composite traffic splitting approach, where you create a subdomain specific to the data center or region and then you register that to a top-level domain. So in this case we have my-application.my-company.net, and we also have my-application.datacenter, which represents the data center instance of the application. Similarly, if you have this splitting into cloud, what you would do is split between my-application.datacenter and my-application.cloud. In this case you get a lot of visibility into which instance you're splitting traffic to and where, and you're managing this traffic split from the top-level domain. However, it doesn't help as much if you have load balancers and are planning to manage traffic splitting at the load balancer level.
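As a rough sketch of that top-level split, assuming Route 53 and Terraform (the zone variable and record names are illustrative, not taken from the talk's demo):

```hcl
# Hypothetical sketch of the composite DNS pattern: weighted records that
# split my-application.my-company.net between a datacenter subdomain and a
# cloud subdomain. The zone ID and names are assumptions for illustration.
resource "aws_route53_record" "datacenter" {
  zone_id        = var.zone_id
  name           = "my-application.my-company.net"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "datacenter"
  records        = ["my-application.datacenter.my-company.net"]

  weighted_routing_policy {
    weight = 50
  }
}

resource "aws_route53_record" "cloud" {
  zone_id        = var.zone_id
  name           = "my-application.my-company.net"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "cloud"
  records        = ["my-application.cloud.my-company.net"]

  weighted_routing_policy {
    weight = 50
  }
}
```

Shifting the canary then means editing the two `weight` values and re-applying, which is exactly the manual toil the rest of the talk tries to automate away.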
Great, so I've explained these patterns, but why supercharge it? Why try to change it up, why try to find different ways of automating it? Well, for one, DNS isn't always that easy to configure, and it's not always the most visible way to determine how much traffic you're sending to the application. On top of that, the patterns that you use to manipulate the weight at the DNS level, or even the load balancing level, still take quite a bit of time. If you have one service, that's fine, but if you have ten services, you're now responsible for trying to figure out when you should canary certain parts of certain applications, when you should do it, and how you should do it. They all have different patterns because they're all managed by different teams, and that becomes really difficult to scale from an operational standpoint. It becomes even more difficult if you're going through this strangler pattern to piece out certain parts of your application footprint into cloud. It's got to be evolvable; you have to be able to change it, because sometimes a migration might not work out. You might migrate halfway and never complete the rest of it, so you have to be able to evolve. So let's talk about how you supercharge this.
Maybe the way you can supercharge this is to put some kind of infrastructure layer here, between the DNS and the weighting, between cloud and data center. Remember, we're only talking about traffic splitting; any of the other re-platforming and any infrastructure-as-code components all come as prerequisites for this approach. But the infrastructure layer that you insert, you could argue, could be a service mesh, right? A service mesh, in its abstract form, is an infrastructure layer that facilitates communication between services. It offers security, traffic management, et cetera; I'll go a little bit into it, but service meshes aren't the key focus. What I'm going to show is sort of a different topology of how to approach service mesh for migration in particular.
So you can say, alright, I've got a greenfield environment in cloud, and I can deploy service mesh there. That's easy, because I have nothing there right now; we just put it out there. Maybe I'm new to it, I'm not that familiar with it, and I don't really have a team to manage it right now; we just want to prove it out and make sure it's good. So we deploy it to cloud: greenfield, net new, that's easy. The problem is retrofitting it into the data center. If you put it in your data center, you might have to add new components, and you might have to retrofit binaries onto servers that probably haven't been updated in some time. So the reasonable assumption is to not put service mesh in the data center, and instead use some form of network automation to synchronize between the data center and the service mesh you have in cloud. The service mesh in cloud offers a service catalog, and there is a way for you to take that service catalog and synchronize it to your data center DNS or your data center load balancing, in a way that you don't have to worry about the dynamic part of your cloud infrastructure. Instead, all you need to do is make sure you manage your data center, and maintain it in a way that's not going to affect your cloud. So the important pattern here is that you need some kind of network automation to synchronize: to handle the dynamic part of cloud, but also the more static nature of your data centers. Today I'm going to demonstrate this with a combination of two open source tools: Consul, with its service mesh capability, and Consul Terraform Sync. The pattern here is something you can actually write yourself; if you have a different kind of infrastructure-as-code tool that's not Terraform, you're welcome to write it and
use a similar pattern. There are ways that you can retrieve information from the service catalog in your service mesh and then synchronize it into the load balancing in your data center, or into any other network automation you need, whether that's firewalls or load balancing. The caveat to the approach that I'm showing today is that the data center and cloud must be able to connect to each other. Of course, if you've got routing, and you've got some DNS and load balancing between the two, you need to be able to connect between data center and cloud. The good news about this approach is that you don't have to change anything in your downstream applications. For example, my UI service doesn't need to change the DNS from my-application.my-company.net, and if I have a layer of authorization in front of it, I don't really have to change that either. The point in this case is that you don't have to change the configuration in any of the applications that depend on this service, right? This allows you to strangle out certain pieces of the application without affecting any of the other dependents. So, a quick description of Consul Terraform Sync, if you want to implement it yourself. The Consul Terraform Sync binary is actually a set of steps; it's a holistic approach to network automation using a lot of the open source capabilities that are in Consul already. I was actually implementing this myself, ironically, a couple of months before this binary was released; it would have been simpler to just wait for it to come out. Basically, Consul registers changes to services, right? As these get updated in Consul, there's a way that you can set something called a Consul watch, and the watch allows you to poll for changes in metadata. A watch gets set, and Consul Terraform Sync runs a daemon that watches for service changes. The daemon then takes those service changes and templates them
out into some kind of Terraform automation. If you're using infrastructure-as-code tooling already, the idea is that it will template this into an infrastructure-as-code configuration for you, so this consul-template capability gives you a Terraform configuration. The caveat is that you have to have a Terraform module. You can write your own, but you have to make sure you take the right input with the service metadata. The key part of this, which I'll show, is that you need to adhere to a contract set by Consul Terraform Sync and how it plans on outputting the services as a services input to the module, so you can parse the service metadata within the module. This will allow you to basically take IP addresses, node addresses, and other Consul metadata, and then push them to the network device of your choice. You can write your module for load balancers or firewalls, and there are vendor-provided modules as well. Consul Terraform Sync will then run the Terraform configuration: it will pull down the module for you and execute it every time a service changes. This whole set of steps is pretty much packaged into a binary. You could implement this yourself, and again, I've done it, but the idea is that if you need dynamic changes in your cloud environment, and you need to get them automated out to your data center or otherwise, this binary pretty much expedites the process for you.
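The packaged workflow above is driven by a small HCL file for the daemon itself. A minimal sketch, assuming the early 0.x schema (field names have shifted between releases) and a placeholder module address:

```hcl
# Consul Terraform Sync daemon configuration (sketch). The module source
# and file names are placeholders, not the talk's actual demo values.
consul {
  address = "localhost:8500"     # agent whose catalog CTS watches
}

driver "terraform" {
  # CTS renders a Terraform working directory per task and runs it
  # automatically whenever a watched service changes.
  backend "consul" {}            # keep Terraform state in Consul's KV store
}

task {
  name           = "datacenter-lb"
  description    = "Sync my-application instances to the datacenter LB"
  source         = "example-org/lb-listener/aws"   # placeholder module
  services       = ["my-application"]              # only react to this service
  variable_files = ["lb.tfvars"]                   # listener/target group ARNs
}
```

The `task` block is the contract point: it names the service to watch and the module that will receive the catalog data.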
In today's demo, I'm going to show the combination of network automation and the service mesh. Let's say you don't want to backport service mesh capabilities into your data center; you just have to focus on taking what you have and making sure it's automated. What I'm going to actually show is all code. However, please note that the data center I am mocking is in AWS us-east-2; this is because I want to make sure you can reproduce this without needing a data center load balancer or a particular appliance. But you can do this with any vendor, as long as it has a Terraform provider. So while today I am showing the configuration being done on an application load balancer in AWS, you can rewrite the modules and use modules for the data center load balancer vendors that you use. OK, let's start with this.
First, let's examine what has been configured in the load balancer. This is an AWS application load balancer, which focuses on layer seven; you can use other load balancers in AWS, but today we're just going to use this one. The main thing to notice is that one hundred percent of traffic is going to the data center. Great. Right now I have
an application in cloud, and my application in cloud is actually already registered in Consul. All of the health checks are passing; that's the nice thing about having a service mesh. I can define all of the health checks to make sure everything is running correctly. Here, all the node checks are passing and all the service checks are passing. These are all in cloud and they're ready to go; you can tell they're in cloud because Consul is telling me so. OK, so nothing is going to the cloud instances right now. I can demonstrate this: if I make a curl to the API right now, all of the requests come back as data center. So this makes a little bit of sense.
So now that I'm ready, let's focus on the traffic splitting part. Let's say I want to canary some traffic to my cloud instance. I don't know if I refactored for cloud correctly, but I do want to make sure, because, you know, who knows what's going to happen; sometimes it's a little bit more difficult to say. So let me get to the tail end of the plan: what I'm going to do is edit the deployment to change the Consul metadata and put some traffic toward cloud. What I've done is created some metadata to control the weight, as well as the host header that I want. Now, some of this can be controlled in other parts of Consul; in this case, I'm using the service metadata in order to do it. This registers its changes in the catalog. If I examine my service catalog, what I'll be able to tell is that there are new instances coming up. These are all new instances coming up; they're getting healthy; now they're all healthy. They're all new instances and everything is great. But what's amazing is that I've run Consul Terraform Sync in the background, so
Consul Terraform Sync is registering and retrieving all of those new services that are being rolled out as part of my cloud service mesh. This allows Consul Terraform Sync to register those changes and say: OK, let me see if I need to change anything in my data center. What it's telling me is that one thing changed in my data center, because those new services came out. And what changed, exactly? Well, if I refresh my listener rule, remember that I changed the weight: fifty percent went to cloud and fifty percent went to data center.
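The deployment edit described above boils down to a Consul service registration carrying extra metadata. A minimal sketch, assuming the `weight` and `host` meta keys are the contract my module expects (the port and health endpoint are placeholders):

```hcl
# Consul service definition (sketch). The meta keys are plain strings that
# Consul stores as-is; treating "weight" and "host" as a traffic-splitting
# contract is this demo's convention, not something Consul interprets.
service {
  name = "my-application"
  port = 9090                                   # placeholder application port

  meta = {
    weight = "50"                               # share of traffic for cloud
    host   = "my-application.my-company.net"    # host header to match on the LB
  }

  check {
    id       = "my-application-api"
    http     = "http://localhost:9090/health"   # placeholder health endpoint
    interval = "10s"
    timeout  = "2s"
  }
}
```

Changing the canary then only means redeploying with a different `weight` string; the sync machinery notices the catalog change and does the rest.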
So now a fifty-fifty split is happening. What's going to happen? Let's see if I get the same kind of response. I get data center... but here I've got a bad gateway.
That's interesting. If I'm someone who's using the UI service, somehow I now get an error in the UI service: it can't connect to my application. That's an indicator that something is wrong. So I'll go back and say, OK, this is not working; my initial canary rollout isn't as successful. What I'll do is mitigate the weight by maybe putting it at ten percent. Now I can still get some traffic, see what's going on, and debug those errors, but I know I'm not affecting that many users, and I'm using part of my error budget to do that. So while that's rolling out, what I'll do is actually show you what Consul Terraform Sync looks like in the back. I have a configuration, and what this configuration does is retrieve information from my watch; you can actually see the module if you'd like. It's basically configuring just the listener rule for the load balancer itself, and it only listens for my application's changes; it doesn't watch for any other application's changes, just the service for my application in Consul. On top of that, I can set a couple of variables, and I have set a couple, including the load balancer's listener and target group identifiers. When Consul Terraform Sync runs, it generates a directory of sync tasks, and the directory for each task name contains a series of Terraform configurations, including sort of a main Terraform file and a variables file. If you're familiar with Terraform: basically, as long as that module is there, it creates a main file with all of the templated values in it, including the module source and version, and it will configure the provider for you. So all of these are available. I would show you the variables file; however, the variables file does have secrets, so
I'm not going to show it directly. And do make note that Consul Terraform Sync uses Consul as its state backend here; you can configure a couple of other different kinds as well, so just check the documentation. I'm just using the Consul backend for this one. OK, so let's double check and make sure that everything is working. Now my new applications have rolled out, and I can check the weights just to make sure that the ten percent is now registered. Consul Terraform Sync automagically picked up my ten percent weight change, and now it's going to divert ninety percent to
the data center and ten percent to cloud. You can tell that this is the case if I go to my instances; for example, in the metadata here, the weight is ten. In terms of the target groups themselves, the other important thing being changed here is the target group. When Consul Terraform Sync outputs these services, there's a set of services as well as their ports and IP addresses. What I'm doing is basically using a module to parse out the IP addresses as well as the ports, and then register them into the target group. So I'm not only manipulating the listener and the listener rules; I'm able to manipulate the targets and register the Consul services very dynamically into my load balancer in my data center. That's what's important about this particular setup. Now I'm going to update my service, because I think we've fixed the problem. This is a mock application, so keep in mind that what is happening here is not fully representative;
it's not that simple. The debugging process here is just changing the error rate back to zero so that it will successfully deploy, and then, you know, you'll get that sort of new rollout. If you're doing this with your own application, for example, you could control the weights by a CI process. So this is where that kind of greater elegance comes in: we are doing a canary deployment, and even if you don't have this capability and service mesh across the board, and you don't have all these progressive delivery tools, you can actually still use what you have and get a streamlined way to roll out changes very, very cleanly. In this case, I'm doing a canary, and I'm really just changing the weight through the new cloud application's interface, in this case Kubernetes, and letting Consul Terraform Sync, that network automation, synchronize it for me. So I know nothing has changed here; the application still has, maybe, five services registered,
and I'll just try to access it now. I'm still going to get data center ninety percent of the time, but because I fixed it, now it's going to divert ten percent of traffic to cloud. So I can continue to use this canary process: if I make changes to cloud and need to improve it, I can still use the interface that I expect to use for rolling out my application in a more dynamic way, and have that network automation synchronize very, very efficiently for me. All of these changes pretty much just get run by Consul Terraform Sync; you don't have to see anything. You run Consul Terraform Sync, and Terraform pretty much just continuously gets executed whenever services change. So back to our original conversation: this is a pattern that works really well when data center and cloud can connect and you don't need to make changes to downstream applications. In addition, let's say you can't retrofit the service mesh into your data center; this is actually a very low-overhead way of making sure you get the automation as well as a seamless workflow to deploy to cloud.
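To make the module contract concrete, here is a rough sketch of what a compatible module might look like. The `services` variable is a simplified version of the shape Consul Terraform Sync generates (the real one carries more fields), and the ARN variables are assumptions about how this demo's task would be configured:

```hcl
# Sketch of a module following the Consul Terraform Sync input contract.
variable "services" {
  description = "Consul services monitored by Consul Terraform Sync"
  type = map(object({
    id      = string
    name    = string
    address = string          # instance IP from the Consul catalog
    port    = number
    meta    = map(string)     # carries the weight/host contract keys
  }))
}

variable "listener_arn" { type = string }
variable "datacenter_target_group_arn" { type = string }
variable "cloud_target_group_arn" { type = string }

locals {
  # Read the cloud weight from the first instance's metadata; default to 0.
  cloud_weight = try(tonumber(values(var.services)[0].meta.weight), 0)
}

# Register every catalog instance as an IP target in the cloud target group
# (assumes the target group was created with target_type = "ip").
resource "aws_lb_target_group_attachment" "cloud" {
  for_each         = var.services
  target_group_arn = var.cloud_target_group_arn
  target_id        = each.value.address
  port             = each.value.port
}

# Weighted forward rule splitting traffic between data center and cloud.
resource "aws_lb_listener_rule" "split" {
  listener_arn = var.listener_arn

  action {
    type = "forward"
    forward {
      target_group {
        arn    = var.datacenter_target_group_arn
        weight = 100 - local.cloud_weight
      }
      target_group {
        arn    = var.cloud_target_group_arn
        weight = local.cloud_weight
      }
    }
  }

  condition {
    host_header {
      values = [try(values(var.services)[0].meta.host, "my-application.my-company.net")]
    }
  }
}
```

Because both the targets and the listener weights are derived from the catalog, a metadata change in Consul is all it takes to reshape the data center load balancer.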
The downside to all of this, however, is that it's difficult to maintain as more services migrate. The more services you have, the more difficult it starts to be to maintain the configuration for this, and you might consider other models. I think this will work for quite some time, especially if you've got services that cannot be added to the mesh, or you don't want to reconfigure upstream or downstream applications, but it might not scale as more services go onto the service mesh. So if your data center and cloud can connect, as a second step in the migration process you might consider deploying something like an ingress gateway to allow traffic into the mesh and handle the traffic there. Then you would have to make changes to your UI service so that it is actually referencing the Consul ingress gateway; keep that in mind. You can also use terminating gateways, so anything coming out of the mesh and going to your data center could go through that terminating gateway. This is mostly Consul-specific terminology, but a number of service meshes offer similar constructs. Now, if your data center and cloud cannot connect at all, as sometimes happens, there is additional observability and configurability that can be gained if you deploy the service mesh into the data center. I did say it was sometimes difficult to retrofit the service mesh into the data center, but it might be worth it just because you get the observability and configurability across both meshes. You can do this with Consul, and you do have the ability to control network policy. In this case, I could say fifty percent goes to my-application.datacenter and fifty percent goes to my-application.cloud in a pretty easy way. You do have to deploy something like the mesh gateway, and the mesh gateway provides
a secure connection between both service meshes. That's important to deploy, especially if you decide to go with this model. It is a more ideal approach, but again, not everybody can retrofit a service mesh into the data center. If you want to learn more, take a look at the data center load balancing examples that are available on the F5 blog; there are also additional network vendor integrations listed on the Consul Terraform Sync page. You can examine those modules there if you're curious to learn more about Consul Terraform Sync.
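As a footnote on the mesh-on-both-sides model: in Consul, that fifty-fifty network policy could be expressed as a service-splitter configuration entry. A sketch, assuming the datacenter and cloud instances are distinguished as service subsets defined by a matching service-resolver:

```hcl
# Sketch of a Consul service-splitter configuration entry. The subset
# names are illustrative and would be defined by a separate
# service-resolver configuration entry.
Kind = "service-splitter"
Name = "my-application"

Splits = [
  {
    Weight        = 50
    ServiceSubset = "datacenter"
  },
  {
    Weight        = 50
    ServiceSubset = "cloud"
  },
]
```

With the split living in the mesh itself, adjusting the canary becomes a single configuration entry write rather than a load balancer change.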