Tunnels as a Connectivity and Segregation Solution for Virtualized Networks

Video in TIB AV-Portal: Tunnels as a Connectivity and Segregation Solution for Virtualized Networks

Formal Metadata

Title: Tunnels as a Connectivity and Segregation Solution for Virtualized Networks
License: CC Attribution 2.0 Belgium. You are free to use, adapt, copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the work is attributed to the author in the manner specified by the author or licensor.

Abstract
Join me for an architectural, developer-oriented overview of GRE and VXLAN tunnels in OpenStack Networking. In a virtualization environment, virtual machines are hosted on hypervisors, and these VMs obtain network connectivity via software switches running on those same hypervisors. Data centers that provide infrastructure as a service have (hopefully) multiple customers, or 'tenants'. As you can imagine, we don't want tenants' VMs interacting with one another. VLANs are a natural approach to tenant segregation. However, how do we maintain scalability with a growing number of hypervisors and VMs, when the administrator has to constantly configure the hardware switches manually? Is there another way?

We all use VPNs to connect to our office resources remotely, or to connect two office sites into one seamless network. VPNs are essentially encrypted tunnels, but what are tunnels? Tunnels allow us to wrap packets inside more packets; in our context, VM traffic in exterior IP packets. That way, to the intermediate networking hardware, it looks like traffic between the hypervisors. Since the hypervisors should already be able to talk to each other, this makes VM connectivity a breeze!

Let's explore how tunnels are used in the cloud as a means to achieve an overlay network. What is an overlay network? How does traffic flow between virtual machines on the same hypervisor, and on different hypervisors? What are the similarities between a layer 2 learning switch and tunnel logic in OpenStack? How does Open vSwitch fit in? Is there a cost to using tunnels? This talk will be useful to developers interested in learning about new networking concepts; minimal background knowledge will be assumed.
Alright, so we're going to be talking about a very networking-oriented topic, but aimed at developers. We'll talk briefly about VLANs as a cloud solution, specifically in OpenStack and OpenStack Neutron. That's just the introduction: how we use VLANs to hook up VMs and segregate different tenant networks, then the shortcomings of VLANs, and finally how we use tunnels to solve the same problems, and maybe even more.
I'm a software engineer at Red Hat, working on networking in OpenStack and on Open vSwitch, which you've already seen multiple talks about. I'll be completely honest: I've probably never talked to more than 40 people at the same time, and I don't know if anyone's ever told you this, but you are a lot of people. So we're going to do a quick warm-up; this one is actually for my mother. We'll do a one-two-three-"freedom!" sort of thing. One, two, three! ...We can do better, and we did do slightly better. One, two, three! Thank you, that was very nice. I was going to be moving around a lot, but I was told not to do that, so I'll stay in frame.

So this is a normal, proof-of-concept, small-scale networking deployment. As you can see, we have two different physical networks. The orange one is for management traffic: that's for things like telling a compute node that it should bring up a new VM. Then we've got another physical network just for the VM traffic: VMs talking to each other and to the world outside the cloud. You don't need to create two separate physical networks, of course; you could also just use two different VLANs. But the point of this slide is that from here on we're only going to be talking about the VM data network: how VMs talk to each other and to the outside.
So how did we do this a long, long time ago, you know, yesterday? We used VLANs. VLANs are a very well-known concept; they come from the physical world, so it's kind of natural for networking people to just yell "VLAN!" at every problem.

VLANs have access ports and trunk ports. That's a switch configuration: you enter the physical switch and configure each port to be either an access port or a trunk port. An access port basically says: this specific port is in VLAN 100, or in VLAN 200, and the traffic coming out of that access port is completely untagged. So the VM, or the host, connected to that access port doesn't know that it's in a VLAN; the switch does. If a switch has two access ports in different VLANs, the switch can do filtering: it can forward traffic from one port to the other, or not. Trunk ports are different: you configure a specific port as a trunk port, and it actually carries tagged traffic. The VLAN identification number, which is just a 12-bit number, is carried with the message; on the slide it's shown as the color, VLAN 100 or 200. The message itself carries which VLAN it's coming from. A trunk port has a range of VLANs that are allowed on that trunk, shown here as the dashed cables, so it could carry both VLAN 100 and VLAN 200. If it gets a message from VLAN 500, that traffic will just be filtered and dropped; that's the point of trunk ports.

So we can see that we have four compute nodes, and the bottom-left one is a blow-up, a zoom-in. That looks like a physical switch, but it's actually a virtual switch. In OpenStack we use br-int, which was talked about in the last lecture; it's called the integration bridge. It's a virtual switch, like a Linux bridge or an Open vSwitch bridge, for the open-source plugins. Neutron, the whole OpenStack networking project, is pluggable: you can have different implementations behind the same API, open-source ones and proprietary closed-source ones. For the open-source implementations you have a virtual switch inside each compute node. The bottom ports are access ports, the VMs are connected to those access ports, and the virtual switch is connected to the physical switch via a trunk port.

So this works, this is fine, but there are limitations. Just hear me out: this virtual machine in blue, in the blue network, VLAN 200, sends a broadcast message, and the right compute node gets that broadcast. As human beings we can tell that this specific compute node isn't hosting any red VMs; there are no VMs from the red network on it. So this broadcast, originating from our computer on the blue network, shouldn't even reach that node; there's no point. The way to accomplish that would be to interact with the physical network, and I'm a software person, I'm not really into that; I like virtual, software stuff. Configuring the physical switches would be kind of a bummer: I would have to connect to each physical switch, and I could have hundreds in my cloud, and manually configure each trunk port to carry only the specific VLANs it needs. That would accomplish what we want: the broadcast reaches the physical switch, the physical switch sees that this trunk port is only carrying the red VLAN, so it wouldn't forward the broadcast message. Mission accomplished, but obviously that's very manual, very tedious, and it doesn't scale, even for small deployments. So what most system admins actually do is enter each switch and, for each trunk port, allow the entire VLAN range, because that's the only thing that's feasible. That means that basically every broadcast reaches everywhere: all of the compute nodes, even compute nodes that aren't hosting VMs in that specific network. Does that make sense? So that's kind of a problem.

There's also a philosophical issue. VLANs are a physical-world concept, but our compute nodes have to be aware of the same VLANs: we're taking VLANs and dragging them into the virtual world, which is not ideal. So that's VLANs; we can do better.
Tunnels are a well-known and familiar concept: you know them as VPNs. We all use VPNs to connect from home when we want to work from home, and so on. Tunnels are basically like a VPN, but unencrypted; or, a better way of putting it, a VPN is a tunnel plus encryption. For this talk I don't care about encryption, and who cares about security anyway, right? (I kid.) We're going to be talking about tunnels, just the unencrypted stuff; you can figure that out first and then stack encryption on top. There are also different types of tunnels, GRE tunnels and VXLAN for example, but that doesn't matter here; those are just different headers, and we're only going to be talking about concepts.

There are two use cases, or at least two that I could fit on this slide. Say I have a large corporation with a bunch of different sites, one in Brussels and one in another city. The experience I want is that our computers hosted in the different sites feel like they're on the same network. Say Brussels is the headquarters and it has a bunch of company resources, like internal websites for employees only, and I'd like the people from the other sites to be able to reach those. I'm also mostly using private IP addresses, so normally, without a tunnel, I wouldn't be able to ping from a Brussels computer to a remote computer, because private addresses aren't reachable from outside; just like in our own home networks, where we have private addresses and people theoretically can't reach our computers from the internet. There's also a personal example: I have a development laptop in my own office and I want to be able to SSH to hypervisors or compute resources that are in different sites.

So we use a tunnel. Mind you, this kind of tunnel is a manually configured thing, which we actually configure on the physical routers. We basically SSH or telnet into the left router and tell it: if you want to reach the 172.16 network, route all of that traffic to the tunnel device. The tunnel device has its own configuration, which I also enter manually: I create a tunnel interface and configure what the source IP and the destination IP of the tunnel are, and then I just set up the static routing I talked about earlier. The left router knows that to reach that remote network, traffic has to leave through the tunnel device, and the same the other way around. That's the magic configuration you do. Working from home is very similar: you create a tunnel from your own computer to the VPN server you're connecting to; it's actually exactly the same thing. So tunnels aren't magic; they're just a silly encapsulation trick.
If you remember the OSI seven-layer model, or the TCP/IP model you've probably heard of, there are different layers, different headers, encapsulation. What are we doing here? We take the data, whatever it is, and the IP packet wrapping that data: the source IP is some machine in Brussels and the destination IP is some machine on the other side. We take that packet and we wrap it in our tunnel header, GRE in this case. The only important field in the GRE header is the next protocol: each header has data inside, and the data is just a bunch of bits, so the question is how the computer knows how to interpret it. Each header says what the next header is, and here the next header is also IP, because we're just taking IP packets and wrapping them in more IP packets. The beauty is that in the outer packet, the source IP is the source of the tunnel and the destination IP is the destination of the tunnel. So the world, the internet, just sees a packet being routed from 1.1.1.1 to 2.2.2.2. The only thing we require from the physical network is that these two routers can actually ping each other. The ISPs, the routers on the internet, don't know what this Brussels network is or what that remote network is; they just need to be able to route from one router to the other. That's the physical, real-world usage.

In OpenStack Neutron, in the cloud, in that whole datacenter virtualization management complex, we do it a bit differently. These are our compute nodes, our hypervisors, and they're hosting VMs.
We just create a full mesh of tunnels: we hook up each compute node to every other compute node. The first thing that should pop into your mind is that this sounds expensive, but you just need to remember that these tunnels are cheap: the overhead is basically just an entry in a database on each compute node, and we know that's dirt cheap. So we create this full mesh of tunnels between all of the compute nodes, and then it's exactly like the previous example: if a VM wants to talk to another VM that's on another compute node, we just do the encapsulation trick, we wrap up the packets exactly like before. And if two VMs on the same compute node want to talk, they just talk to each other directly via the same virtual switch: both machines are connected to the same virtual switch, br-int, the integration bridge, an Open vSwitch or Linux bridge, and they just talk to each other. But what if we want to achieve segregation?
Like in the previous example, I still have two networks: the red network and the blue network. So how does segregation work? For VMs on the same hypervisor, we still use VLANs, which is kind of confusing: we use tunneling for forwarding traffic between two different compute nodes, but br-int, the virtual switch on each compute node, still uses access ports to segregate between different networks. The VMs connected to the same virtual switch still use VLAN access ports, so each network gets a different node-local VLAN. We're mixing and matching. What's more interesting is what's going on with traffic between different compute nodes, because we want to talk about the tunnels. With VLANs we used trunk ports: we tagged the frames with the VLAN identification number, 100 or 200, so that information isn't lost; it's forwarded across the entire network, and the other end knows which VLAN the message originated from, so it can filter or not. We do the exact same thing with tunnels. In the GRE or VXLAN header, the tunneling header, there's a field called the tunnel ID, and it serves the exact same purpose: we take a network, we tag it with a specific number, and that number is placed inside the tunnel header, so we can color, or tag, packets with the ID of the network they came from. That's how we do segregation across compute nodes.
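A tiny sketch of the mixing and matching described above; the values are hypothetical, but this mirrors the translation the tunneling bridge performs between node-local VLANs and global tunnel IDs:

```python
# Each Neutron network gets a node-local VLAN on br-int and a
# cloud-global tunnel ID (GRE key / VXLAN VNI). Illustrative values:
local_vlan_to_tunnel = {1: 1001, 2: 1002}    # local VLAN -> segmentation ID
tunnel_to_local_vlan = {v: k for k, v in local_vlan_to_tunnel.items()}

def outbound(local_vlan: int) -> int:
    """Leaving the node: strip the local VLAN tag, stamp the tunnel ID."""
    return local_vlan_to_tunnel[local_vlan]

def inbound(tunnel_id: int) -> int:
    """Entering the node: strip the tunnel ID, stamp the local VLAN tag."""
    return tunnel_to_local_vlan[tunnel_id]
```

The local VLAN numbers are meaningful only on one compute node; only the tunnel ID is global, which is what frees us from configuring physical switches.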
So let's talk about the logic: how do we forward unicast traffic, and how do we forward broadcast traffic? Here's a short review. Classical switches, which could be physical or virtual, are learning switches. How do they work? They have a table that is a binding, a map, between port numbers and MAC addresses: if the ports are numbered 1 to 24, each port is bound to a set of MAC addresses. The switch sniffs traffic as it goes through and looks at the source MAC address: where is this message coming from, which port did it come in on? If MAC A is the source MAC and it came in on port 1, then I know that whenever someone wants to talk to MAC A, I can just forward the message to port 1, and that's where MAC A will be found. That's what learning switches do, virtual or physical.

We do something very similar. We have two virtual switches on each compute node: the integration bridge, which all the VMs are connected to, and it in turn is connected to the tunneling bridge. On the tunneling bridge we create the same sort of table. As each packet comes in from another compute node, I check which tunnel it came in on; since I have a tunnel to each other compute node, that tells me which compute node, which peer IP, the message came from. I take the tunnel ID, which is really the network ID, and together with the peer those are just two numbers, and I bind them to the source MAC address. So it's the exact same concept as a learning switch; we just apply it to different headers: the tunnel peer and the tunnel ID, bound to the source MAC address. That way we know, when forwarding traffic to a specific MAC, which compute node we should send it out to.
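The learning logic can be sketched generically. The same class works whether a "port" is a physical port number or a tunnel to a peer compute node, which is exactly the point of the analogy (illustrative, not the Open vSwitch implementation):

```python
class LearningSwitch:
    """Classic L2 learning: remember which port each source MAC came in on."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}            # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Learn the source, then return the list of output ports."""
        self.mac_table[src_mac] = in_port               # learn
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # known unicast
        return [p for p in self.ports if p != in_port]  # flood everywhere else
```

With tunnels, the "port" is simply the (peer IP, tunnel ID) pair from the tunneling header: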
OK, so we talked about unicast traffic and learning: how we know which compute node is hosting the VM I want to talk to. Now let's talk about broadcast traffic, because broadcast traffic is different, and maybe we want to optimize some things there. Generally speaking, the first approach you'd think of is: any broadcast traffic (and multicast as well, for that matter) leaving a VM on the red network reaches the tunneling bridge on that bottom-right compute node, and we just send it out through every tunnel. That's the basic approach, and it definitely works, but we want to be smarter, because we can, and because we're software engineers: we like these optimizations, they're something nice we can reason about. We can do two things: send fewer broadcasts, and, for the broadcasts we didn't optimize away, make them reach only the compute nodes they should reach.

About minimizing broadcasts: it turns out a lot of broadcasts are just ARP requests. In the physical and the virtual world alike, if two computers want to talk, and the first computer knows the IP address of the second but not its MAC address, because they've never talked, it asks in broadcast fashion: what is the MAC address for this IP address? And the other machine answers it. So a lot of broadcast traffic is just ARP. Now, there is a Neutron controller, and even without a dedicated SDN controller (in the previous session we talked about OpenDaylight and that sort of thing), plain OpenStack Neutron knows, for each VM, what its IP address is, what its MAC address is, and where it is scheduled: which compute node is hosting it. So we're capable of pre-filling our ARP tables: for each IP address we know its MAC address, and we can send that information to all of the compute nodes, so every compute node holds a full ARP table for all of the VMs. What does that mean? When this VM tries to ping or otherwise communicate with another VM, the local compute node has already received a full ARP table from the Neutron controller, and it knows: for this IP address, here is the MAC address, and it just answers the ARP request locally. There's no reason to take the broadcast and forward it to all the tunnels; the compute node answers the local ARP request itself. That's minimizing broadcasts: a lot less broadcast traffic, which is good.

The other thing we can do, for the remaining broadcasts, of which there are still a lot, is make them smarter. Going back to that centralized Neutron controller, which knows where all the VMs are, what their IP addresses are, and what their MAC addresses are: whenever this VM sends out a broadcast message, the compute node now knows "I probably shouldn't forward this to that node, because the controller told me there are no red VMs there." There's a small asterisk here: this only works with the ML2 plugin, with a specific mechanism driver (l2population), from specific versions; but that's the small print.
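A toy sketch of the local ARP responder idea, with the controller pre-pushing IP-to-MAC bindings (the class and method names here are made up for illustration, not the Neutron API):

```python
class ArpResponder:
    """Answer ARP requests locally from a controller-supplied table."""

    def __init__(self):
        self.arp_table = {}                 # IP -> MAC, pushed by the controller

    def controller_update(self, ip, mac):
        """The (hypothetical) controller tells us about every VM up front."""
        self.arp_table[ip] = mac

    def handle_arp_request(self, target_ip):
        """Reply locally if we know the answer; otherwise fall back to flooding."""
        mac = self.arp_table.get(target_ip)
        return ("reply", mac) if mac else ("flood", None)
```

With the table pre-filled, the "flood" branch is never taken for known VMs, which is exactly the broadcast traffic we wanted to eliminate.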
Now we're going to do a very shallow deep dive into the flow tables, the logic, the tunneling "language". I'm going to be talking about Open vSwitch specifically, because that's what I know; I'm not going to talk about something I don't know.

In Open vSwitch, each bridge, each instance of an OVS virtual switch, can operate in two modes. The first mode is just "normal", which is the learning switch we talked about earlier: learning MAC addresses and all that. The other mode is called flow mode, which basically means I can manually control the logic of that virtual switch: I can put in rules which dictate how the switch works. And it's exclusive, either/or: a bridge is either in normal mode or in flow mode. About these flows: you can put new flows into a virtual switch in two different ways. You can do it locally via the command line, with the ovs-ofctl tool: you create new flows and manually put them into the virtual switch, and of course you could also send those same commands over SSH. Or you can use a protocol like OpenFlow: with OpenFlow you can have a centralized node which sends OpenFlow messages to all of the virtual switches, and since the virtual switches all speak the same protocol, they can configure themselves; they can install flows into their own tables. In Neutron, what we do right now is that we have a piece of software, the Neutron Open vSwitch agent. The agent sits locally on every compute node and receives management traffic, like "bring up a new VM" or "we've got a new virtual network interface". Whenever this piece of software receives management traffic, it issues local OpenFlow control messages: it configures its own node according to the management traffic received from the Neutron controller.
So what are these flows? We're basically taking rules, flows, and manually configuring each virtual switch with them. A flow comprises two parts: a match part and an action part. The match part decides, out of a bunch of different flows, a bunch of different rules, whether the first flow should match the incoming packet, or the second, or the third. I can match on basically all of the headers in layers 2 through 4: the ports, the IP addresses, the MAC addresses, all that sort of stuff. OK, so a specific flow matched, or caught, a packet; now what should the flow actually do? That's the action part. The action part can change headers, so I can do things like NAT: change the source IP address, change the destination IP address, change ports, all sorts of stuff. I can take the message and forward it to a specific port, to all ports, or to some ports, so I can obviously do broadcast. I can drop, I can filter messages, so I could basically implement a firewall. I can learn new flows on the fly: I can install a flow that matches packets and, according to those packets, generates new flows and puts them back into the table (we'll talk about that in a minute). And I can resubmit to another table.
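A toy model of the match/action split: flows as match/action pairs evaluated by priority. This mirrors the OpenFlow concepts described above, not the actual ovs-ofctl syntax:

```python
class Flow:
    """One rule: a match (header fields) and an action, with a priority."""

    def __init__(self, priority, match, action):
        self.priority, self.match, self.action = priority, match, action

    def matches(self, packet):
        # A flow matches if every field it cares about equals the packet's.
        return all(packet.get(k) == v for k, v in self.match.items())

def apply_table(flows, packet):
    """Highest-priority matching flow wins; no match means drop (no controller)."""
    for flow in sorted(flows, key=lambda f: -f.priority):
        if flow.matches(packet):
            return flow.action
    return "drop"
```

An empty match dict acts as a catch-all, which is how low-priority "resubmit to another table" defaults are usually expressed.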
different tables that all belong to the same virtual switch 10 just a convenient way to manage lots of flows so all of the
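To make the match/action split concrete, here is a small simulation in Python (a sketch of the concept, not OVS itself; field names and values are made up): one flow matches on headers, rewrites the destination IP NAT-style, and forwards to a port.

```python
# Minimal simulation of a flow: a match part and an action part.
# Packets are plain dicts of header fields; this is a concept sketch.

def matches(match, packet):
    # A flow matches when every field it names agrees with the packet.
    return all(packet.get(k) == v for k, v in match.items())

def apply_actions(actions, packet):
    out_ports = []
    for action, arg in actions:
        if action == "set_field":        # e.g. NAT: rewrite a header
            field, value = arg
            packet[field] = value
        elif action == "output":         # forward to a specific port
            out_ports.append(arg)
        elif action == "drop":           # filter: firewall-style
            return []
    return out_ports

flow = {
    "match": {"ip_dst": "203.0.113.7", "tcp_dst": 80},
    "actions": [("set_field", ("ip_dst", "10.0.0.5")), ("output", 2)],
}

pkt = {"ip_dst": "203.0.113.7", "tcp_dst": 80,
       "eth_dst": "fa:16:3e:00:00:01"}
if matches(flow["match"], pkt):
    ports = apply_actions(flow["actions"], pkt)
```

After running this, the packet's destination IP has been rewritten and the packet is headed out port 2, which is exactly the two-step shape (match, then act) that every flow follows.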
All messages start at the first table, which is of course table 0; we always count from 0. From table 0, messages can be resubmitted to any of the other tables. Each table holds a group of flows, and the flows are processed according to their priority, which is configured on each flow. If, for example, a message comes in on table 0 and the first flow doesn't match, the second flow doesn't match, and the final flow doesn't match either, then what happens? The message is either dropped, or sent to an SDN controller if one is configured; with regular Neutron there isn't one, so the message is dropped. That's the default.
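That priority-ordered lookup with a drop default can be sketched as follows (a simulation, not OVS internals; the example flows are invented):

```python
# A table is just a group of flows, scanned in priority order.
# If nothing matches, the default (with no SDN controller) is to drop.

def matches(match, packet):
    return all(packet.get(k) == v for k, v in match.items())

def lookup(table, packet):
    for flow in sorted(table, key=lambda f: -f["priority"]):
        if matches(flow["match"], packet):
            return flow["actions"]
    return [("drop", None)]    # default: drop (or punt to a controller)

table0 = [
    {"priority": 10, "match": {"in_port": 1}, "actions": [("resubmit", 1)]},
    {"priority": 10, "match": {"in_port": 7}, "actions": [("resubmit", 2)]},
]

hit = lookup(table0, {"in_port": 1})    # -> [("resubmit", 1)]
miss = lookup(table0, {"in_port": 99})  # -> [("drop", None)]
```

Note the resubmit actions: that is how a packet moves from table 0 into the rest of the pipeline.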
So, logically speaking, how do we use these different tables? How does the logic work? Take traffic coming in from a VM on the local compute node. The VMs are connected to the integration bridge (br-int), and from there the traffic enters the tunneling bridge (br-tun), which has a bunch of different tables. Table 0, as we said, is the first table; there we basically say: OK, this is traffic coming in from the integration bridge, so resubmit it to table 1. Table 1 classifies unicast traffic versus non-unicast traffic, which is basically multicast and broadcast. Unicast traffic reaches table 20. Table 20 has a bunch of flows; it's the unicast table, and it should basically tell me whether to forward this packet to the first tunnel, the second tunnel, the third tunnel, and so on, depending on who I might be speaking to. Either I know the destination MAC address, so I forward it out a specific tunnel port, or I don't know it, because that MAC address hasn't spoken to me yet, so I forward it to table 21. Table 21 handles multicast, broadcast, and unknown destination MAC addresses: I basically forward the packet to "all" of the tunnel devices. Why the quotes? Because of the broadcast optimization we talked about earlier: if that's enabled, I don't need to send the packet to all of the tunnels, just to some of them.

So what about the other way around, when a packet is coming in from a tunnel and not from a local VM? It comes in on table 0, and since it's traffic coming in from a tunnel and not the integration bridge, it goes to table 2. Table 2 basically takes the tunnel headers, throws them away, adds the local VLAN tag, and then forwards to table 10. Table 10 looks at the source MAC address and does the learning part we talked about previously: it looks at the source MAC address and the tunnel this packet came in on (both the tunnel ID and the actual tunnel port itself) and populates table 20, which was this one,
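The two directions just walked through can be compressed into a toy model (table numbers as in the talk; my own illustrative code, not the Neutron agent's): known unicast MACs go out one specific tunnel, unknown destinations flood, and traffic arriving from a tunnel teaches the unicast table.

```python
# Toy model of the br-tun logic: known unicast MACs go out a specific
# tunnel (table 20); unknown/broadcast floods to all tunnels (table 21);
# traffic arriving *from* a tunnel populates table 20 (table 10's job).

TUNNELS = ["gre-1", "gre-2", "gre-3"]
ucast_to_tun = {}          # "table 20": dst MAC -> (tunnel port, tunnel id)

def from_vm(pkt):
    """Tables 0 -> 1 -> 20/21 for traffic entering from br-int."""
    if pkt["eth_dst"] in ucast_to_tun:            # known unicast
        return [ucast_to_tun[pkt["eth_dst"]][0]]
    return list(TUNNELS)                          # flood ("table 21")

def from_tunnel(pkt, tunnel, tun_id):
    """Tables 0 -> 2 -> 10: strip tunnel header, learn, hand to br-int."""
    ucast_to_tun[pkt["eth_src"]] = (tunnel, tun_id)   # the learning step
    return "patch-int"

from_tunnel({"eth_src": "fa:16:3e:aa:bb:cc"}, "gre-2", 1234)
# A later packet *to* that MAC now uses only the learned tunnel:
out = from_vm({"eth_dst": "fa:16:3e:aa:bb:cc"})
```

This is the same learning-switch idea from earlier in the talk, just applied to tunnel ports instead of physical ports.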
that's the unicast table. And then finally the packet is forwarded to the integration bridge. So again: we drop the tunnel headers, all of the GRE or VXLAN stuff, and we convert the tunnel ID to the VLAN tag, because, as we said, the local br-int bridges, the virtual switches, are still using VLANs to mark specific networks. So that's kind of the deal there.
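That translation between the globally meaningful tunnel ID on the wire and the locally significant VLAN tag on each node is essentially a per-node lookup in both directions; a minimal sketch (the IDs are invented):

```python
# Each compute node keeps its own mapping between a network's global
# tunnel ID (segmentation ID) and the VLAN tag used only on that node.
# The same network may map to different local VLANs on different nodes.

local_vlan_of = {1234: 1, 5678: 2}                    # tun_id -> local vlan
tun_id_of = {v: k for k, v in local_vlan_of.items()}  # local vlan -> tun_id

def ingress(tun_id):
    """Tunnel -> VM direction: drop tunnel header, add local VLAN tag."""
    return local_vlan_of[tun_id]

def egress(local_vlan):
    """VM -> tunnel direction: drop VLAN tag, set the tunnel ID."""
    return tun_id_of[local_vlan]
```

Because the mapping is private to each node, no cross-node VLAN coordination is needed; only the tunnel ID has to agree everywhere.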
Just a more-information part to finish with, starting with the shameless plug: I have a blog which has this entire thing written up in prose, so if you didn't catch something I said, you can just read everything there, with examples and that sort of stuff. And Scott Lowe has an amazing blog about everything networking. There's also a bunch of different commands you can use if you want to reverse-engineer this and see how everything works: you can use ovs-vsctl show to see all of the different bridges and ports (what are br-int and br-tun actually connected to?), and you can use ovs-ofctl dump-flows to see the flow tables, the things we've been discussing, written out as actual tables. And that's about it. Any questions?

[Q: Is there a limit on the number of tunnels?] I imagine there's a limit; I'm not familiar with it, but it's not a problem in practice.

[Q: Are the VLAN IDs consistent between different compute nodes?] We're using tunnels between the different compute nodes, so the local VLANs are actually only locally significant. You only need to tag networks locally; you don't need to care that a network gets the same VLAN on two different compute nodes, you just need to differentiate between different networks on each node.

[Q: Isn't the number of VLANs limited?] The number of VLANs is 2 to the power of 12, because that limit comes from the size of the VLAN tag. But in this specific case we're not actually hitting that limit: with tunnels we're only using access ports, so the limit would theoretically just be what Open vSwitch can support. If the concern is the number of different networks, well, if you have 2^12 networks, then all the power to you. And yes, that can be handled via a tag inside of a tag, that sort of stuff.

[Q: What's the performance hit of these tunnels in real life?] We've had a bunch of guys running tests for a few months now. Historically we were running VLAN tags inside of the tunnel, and we kind of forgot to remove the VLAN tag, and that had a drastic, catastrophic effect; we fixed that, and now it's nearly the same as with VLANs. But tunneling doesn't have hardware offloading, which is a problem right now, because the physical network interface cards can do hardware offloading for the VLAN stuff at the hardware level, and the tunneling stuff isn't supported in hardware yet; it will be in a couple of years. As for actual numbers, I'm not the guy for that, but it's close; that's what I was told. Also, we are adding additional headers, the GRE or VXLAN headers, so there's an MTU consideration: in any kind of real deployment you would use jumbo frames, an MTU of 9000, which completely solves that. And there are actually upstream patches in Neutron right now to help with that jumbo-frames stuff, to increase performance.

[Q: Doesn't the learning approach mean flooding unknown traffic? Why choose it?] What we actually chose is because the synchronization of the forwarding table basically only has to happen whenever you're bringing up VMs, whereas the other solution would be to just answer the ARP requests, which happens at a much higher rate; that depends not on the number of VMs but on the amount of traffic. So it's a very worthwhile optimization, absolutely.

All right. My name is Assaf; I'll be at the oVirt stand, which is near the OpenStack stand. I'm a total geek for this stuff, so I'll be happy to talk about networking and tunnels. Thank you.
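On the MTU point: encapsulation steals bytes from every frame, which is why jumbo frames matter. A back-of-the-envelope check (base IPv4 header sizes, no options; a GRE key field or outer VLAN tag would add a few more bytes, so treat these as typical figures, not exact ones):

```python
# Rough per-packet encapsulation overhead between the outer frame and
# the inner IP packet (base headers only):
#   GRE:   20 (outer IPv4) + 4 (GRE)             + 14 (inner Ethernet) = 38
#   VXLAN: 20 (outer IPv4) + 8 (UDP) + 8 (VXLAN) + 14 (inner Ethernet) = 50
OVERHEAD = {"gre": 20 + 4 + 14, "vxlan": 20 + 8 + 8 + 14}

def inner_ip_mtu(phys_mtu, encap):
    """Largest inner IP packet that still fits in one outer frame."""
    return phys_mtu - OVERHEAD[encap]
```

With a standard 1500-byte physical MTU, VXLAN leaves 1450 bytes for the tenant's IP packets; with 9000-byte jumbo frames the overhead becomes negligible, which is the "completely solves that" above.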