Cilium & BPF - The Future of Linux Networking and Security
Formal Metadata

Title: Cilium & BPF - The Future of Linux Networking and Security
Series: openSUSE Conference 2019, talk 36 of 40
Author: Thomas Graf
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/54401 (DOI)
Language: English
Transcript: English (auto-generated)
00:06
Good morning, or Guten Morgen. My name is Thomas Graf. I'm one of the founders of the Cilium project and the co-founder and CTO of the company behind it, which is called Isovalent. Today, I'm here to talk about
00:21
Cilium and BPF and why we believe it is the future of networking and security. My background is very Linux-specific: I've been a kernel developer for about 15 years, not for SUSE (I was working for Red Hat for 10 years), but obviously we're all friends.
00:40
So, let me grab this slide presenter. I would like to introduce to you why we started with Cilium, and for that I would like to give you some background. Even before I started working in computers, computers were a thing, and the age that I'm about to present I didn't even experience myself.
01:02
But I would like to walk you through how we have been running applications over the last 20-plus years. In the very beginning there was this dark age where we had single-tasking, right? The CPU was not even shared. This I did not experience, I was not in computers when this happened, but we were already running applications or code.
01:24
We went into a phase where we introduced multitasking, and all of a sudden the CPU and memory were shared, but the application would still run directly and consume CPU, memory, and so on. This was the age when Linux distributions started popping up: SUSE got started, Red Hat got started, and so on.
01:42
We then entered the age of virtualization. We figured: I don't actually want to deploy my application on a server and install it; I would like to virtualize this, run VMs, and run many applications on a particular server, but inside of a VM. At this point we started virtualizing literally everything: we had virtual routers, virtual switches, virtual storage. Everything we had before was done again,
02:07
but a V was put in front of it. What we're going through right now is that we're coming back: we're moving out of VMs again, and we're running applications directly, consuming Linux APIs again. So applications, as containers,
02:23
are consuming Linux system call APIs again, and we're making applications share the operating system. So we're kind of going back to the multitasking age in some way, and this change back is why we started Cilium, because most of the infrastructure tooling we have today was actually written for the
02:42
virtualization age, where we would typically serve network packets or storage for virtual machines and not for applications specifically. So, what does that mean? How does the Linux kernel cope with this new age of microservices in a cloud native world? Let's take a look at some of the problems that arise when we run microservices or containers on Linux.
03:04
First of all, the Linux kernel basically consists of a ton of abstractions that have been introduced over the years. I'm listing a couple here; there are many, many more. We have the driver level, on top of that the network device level with, for example, traffic shaping built on top, then routing, iptables
03:22
filtering, and then we have sockets with the different protocol layers. We cannot actually bypass many of those; we're forced to consume each of them in the right order, and over the years we have accumulated a lot of code in the Linux kernel. This definitely increases the chance that you hit, for example, a performance penalty, some of which we would actually like to get rid of.
03:44
In the last couple of years we've seen some of the complexity move to user space for this reason, because not everybody was willing to pay this cost. We identified this and said: this is actually not ideal. Let's find a solution where we can work with the existing abstractions but
04:00
bypass them when necessary; we'll go into the details. Another thing, and this is kind of the Unix way of doing things, is that every single subsystem in the Linux kernel has its own API, right? So we don't have one big tool to control everything; every single subsystem is controlled by a separate tool.
04:23
To be a bit networking-specific: we have ethtool, we have ip, we have ifconfig, we have seccomp, we have iptables, we have tc, we have tcpdump, we have brctl, we have ovs-vsctl, and so on, a wide array of tools, and users have to consume every single tool. And a user is not necessarily an actual
04:42
human; it could be an automated tool that controls the system. All of these tools are calling these APIs, and it is becoming very difficult to actually orchestrate all of them together. A very specific example: if you have five or six tools on your machines, on your nodes, all consuming iptables and trying to install iptables rules, those rules can actually conflict with each other.
05:06
The last example that makes it difficult is that cloud native computing requires that the operating system continues evolving, because it now again consumes the operating system in a very native way.
05:22
The Linux kernel development process has some good sides and some bad sides. The good sides: it's an open and transparent process, which is probably the biggest benefit of Linux, that it's completely open. Excellent code quality, at least we think so.
05:41
It's very stable, because a lot of people are running it and it has been stabilized over many years. It's available everywhere, it literally runs on every piece of hardware, and it's almost entirely vendor neutral. But then there are some bad things as well. My slide pointer is a bit slow here, that's why I'm struggling a bit.
06:03
It's really, really hard to change. Getting a Linux kernel change in literally takes weeks or months: from intent to implementation to getting a change merged takes weeks, and then it takes months or years until that change actually makes it down to the users. So once we have identified a need for a change, it takes us years to actually get that to the end user
06:23
for consumption. This is why we see most of the tooling that we build consuming very old APIs. Cloud native computing tooling is currently built on, for example, iptables, which was built 25 years ago. It was not intended for this at all, but we're really struggling to do something else, because it's so hard to change the kernel and make that change available
06:45
to users quickly. It's a very large and complicated code base, and this is simply because of backwards compatibility. We are never actually removing code; we're only adding, adding, adding, and then everything we ever added we have to support for the next
07:01
however many years; we're never actually removing anything ever again. Upstreaming code is hard, not just from a complexity perspective but also from a consensus-finding perspective: for everything we change, pretty much everybody has to agree to it. This makes it hard and time-consuming, of course, and then,
07:22
yeah, I already talked about this, it can take years to become available. So these are some of the problems we have been struggling with. And then the last one: the kernel doesn't actually know what a container is, or what the base
07:46
unit of an application is at this point. So let's figure out what the kernel actually knows and what it doesn't know. The kernel knows about processes and it knows about thread groups, right? It doesn't actually know specifically what an application is. It knows about cgroups; containers are consuming cgroups.
08:06
It has limits: it can do accounting, it can limit the CPU, it can limit memory, it can limit the network. These cgroups are typically configured by the container runtime. It knows about namespaces; this is where the confusion, or the assumption, comes from that containers are some sort of isolation,
08:25
but literally all this is, is that the kernel will namespace certain data structures, and for example have multiple network namespaces or multiple user namespaces or multiple mount namespaces and so on. It still doesn't actually know what a container is; all it knows is that it has multiple namespaces for data structures.
08:47
It knows about IP addresses and port numbers, which are configured by the container networking, and it knows about the system calls made and it knows about the SELinux context. This is pretty much what the kernel knows about; it does not actually know that I'm running this particular container.
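As a small illustration of that point (a sketch, not from the talk): from the kernel's side, "creating a container" is little more than a process asking for new namespaces, for example via unshare(2):

```c
/* Minimal sketch (not from the talk): a "container" at the kernel level is
 * just a process placed into new namespaces, plus cgroup limits configured
 * elsewhere. Requires root/CAP_SYS_ADMIN to run. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Ask the kernel to namespace the network and hostname data
     * structures for this process; no "container" object is created. */
    if (unshare(CLONE_NEWNET | CLONE_NEWUTS) != 0) {
        perror("unshare");
        return 1;
    }
    /* This process now has its own (empty) network namespace:
     * its own interfaces, routes, and iptables rules. */
    printf("pid %d now has private net + uts namespaces\n", getpid());
    return 0;
}
```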
09:07
So, examples of things the kernel has no clue about: the kernel does not know what Kubernetes is, the kernel does not know what a Kubernetes pod is, the kernel does not know what the container ID is. No clue. The kernel does not know what
09:22
the application actually would like to run. So if you're running a Kubernetes pod which consists of multiple containers, the kernel does not know that these containers are actually supposed to work together. All of these things make the kernel struggle to provide a good application framework, because there's no
09:40
concept, no native concept such as a container, in the kernel. It only provides the tooling, the instruments, and the container runtime on top uses that. So, what do we do? Containers are clearly a thing, and containers are winning. So what do we do?
10:01
We have a couple of options. We can give the hardware away to user space and rewrite everything from scratch there; we've seen that, examples would be DPDK and similar user-space I/O frameworks. Typically this has been done for performance, not for functionality needs.
10:20
Another alternative would be unikernels, right? We can start rewriting kernel subsystems as unikernels and have applications consume their own pieces of the operating system, only consuming what they actually need. We can move the entire operating system to user space; user-mode Linux has been a thing,
10:41
so it has been tried, and some people are using it. Or we can decide to rewrite the entire Linux kernel, which is probably a hard task and quite expensive; the calculation up on the slide is very old, and it's probably way more expensive to actually do it. But this is an option that we could follow.
11:03
Come on... So this is the background: it's clearly not a perfect fit. Let's look at how we could do it better, and in order to understand BPF, which is what we're using, we need to understand what the kernel actually does.
11:20
It's fundamentally an event-driven program. We have interrupts coming from the hardware side, and we have system calls coming from applications and processes, and the kernel will execute code based on these events. That's fundamentally what the kernel does; there's not much more that it actually does. So, it takes about a minute, or ten seconds, to go to the next slide...
11:45
So what is BPF? BPF builds on this base assumption that everything is event-driven, and it makes the Linux kernel programmable. It introduces what we call a highly efficient in-kernel virtual machine, which means that we have a sandbox concept where we can run code in a safe and efficient manner
12:06
every time certain events are being handled or popping up inside of the Linux kernel. We'll look at a couple of examples on the next slide. We can run a BPF program every time a system call is made, or
12:20
we can run a BPF program every time a block I/O device is accessed. We can run a BPF program every time a network packet is received or sent. We can call it for every tracepoint, so we can call it for example when a TCP retransmission event happens. We can call it for kernel probes (kprobes), so for arbitrary kernel functions, and even for user-space application functions with uprobes.
12:42
So you can run a BPF program when your application code calls a particular function. Wow. So we can extend and program the Linux kernel with arbitrary additional logic when certain events happen. This is the promise of BPF, and this is why so many people are excited about it. BPF in the wild:
13:02
the slide seems to struggle to load some of the logos. The first example on the top left is Facebook. Facebook is a heavy, heavy user of BPF: all infrastructure load balancing and DDoS mitigation is done in BPF today. Second example:
13:20
Google: QoS, traffic optimization, network security, profiling. We don't know that much about this, because they're consuming BPF in its raw form to do all of these things but don't tell the world a lot about it. You can find information at some conferences where they do talks, but typically they're not broadcasting everything publicly.
13:41
Then SUSE: SUSE is using BPF, via Cilium, to do networking, advanced security, load balancing, and traffic optimization. Cloudflare is using BPF to do DDoS mitigation. Sysdig Falco is using BPF for container runtime and behavioral security profiling. Red Hat is using BPF for profiling and tracing, and they're working on an iptables replacement upstream.
14:06
Then of course Cilium, which we'll talk about next. And then even Chrome is using BPF: when you have Chrome plugins and you run them, BPF (seccomp-BPF) is used to isolate the plugins and make sure they can only execute certain system calls.
14:21
So all of you are already heavily using BPF, but so far it has been well hidden; it's kind of a kernel-level implementation detail. Now it's coming up the stack. So, what does BPF look like? It's a virtual machine.
14:40
What does that mean in practice? I can write a program like this simple example and say: this program runs when the exec system call is executed and returns. In this example I'm collecting some samples, for example measuring how many of those system calls I'm making, but I could actually make this program more complex
15:03
and for example say: no, you are not allowed to make this system call. Or I could modify the system call arguments. So I have a lot of flexibility in what I can do. This is a very simplistic example that shows you what you can do. I will do a very quick introduction of what you can do with BPF.
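The code on the slide isn't captured in this transcript, but a minimal sketch of such a program, in libbpf-style restricted C, might look like this (the map and function names are mine, not from the talk):

```c
/* Sketch only: count execve() calls per PID in a BPF hash map.
 * Compile with: clang -O2 -g -target bpf -c exec_count.bpf.c */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);   /* PID */
    __type(value, __u64); /* number of execve() calls seen */
} exec_count SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int count_execve(void *ctx)
{
    __u32 pid = bpf_get_current_pid_tgid() >> 32;
    __u64 one = 1, *val;

    /* Verifier-checked map access; no arbitrary kernel memory here. */
    val = bpf_map_lookup_elem(&exec_count, &pid);
    if (val)
        __sync_fetch_and_add(val, 1);
    else
        bpf_map_update_elem(&exec_count, &pid, &one, BPF_ANY);
    return 0;
}

char _license[] SEC("license") = "GPL";
```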
15:25
In a nutshell: you write code in a restricted C, you compile that, you load that into the Linux kernel, the Linux kernel will verify that the program is safe, it will JIT-compile it (we'll talk about that later), and then run it. In order for these programs to communicate with the outside world,
15:41
which would be user space, you can use BPF maps, which are data structures that can be accessed from both BPF programs and also user space. This is how you can expose, for example, data that you have gathered to a user-space process. There are many types of BPF maps: hash tables, arrays, perf ring buffers, and so on.
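As a sketch of that user-space side, assuming the exec_count object from the earlier example and plain libbpf calls:

```c
/* Sketch: load the BPF object above and dump its map from user space. */
#include <stdio.h>
#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
    struct bpf_object *obj = bpf_object__open_file("exec_count.bpf.o", NULL);
    if (!obj || bpf_object__load(obj))
        return 1;

    /* Attach the tracepoint program based on its SEC() annotation. */
    struct bpf_program *prog =
        bpf_object__find_program_by_name(obj, "count_execve");
    if (!prog || !bpf_program__attach(prog))
        return 1;

    int map_fd = bpf_object__find_map_fd_by_name(obj, "exec_count");
    sleep(5); /* let some exec events accumulate */

    /* Walk the shared map: the same data the BPF program updated in-kernel. */
    __u32 key, next, *prev = NULL;
    __u64 count;
    while (bpf_map_get_next_key(map_fd, prev, &next) == 0) {
        if (bpf_map_lookup_elem(map_fd, &next, &count) == 0)
            printf("pid %u: %llu execve calls\n", next,
                   (unsigned long long)count);
        key = next;
        prev = &key;
    }
    return 0;
}
```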
16:06
We can call BPF helpers; BPF helpers allow BPF programs to interact with the Linux kernel, so not everything has to be done natively in BPF bytecode. You can actually call kernel helpers, for example to change content in a network packet or to redirect the packet to another network device, and so on.
16:26
All of this is done by BPF helpers. We can do tail calls, so we can call other BPF programs; it's similar to function calls. We can use a JIT compiler, which means we ship generic bytecode which runs on any architecture, and the JIT compiler in the Linux kernel will then automatically compile that into
16:47
x86, into ARM, into PPC, whatever, so it will run at native execution speed.
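A minimal sketch of such a tail call (illustrative names; the jump table slots would be populated from user space):

```c
/* Sketch: tail-call dispatch via a program array map. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
    __uint(max_entries, 4);
    __type(key, __u32);
    __type(value, __u32);
} jump_table SEC(".maps");

SEC("xdp")
int dispatcher(struct xdp_md *ctx)
{
    /* Jump to whatever program user space installed in slot 0.
     * On success execution does not return here: tail calls
     * replace the current program rather than nesting. */
    bpf_tail_call(ctx, &jump_table, 0);

    /* Fallback when slot 0 is empty. */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```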
17:02
This is a snapshot of the BPF contributors list, to understand who is behind BPF; there are many, many companies behind it. It is maintained by two main engineers, Daniel Borkmann and Alexei Starovoitov. Daniel is working on Cilium with us, Alexei is working for Facebook. You can see contributions from Red Hat, Netronome, Facebook, Cloudflare, from us, and so on, so it's not a Cilium-specific implementation in any way.
17:21
It is widely, widely supported. Who uses BPF? Well, Facebook is probably the most prominent example; I think they started at wild scale first. In 2018, one of their traffic engineers gave a conference talk and basically said: every single packet into a Facebook data center since May 2017 has been going through a BPF program.
17:45
And the world was kind of: wow. Nobody had any clue that they were using this in production for so long. So let's transition into Cilium. I talked about BPF and it sounds exciting, right? But who wants to write low-level C code, or actually write these programs?
18:02
This is why we saw this incredible potential of BPF and figured: how can we apply this to the cloud native world? How can we apply this to Docker or Kubernetes and so on? And this is why we created Cilium. Cilium is an open source project, Apache licensed, and it provides networking,
18:22
security, and load balancing for the cloud native world. I will dive into several examples. A very simple one is Kubernetes networking, via what's called CNI. In this simple model we simply provide networking for Kubernetes. So if you run containers, if you run pods in Kubernetes,
18:41
Cilium will do all of the routing, all of the networking for these pods, and ensure that pods can talk to each other. We implement Kubernetes services. Kubernetes services are a way to make applications scalable and give them a virtual IP, or service IP, so you can reach many replicas of the same container
19:01
behind one single IP; this is how you can make your services highly available. Cilium with BPF can provide an implementation which scales better. The main reason it scales better than the traditional iptables model is that iptables is a linear list of rules: you literally scan through the list of rules until you find a matching entry and then execute it. The BPF implementation instead uses a scalable hash table, which is simply faster and better.
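As an illustrative sketch of the difference (not Cilium's actual datapath; the map layout and names are invented), a BPF hash map makes the service lookup a single O(1) operation regardless of how many services exist, where an iptables chain would be walked rule by rule:

```c
/* Sketch: O(1) service-VIP lookup in XDP, instead of a linear rule scan.
 * A real load balancer would store and select backends; we just count hits. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 65536);
    __type(key, __be32);  /* service virtual IP */
    __type(value, __u64); /* packets seen for this service */
} services SEC(".maps");

SEC("xdp")
int service_lookup(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    /* One hash lookup, independent of the number of services. */
    __u64 *hits = bpf_map_lookup_elem(&services, &ip->daddr);
    if (hits)
        __sync_fetch_and_add(hits, 1);
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```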
19:23
We can do Cluster Mesh, so we can connect multiple clusters together, not only on the networking level; we can also do service load balancing across multiple clusters. So, for example, say that this service should be highly available:
19:43
I will distribute or deploy it over multiple clusters and have Cilium do the load balancing, so that when all the replicas in one cluster fail, it will automatically fail over. You can define service affinity and say it should always prefer a local replica first, and if no local replicas are available, move over.
20:01
So we can connect multiple clusters together. We can do identity-based security. What does that mean? Very simple: typically, firewalls used to work on IP addresses, so you would directly configure the firewall to say allow from this IP, allow to this IP, or allow this subnet. What we're doing is a bit more modern:
20:21
we're actually giving an identity to every service, to every container, and we're encoding the identity in all communication, in all packets that are being emitted; you can see this in the yellow box here. Then when we receive those packets, we can actually authenticate and validate the identity of the sending container. This is more secure and much more scalable.
20:44
We can do API-aware authorization. What does that mean? Again, it's kind of a step from the VM age into the container age, because typically we would have done something like this: we would allow an L3 firewall rule, or say this service can talk to this service, or this container can talk to this container, and
21:02
typically we would do this based on IP addresses or container names or pod labels. You then say: okay, I want to be a bit more fine-grained and lock it down to a particular port; let's say you can only talk on port 80. But this is still a problem in this new cloud native age, because everybody's using gRPC, REST APIs, and so on.
21:20
So literally, as you open up, let's say, port 80, you open up your entire REST API. What we can do is lock it down and say: yes, you can talk on port 80, but you can only do a GET to /foo, and everything else is blocked. So if you do a PUT to /bar, we will block it automatically. That's a cloud native, container-aware, API-aware firewall.
21:41
This is what we believe is necessary for this new age that is coming up. To give you a simple example: we support many protocols. HTTP is obviously one, but Cassandra is another one, and you can go as deep as saying: hey, I actually want this container to be able to talk to my Cassandra cluster, but it should only be able to do a SELECT, and only on this table. So no INSERTs, no UPDATEs, and you cannot access any other table.
22:03
So you can really start locking it down, and this is absolutely fundamental in the age of containers and microservices, because you will have many services talking to shared resources: Cassandra, Kafka, Redis, memcached. All of them will be shared, and you need security to actually lock this down properly.
22:23
Going deeper: you will have services that talk to the outside of your cluster. It's not just service-to-service communication. You might have a service that is talking, let's say, to SUSE.DE. How do you secure this? SUSE.DE may only be backed by a couple of dozen IPs or something like this, but as you start talking to something like AWS
22:42
S3 or drive.google.com, these services are literally backed by thousands of IP addresses, and there's no way you can whitelist that based on IPs; there's not even a known subnet that would represent that service. So how do you specify security that allows the service to talk to S3 or to drive.google.com, but not to anything else?
23:03
In this case, we're using DNS-aware policy. A simple example: there's a front-end service and it's doing an HTTP request to SUSE.DE. Obviously it would do a DNS request first, so it would resolve SUSE.DE, and in this case, in the case of Kubernetes,
23:23
the DNS server would return and say: hey, this is the IP address of SUSE.DE. With Cilium, we can define a policy that says: you can talk, but you can only talk to something that resolves to *.SUSE.DE. Cilium with BPF will come in, look at the DNS communication, and record
23:44
the IP that was returned by the DNS server, and then only whitelist that particular IP. So it's not polling or trying to look up all the possible IPs of the DNS name; it's actually looking at what the DNS server responded and then only allowing that communication. That's another example of the cloud native security that we need.
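As a purely conceptual sketch of the enforcement side (Cilium's actual implementation observes DNS through a proxy and manages this state itself; all names here are invented), the datapath only needs one map lookup per packet once a DNS-observing component has recorded the resolved IPs:

```c
/* Sketch: drop egress packets unless the destination IP was previously
 * allowlisted by a component that saw it resolve for *.suse.de. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 16384);
    __type(key, __be32);  /* allowed destination IP */
    __type(value, __u8);  /* presence flag */
} allowed_ips SEC(".maps");

SEC("tc")
int egress_allowlist(struct __sk_buff *skb)
{
    void *data = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return TC_ACT_OK;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return TC_ACT_OK;

    /* Only IPs the DNS observer recorded may be contacted. */
    if (bpf_map_lookup_elem(&allowed_ips, &ip->daddr))
        return TC_ACT_OK;
    return TC_ACT_SHOT;
}

char _license[] SEC("license") = "GPL";
```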
24:05
Then we can do fancy stuff. Who knows about service mesh? A couple of hands, great. So, service mesh, very briefly: the concept is that you're running a sidecar proxy in every Kubernetes pod, in every pod, and all the
24:20
communication between services is going through that sidecar proxy; it's basically getting proxied. This allows you to implement mutual TLS, retries, tracing, load balancing (for example path-based load balancing), canary releases, and so on. The downside is that this introduces a lot of overhead, because instead of having one connection between services,
24:41
you have a connection from service to proxy, proxy to proxy, and proxy to service, so from one to three. The memory consumption explodes, the latency explodes, and so on. But this sidecar proxy is always running on the same node, on the same machine, as the service. Why do TCP there? TCP was designed to survive a nuclear blast; why would we want to do TCP between two sockets on the same machine?
25:03
So what we do is recognize this connection: we see that both sockets, the socket of the application and the socket of the proxy, are on the same node, and we simply start copying the data between the sockets. This gives us something like a 3x performance increase; you can see it on the slides there.
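A heavily simplified sketch of that socket-level shortcut, using BPF sockmap and an sk_msg program (the real pairing of local sockets is done by additional sockops logic; the names here are illustrative, not Cilium's):

```c
/* Sketch: redirect payload directly between two local sockets, bypassing
 * the TCP stack. User space (or a sockops program) inserts the peer
 * socket at slot 0 of the sockmap. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_SOCKMAP);
    __uint(max_entries, 2);
    __type(key, __u32);
    __type(value, __u64);
} sock_map SEC(".maps");

SEC("sk_msg")
int fast_local_redirect(struct sk_msg_md *msg)
{
    /* Deliver the data straight into the peer socket's receive queue. */
    return bpf_msg_redirect_map(msg, &sock_map, 0, BPF_F_INGRESS);
}

char _license[] SEC("license") = "GPL";
```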
25:21
It's fantastic, all thanks to the power of BPF, which gives us this flexibility. Then, looking into the future, we can do something like transparent SSL visibility. Maybe some of you have heard about kTLS, kernel TLS. It was driven by some of the big providers of video streaming content: when they started enabling TLS, they really started to care about how
25:44
expensive it is to basically deliver that video with SSL encryption, and it turns out that offloading the SSL encryption from the application library into the Linux kernel gives us a three to four percent increase in performance. This is why kTLS has been done.
26:02
We can use kTLS to gain insights into the data that the application is sending, even if the application is using SSL encryption, and for example do the layer 7 or HTTP filtering even if the application is using SSL. If you want to learn more about this, there's a KubeCon talk from last year that goes into all of the details.
26:26
So, Cilium use cases: we kind of went through them, this is a summary. Cilium provides container networking, right? It's highly efficient; it's using the same techniques and the same methods as Facebook and Google and all the others are using internally. It can run in multiple modes: you can run it in routing mode,
26:44
you can do overlays, you can do cloud provider native modes. We support IPv4 and IPv6; in fact, we were IPv6-only for the first year. We tried to go really native and say everything will be IPv6 at some point. We can do multi-cluster routing. We can do service load balancing, really scalable:
27:01
we're not doing any L7, no path-based routing, but we're doing efficient L3/L4. We implement Kubernetes services, replacing kube-proxy. We can do service affinity. We can do cloud native security, all the examples we provided: identity-based, layer 7 aware, DNS aware, and so on. We can do encryption, so we can encrypt everything
27:24
transparently: you can basically turn it on and we will encrypt everything inside of a cluster and across clusters. And we can do the service mesh acceleration. All of these are key components to run services or containers in a very efficient and secure way on Linux. All of this we do as part of the Linux kernel,
27:42
which means it's all completely transparent to the application, because it basically looks like a property of the operating system. So, that's all the slides I had. I'm sure you guys have several questions; I think we have some time for questions. Yes, I will also repeat the question, so feel free to just shout.
28:17
The question is: does it support mutual TLS? Cilium itself does not do mutual TLS, but you can run Envoy, Istio, Linkerd, or anything else on top.
28:25
Cilium does support encryption and authentication, but we're not using TLS. We have a method where we can integrate with, for example, SPIFFE (SPIFFE is a service identity provider), but we will use IPsec in the Linux kernel to actually enforce it.
28:40
So you get the transparent authentication, but it's not mTLS specifically. Any more questions? All right, thank you very much. If you want to learn more, here are the links: Slack, GitHub, website, Twitter, and so on.