Kubernetes networking: is there a cheetah within your Calico?
Formal Metadata
Title: Kubernetes networking: is there a cheetah within your Calico?
Number of Parts: 287
License: CC Attribution 2.0 Belgium. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/56960 (DOI)
Transcript: English (auto-generated)
00:09
Hi, and welcome to this FOSDEM talk, Kubernetes Networking: Is there a Cheetah within your Calico? It's about even faster Kubernetes clusters with Calico, VPP, and memif.
00:21
I'm not actually the lead speaker today; that falls to Nathan Skrzypczak. He's a software engineer at Cisco and a Calico and VPP integration contributor. He's a biking and hiking enthusiast and even enjoys sea kayaking. We will get to him in just a couple of slides. In the meantime, my name's Chris Tomkins.
00:41
I'm a lead developer advocate at Tigera, the primary contributors to Project Calico. Today's obsession for me, I'm trying to learn Japanese on Duolingo, but I'm getting nowhere quick and I'm listening to lots of music. I especially enjoy Rusty. If you like music, check him out. My role is to champion user needs
01:00
and support Project Calico's users and contributors. I'd like to start by giving you a quick overview of Calico, how it works, and some of the lower level design decisions that the Calico team made that have helped to enable some really awesome work done by Alois and the VPP team at Cisco. We have a short talk today, so we'll need to be brief
01:21
in order to allow time for questions. Keep in mind that you can learn a great deal about Calico at projectcalico.org, and about VPP and its use of memif at fd.io. With that said, the Project Calico community develops and maintains Calico, an open source networking and network security solution for containers,
01:41
as well as virtual machines and native host-based workloads. Calico supports a broad range of platforms, including Kubernetes, OpenShift, Mirantis Kubernetes Engine, OpenStack, and bare metal. It's really battle tested and can operate at huge scale. It scales in lockstep with Kubernetes clusters without sacrificing performance.
02:03
It offers granular access controls, including a rich framework and security policy model for secure communication, and full Kubernetes network policy support; Calico is also the original reference implementation of Kubernetes network policy.
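To make the policy support concrete, here is a minimal, generic example of the kind of standard Kubernetes NetworkPolicy that Calico enforces. The names, namespace, and labels are purely illustrative and are not taken from the talk: the policy only allows pods labelled role: frontend to reach the backend pods on TCP port 8080.

```yaml
# Illustrative example only: names and labels are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # hypothetical policy name
  namespace: demo                # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      role: backend              # pods the policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend     # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Calico also provides its own richer policy resources on top of this, but a plain NetworkPolicy like the one above is the common starting point.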
02:22
The main benefit that we're building on for today's talk, though, is that Calico supports multiple data planes, including VPP, iptables, and eBPF, for the right fit across even heterogeneous environments. So whatever feature set you have available (or don't) on your cluster in terms of Linux kernel,
02:42
hardware support, and underlay physical network, we should have the data plane that gets you the best performance and features. I won't spend a long time talking about what a data plane is; after all, the audience here is full of network engineers. So you probably know that a control plane is responsible
03:01
for figuring out what's going on in the network, the consensus of high level things, such as routing. It's typically implemented on a general purpose CPU. It manages complex device and network configuration and state. The data plane is different. It's responsible for moving around traffic
03:21
and should be responsible for nothing else. Therefore it can make really good use of hardware acceleration features. It should be designed to be the simplest possible implementation of the required packet forwarding features. It implements a fast path for your traffic. And I like to give the success of MPLS
03:41
as an example of a great data plane. It has a lot of unnecessary functionality torn out of it, so it doesn't have things that IP has, such as variable-length subnet masks and checksums, and that leads to minimal processing per packet and fast, affordable devices.
04:03
So control plane and data plane separation achieves a lot of things. It achieves specialized minimal data plane code and a targeted data plane feature set. It achieves code reuse in the control plane and future proofing.
04:21
It means that the data plane can be very adaptable and it provides agility for the end user, because you can match the feature set to what you really need on your clusters. Calico offers several data planes.
04:42
The Linux iptables data plane, which is heavily battle tested, offers good performance, great compatibility, and wide support. We have the Windows Host Networking Service data plane, which allows Windows containers to be deployed and secured. And we have a Linux eBPF data plane, which scales to higher throughput
05:01
and uses less CPU per gigabit. It reduces first-packet latency for services and preserves external client source IP addresses all the way to the pod. It also supports direct server return for better efficiency. But today we'll be talking about a new data plane and the features it offers, especially around memif.
05:22
And that data plane is VPP. So I'll hand you over to Nathan to tell you more. Thanks, Chris. So first, a few words about VPP. It has been presented in many talks; you most probably have already seen one like this. So I won't spend too much time on it. But in short, VPP is a user-space network data plane which is highly optimized for packet processing
05:41
and at the API level as well. It relies on vectorization to provide a wide range of optimized L2 to L4 features, from NAT and tunnels to TCP and QUIC. It is also easily extensible through plugins, which is something we are leveraging a lot for the Calico integration. If you'd like to learn more, don't hesitate to go to fd.io; there are plenty of resources available out there.
06:04
So Chris did speak about data planes and the fact that Calico already supports a few of them. So the question is, how do we become one? That's what we asked ourselves when starting the Calico integration. In order to make this happen, we built a control plane agent running as a DaemonSet on all nodes.
06:21
And we registered it as one of the available data plane options. This agent is responsible for starting VPP, listening for Calico events, and programming the VPP data plane accordingly. We also built a couple of custom plugins with optimized implementations, doing NAT for service load balancing, implementing the Calico policy-specific logic, and so on.
06:43
We tweaked the VPP configuration to make it friendly to use in a container-oriented environment: using interrupt mode, for example, enabling running without huge pages, leveraging hardware and software offloads, and so on. With all this, we have all the bricks to run VPP-powered Kubernetes clusters.
07:01
So let's do that. Okay, but first, what happens under the hood? Essentially, what we do is swap the network logic that was previously happening in Linux over to VPP. Now, because VPP is a user-space stack, we have to do a few things differently compared to what was previously done by Linux. In order to insert VPP between the host and the network,
07:22
we will grab the host network interfaces specified in the configuration and consume them with the appropriate drivers. We then restore host connectivity by creating a tun interface in the host's root network namespace. We replicate the original uplink configuration on that interface, the addresses and the routes, so that things behave similarly from the host's standpoint.
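As a rough sketch of what "specified in the configuration" can look like in practice, the Calico VPP data plane is driven by a ConfigMap that names the uplink interface VPP should take over and how to consume it. The key names below are assumptions made for this example; check the Calico VPP documentation for the exact fields of the version you deploy.

```yaml
# Sketch only: key names are assumptions, not authoritative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-vpp-config          # hypothetical name
  namespace: calico-vpp-dataplane  # hypothetical namespace
data:
  vpp_dataplane_interface: eth1    # host uplink that VPP grabs at startup
  vpp_uplink_driver: ""            # empty means pick the best available driver
  service_prefix: 10.96.0.0/12     # the cluster's Kubernetes service CIDR
```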
07:44
Pods are connected just like the host, with a tun interface in each of the pods' namespaces, and the Calico control plane runs normally on the host and configures the data plane functions directly. Since we use tun interfaces and not veth pairs, we don't need to worry about layer 2 in the pods,
08:03
which better matches the Kubernetes model. But now you might ask, why do all this? What does this allow us to do? First, having the data plane in user space makes evolution easier to implement and deploy. So this allows us to add new functionalities, for example, Maglev load balancing for services,
08:21
or IPsec or SRv6. It also enables experimenting with the network model, for example, exploring how to expose multiple networks to a pod. But most importantly, regarding performance, with this we can look into optimizing both the network logic running in VPP and the way pods consume it.
08:41
That gives us two good areas to start optimizing how fast the Calico cat can run. Okay, so let's focus on performance. The first question is, what are we trying to optimize? The way applications usually consume packets is with socket APIs. It's quite standard, but you have to go through the kernel, and it's a code path which wasn't designed
09:01
for the performance levels of modern apps. That's actually why we came up with GSO as a network stack optimization. Here, as we have VPP running on the nodes, it would be nice to be able to somewhat seamlessly bypass the network stack and pass all packets directly to VPP without having to touch the kernel. That way, we might also spare a few copies on the path.
09:23
And to do that, fortunately, VPP provides two ways for an application to attach and consume packets without touching the kernel. The first ones we have are memory interfaces, or memifs, and they are a standard for exchanging packets over shared memory, with several highly optimized clients implemented.
09:40
You have ones in Go and in C, you have DPDK, and obviously VPP supports them. Basically, from the app standpoint, when using those clients, you get a handful of functions for receiving and sending packets, a bit like what you would do with AF_PACKET in Linux. The second way is VPP's host stack. It's a set of optimized L4 protocol implementations
10:00
living in VPP. We have TCP, UDP, TLS, and QUIC, and a few others available. And this allows VPP to terminate the connections and make the stream or datagram content available to the client app through a shared memory. This memory can then be consumed with a dedicated library called the VCL, the VPP Comms Library.
10:21
And similarly, when linking against this library, your app will be able to leverage connect, accept, receive, and send primitives talking directly to VPP. So those two methods allow us to build two consumption models. If requested, we expose a memory interface, a memif, in the pod with the same configuration
10:42
as its regular interface. This can then be leveraged, for example, by an application handling small UDP packets at high speed. It can do so with either gomemif, libmemif, DPDK, or maybe another VPP running in the pod. And we can also expose VPP's host stack in the pod, again if requested.
11:01
That way, an application handling TCP, TLS, or QUIC flows can, with the VCL library, connect or accept directly in VPP, bypassing the protocol implementation in Linux. All this is exposed with simple pod annotations, and it enables full user-space networking with zero copy from the app to VPP, while still being able to run regular services like DNS through the API, because we also keep the regular netdev configured in the pod.
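As a hedged sketch of what those annotations can look like, a pod might opt in to a memif and to the VCL roughly as below. The annotation keys follow the pattern used by the Calico VPP project, but treat them as illustrative and check the current documentation for the exact names and accepted values.

```yaml
# Sketch only: annotation keys and values are assumptions, not authoritative.
apiVersion: v1
kind: Pod
metadata:
  name: fast-udp-app                                             # hypothetical pod
  annotations:
    cni.projectcalico.org/vppExtraMemifPorts: "udp:4444-20000"   # ask for a memif for these ports
    cni.projectcalico.org/vppVcl: "enable"                       # ask for VPP's host stack (VCL)
spec:
  containers:
    - name: app
      image: example.org/fast-udp-app:latest                     # hypothetical image
```

The pod still gets its regular tun-backed interface, so anything not using the memif or the VCL keeps working as before.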
11:20
So let's see how fast this can go. We'll take a reference configuration using regular Linux. Here on the right-hand side, we have a server running a traffic generator, which is TRex, sending UDP packets
11:41
as fast as it can over a 40G link. On the left-hand side, we have a Kubernetes node running a test pod where we measure received traffic. We don't send the traffic back to the generator in order to keep the setup simple, but it shouldn't impact results this much, because in the end, we are speaking about packet processing capabilities,
12:00
so adding the return traffic is just about doubling the number of flows, or packets per second. In the client pod, here it's Linux, so we will use the EFPS utility directly on the pod interface to see how fast Linux drops the received packets, as we have no application actually reading the packets here. An additional limitation of this setup
12:22
to keep in mind is that you will often need AF_PACKET or AF_XDP to get the best performance with small packets out of the netdev which is exposed in the pod, and that will require elevated permissions for the pod, so that's something to know. So if we take the same situation
12:40
and we install Calico VPP instead, the uplink interface will end up being owned by VPP, and the pod will still have an interface, which will be a tun interface with the virtio backend. So here we will benefit from user-space networking on the uplink side, but our packets still have to go through the kernel, which will still limit performance
13:02
even though we are leveraging the optimized virtio backend. Now let's modify our setup to use a memif instead of a tun and create a third configuration. Here, our client pod on the left has to be able to attach to a memif interface. That's quite straightforward here, as we are running another VPP instance within the pod,
13:23
which obviously is able to attach to a memif. And by doing so, in this configuration the packets will be fully handled in user space, from their reception on the physical interface to their delivery to the app. So let's see how those three setups compare when receiving packets.
13:40
If we send small UDP packets, 64 bytes, what we see is that a regular virtio interface is able to sustain about 3 million packets per second. One VPP worker can handle 8.7 million packets per second when processing 10,000 different flows. This drops a bit when the number of flows grows, mainly due to the flow table filling up.
14:01
And here we are showing the performance for 10,000, 100,000, and a million flows. This scales linearly with the number of VPP workers, meaning that with four workers and 10,000 flows, we are able to receive about 33 million packets per second, which is about eight times four.
14:21
All this concerns traffic going directly to a pod address. It also works with service IPs, for which we have a performance penalty of about 5% compared to pod IPs. This is related to the rewriting of the address and port and the fixing of the checksum that we have to do. But packets per second are not always very explicit,
14:42
so let's look at bits per second. We'll send 300-byte packets and extrapolate the throughput from the packets per second. With bigger packets, the link quickly becomes the bottleneck, because it's only 40G. But we can still see pretty linear scaling, at least between one and two workers, with one VPP worker being able to process
15:01
15 gigabits per second, and two workers 29 gigabits per second, when handling 10,000 flows. Calico Linux received about a million packets per second, which at this packet size is roughly 2.4 gigabits per second, and which is roughly the Linux limitation of tun interfaces.
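To make the extrapolation explicit, throughput is simply the packet rate times the packet size. With the 300-byte packets used here, and ignoring framing overhead for simplicity:

  throughput ≈ packets per second × packet size × 8
  6.25 million packets per second × 300 bytes × 8 bits per byte ≈ 15 Gbit/s

so the 15 Gbit/s figure for one VPP worker corresponds to roughly 6 million packets per second (a packet rate derived here from the quoted throughput, not a separately measured number).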
15:21
Right, but you might say, memifs are great, but I'm doing TCP, and on top of this, I'm using Envoy as a proxy. So can we also optimize this? I'd say this qualifies as a great use case for VCL. So let's build another testbed and see how it performs. We took the same two machines. On the machine on the right, we changed the load generator to wrk,
15:41
in order to track requests per second instead of raw packets per second. Data is served by Nginx on the same server. But obviously, we don't send requests to Nginx directly. We have our same cluster on the left, running Envoy in a pod, acting as a proxy between wrk and Nginx. So basically, we'll be benchmarking packets going through Envoy.
16:03
The same way, if we enable the VPP data plane in Calico, the setup will end up looking like this. This should already allow some performance gain with the benefit of running Envoy unmodified. Similarly, if we request VCL support and leverage the VCL Envoy integration
16:21
that Proenco has implemented, we are able to make the proxy's TCP termination happen directly in VPP. The Envoy team builds daily contrib images with this integration, so it's really easy to leverage in a pod. So let's see what figures these three setups give us with wrk.
16:41
So the requests per second we get are as follows. With all data planes, Envoy scales a bit sublinearly. On Linux, it goes from 13,000 requests per second with one worker to 100,000 requests per second with 10 Envoy workers. eBPF, which is the Calico eBPF mode,
17:01
performs quite linearly. As we are running Nginx and wrk outside of the cluster and we are targeting pod IPs, we are benefiting a bit less from its advantages. Calico VPP with the regular virtio netdev, so the second configuration, goes from 16,000 RPS with one worker
17:20
to 130,000 requests per second with 10. And finally, Calico VPP with VCL, the third configuration, gives the best results, reaching 200,000 RPS with 10 Envoy workers. But comparing setups is a bit tricky here because with VPP,
17:42
so both VPP virtio and VPP VCL, we have one dedicated core handling the networking, whereas with Calico Linux and eBPF, the networking happens in the kernel on the same cores as Envoy. So the proper comparison would be between N Envoy workers running with VPP and N+1 Envoy workers running with Linux.
18:03
But an even fairer comparison would certainly be to plot RPS by CPU usage on the cluster and see how those configurations scale in regard to each other. But first, let's take a look at latency to see how those configurations behave also. Here, we have the various latencies measured
18:21
when scaling Envoy from one to 10 workers. We see that it globally improves as the number of Envoy workers increases. And in some cases, we start to see latency increasing again, mostly when the data plane starts to struggle. We can see that VPP with VCL performs quite well, because as we are terminating TCP directly in VPP,
18:42
it allows us to skip the extra hop of going through the tun interface, and thus keep the latency quite under control. And finally, if we come back to the requests per second results and plot them alongside the global CPU usage measured on the machine running Envoy, we get the following graph.
19:02
So the dots represent tests with an increasing number of Envoy workers, from one to 10. The N versus N+1 comparison issue mentioned earlier appears clearly at the bottom left of the graph: Envoy Linux with two workers falls approximately in the same ballpark as Envoy VPP with one worker,
19:21
and this is due to the extra VPP worker we use in the VPP case. This leads to the performance discrepancy only making itself clear with a higher number of workers. For example, with five Envoy VCL workers and one VPP worker, we are serving as many RPS as with 10 Envoy Linux workers.
19:41
An additional improvement area is that we are still running VPP in poll mode, so switching to interrupt mode should improve CPU usage, as here we are busy-looping one CPU with a VPP that's not fully loaded; we're essentially wasting some unused CPU cycles. We are definitely planning on making this work and testing it soon. That's it for the numbers we got
20:01
in the last batch of optimizations. I would like to thank Chris and the whole Calico team for the help and support that allowed us to build this. And I'll let Chris conclude on the next steps and how to stay tuned on what will be happening in the coming months. Thanks. Thanks, Nathan. So in summary, VPP is a great match for Calico
20:22
and it's going from strength to strength. This is a new user-space data plane option for Calico, and using memif offers a code path which can handle the incredible performance levels that we've learned to expect from modern apps. VPP complements Calico's workload protection
20:40
with incredible WireGuard performance. And it lets you stay ahead of the curve by offering advanced support for additional features. VPP and Calico are pushing forward and achieving great results. Currently the project is expected to move from tech preview to beta status in version 3.22, which may well already be live by the time you see this.
21:03
So if you'd like to stay up to date on this project, don't hesitate to join the VPP channel in the Calico Users Slack; we publish our releases there. And if you'd like to try it out, head over to the Calico documentation, which has setup instructions. If you have any questions at this point or any later point, don't hesitate to ping us
21:21
on the Slack channel as well or you can ask them straight away. Thanks for listening.
21:56
Okay, well, great. That was a fantastic action packed presentation.
22:02
You guys packed a lot of information into a short amount of time. Thanks for doing that, because that actually left us with some time for questions. So I see we have a few already, and I'm sure more will come in. So let's jump to those. So Ray had asked, could you tell us
22:21
how robust is VPP Calico compared to, say, a Calico with Linux deployment, and how well battle tested is it? Yeah, so I'll take that. Nathan and I were talking offline a little bit about this. The iptables data plane has been in production for some years and it's in a huge number of deployments
22:43
and we're really proud of that. So it's reached a real level of stability and maturity. The VPP data plane, although it's exciting and the features are there, it's not to the same degree yet. We're still moving into beta. So it's an exciting time and the features are there.
23:04
It's a good time to be involved, but the stability is not yet comparable to where the iptables data plane is, I think it's fair to say. And as we talked about a little bit before, there was a great talk last year about this integration, and it's really great to see the progress,
23:22
the continued progress here. So I'm imagining next year, we're gonna hear about how much better it is now. Maybe we can target GA at some point. It would be really nice. Definitely. So there were some setup steps and YAML files
23:41
that were mentioned. Are those open source and available for people to find and use? Yeah, I'll take that one too. I've shared a link in the written chat. If there's anything that's missing from there, I would love to personally hear about that. I'm in the Calico user Slack and now's a great time to get involved.
24:01
So if the one I've shared is there and it's meeting your needs, then fantastic. If it's not, then get in touch with me and I'll make sure that we can improve things. And I can add, we also try to open source most of the configuration we use for tests, so for example for Envoy, for memifs, all of that. There is a test directory on the Calico repo
24:21
that we use where we typically version the YAML that we use so that things are pretty simple and easily shareable. Great. So we had another question here from Ray. Do vanilla Linux applications really, do they work and do they benefit from this configuration?
24:42
I can take that one. So typically, if you run without the specific memif or VCL integration, normal applications should behave just like in a regular Kubernetes cluster. And they should benefit from the speedup that it provides.
25:01
Obviously not all workloads will see the same kind of speedup. For example, if you're doing just regular iperf and, say, encrypting it, you can still see quite some improvement compared to the encryption done in Linux, for example.
25:24
So regular applications should keep working the right way. And obviously, if you're limited by your NIC, you will still be limited by your NIC. But in configurations where the Linux data plane is the bottleneck, you could see improvement.
25:42
Okay, and kind of thinking along those lines, are there particular workloads or use cases, whether with the memory interface or, say, the host stack, where this integration really shines? And conversely, are there applications
26:01
or things you can think of where VPP is just probably not the best and you should really use a different data plane with Calico? So we try to target a few use cases. The first one we thought about was encryption, because, well, if you need to encrypt all the traffic
26:21
between different nodes or do really high-speed encryption, VPP really helps, because typical Linux performance will be quite low, so the gap shows itself more easily. Another use case that will shine would be
26:44
if you need to send a lot of packets per second, so typically small packets that you need to handle. For example, doing proxies or maybe DNS responders or something like that, that's also a place where a typical Linux interface would be limited
27:03
to around a million packets per second. So if you need to go above that, you would really see the performance benefits of the memif. And the last one, VCL, has a bit the same characteristics.
27:21
One of the things where it also shines is doing your encryption. If you do TLS, or even plain TCP, it speeds things up a bit as well. But I'd say really the main use cases we are targeting are encryption and small packets, in terms of raw performance.
27:42
Okay, great, thanks. In the talk, you mentioned these multi-networks in Kubernetes, can you tell us a bit more, like, you know, what do you mean by that? So that's something we're exploring. One of the good points, as Chris mentioned, is that we are still not battle tested,
28:04
so we can still play a bit with the data plane and add new features and try new things. So we have been exploring adding a couple of features. We played with Maglev, for example, and we got some extra contributions about supporting SRv6 as a node-to-node transport, for example.
28:21
as a node to node transport, for example. And one of the things we wanted to explore was, is it possible to expose several interfaces in a single pod and expose some kind of the Kubernetes abstraction in addition to, so for example, if you run with multi,
28:41
you would get multiple interfaces in the pod, but the extra ones won't have any particular magic done to them. And the question we're asking ourselves is, would it be possible to somehow extend the Kubernetes logic?