
Building production-grade networking software with FD.io CSIT


Formal Metadata

Title
Building production-grade networking software with FD.io CSIT
Subtitle
FastData.io - CSIT-CPL Project
Alternative Title
Building fast and robust networking software must be data-driven!
Title of Series
Number of Parts
561
Author
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Numerous open-source projects develop software-defined data planes targeting network use cases such as Discrete Appliances, Cloud Infrastructure, Virtual Network Functions and now also Cloud-Native deployments. These may be based on foundation toolkits such as DPDK, eBPF/XDP, Snabb and so on, and may implement a diverse range of network functions, applied in many combinations and on different compute platforms and devices. The need for a consistent, repeatable, use-case-driven performance validation and benchmarking approach has never been greater, enabling both the development community and end users to understand, measure and verify expected performance. Achieving great performance with network software should not be accidental; it should be data-driven! This talk explains how the FD.io CSIT project aims to meet this need by developing and providing a Continuous Performance Lab platform for benchmarking, validation, performance trending and regression detection. Founded on multi-vendor collaboration, CSIT leverages deep multi-platform understanding and telemetry tools to analyze and correlate benchmarking results, leading to consistent, repeatable and reliable performance validation the FD.io user base can rely on. This talk will cover:
- A brief overview of FD.io CSIT
- Recent FD.io VPP release performance data, presented and discussed in this context
- Recent CSIT community innovations: performance trending graphs, MLRSearch and PMU tooling
- Debugging a real system setup issue with FD.io CSIT
Transcript: English (auto-generated)
So my name is Ray Kinsella, and I work on FD.io CSIT. I'm a software engineer working for Intel, and I'm also an FD.io TSC member. I should probably explain what an FD.io TSC member is.
Back when I started in open source, you just flipped stuff onto SourceForge and that was that; it was open. This was before GitHub. Now we have oversight committees, we have marketing committees, and we have technical steering committees, and there's a technical steering committee for FD.io.
It's called the FD.io TSC, and I'm one of the representatives on it, so we have input into the technical direction of FD.io. FD.io is a collection of projects; it's really three key things. The first is VPP, which you heard about earlier.
VPP is a high-performance network stack built with the same kind of optimizations as DPDK. DPDK is a very efficient layer for I/O, and VPP is a network stack, a series of protocol implementations, on top of DPDK.
So that's one part of FD.io. The second part of FD.io is the integrations it gives you. You heard Rastislav talk about this earlier, Giles talked about it earlier, and then you also heard Billy talk about the same thing: all of these integrations that sit on top of FD.io.
With FD.io we don't just flip a network stack out the door and say, guys, just take care of yourselves. We take care of the integrations into Kubernetes, and we have so many integrations that you have people arguing with each other about which is the best way to integrate FD.io with Kubernetes. I think Billy's presentation called that out very well: there are three different ways to achieve it, right?
There are integrations with OpenStack; I think Giles talked about the integration with strongSwan, was it Giles; and we have NETCONF and YANG support. So we have all these integrations on top. That's the second part of FD.io, the integrations, and the third piece of FD.io is what I'm going to talk about today, which is
benchmarking. Now, how many software engineers are in the room? Any software engineers? How many test engineers in the room? One, two, three, four, okay, so that's probably
about a ten-to-one ratio. So CSIT is hugely important in FD.io: CSIT is how we maintain and ensure that the FD.io data plane's performance does not degrade over time.
Now, this is no commentary on any other open source project, but the danger when you move into open source is that when patches come in the door, you need a very, very tight and very elaborate CI/CD
infrastructure to make sure that as you take new patches you're not trading off performance, patch by patch by patch. That's the important role CSIT plays in FD.io, it's a significant role, and that's what I'm going to talk through today.
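To make that safety-net idea concrete, here is a minimal, purely illustrative Python sketch of a per-patch performance gate; the function name and the three percent tolerance are my own assumptions for illustration, not CSIT's actual policy, which is driven by its CI jobs and test suites.

```python
# Minimal, illustrative per-patch performance gate.
# The name and the 3% tolerance are assumptions for illustration only;
# CSIT's real per-patch checks are driven by its CI jobs and test suites.

def gate_patch(patch_mpps: float, baseline_mpps: float, tolerance: float = 0.03) -> bool:
    """True if the patch result stays within tolerance of the trending baseline."""
    return patch_mpps >= baseline_mpps * (1.0 - tolerance)

# Example: the trend for this test case says roughly 12.1 Mpps is normal.
if not gate_patch(patch_mpps=11.2, baseline_mpps=12.1):
    print("Possible performance regression: flag the patch for review")
```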
So what were the three parts of FD.io? Quick question, and I know I'm picking on you; it's like four o'clock and everybody's thinking, we've been to the bar already, Ray, and you're asking us questions now. I said there were three parts of FD.io. One was the data plane; the second piece was the what? Integrations. The third piece was the benchmarking, and this is the benchmarking. So today
I'm going to talk about why code alone is no longer enough, which is essentially why benchmarking is important. I'm going to talk a little bit about CSIT and what it is, then we're going to dig into three interesting problem statements, three new problems we're trying to solve with CSIT, and then I'll finish with a bit of a summary.
So today you heard about a number of different open source projects. You heard about Snabb from Luke, you heard about Open vSwitch from Kira and Bruce, you heard about FD.io from Giles, Rastislav and Billy McFall, and you heard about Cilium. Who was presenting on Cilium?
Yes, the gentleman there. And then there are a few others that you may not have heard of, so there's a galaxy of open source projects, and if you want to learn more about that galaxy I'm giving a talk at twenty to seven.
If you're not already in the pub, you can come to the lightning talk room to learn about the user-space networking ecosystem. I'll probably just be presenting to myself, but there is a galaxy of open source projects in user space today; there's never been more choice.
Well, how will it perform for me? There's a whole bunch of vendor reports: Intel produces them, Mellanox produces them, Arm produces them as well, plus a series of academic papers, blog posts and YouTube videos. But you always come back to the same set of questions.
How was the benchmarking done? Are the benchmarks repeatable? Are the tests applicable to my use case? If I had a dollar or a euro for every time somebody said, yeah, but is it a real-world use case, I'd be a very rich man. You know, the difference between synthetic benchmarks versus real-world use; I see Javier smiling there.
Are the tests platform- and vendor-neutral? That's an important one: to what extent are vendors skewing the game to their own interests? These are all important questions, and they really all come back to: how do you evaluate software data planes?
How do you test, and how do you ensure neutrality? Well, without data and without a consistent benchmark, you're just another person with an opinion. That's from W. Edwards Deming, and Maciek and I show this because I think it illustrates very well our sentiment towards CSIT. And again,
it's not a criticism of anybody else; this is more a statement of our intention. The intention of FD.io CSIT is to bring volumes of data that reflect the kind of performance you're going to get from an FD.io deployment in real-world use cases,
and to bring that kind of clarity, so that when you're asking the question, how will it work for me, you'll have the kind of data you need to answer it. How am I doing for time? I'm doing okay. So FD.io CSIT is a sub-project inside FD.io. If you go to the FD.io website and
navigate through, there's a whole bunch of sub-projects, and there's one called CSIT-CPL. CSIT stands for Continuous System Integration and Testing, I think that's right; that's my first fail of the day.
So what is FD.io CSIT? FD.io CSIT uses standard industry benchmarks and tools. Everybody's heard of RFC 2544; I think RFC 2544 will be written on my epitaph when I die, okay,
and the Metro Ethernet Forum standards. We test ranges of packet sizes: we test 64-byte packets, we test all the packet sizes up to 1518, we test IMIX, we test jumbo frames. So not just one or two different sizes, we test all the different sizes you might possibly use, including jumbo frames.
We use only open source tools. We use only open source tools. Our traffic generator is TRex, which just happens to be another FD.io project; everything is orchestrated and run by the Robot Framework; and then we use Jenkins as the CI, so Jenkins takes care of running the nightly builds and running the tests and those kinds
of things. We test multi-core scaling: we test with a single core, we test with two cores, we test with four cores, and sometimes we test with even more, and I'll talk about that
later on. We test NDR, and there's a typo here, it is non-drop rate, the rate at which you don't drop packets; PDR, which is partial drop rate, where I think we have a tolerance of 0.5 percent packet drop; and we test MRR. Who knows what MRR is?
Maximum receive rate is where you throw the kitchen sink at it and see how much actually gets through. And we test the whole range of different network functions you might be likely to use: L2 switching, IPv4 and IPv6 routing, ACLs, security groups, overlays, and we have a slew of unit, functional and performance testing.
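Just to make those three measurements concrete, here is a minimal Python sketch of what an MRR trial and an NDR/PDR-style rate search look like. The simulated_trial helper and the search parameters are assumptions for illustration only; in CSIT the trial would drive TRex, and the real search is MLRSearch, which is considerably more sophisticated than a plain binary search.

```python
# Minimal sketch of MRR and NDR/PDR-style measurements, using a simulated
# trial function; in CSIT the trial drives TRex and the search is MLRSearch.

def simulated_trial(offered_pps: float, duration_s: float) -> float:
    """Stand-in for a traffic-generator trial: pretend the device under test
    forwards at most 12 Mpps and drops everything above that."""
    capacity_pps = 12_000_000
    return min(offered_pps, capacity_pps) * duration_s

def measure_mrr(line_rate_pps: float, duration_s: float = 1.0) -> float:
    """MRR: offer line rate ('throw the kitchen sink at it') and report what got through."""
    return simulated_trial(line_rate_pps, duration_s) / duration_s

def search_rate(line_rate_pps: float, loss_tolerance: float,
                duration_s: float = 30.0, precision_pps: float = 10_000.0) -> float:
    """Highest offered rate whose loss ratio stays within tolerance.
    loss_tolerance=0.0 approximates NDR; 0.005 (0.5%) approximates PDR."""
    lo, hi = 0.0, line_rate_pps
    while hi - lo > precision_pps:
        rate = (lo + hi) / 2
        offered = rate * duration_s
        received = simulated_trial(rate, duration_s)
        loss_ratio = (offered - received) / offered
        if loss_ratio <= loss_tolerance:
            lo = rate   # within tolerance: try a higher rate
        else:
            hi = rate   # too much loss: back off
    return lo

line_rate = 14_880_952  # 10GbE line rate at 64-byte frames
print("MRR:", measure_mrr(line_rate))
print("NDR:", search_rate(line_rate, loss_tolerance=0.0))
print("PDR:", search_rate(line_rate, loss_tolerance=0.005))
```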
Everything is an open and fully documented test environment. I want to say that again: an open and fully documented test environment. That means, and we have lots of people who do this, lots of people involved in the FD.io community actually go and clone everything and
reproduce it internally, because it's all open source. All the test cases are open source, all the tools are open source, you don't need any proprietary tools, so you can reproduce the whole thing internally. It supports Intel and Arm architectures. What does that mean? We actually have an open lab hosted by the Linux Foundation that gets hardware
contributions from people like Intel, from people like Mellanox, from the Arm ecosystem, and we have a very sizable lab run inside the Linux Foundation that's free for anyone involved in CSIT to use, where all this stuff gets tested. That allows us to do lots of
different permutations of tests. We can run the same test on different generations of Intel hardware, and I'll talk about that later; the same test with different NICs; the same test on different platforms, the same test on an Arm platform and on an Intel platform. It's the same test, and very rigorously the same tests run through the same set of permutations: the different packet sizes,
different numbers of cores, different platforms, different NICs, to give you a very, very large corpus of data about how FD.io runs in the real world. It's multi-platform and multi-vendor, so it runs today on top of CentOS,
SUSE and Ubuntu; I'm not sure about Fedora, I think that might be gone. And then we also support cloud native. You've heard extensively about cloud native already, and cloud native is the hotness at the moment, I know, but it also supports OpenStack environments as well. And we also have support for some accelerators, like crypto accelerators.
So we have an open lab where everything gets tested in the open: where patches get tested, where nightly builds get tested, where release reports get generated, all in an open lab in the Linux Foundation, and we use only open source tools and open source tests.
So if you don't believe the numbers produced by the open lab, you can go reproduce the whole damn thing yourself and satisfy yourself that it's completely authentic. The idea is that we want to drive good practice and engineering discipline in the FD.io dev community. So if you submit a patch
to FD.io and it causes a performance degradation, and I'll show you that later, an alarm bell goes off somewhere, somebody sees it, and we can take action. The idea is that by having this elaborate CI/CD environment we catch those kinds of regressions early on; I'll talk about that more in depth later.
So much of FD.io, and so much of user-space networking, is about performance, about achieving the best possible performance that you can through software. Well, again, FD.io CSIT provides you the kind of toolchains you need to measure that performance, and
it also helps prevent things like performance regressions. It's all fully automated and integrated into Gerrit, so that when you submit a patch, tests get kicked off: we run functional tests and performance tests. We have use-case-driven test definitions, so if you want to test things like VXLAN termination, those kinds of things, we have test cases for that.
And it all gets executed in the open environment, I've said all this, and integrated with CI, I've said all this, that's fine. So what does this look like in practice? I did a count
last week to see how many test cases in total I could find, and I counted 998 test cases; these are performance benchmarks that are currently being run in FD.io CSIT on a regular basis. That's if you go through every permutation of packet size, across every platform, across every NIC,
across every microprocessor generation, across every use case. You can see here that we actually break it out: there are 144 layer 2 tests, 216 layer 3 tests, 300 in another area, a very, very significant number of test cases in each area.
So when you go through all the different permutations of test cases, you come up with a very, very large number; the quick sketch below shows roughly how a handful of small axes multiplies out.
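The axis values in this sketch are illustrative, not the exact CSIT matrix; the point is simply how quickly a few small axes multiply into hundreds of benchmark cases.

```python
# Rough sketch of how a large benchmark matrix falls out of a few small axes.
# The axis values are illustrative only, not the exact CSIT test matrix.
from itertools import product

suites = ["l2 bridge domain", "ipv4 base", "ipv4 scale", "ipv6", "acl", "vxlan"]
frame_sizes = [64, 1518, "IMIX", 9000]      # bytes, plus IMIX and jumbo
core_counts = [1, 2, 4]
nics = ["10GbE NIC", "4x10GbE NIC"]
testbeds = ["Haswell", "Skylake", "Arm"]

matrix = list(product(suites, frame_sizes, core_counts, nics, testbeds))
print(len(matrix))   # 6 * 4 * 3 * 2 * 3 = 432 combinations from just five small axes
```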
Now, I clearly can't show the results of a thousand test cases, so I just picked one or two in particular that I quite like: the IPv4 routing test cases. We have two test cases that we run. One is called ipv4base, which runs a performance test of IPv4 routing with one route, so you've only got one route in your lookup table, and then we have ipv4scale, and
ipv4scale runs with 2 million routes in your routing table. You can see this is a Haswell-generation microprocessor; we have two Intel platforms in the lab, Haswell and Skylake. And you can see ipv4base with one route gets around 11 million packets per second on Haswell,
and with the 2 million routes it gets about 9 million packets per second on Haswell. Okay, that was with FD.io VPP 18.04. We then went to 18.07 on Haswell, and some software optimizations
came along and we also introduced a second NIC. The software optimizations caused a small performance bump from about 11 million to about 12 million packets per second for the single-route test case, and another small bump from about 9 to 9.5 million or so for the 2-million-route test case.
Those test cases were with a 10 gig NIC. We were then able to do the same test case with the X710, and the X710 is again a 10 gig NIC, but it's 4 by 10 gig: a 40 gig Ethernet controller that has 4 by 10 gig physical interfaces.
And you can see here that performance is much the same as it is with the 10 gig NIC, but now we have the satisfaction of knowing that we're getting pretty much the same performance across the 10 gig NIC and the 40 gig NIC.
Then we introduced the Intel Skylake microarchitecture in FD.io 18.07 as well, and you can see that performance again is roughly the same on Skylake. So we've introduced a new microprocessor architecture, we've introduced a new NIC, and we have that warm fuzzy feeling that performance is more or less the same.
Then FD.io VPP 18.10 came along, and VPP 18.10 made very liberal use of AVX-512 instructions. Now, I presume everybody in the room knows what vector instructions are. Everybody knows vector instructions?
I see nodding heads and I see hands up. So AVX-512 is the latest improvement in vector instructions. Before this was AVX2, which came out with Haswell; AVX-512 is on Skylake,
which basically allows you to process even more packets simultaneously; that's the idea. The way we typically use vector instructions in both DPDK and FD.io VPP is to process packets in parallel. If you are using, let's say, scalar instructions,
you typically do one packet at a time. If you use vector instructions, you process several packets at a time: with AVX2 we typically did four packets at a time, and with AVX-512 we typically do eight packets at a time, if memory serves me correctly, and I hope I'm not wrong.
But when we introduced all the AVX-512 optimizations into FD.io VPP on Skylake, you can see what happened to the baseline performance there. We went from about 12 million packets a second up to something like 19 million packets a second, simply by using
the inherent parallelism in the microprocessor that AVX gives you. Similarly, the scale test case with the 2 million routes went up from something like 9 million packets a second
to around 16 million packets a second. So you can see the general point, and I've probably labored it at this stage, but the general point is that you can look at the same test case across a series of NICs and across a series of microprocessor revisions and see what the impact is
without optimization and with optimization, across a whole range of nearly a thousand test cases. This is how we bring real breadth, depth and predictability: what I mean by breadth is the breadth of test cases, plus depth of measurement, and predictability because
we do this for patches, we do this nightly, we do this for every release, and we generate a huge corpus of data as a result. I think I've got ten minutes left, so I'm going to talk very briefly about this area. Let me draw back for a minute: any questions on what I've just said? Go ahead. Funny you should ask, let me come on to that.
Any other questions? Go ahead. So, no, actually... oh, nightly.
Nightly, not every patch, because that just becomes insane, to your point, right? Okay, sorry, I'm being told to repeat the question: you're asking how frequently we run the entire test suite. Yes.
I'll come on to that in a bit of depth, so let me come to it. We talked about this previously; it's the continuity problem, which is that if you find performance regressions at release time,
it's typically very expensive. There are lots of software engineers in the room: if you have a measured performance regression at release time, suddenly it's git bisect fun, right? You're running git bisect, running the same test case over and over, and it's typically very, very painful.
Particularly since the velocity of communities like DPDK and FD.io VPP is huge, which means we have a huge input of patches, which means you really need a very strong safety net to catch performance regressions. So how do you maintain best-in-class performance, and how do you address the danger of creeping normality?
I think the more common name for creeping normality is boiled frog syndrome, where things slowly degrade over time and you don't notice. Well, I have a pretty picture for you. This is our performance trending timeline. We publish this and all the developers have access to it, and you can see this is a whole bunch of L2
test cases here. You won't be able to read this, but I can show L2 bridge domains with one MAC address up to 100,000 MAC addresses; I think we might actually even do a million MAC addresses, it doesn't matter, but you get the idea that we have a whole series of L2 bridge domain test cases here, from a single MAC address
scaling the whole way up to 100,000 MAC addresses in a single bridge, and we can see how those test cases perform over time. And not only that, we're actually able to compare against what we know are good numbers and bad numbers, so we can say, okay,
what's in the green and what's in the red, and we can make judgments. You can see here the red circles are where regressions have happened and the green circles are where performance improvements have happened. This is a dashboard that the entire community has access to on a regular basis; this is how you generate a warm fuzzy feeling that you're not losing performance over time. And because, as I said earlier, we have a very, very large number of test cases,
we have a very nice dashboard where people can just log on and see whether performance is increasing: what's the short-term percentage change, what's the long-term percentage change, what's the trending number in millions of packets per second, how many regressions are you getting, how many outliers, that kind of thing.
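As a very rough sketch of what that kind of trending dashboard computes, the snippet below compares a short-term median against a long-term median of nightly results and flags large moves. The window sizes and the five percent threshold are assumptions of mine, and CSIT's real anomaly detection is more sophisticated than a moving-median comparison.

```python
# Illustrative trend classification over nightly MRR results (Mpps), newest last.
# Window sizes and the 5% threshold are assumptions; CSIT's real anomaly
# detection is more sophisticated than a moving-median comparison.
from statistics import median

def classify(results, short_window=7, long_window=30, threshold=0.05):
    short_term = median(results[-short_window:])
    long_term = median(results[-long_window:])
    change = (short_term - long_term) / long_term
    if change <= -threshold:
        return "regression", change
    if change >= threshold:
        return "progression", change
    return "normal", change

nightly_mpps = [11.9, 12.1, 12.0, 12.2, 12.1, 11.9, 12.0,
                12.1, 11.2, 11.1, 11.0, 11.2, 11.1, 11.0]
print(classify(nightly_mpps))   # the recent drop shows up as a regression
```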
So what are the challenges here? I think the two questions before outlined the challenges very well. We have a lot of tests and a limited number of physical platforms, so you have to be very judicious about what you run and when. The entire test suite definitely gets run every release, and we typically run the entire test suite on a nightly basis, but that
consumes a lot of systems. When you submit a patch, we tend to be more judicious about how many test cases we run against any individual patch, because that can really snarl up test resources. This is not an unlimited lab; we're not an open source project with unlimited funds, far from it,
so we need to be very judicious about what we run and when. And one of the challenges is how you can ensure you get coverage, how you know that every patch is getting the kind of test case coverage it needs. Okay, I'm going to keep going,
diving deeper, moving beyond the symptom. So I'm going to move past this and probably onto this slide. I've kind of labored the point at this stage, and you'd be forgiven for throwing rotten fruit at me, well labored the point
about how many benchmarks we have. But at a certain point you need to move beyond just benchmarking the problem; at a certain point you need more data to try and root-cause the problem. With the number of software engineers in this room, you know
that it's one thing telling me there's a performance regression or a bug somewhere, but give me a clue, give me some sort of an idea of what the actual underlying problem is. It's well known that Linux ships with a whole bunch of tools you can use for next-level analysis of where performance
regressions are coming from: tools like Linux perf. We also have things like PMU tools for pulling PMU stats from the microprocessor that tell you where cycles are being burned, and we actually also have very good introspection and instrumentation in FD.io VPP itself to tell you where cycles are being burned.
So what we're starting to do now is use these tools, and again they're all free and open source tools, to generate the next level of information. So when you go in and you see a performance regression, you can actually drill down, get to the next level of detail, and look at the graph pipeline in VPP.
I think Giles talked earlier about the VPP graph hierarchy; at the next level you can see where cycles are being burned, and that data again is being generated and it's there. So when we get a performance regression, we don't just leave you hanging and say, hey guys, there's a performance regression, sorry. We give you the next level of data and say, actually, there's a performance regression in this graph node, as opposed to a system-wide performance regression.
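If you want to do that kind of drill-down yourself, here is a small sketch that ranks VPP graph nodes by clocks per packet by parsing the output of vppctl show runtime. The column layout assumed here (Name, State, Calls, Vectors, Suspends, Clocks, Vectors/Call) can vary between VPP versions, so treat the parsing as illustrative rather than definitive.

```python
# Sketch: rank VPP graph nodes by clocks per packet using `vppctl show runtime`.
# The column layout (Name, State, Calls, Vectors, Suspends, Clocks, Vectors/Call)
# is assumed and can vary between VPP versions; parsing is illustrative only.
import subprocess

def top_nodes(count=10):
    out = subprocess.run(["vppctl", "show", "runtime"],
                         capture_output=True, text=True, check=True).stdout
    nodes = []
    for line in out.splitlines():
        parts = line.split()
        # Keep rows for nodes that are actually doing work.
        if len(parts) >= 7 and parts[1] in ("active", "polling"):
            name, clocks_per_packet = parts[0], float(parts[5])
            nodes.append((clocks_per_packet, name))
    return sorted(nodes, reverse=True)[:count]

if __name__ == "__main__":
    for clocks, name in top_nodes():
        print(f"{name:35s} {clocks:10.1f} clocks/packet")
```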
Okay, and then lastly we'll talk about service density, and this is where we talk about real-world use cases. There's been a lot of discussion today about
the cloud and cloud-native deployments, but there isn't really a whole lot of data around how data planes behave in cloud-native deployments, certainly not enough data. So we've been doing some work to try and understand that better,
and we've been comparing service densities for virtual machines as compared to containers. There's a lot of industry discussion at the moment, a lot of discussion in the community, around, hey, containers are more efficient than virtual machines. Hands up, who's heard that statement, containers are more efficient than virtual machines?
Okay, lots of hands. There's a lot of talk around that, but in my experience I haven't seen a whole lot of data to substantiate it. So what we've been doing in FD.io, again, is trying to put data behind that, to understand what it looks like
and where the inflection point is. So what we're benchmarking today is a VNF service chain, where you have a whole bunch of virtual machines in a chain, all connected by a vSwitch. Then we have the same setup as a container service chain, again a whole bunch of containers
connected by a vSwitch. And then we also have a container pipeline, where you inject a packet into the first container and the containers just pass it to each other, to understand what point-to-point performance looks like. In that way, what we do again is test with one virtual machine on the switch, we test with two virtual machines, four virtual machines, eight virtual machines;
we test with one container, two containers, four containers, eight containers. And then we typically test, again, you get the idea, I've labored the point, across all the packet sizes, and we produce the kind of data you can see here, which illustrates very well where the inflection point is: where you start to see a switchover in cost and performance between containers and virtual machines.
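As a purely hypothetical illustration of what locating that inflection point looks like, the sketch below compares per-service throughput for VM and container service chains of growing length; the numbers are made-up placeholders, not CSIT measurements, and only the method is the point.

```python
# Hypothetical illustration of comparing service density results.
# The Mpps numbers are made-up placeholders, NOT CSIT measurements;
# the method (per-service throughput as the chain grows) is the point.

vm_chain_mpps = {1: 9.0, 2: 8.0, 4: 6.5, 8: 4.8}         # aggregate Mpps, VM service chain
container_chain_mpps = {1: 9.5, 2: 8.9, 4: 7.0, 8: 5.0}  # aggregate Mpps, container service chain

for n in sorted(vm_chain_mpps):
    vm_per_svc = vm_chain_mpps[n] / n
    ct_per_svc = container_chain_mpps[n] / n
    delta_pct = (ct_per_svc - vm_per_svc) / vm_per_svc * 100
    print(f"{n} services: VM {vm_per_svc:.2f} Mpps/svc, "
          f"container {ct_per_svc:.2f} Mpps/svc, delta {delta_pct:+.1f}%")
```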
I've got one minute left to go. So, three problem statements that we're working on: ensuring that you don't lose performance as your data plane evolves;
understanding where performance regressions are coming from, that next level of information; and then how data planes perform in a cloud-native environment. So that's the FD.io CSIT project. It's an open and welcoming project, and the same as every other open source project,
there's always plenty to do, so we'd love to see you turn up. Any questions? Okay, thank you very much.