Welcome to KernelCI
Formal Metadata
License: CC Attribution 2.0 Belgium — You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifier: 10.5446/47334 (DOI)
Series: FOSDEM 2020, talk 471 of 490
Transcript: English(auto-generated)
00:06
Hello everyone, thanks for waking up early and attending this talk on Saturday morning. Can you hear me? Can you hear me at the back? Yes? Okay, cool. There's no sound system — this is only for recording, in case you're wondering. So my name is Guillaume Tucker, I work at Collabora, and this talk is about KernelCI.
00:23
I already gave a talk about KernelCI at FOSDEM last year. This is kind of an update on what has happened in the past year, but also about the future. So, what is the purpose of KernelCI, in case you're not familiar with the project, and
00:40
what does it want to achieve in the longer term? It's kind of like climbing a big, tall mountain, really: trying to reach the point where kernels are released without bugs, or at least with quality control, so we know which bugs are in the kernel, and extending test coverage.
01:01
And also having reusable tools: the idea is to test the upstream kernel, but anyone using the kernel in their own downstream environment should be able to reuse these tools. We now have an official mission statement, as written here, and that's basically what I'm trying to explain.
01:20
But this is the official wording we have now; people spent some time coming up with it, so it deserves at least a slide here. So, last year I said maybe it would become a Linux Foundation project, and this has happened: in October KernelCI became a new
01:41
project of the Linux Foundation, with all the members listed here — Collabora, BayLibre, Red Hat, Google, Microsoft, Foundries.io and the Civil Infrastructure Platform, which is another Linux Foundation project. We're still in the early days of working out what that actually means for the project. We have a mission statement, we have a budget and some stickers, but there's a lot of other things coming soon, hopefully.
02:08
There's also a part of the website explaining what that means for the project. So although it has become a Linux Foundation project, the aim is still to be really community oriented.
02:20
Of course there are things that are done by the members, for the members, but it's really to facilitate things happening for the kernel community at large. So it's still completely focused on upstream. I mean, the tools can be reused for other things — if Red Hat wants to test a Fedora kernel they can do that, or maybe the project will help make it happen, as an example — but it's still about testing the mainline, upstream kernel.
02:46
And it's still about sending email reports to mailing lists and engaging more with subsystem maintainers and developers in general, to basically help people add their own tests to KernelCI.
03:01
Every subsystem has different workflows, and some maintainers have created their own small CI, which probably works perfectly well for them. But then they might be reinventing the wheel, and also not have access to all the hardware that KernelCI provides. So that kind of consolidation is the main objective we're trying to achieve.
03:24
One kind of philosophy behind KernelCI is what we call the open testing philosophy, because there's only one mainline kernel. In the same way that people from many different origins, for many different reasons, contribute to the same code base — the Linux kernel —
03:40
when kernel 5.4 is released, that same 5.4 is going to run on a supercomputer, on an Android phone, or on anything. It's the same kernel, the same code base, although it does a lot of different things. The reason there are so many different test systems is these different use cases, but we could also have a common way of at least sharing the results, because some tests are relevant to all of these cases.
04:07
If you want to test for memory leaks in the kernel, you can do that wherever the kernel is running. So everybody is doing their own product tests, but the idea is to have a test system that allows people to contribute their tests and contribute their hardware in the same way that they contribute code.
04:29
So yeah, like I said, you have this large number of contributors. That's basically what I want to explain here. The image is trying to say that we should try to have
04:42
the same surface between the tests and the code — it's a bit like test coverage, if you want: aligning the two things. The whole idea is having open source development, and having open testing on top of it, basically. Now, how do you achieve that? This is an abstract
05:03
pipeline diagram explaining how KernelCI works. For each of these blocks there is at least one implementation, but you could have different things for each of them, different instances. Let's start from the beginning: you have a developer in the top left corner, someone writing some code.
05:21
The code ends up as patches — KernelCI doesn't test patches directly at the moment, but it can be done, other systems do it — or as git branches. Then of course it gets built; the purple arrows are where files and results are pushed. Then you have build artifacts, and after that, once you have a build, you can run tests, and then process the results to detect whether there was a regression,
05:45
or basically see what needs to be reported from the results. Then your results are stored and analyzed to be able to provide a report to the developers, and this is when the loop is closed.
06:07
It's like trying to automate an arm, for example a mechanical arm: you want to know which position it's going to be in. If it's completely open loop and you just move something and don't really look at it,
06:21
you can guess where it's going. But if you have some sensors that actually know where it is, the angle and everything, then you can have a closed-loop automated system, and this is kind of what we're doing here, except that the loop goes around developers and not around machines. But it's a similar kind of thing. (Yeah, sorry, I'll speak louder, thanks.) There's a link here you can note — these slides are available as PDF,
06:46
there's a link on the page for this talk, and on this slide seven there's also a link to a shared document that has a more detailed description of the modular pipeline design.
07:00
So this is the theory; now, what does that look like in practice? Right now kernelci.org has one database that has been used for a long time, but we have a new one using Google BigQuery, and the idea is to use that as a prototyping database, with access to more
07:22
testing systems. All the test results currently available on kernelci.org are also sent to this database, and now we also have Red Hat submitting test results to it. We would like to see other people too — typically member companies would be the first ones to contribute to that.
07:44
So we're refining the database schema for storing build results and test results, and the first outcome we would want is a unified test email report: based on all that collective data, we could then send one email, say, for a stable branch release.
08:03
I don't know if you've noticed, but on the stable mailing lists you get a lot of test result emails coming from different test systems. So one of the targets of having a common upstream kernel test system is to have only one report that gathers all the data.
08:21
Having this database is the first step. If you look at the previous slide, that's basically the "store" bit — it says store artifacts and database, and this is at least the database part. Then we can build kernels in different places.
08:41
So that's block number two, "build". If you have different people providing different cloud compute and different ways of doing it, and it's modular enough, then you can include them, and that makes the project easier to scale. Right now we're using Jenkins, but we've changed it in a way so that it can run anywhere,
09:03
so you can run all the KernelCI steps in a terminal. Some people have also tried using it in GitLab already: Kevin Hilman — he's not here — has done this as a GitLab CI pipeline. He's using the KernelCI command-line tools in
09:20
a kernel branch, so automatically, whenever new changes are pushed to the branch, it uses GitLab CI to build and test things, like in any GitLab CI project. But this could be used in other test systems as well. We're also abstracting the lab API — test labs are where hardware is available to test kernels.
09:44
By default we use LAVA, which is the Linaro project for automating tests, but there are plenty of other things. So we've made an abstraction in the KernelCI core tools so that it can submit things to any lab. Right now
10:04
it's only sending things to LAVA, but we could send to any other lab that has a web API and can accept a test definition. So these are things we're working on.
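The lab abstraction just described could be sketched as a small Python class hierarchy — one subclass per lab type, each knowing how to generate and submit a job. This is a hypothetical illustration, not the actual kci_test code; all class, method and field names here are made up:

```python
from abc import ABC, abstractmethod


class LabAPI(ABC):
    """Hypothetical lab abstraction: one subclass per kind of test lab."""

    @abstractmethod
    def generate(self, build: dict, test_plan: str) -> dict:
        """Turn build metadata and a test plan into a lab-specific job definition."""

    @abstractmethod
    def submit(self, job: dict) -> str:
        """Submit the job to the lab's web API and return a job identifier."""


class LavaLab(LabAPI):
    def generate(self, build, test_plan):
        # A real implementation would render a LAVA job template here.
        return {"job_name": f"{build['kernel']}-{test_plan}"}

    def submit(self, job):
        # A real implementation would call the lab's web API.
        return f"lava-job-{job['job_name']}"


lab = LavaLab()
job = lab.generate({"kernel": "v5.4"}, "baseline")
print(lab.submit(job))  # → lava-job-v5.4-baseline
```

Adding a non-LAVA lab would then just mean writing another `LabAPI` subclass, without touching the rest of the pipeline.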
10:22
So we also have some more specific development goals. The command-line tools — kci_build is there to reproduce a KernelCI build from scratch, kci_test to generate test definitions — these are the things we're trying to improve so that they become more portable.
10:41
Also, like I said a bit before, we want to make it easier for people to contribute tests. If you're a maintainer or developer and you have a test, maybe for the part of the kernel you work on, it should be easy for you to enable it in KernelCI. Right now there's a kind of
11:00
steep learning curve: to do this at the moment you need to know too much about the internals of KernelCI to be able to add a test to the list. That's one of the things where we're trying to lower the bar, basically. And we need good interaction there, some collaboration from the people who make the tests as well, so we've started with a few key people to
11:23
try to do that. So maybe next year I'll come back — or if you follow the project you'll see — maybe things will have become easier, but right now it is not very easy. Still, if you have a test you really want in KernelCI, you can bring it up on IRC (#kernelci on Freenode), on the kernelci@groups.io mailing list,
11:46
or just on the subsystem mailing list — people know about KernelCI now. Also, we're trying to improve the web dashboard. I don't know if you've seen the current KernelCI dashboard, but it mostly shows
12:02
build and boot results. There is a test tab, but it's not really good at showing test results, because KernelCI was originally about building kernels — a variety of different ARM defconfigs — and then about boot testing, and it's still kind of stuck in that phase. So we've started running tests,
12:21
but the results are not shown properly on the web dashboard. That's one thing we're fixing right now, to unlock a lot of things: many tests have been held back, because email reports have some limitations, and if the web dashboard doesn't show all the results then the tests kind of
12:41
get hidden, and that's a waste of results. But we also still need to improve the email reports. That's a two-way process, because it's hard to design something right from the beginning; it's more about sending email reports and seeing how people react to them —
13:00
what is useful, what is not useful — and different subsystems or different mailing lists actually need different types of reports. So the key message here is: we need feedback. If the email you get is not perfect, people often say "if you send another one like that, I'll just set up a filter to put all your KernelCI emails in spam".
13:24
Well, maybe, if it keeps being bad all the time — but normally we try to improve things as we go. The key message is that we can't get to a perfect thing from day one, basically. A slightly longer-term project goal is to improve test bisection.
13:47
At the moment we have bisection for boot tests, just plain boots —
14:02
booting to a login prompt; if you can log in, then it boots. That's very easy to bisect, because it's only one pass/fail result per test per run. But as we run more test suites that contain a lot of tests, it becomes harder to bisect things, when you have different test cases passing or failing at different rates. I'll come back to this in another slide, but we have some ideas on how to do this as well.
14:22
So, reusable tools: we have kci_build, kci_test, and kci_rootfs to create root file systems. There's a wiki page — which might get properly written in the not too distant future, but right now is kind of a work in progress — that gives you at least a starting point
14:41
if you want to see what these tools do. We also have the interface to the BigQuery database: there's a command-line tool for submitting results to the common KernelCI database. So if you have a test system and you're testing the upstream kernel and want to submit results,
15:00
you can get in touch and use that tool — we can give you a token, and then you submit your results. Then there are also Docker containers. I haven't worked on those myself recently, so I'm not an expert on the state of all of them, but I know there has been some good progress: containers with a LAVA instance, with Jenkins configured for KernelCI, with the backend
15:24
currently used on kernelci.org with MongoDB, and the frontend which is on kernelci.org. So there are containers to make it easy for someone to recreate the whole system in their own environment. And if you try it and find some issues, it's all on GitHub, under the kernelci project —
15:42
but we need more people to use it, basically, to get some feedback on it. Now, about advanced bisections: there's a tool being worked on called scalpel, which is inspired by Ezbench, written by Martin Peres at Intel. We've been talking, because he made that tool for
16:06
graphics testing — Ezbench is actually not really used in production at the moment — but it had a much more advanced bisection feature than the normal `git bisect` command you have, because it can deal with several test cases at the same time.
16:22
When you run one test, you can provide the results for many test cases, and it can also request tests to be run several times if they are not stable. So the scalpel project is basically extracting this and boiling it down to a more portable thing that can be reused. If you look at the project
16:42
on GitLab, you can find a demo that will create a git history with dummy pass/fail results, and it will find the problem in it. It's almost ready to be used in KernelCI — that will come a bit later, I think — but I hope it will really help for some things.
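The core idea of bisecting with potentially flaky tests can be sketched roughly like this — a toy model in the spirit of the demo just described, not scalpel's actual code: each revision is classified by running a test several times and taking a majority vote, and an ordinary binary search then finds the first bad revision.

```python
def classify(run_test, runs=3):
    """Classify one revision for one test case, rerunning to smooth out flakiness.

    run_test() returns True (pass) or False (fail); hypothetical interface.
    A simple majority vote stands in for Ezbench-style confidence handling.
    """
    results = [run_test() for _ in range(runs)]
    return results.count(True) > runs // 2


def make_test(revision, breaks_at=6):
    # Dummy pass/fail history, like the demo: tests start failing at revision 6.
    return lambda: revision < breaks_at


def bisect(lo, hi, breaks_at=6):
    """Find the first bad revision in (lo, hi), assuming lo passes and hi fails."""
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if classify(make_test(mid, breaks_at)):
            lo = mid   # still passing: the breakage is later
        else:
            hi = mid   # failing: the breakage is here or earlier
    return hi


print(bisect(0, 20))  # → 6
```

With several test cases, the same classification step can be applied per test case on each revision, which is the part plain `git bisect` cannot express.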
17:04
People say maybe we should be testing patches as they are sent, because when you have one patch on the mailing list, you can test it on its own and know whether it's good or bad before it gets merged. And yes, that's ideal, but it means a lot of patches. Supporting all of that — if you wanted to build all the kernel configs
17:24
we build, or even a subset of that, say 10 kernels for each patch that is sent, tested on 200 different platforms — that doesn't scale very well, I think. Maybe in some cases, if we do it wisely, we can test only a subset based on which subsystem the patch was sent to, but that can take a lot of tweaking.
17:45
The whole advantage of having bisection is that you test linux-next, for example, where all the newly applied patches are merged together. If something breaks at that point, then you only need maybe 10 iterations to bisect from yesterday's next to today's next.
18:05
You build maybe 10 kernels to pinpoint the problem once you know there actually is one, so you can be very efficient. That's why bisection is such an important thing for KernelCI.
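That figure of about 10 iterations is just binary search arithmetic: bisecting n candidate commits takes roughly ceil(log2(n)) build-and-test steps.

```python
import math


def bisect_steps(commit_count: int) -> int:
    """Build/test iterations git bisect needs for n candidates: ceil(log2(n))."""
    return math.ceil(math.log2(commit_count)) if commit_count > 1 else 0


# A day of linux-next can easily carry a few hundred new commits:
for n in (100, 500, 1000):
    print(n, bisect_steps(n))  # 100→7, 500→9, 1000→10
```

So even a thousand new commits in one linux-next cycle only cost about ten kernel builds to isolate a breakage.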
18:24
But again, maybe some people would think that network keeps failing all the time all the things maybe are Also important to test, but these are kind of obvious things that are directly So yeah, the ones at the top are the ones that kind of come with the kernel
18:43
And we're not really doing them too well right now, but I think that's Like an obvious list of things we should be doing And the next test project, I know that LTP that was mentioned a bit earlier That's at least some parts of it because some parts may not be completely relevant to the kernel Maybe more like user space oriented, but there are definitely some things in there that we
19:03
That you would expect to be run on a kernel test system And then some are more subsystem centric like you know for media subsystem You have visual to compliance and other things, an IGT and XFS test There's a lot of them basically
19:21
And this is where more integration with well more communication with the subsystem Maintainers and developers needs to happen And that's basically like a summary of where we're trying to be, where we're trying to go There's a lot more that can be said it's a very big project
19:43
It's kind of old, because it has been around for five years, but it's new at the same time, because now we have a budget and we have some ideas. A lot of limitations have been removed since we joined the Linux Foundation, so I'm hoping it's like a rebirth for the project. We also need some engagement from people — we need people to realize that
20:04
this has basically been selected by the Linux kernel community as the main project for testing upstream, and you can reuse it for your own products as well. So, does anybody have any questions? Yeah?
20:26
Okay, so the question is about the technologies used for testing in general. There are many different things.
20:40
The tests that come with the kernel are normally written in C, like kselftest, or there are some shell scripts as well. The static analysis with Coccinelle is a separate language. KASAN and things like that are built in — kernel configs you enable, basically — and KUnit is a bit the same:
21:03
it's part of the kernel, so it's all written in C. Then there are also various test suites, but the quick answer is that a lot of it is written in C, because it talks to the kernel, it does a lot of system calls, and it makes sense to use a low-level language. Sorry — could you repeat that, so I can relay the question? Yeah.
21:42
So the question is whether there's a high-level language to define the tests and then run them. For example, for the tests run in LAVA there's a YAML definition that explains what needs to be run, and then you have some commands in there that might be binaries or shell scripts or maybe Python. So it depends — each test suite has a different way of being run.
22:08
Yeah, I think you were second So for a single test run, what's the average run time and what would be considered like a reasonable run time for a single test run?
22:22
Okay, so the question is: for one test run, what is the usual run time, and what would be considered reasonable? The question can be answered in different ways.
22:41
If you want to test everything, you can time how long it takes; of course some things run in parallel, so until you get the whole set of results, it takes as long as whatever takes the longest. That can also be helped by splitting a test suite into smaller pieces to run more in parallel. So there isn't a simple answer. But what is
23:04
desirable, in terms of duration? I think it depends on where you are in the kernel workflow. If you're sending a patch, for example, you want the result quickly enough. If it's linux-next, since there's a version every day, you'd expect the results maybe within six hours —
23:25
that wouldn't be too bad — or maybe 12 hours, even if it's running a lot of things. If it takes more than that, then you have more kernel versions to test than time to actually run the tests. If it's for a stable release, maybe you can run more tests that take two days, and it will still be valuable.
23:42
So it depends on which version of the kernel, at which stage of development you are. Basically we have to design things to work within the limitations — unless you want to go for the paid version, let's say.
24:09
Okay, so the question was about integrating it in Travis CI, as an example. Yeah.
24:21
Yeah, yeah. Okay.
24:41
So it was a comment: if you wanted to integrate it in Travis CI, with its limit of about 50 minutes, you basically need to accommodate things around that — you can also build just the one kernel version, if you know the defconfig and architecture you care about. And I guess the question is whether it makes sense to use Docker —
25:06
we can use Docker to build a kernel, at least, so you have an environment to build it in anyway. Yes, yes.
25:21
So the question is: how are KernelCI tests triggered? The answer is that right now there's a list of git branches that are being monitored. You can see them on the website — here you have the trees and branches — and all of these are monitored every hour.
25:42
The system knows the last revision that was tested, and if there's a new one, it triggers the whole thing: it builds and runs a bunch of tests. Most of these are the maintainers' trees. Even if only one more commit was pushed to the branch, that's a different revision at the head of the branch, and that's enough. Now, like I said,
26:04
we've proved that it can also work in a GitLab CI environment, with a typical GitLab CI workflow triggering things, so you can reuse the same tools in a different setting if you want to. I think maybe you were next, yeah, and then — okay.
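The hourly monitoring loop described here boils down to comparing each branch's head revision with the last one tested. A minimal sketch, with made-up branch names and commit hashes:

```python
def check_branches(heads_now: dict, last_tested: dict) -> list:
    """Return the branches whose head commit changed since the last run.

    heads_now maps branch name to its current head SHA (as fetched from
    the tree); last_tested holds the revision tested last time. Hypothetical
    sketch of the monitoring step, not the actual KernelCI trigger code.
    """
    to_trigger = []
    for branch, head in heads_now.items():
        if last_tested.get(branch) != head:
            to_trigger.append(branch)   # one new commit is enough
            last_tested[branch] = head  # remember what we kicked off
    return to_trigger


last = {"mainline/master": "abc123", "next/master": "f00d00"}
now = {"mainline/master": "abc123", "next/master": "1badb002"}
print(check_branches(now, last))  # → ['next/master']
```

In the real system, anything returned by such a check would go on to trigger the build and test jobs for that branch.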
26:20
Okay, so two more questions. So the question is—
26:45
yeah, so the question is: if we start testing patches, how do we determine which tests to run? First, you know which mailing list it was sent to — that could be a clue. And then of course you can do some stats on the files that it's touching.
27:06
And also, right now we have
27:26
a YAML-based configuration that says, for each git branch, which tests to run; we could have something like that for patches, depending on which mailing list it was sent to. And maybe for some people we care more about certain things — some maintainers will say "every time I push something…" — or, well,
27:42
maybe maintainers don't send that many patches, but we could have any kind of arbitrary rules. I think there was a question here, about whether we're monitoring—
28:03
Well, yeah, we're monitoring a lot of different branches. So the question is: are we monitoring any random branches? We're monitoring the main ones, of course, like mainline and next and stable, and we're doing as much as we can on them. Then we're monitoring subsystem ones, where we do a lot of testing, but maybe a bit less, and the further away you are from the
28:24
mainline master branch, the less testing you get, basically — it scales in that way. So you could have an individual branch, from a maintainer or any individual developer, with maybe only one or two kernel builds and only a few tests, whereas for one linux-next revision, if you do
28:42
200 builds, that's like 200 people doing one build each — that's the way it scales. When someone wants a branch added, they basically send a request; if the community thinks it's a good idea, and there are enough build and test resources, then it can be added. The main rule, I guess, is that it needs to be upstream oriented:
29:02
if it's a downstream branch for your own product, that's not something that would normally be on kernelci.org. Okay — so the question was about
29:27
a generic way to talk to labs. Right now there's the command-line tool called kci_test.
29:41
If I show you — this is basically about the bit where, once we have a build, you go to the "run" box; that's when you start testing. This command-line tool, kci_test, is in Python. There's an abstract class with some methods: one to generate a test definition, another one to submit the test definition.
30:03
And we could add other things, like receiving the results — right now the results are sent directly to the backend. In some cases we could have the test labs send the results back to the place where kci_test is being used, to receive them,
30:20
process them and store them in a database. So these are kind of the main primitive functions. To really make it work in practice, we need to test it with more labs; right now it's basically only working with LAVA, so it's still not very mature. That's where it's going. I think we've run out of time. Yep — thank you very much.