
Testing software on multiple Linux distributions


Formal Metadata

Title
Testing software on multiple Linux distributions
Subtitle
How Cockpit is tested on multiple distributions and regressions are reported upstream
Title of Series
Number of Parts
62
Author
License
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Cockpit is an easy-to-use web-based interface for your servers which relies on many external dependencies for its functionality. This talk describes how Cockpit is tested, how tests are run on multiple distributions, and how issues are reported upstream.
Keywords
Transcript: English(auto-generated)
Okay, welcome everybody. I am Jelle, I work at Red Hat. I am also an Arch Linux developer and I'm involved in the reproducible builds project. Today I'm going to introduce Cockpit. Well, maybe first: who already knows Cockpit or has ever used it? That's about five people. So I'll give a small introduction: what Cockpit is, why we want to test it, what our test infrastructure looks like, and that's it.
So Cockpit is, as someone in my team calls it, "GNOME Settings in your web browser". It's basically a web-based front-end which you can use to see everything on your server. I have my demo here.
In the general overview you can see your software updates, you can see systemd services, there are metrics and performance information you can look at, and this is all for your live system. We also have a terminal, so you can see I'm running a container. I can stop this container... oh wait, now it's tab completion. If I look at my GUI, nothing is running. If I start something here... what do I want to start... I see it's running, and if I look at my server it's running here as well. So basically everything you would normally do over SSH you can also do in Cockpit. It is mostly aimed at Windows administrators, but also at normal Linux administrators who want to quickly get a visual view of their server.
So this is everything you can manage with it: you can format disks, you can configure LUKS, RAID, firewalling, networking, et cetera. Then there's troubleshooting: if you're running RHEL or Fedora, for example, you can see your SELinux issues and fix them, and I think we also have an option to generate an Ansible playbook which you can then apply on all your other servers. We have plugins, so there's a plugin system which is fairly easy, and two of the most prominent plugins we have are Cockpit Machines, which is a libvirt front-end: you can manage your virtual machines, do migration between two libvirt domains, and basically do everything a normal user would want to do with virtual machines.
The same goes for Podman containers: you can create containers and manage them interactively. Cockpit itself has a C part, and the front-end is JavaScript, written in React with the PatternFly UI framework. Cockpit authenticates with PAM, so when you log into Cockpit you log in with your normal username and password.
The architecture looks like this: there's the Cockpit web server, which handles HTTP and serves the HTML and JavaScript assets, and it talks to the Cockpit bridge. The bridge talks to Podman, to libvirt, to almost anything which has a D-Bus API on your server. This can be your local machine, but it can also go over SSH to a remote machine. From the UI you can also run shell scripts. Cockpit supports quite a lot of distributions: Fedora, RHEL, CentOS, Debian, Ubuntu, Arch Linux and openSUSE Tumbleweed.
Now on to testing. As you can see, we have quite a lot of distributions to support, and we have some goals we want to achieve when testing. When somebody introduces a change or a bug fix, we want to run the tests on everything we support. If we update our dependencies, for example PatternFly or React or some JavaScript library we use, we want to run our tests. And if a distribution updates its packages, we also want to test that and see if there are any regressions or issues in Cockpit.
To run our tests we need a test environment, and there are some things we want from it. We ideally want it to be isolated, so the packages on the system we test against don't change underneath us. We also need some functionality: for example, to set up crypto-policies and configure FIPS we need to be able to reboot. Some tests require an external service, like a FreeIPA server or the subscription manager, so we also need to be able to talk to a different machine.
Our solution is to have a virtual machine we can test against. We forward two ports to it: we want access to the Cockpit port, because Cockpit is what we are testing, and we want access to SSH so we can inspect the machine, run commands in it and upload things to it for testing. We also sometimes need to talk to an external service, and for that we use libvirt: we have a shared network between our two VMs, so they can talk to each other.
So now the question is how we make such a VM that we can test on. For this we have a Python script which usually fetches a cloud variant of the distribution we want to test; for example, Fedora has a cloud qcow2 image. We pull it and boot it with a cloud-init ISO: we set up SSH, configure our users with usernames and passwords, and set up SSH keys. From then on we have full control of the machine, so we basically run a bash script which installs the packages we need for testing, because we want to test offline: we don't want anything calling back, changing the machine or fetching updates in the background. We save this image as a qcow2 and upload it to our S3. This is basically how you do it: on my laptop I run the image creation script, give it a distribution we support, and in the end we get a qcow2 image.
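As a very rough sketch of the kind of script this is (illustrative only: the URL, ports, user name and file names are placeholders, and this is not Cockpit's actual image tooling):

    import pathlib, subprocess, time, urllib.request

    CLOUD_IMAGE_URL = "https://example.org/Fedora-Cloud-Base-35.qcow2"   # placeholder
    image = pathlib.Path("fedora-35.qcow2")
    ssh = ["ssh", "-p", "2222", "-o", "StrictHostKeyChecking=no", "admin@127.0.0.1"]

    # 1. Fetch the distribution's cloud qcow2 image.
    if not image.exists():
        urllib.request.urlretrieve(CLOUD_IMAGE_URL, image)

    # 2. Pack user-data/meta-data (user, password, SSH key) into a cloud-init seed ISO.
    subprocess.run(["cloud-localds", "seed.iso", "user-data", "meta-data"], check=True)

    # 3. Boot the image once with the seed attached and SSH forwarded to port 2222.
    vm = subprocess.Popen([
        "qemu-system-x86_64", "-m", "2048", "-enable-kvm", "-nographic",
        "-drive", f"file={image},if=virtio",
        "-drive", "file=seed.iso,if=virtio,format=raw",
        "-nic", "user,hostfwd=tcp::2222-:22",
    ])

    # 4. Wait for SSH, install the packages needed for offline testing, shut down.
    for _ in range(60):
        if subprocess.run(ssh + ["true"]).returncode == 0:
            break
        time.sleep(5)
    subprocess.run(ssh + ["bash -s"], stdin=open("install-packages.sh"), check=True)
    subprocess.run(ssh + ["sudo", "poweroff"])
    vm.wait()
    # The resulting qcow2 is what gets uploaded to S3.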
So now we have something we can test against. We also automated this whole process: we have a GitHub workflow which runs every night and checks whether our image is out of date, where out of date means older than seven days. We then open an issue on GitHub saying this image has to be refreshed. Our CI picks up this issue, creates an image, and if that run is successful it uploads the image to S3, so in the future when we run tests we can fetch the image from S3. It also opens a pull request; we run the tests against it (I'll get into that later), and if everything is successful you can see on the pull request that the tests have completed, we merge it, and then we have an updated image.
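The nightly freshness check boils down to something like this sketch (the repository name, file layout and token handling are placeholders, not the project's actual workflow code):

    import datetime, os, requests

    MAX_AGE = datetime.timedelta(days=7)
    mtime = datetime.datetime.fromtimestamp(os.path.getmtime("images/fedora-35.qcow2"))

    if datetime.datetime.now() - mtime > MAX_AGE:
        # Open a GitHub issue asking for a refresh; the CI picks such issues up,
        # rebuilds the image, uploads it to S3 and opens a pull request.
        resp = requests.post(
            "https://api.github.com/repos/example/bots/issues",   # placeholder repo
            headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
            json={"title": "Refresh fedora-35 image",
                  "body": "The image is older than seven days and should be rebuilt."},
            timeout=30,
        )
        resp.raise_for_status()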
Now we need somewhere to run these tests in CI, and we do this in a container, because that's an easy, fixed and reproducible way to run them. We create a container image with libvirt and our testing dependencies, such as a web browser. When we run the tests in this container we do need /dev/kvm access, which is a bit tricky if you want to run it somewhere like OpenShift, and we also want to test against multiple browsers. What we use in production is a bare-metal CI, basically machines we provision ourselves; we run Fedora IoT on them and just spawn task containers which pick up the jobs. We also have an OpenStack environment we can use, and if both environments are full we can fall back to AWS. So you can see that if you run your tests in a container image, it's quite flexible where you can run them.
This diagram describes the workflow of how we run our tests. When somebody opens a pull request, GitHub sends a webhook with the pull request that needs testing to a Podman container we have running a webhook service, which puts the task in a queue. The workers, when they are idle, periodically look at the queue, pick a task to work on, execute it, and then report back to GitHub.
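The worker side can be pictured with a small polling loop like the one below; the talk does not say which queue technology is used, so this sketch assumes an AMQP queue accessed through the pika library, with made-up queue and script names:

    import json, subprocess, pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("amqp.example.org"))
    channel = connection.channel()
    channel.queue_declare(queue="test-tasks", durable=True)

    while True:
        method, _properties, body = channel.basic_get(queue="test-tasks")
        if method is None:
            connection.sleep(30)              # nothing queued: wait, then poll again
            continue
        task = json.loads(body)               # e.g. {"pull": 1234, "image": "fedora-35"}
        subprocess.run(["./run-tests", "--pull", str(task["pull"]),
                        "--image", task["image"]])
        # ...report the result back to GitHub here (omitted)...
        channel.basic_ack(method.delivery_tag)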
So that's how we run our tests, and now I'm going to discuss what our test framework looks like. We have a web application to test, but we also need to be able to inspect the system state: in some tests we, for example, stop a container and then have to check in the browser that it has actually stopped. That requires more functionality than a normal web test framework gives you. This started back in the day, before I joined: we used PhantomJS, which got deprecated at some point, and the team then discussed where to move next. We looked at several alternatives, but most of them download a browser for you, or do other things we don't want, or are written in JavaScript, while our PhantomJS tests were written in Python. So we decided to write our own framework.
Basically, we use the Chrome DevTools Protocol with a small Node.js module. This is Chrome's debugging interface, and you can control the whole browser with it: you can resize it, you can execute JavaScript. This is good enough for us, and we also have SSH access into the machine. Let me demonstrate how this works.
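As a rough idea of what driving a browser over the Chrome DevTools Protocol looks like (the real framework goes through a small Node.js helper; this standalone Python sketch assumes Chromium was started with --remote-debugging-port=9222 and that the requests and websocket-client packages are available, and the Cockpit port is a placeholder):

    import itertools, json, requests, websocket

    # Find the first page target Chromium exposes and connect to its debugger socket.
    target = requests.get("http://127.0.0.1:9222/json").json()[0]
    ws = websocket.create_connection(target["webSocketDebuggerUrl"])
    ids = itertools.count(1)

    def cdp(method, **params):
        # Send one protocol command and wait for a reply; a real driver would
        # also handle asynchronous events arriving in between.
        ws.send(json.dumps({"id": next(ids), "method": method, "params": params}))
        return json.loads(ws.recv())

    cdp("Emulation.setDeviceMetricsOverride",            # resize the viewport
        width=1280, height=800, deviceScaleFactor=1, mobile=False)
    cdp("Page.navigate", url="https://127.0.0.1:9091/")  # forwarded Cockpit port (placeholder)
    cdp("Runtime.evaluate", expression="document.title") # execute JavaScript in the page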
So this is running one test. Is it readable, or should I enlarge my font? The TEST_OS environment variable tells the framework which image I want to test against, so here it's fedora-35. Then I set another environment variable which controls whether I want to run in headless mode or not; in this case we're not using headless mode, we're going to show the browser. Then we call the test class, which is the test we want to run. The cool thing about our test framework is that it also has the option to "sit" in the test, so when I spawn the test... oh, demo effect.
So now we start our test image. This takes a while; we do some setup and log in. And now the execution stops, because it's a bit spammy otherwise. So this is the test we are running, let me see. The first thing we did is some cleanup, because this is a non-destructive test, so we can run it basically any time. The way I'm running the test here it always spawns a VM, but there's also an option to pass an SSH port and a web browser port, and it will reuse an already running VM. For non-destructive tests that basically means they clean up after they're done: if I run this test again against an already running VM it succeeds, and any modifications it makes during the test are restored afterwards. The first line was login_and_go, so we log in and reach the overview page, and now there's a sit(): it basically stops execution and waits for me to press Enter to continue. This is a useful way to debug our tests. Now I press Enter and it executes the following line: it changed the MOTD, and you can see the MOTD is also shown in Cockpit, and then I can press Enter again, et cetera. So this is how the tests are run locally on a developer machine.
What you just saw was basically this process: we spawn the test machine, we launch a browser, and our test script talks to the Chrome DevTools Protocol driver to tell the browser where it has to go and what JavaScript to execute. We use SSH to, for example, run the printf command, so that is executed on the host itself over SSH.
Something else we do in our tests is look at the journal. If there are any messages in the journal which we don't expect, the test will fail. This is basically how we see whether there are any SELinux policy violations or other errors that happen during testing, which we want to catch and then report upstream. Sometimes we have to allow messages, for example error messages which happen because we killed a service or something; we have to allow those. We also can't really run all tests on all distributions, so we skip some.
This one, for example, is skipped because NetworkManager on Ubuntu lacks some functionality, and that's done with a Python decorator. Then there are the non-destructive tests, which I already explained: when you run them they clean up after themselves, so they don't interfere with other tests which run afterwards. We do that because if you ran a test and then rebooted the machine to get a clean state, it would take a long time to run your tests: right now they take 40 to 50 minutes, and if you rebooted for every test it would add something like three minutes per test. That's not very nice when you're developing.
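As an illustration of the shape of such a test, with the skip decorator and the non-destructive marker; the names below are modeled on what the talk describes and should be read as a sketch, not as Cockpit's exact test API:

    import testlib   # Cockpit's Python test library (import path assumed here)

    @testlib.nondestructive               # cleans up after itself, so the VM can be reused
    class TestSystem(testlib.MachineCase):

        @testlib.skipImage("NetworkManager feature missing", "ubuntu-2204")
        def testMotd(self):
            # Change state on the VM over SSH and register a cleanup to restore it.
            self.machine.execute("echo hello > /etc/motd")
            self.addCleanup(self.machine.execute, "rm -f /etc/motd")
            # Drive the browser and check that the change shows up in the UI.
            self.login_and_go("/system")
            self.browser.wait_in_text(".motd", "hello")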
But before you can actually run these tests, there's one more step: the image we created doesn't yet contain the package we want to test, so we need to prepare the image. For that we have image-prepare. We run it, and the script builds a Cockpit package in the machine: we spawn the VM and build the package inside it, because we support multiple distributions and can't always build on the host itself. The package which comes out of that is then installed in a freshly spawned VM. We can also run some setup scripts; for example, we might want to start Podman if we want to test Podman, and we run that as a prepare step. This image is then saved locally, and if you then run the test with the same TEST_OS it won't pick the raw image, it will pick the prepared image.
In CI we actually use a different entry point to run the tests, because in CI we are more critical of our tests. If a test fails in CI we don't immediately fail the run; we retry it up to three times, and only if it has failed three times do we fail the test. This is because some tests can be flaky; flakiness is always something you want to eliminate, but it's nice if you can at least retry, because if one out of your 60 tests fails just because it's flaky, that's really annoying. The workflow also works the other way: if we add or change a test, we automatically run that test three times in a row to make sure it isn't flaky, and if one of the three attempts fails we do fail the test, so we don't introduce flaky tests which fail depending on timing.
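A minimal sketch of the retry policy just described, assuming a run_test callable that returns True on success:

    def run_with_retries(run_test, attempts=3):
        # In CI a failing test is retried; it only counts as failed if every attempt fails.
        for attempt in range(1, attempts + 1):
            if run_test():
                return True
            print(f"attempt {attempt} failed")
        return False

    def verify_not_flaky(run_test, runs=3):
        # A new or modified test is run several times and must pass every time.
        return all(run_test() for _ in range(runs))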
That does happen sometimes: when I write a test, my own machine is slow to execute it, while our bare-metal CI runs maybe ten times faster, so this is a good way not to introduce flaky tests. We also have this concept of "naughties", which I'll explain later; these are tracked every time you run in CI. Next slide. This is an example of a pull request... I can also open one I have open, maybe a nicer one. Yeah, this one.
And yeah, this is what I want to talk about: we also made a custom logging page for the output of our tests, and it's actually pretty cool. It is a web page where you can see all the tests which failed; the green tests are the ones that succeeded, and the yellow tests are skipped, either because there's a known issue (it shows the known issue here, so we skip the test) or because the distribution doesn't support the feature. Here, for example, is a failing test: what you get is the normal test output, and we also save the journal, so if for example a service segfaulted you can look in the journal, see that this daemon segfaulted, and that is probably why the test failed. We also take a screenshot if the test fails; here I can clearly see that something is missing. I'm not sure why this test fails, but sometimes it is useful to have screenshots.
Developers can also trigger these tests manually with a small command, so if something fails and they think it's flaky they can just retry it, or they can run tests against an image which is not in the default test set. Now I'll explain the naughties. The naughties are how we keep track of downstream issues. For example: we build a Fedora 36 image, we test it, and something in libvirt is broken. We investigate and find out whether it's a bug, whether we have to use the API differently, or whether the test is simply flaky.
If it is a bug, we report it upstream or downstream, we create a GitHub issue referencing the bug report, and we make a file in which we put the backtrace of the failing test. This file is put in our bots repository, and then this test is automatically skipped. Every time the test runs in CI it still runs, and if it fails the framework says: I know this is a known issue, so I'll skip it. If it fails, it also updates the GitHub issue with the last time it failed; if it succeeds, it doesn't. I can actually show this.
So here it is: this is the issue, with the bug report referenced, and then our CI ran, the test failed, and it updated the comments. You can see every time it fails; it keeps only the ten latest comments so we don't endlessly spam the issue. And if the failure no longer occurs, if the test succeeds and the last comment wasn't updated for 21 days, we make a PR to remove the naughty. A naughty looks like this: you can use regexes, because for an SELinux policy violation, for example, the PID in it always changes. So when this no longer fails we create a PR to remove the file, and if the test still succeeds you can remove the naughty; it's all automatically cleaned up. We also check whether the bug report we made was closed or not, and if it wasn't closed we report back, like: hey, we didn't see this issue anymore, maybe you want to close it.
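Conceptually, the naughty matching works roughly like the sketch below: each known issue is a file containing a pattern for the failure output, and a failing test that matches one is treated as a known issue instead of a failure. The file layout and function name are invented for illustration:

    import pathlib, re

    def match_naughty(image, test_output, naughty_dir="naughty"):
        # One file per known issue, kept per image in the bots repository.
        for naughty in pathlib.Path(naughty_dir, image).glob("*"):
            pattern = naughty.read_text()
            # PIDs, timestamps and similar details change between runs,
            # so the stored backtrace is effectively a regular expression.
            if re.search(pattern, test_output, re.MULTILINE):
                return naughty   # known issue: skip the test, update the GitHub issue
        return None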
Another cool thing about our test CI: one of our external dependencies is a UI framework which can introduce regressions with updates. We could inspect this visually, update the node modules, check it out locally and look whether everything is okay, but that's too much work. So my colleague wrote the pixel tests, which are part of our test suite. In the test code you can say "take a screenshot", and when the test runs in CI the existing screenshot, which we store in a git submodule, is compared to the current screenshot; if it changes, the test fails. This is a nice way to catch regressions from our UI framework, or cases where somebody changes something in the GUI or in our CSS which has unintended effects.
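A simplified sketch of the pixel-test idea: take a screenshot through the DevTools protocol and compare it byte-for-byte with the reference stored in a git submodule (real implementations typically tolerate small per-pixel differences); the cdp helper is the one sketched earlier, and the paths are placeholders:

    import base64, pathlib

    def assert_pixels(cdp, name, reference_dir="test/reference"):
        # Page.captureScreenshot returns a base64-encoded PNG of the current page.
        shot = cdp("Page.captureScreenshot")
        current = base64.b64decode(shot["result"]["data"])
        reference = pathlib.Path(reference_dir, f"{name}.png").read_bytes()
        if current != reference:
            # Keep the new screenshot around so a human can review and update the reference.
            pathlib.Path(f"{name}.updated.png").write_bytes(current)
            raise AssertionError(f"pixel test {name} differs from the reference image")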
This happens for multiple layouts: we have one test case for desktop and one for mobile. We also have some code coverage. Because we use the Chrome DevTools Protocol, we get access to Chrome's native coverage support, the same thing as the coverage tab in the developer tools, and you can also read it through the debug protocol. This is fairly recent, and on new pull requests it also puts comments when something is not covered by tests.
What I just talked about is all infrastructure we maintain ourselves to test things on multiple distributions, but Fedora also offers a system to test packages in CI, and it's pretty cool. Basically you define a YAML file in your project, you say which Fedora versions you want to test, you give it the packages you need to have installed for the tests, and then you give it a command, which can be a shell script that runs the tests. We also use this; it's what you saw, for example, in this PR: this is the Testing Farm. It uses Fedora's existing build infrastructure, Copr, to build the package from the pull request, then it gives this package to the Testing Farm, which installs it, and you get one small VM with I think one gigabyte of RAM and 20 gigabytes of disk space where it can run your tests. For us this means we cannot reboot, because rebooting would reboot the test VM we're testing in, so we can only run the subset of our tests which are non-destructive and don't affect the host.
Every test run we have, we also store in a database and export to Prometheus. We store the test sets we run, for example for Fedora: how many tests we ran and how many were skipped. Then we have a separate table which records individual tests, with the test name, whether it was retried and how long it ran. From this SQLite database we basically get some metrics, and for that we have a Grafana dashboard, of course. I looked up how many tests we actually run every month, and it turns out we run about two million tests every month on all the distributions we support; this is for Cockpit, cockpit-machines, cockpit-podman and three other plugins. In this dashboard we also track how often a test failed: if a test fails like 60% of the time, it's probably something we should investigate and see whether it's flaky or something we can fix.
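The bookkeeping could look roughly like this sketch: one row per individual test stored in SQLite, which a dashboard can then query; the schema and values are illustrative, not the project's actual ones:

    import sqlite3

    db = sqlite3.connect("test-results.db")
    db.execute("""CREATE TABLE IF NOT EXISTS tests (
                    run_id TEXT, image TEXT, test TEXT,
                    outcome TEXT, attempts INTEGER, duration REAL)""")
    db.execute("INSERT INTO tests VALUES (?, ?, ?, ?, ?, ?)",
               ("pr-1234-run-1", "fedora-35", "TestSystem.testMotd", "passed", 1, 42.3))
    db.commit()

    # The kind of question a dashboard might ask: which tests fail most often?
    query = """SELECT test, AVG(outcome = 'failed') AS failure_rate
               FROM tests GROUP BY test ORDER BY failure_rate DESC LIMIT 5"""
    for row in db.execute(query):
        print(row)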
What I just talked about also feeds into something we have defined as the error budget. This is basically an SRE, site reliability engineering, concept; it's meant more for running services, but we also apply it to tests. Basically you say: if X% of my tests fail randomly, we don't want an automatic alert, but the graph becomes red. The same happens if the tests take too long, or if the queue length of the test queue, the test suites we need to run, is higher than we expect; then this is something we should fix. We had this once this year: the tests took longer than the time we had allocated for them, and we did some restructuring of how we run our tests to resolve it. We took a look at the longest test: why does this test, which does something with dracut, take so long? It turned out we had three kernels installed, and dracut has to regenerate the initramfs for every kernel, so three times. This is something we do to basically keep ourselves happy: so that CI doesn't take too long and the tests don't fail randomly too often.
Yeah, that was it. I also looked up how many bugs we actually found and filed for the distributions. For Debian it was only three this year, from what I could roughly find, and for Fedora, RHEL and CentOS we reported about 20 bugs in the last few months. Then there's all this other fun stuff: these are the top offenders, the things which most often make our tests, well, our image refreshes fail; SELinux is of course number one. And because of this slide, here is what we would like. What happens now is we pull in a new Fedora 36 every seven days, we run our tests, and then we find out something is broken. That's a bit annoying; we would rather fix the problem up front.
So if somebody releases a new version of something Cockpit requires, we would like them to run our tests. This is what we call reverse dependency testing, and it is certainly not applied yet; it's something we would really like to have in the future. For example, a new libvirt comes out, they build it, and the Fedora CI runs the Cockpit tests against that libvirt. If they fail, they can investigate: maybe we need to fix something, or hold back the package, so we don't have to report an issue after the fact. Yeah. Alright, any questions?