Rethinking openSUSE release tooling and the build service
Series: openSUSE Conference 2018 (talk 6 of 55)
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/54547
Transcript: English (auto-generated)
00:06
Alright, I guess we'll get started. So for those that don't know me, I spent most of my time in the last year working on the release tools, which are the bots and things that you interact with in Factory, Leap, and SLE, and a bunch of the things that you don't.
00:21
So this talk is going to cover basically some problems that we've run into and possibly some approaches for resolving them so that they aren't pain points anymore. So first of all, let's cover the goals. I imagine most people aren't familiar with the current process, or at least not in detail, so we'll
00:41
cover that somewhat briefly just so everyone has an idea of kind of what's involved. Identify some of the problems that have come about in that area and propose some solutions, some things that we need basically to make those problems go away. And explore a fresh approach that will hopefully make those things go away.
01:03
Alright, so first of all, let's look at the current workflow. So again, this is going to be kind of summarized, but just to give everyone an idea of what's going on. So to start off with, you're probably most familiar with this, which is the general idea of the devel projects that eventually get submitted into Factory: everyone branches into their own project, does development there, submits to the devel projects, sometimes
01:22
directly to the devel projects, and then on to Factory. But what really happens in the Factory process is a bit more complicated than this. So to give you an idea, some of you may be familiar with the staging projects that precede entrance into Factory, and there's a whole bunch of tools that deal with getting the packages into those staging projects and evaluating what happens in those
01:43
projects, and there are bots as well that you, like I said, have probably interacted with, that review various components of the request, such as the legal review. And all of these, everything that I put in the cloud, is basically not running on OBS, so these are things that are necessary, but they're not part of OBS.
02:04
And of course, this doesn't include Leap, which has a separate component, the crawler, that automatically submits packages from Factory to Leap, and it has its own copy of basically all the clouds that you see for Factory, its own set of all that. In addition to that, we also have SLE, which is doing very much the same thing at this
02:22
point, and a crawler submitting from SLE, so this crawler has to keep track of the sources of the packages and submit basically in both directions to Leap. I also put something on there for the maintenance bots, of which there are some, so again, everything that's in the clouds is basically not running on OBS. In addition to that, obviously, individual devel projects have their own tools for
02:44
updating packages and things like that, as well as individual packages may have scripts that are run as part of their update process, again, outside of OBS. So basically, the point I'm trying to drive home is there's a whole lot of stuff that is not in OBS that would probably be preferable if it could live closer to the actual packages
03:01
and things that it's tied to. So before I dive too much into that, I want to cover some of the efforts over the last year or so, just to give you an idea of the fact that we have made a bunch of changes and that the outcome has been, I think, positive, so that we can have some sort of, I guess, credibility for making more changes going forward.
03:21
So I think some of my primary goals, anyway, were to automate as much as possible. Anything that's mundane or people don't want to do, make it automatic, because people tend not to do mundane things. Refactor as much of the code as possible to share common features. So again, this has mostly been targeted on the bots and whatnot, as well as refactoring them to be able to be used on all the products.
03:41
Some of them were specific to Factory or Leap and things like that, so making them more generic. Additionally, a lot of the bots run in different modes on the different products. So there's a lot of options where they'll basically implement some policy on one of the products and not elsewhere. So basically trying to abstract that all out as config switches rather than some of the
04:02
various methods used before. So it's, again, easier to manage all this. One of the other goals has been improving the tool communication. Certain things like the repo checker were not terribly transparent when they had issues. Mostly, the release team were about the only people that could look at the logs and really find out what was going on. So now some of you may notice that there's a bunch of comments that get dumped whenever
04:25
it has problems, which, of course, introduces its own issues. And one of the other areas is, since we have to run all these services outside of OBS, we obviously have to deal with all the standard problems in deploying things to production, i.e. monitoring them, knowing when they go down, things like that.
04:41
So we've been improving those things. So as I mentioned, Leap and SLE picked up a number of the bots and tools that were used by Tumbleweed, and the new package list generator was basically built to work with all the products at once. So these are kind of general improvements that are being made.
05:01
And then I guess the graph there, just to give you an idea, we've kind of ramped up the activity in the main release tools over the last year. So again, to summarize, all the main tools basically are shared. And I have basically been posting details whenever we make changes, if you're interested in reading more. So one thing that I'm really interested in is confirming, when we make changes, as well
05:25
as we can, that we actually had the desired effect and that we didn't cause other problems. So one of the other things that I introduced was metrics.opensuse.org. So there's a whole bunch of graphs there. I'm gonna cover some more easily understandable ones without a lot of extra context, just to basically prove that some of the things we deployed over the last year
05:42
had a positive impact, and some other interesting things. So the first one here is the staging bot, which again, if you submitted to Factory, you may have seen. So it does a large portion of the first-round staging, things like that. Obviously, Dominic and others are there cleaning up whatever remains.
06:02
But this graph basically shows you, after we deployed it there around 5/1, that you can see the takeover of the green basically doing all the initial staging work. Interestingly enough, you can see the black lines where there's no data. Those were actually OBS outages. So some of those things are very obvious to see in the data here. Similarly, Leap basically has a similar trend there.
06:22
The dark purple is the staging bot in this graph. So again, taking over a lot of that work. Another one of the goals was to basically try and stage things as quickly as possible, because a lot of the bots can't do anything until it's staged. So you don't get feedback until that happens. So this is basically a graph showing in particular the adi stagings, which is
06:43
things that aren't in the rings. So basically things that are less important, I guess, or we don't test as much. But anyway, you can see here that before, it's very clear, there in the middle around 5/1 again. After the deployment of the staging bot, there's a huge drop in the time before the first staging. So everything's basically roughly under an hour. Those spikes after that, I believe, are almost entirely requests
07:03
that fit into our pseudo state of ignored, which I haven't filtered out in this graph. But anyway, the vast majority is below that line. Another interesting thing, just some code cleanups: there were a bunch of empty commits being made to some of our internal components that the tools share information with.
07:21
So you can see the big spikes there in the middle. And then after cleaning up all those basically empty, useless commits, you can see that the rate of commits drops drastically there. This one's kind of interesting. This is the weekly releases of Tumbleweed. So you can see that some weeks we actually hit seven, which is all of them. You can see there right on the left side
07:42
at the end of the year, that was actually basically the end-of-year vacation. So we actually had no releases. So again, all these things are kind of interesting to see on the graphs. Again, this isn't so much about demonstrating anything, I just find this interesting, but this is the number of devel projects for Factory over the last, what is that, a year almost, I guess.
08:01
So you can see it's been steadily climbing from somewhere around just over 170 to over 200, kind of interesting. This is another graph that basically demonstrates the percentage of requests staged in each letter staging that is part of the process. So you can see that we generally have a couple big stagings and lots of little ones.
08:21
To me, I just find this interesting because, basically by providing all these graphs, it's very easy to recognize trends that otherwise would be hidden. So to summarize, from those graphs, we were able to see deployments very clearly. So when we made changes, they had a significant effect. We were also able to see, like I said, the OBS outages, system failures, things like that.
08:41
It's all very clearly identifiable, so that we can verify that we're not having adverse effects. I guess that was the other point with the weekly releases. You'll notice there was no major drop-off or anything like that. It was just kind of standard fluctuation the whole time. So we didn't have any adverse effect, I guess, on the rate. So what's left to be done?
09:02
So I'm gonna cover some of the pain points that I've identified when working on these tools and the general kind of problems that come up. So one of the biggest ones is the low transparency. So people that aren't part of the release team have a hard time, many times, finding out if something's going awry or what is actually going on.
09:20
All this stuff that's deployed is, again, not terribly visible. So you don't necessarily know if you're just waiting in a queue or if one of the bots is dead. You don't know any of this. Many of the bots don't necessarily indicate when they are waiting on something. They basically just tell you when they're done. So again, the release team members can go look in the logs,
09:40
but it's not very useful for everyone else. Yeah, and the overall workflow. So a lot of the bots will add each other as reviewers after they complete what they're doing, but it's hard as a contributor to go, okay, well, the first bot failed. I'm gonna fix my thing. And then the next bot fails, like how many steps actually are there? Things like that, kind of the overview. Another problem is the notifications.
10:02
So right now, a lot of the bots report back. They kind of abuse the comment system because there's no way for them to basically post a status report. So many of the bots will post something that's useful if someone wants to go look at that request and understand what the bot did, but it doesn't necessarily require someone to come look at it. But when we post those comments that are basically status reports,
10:21
people get notifications. So right now, if you submit packages to Factory, you'll get a crap ton of emails basically for all the different stages. So I think one of the problems is people tend to just ignore those emails, which isn't terribly useful. Additionally, it'd be nice if you had some sort of overview to basically look at all the things that I, as a maintainer, need to do.
10:42
Of all my packages submitted to Factory, I may have submitted 20 in the last week. There are two of them that are stuck, and I wanna quickly identify which ones those are and what I need to do. And then the last note, I guess, is that some of the bots aren't able to clean up their comments as well. They're, again, used as a status report. So for example, the repo checker posts comments
11:01
on devel projects, which it can't remove. So you kind of don't know whether this is still actually a problem. It's kind of ambiguous. So that's another issue. Issue tracking. So some of the bots will send out emails. There's a bot that sends out emails to kind of remind people outside of OBS, in addition to OBS's emails about build failures.
11:22
There are a lot of other issues we'd like to notify about, to basically ask the maintainer, hey, are you there? Can you do something about this? That whole kind of escalation process, I think, really could be handled as proper issues attached to the packages. So again, it's all tied together rather than basically all these emails and tools that live outside OBS.
11:43
In addition, the release team uses a diary outside of OBS to basically communicate what we're doing. We also use the comments on the projects, but everyone sees those and, again, it can be spammy with emails. That could also be done with issues related to the staging projects. So again, some sort of generic issue management that's all tied to OBS
12:01
would be really nice for all this. And being able to cross-reference, because that's one of the biggest problems: a lot of times these things are all separate and you end up having to post all over the place that you did something, whereas it'd be nice to just reference it. Complexity is another issue. So all the tools, since they live outside OBS, all have to do their own bootstrapping,
12:20
they have to have their own accounts. A lot of them have caches, some of them fairly extensive, hundreds of gigs, things like that. They all basically have to pick up where they left off, figure out what the state is. There's a lot of just muck basically that isn't particularly interesting to what the bot actually wants to accomplish that they all have to have. And one big one I think is the repo checker,
12:42
which essentially has to re-implement the way OBS does project stacking, just because it ends up having to verify. It's basically doing an install-time check of all the packages so we can tell if packages in stagings are actually installable, and it basically has to re-implement essentially what OBS is already doing to do builds,
13:00
which is the way the staging projects layer on top of Factory and things, which again is not ideal. Another of the issues, at least a minor one with the tools: it would probably be nice if we had something like Git version control for some things. But the other issue is some of the metadata
13:24
that exists related to packages, and the config, is outside the realm of requests. Certain things, special cases, are changeable by request, but other things aren't. So some of the changes to the project config that need to go along with packages that are staged, that's just something the release team has to manage. So basically making those changes both in staging
13:41
and then when those requests are accepted, making them in the actual project. So having them either live in the realm that can be changed or a special request for them or something along those lines so that those changes could be possibly A, even created by people outside the release team or B, added to the stagings and then automatically carried over. We can obviously implement this in our release tools,
14:01
but again, it's just a lot of stuff to live outside OBS. So, all the bots right now do polling. Obviously there's been some movement to try and switch over to an event-based system. So again, this would probably speed up the bots' response time in a variety of areas and reduce their number of calls to OBS.
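As a rough sketch of what that could look like, here is a minimal event consumer, assuming OBS publishes its events over AMQP the way the openSUSE instance does; the broker URL, exchange name, and routing key below are assumptions rather than guaranteed values and would need to be checked against the instance's pubsub documentation.

```python
# A minimal sketch of an event-driven bot, assuming OBS publishes events over
# AMQP. The broker URL, exchange name, and routing key are assumptions.
import json

import pika

AMQP_URL = "amqps://opensuse:opensuse@rabbit.opensuse.org"  # assumed read-only account
EXCHANGE = "pubsub"                                         # assumed topic exchange
ROUTING_KEY = "opensuse.obs.request.state_change"           # assumed event key


def on_event(channel, method, properties, body):
    event = json.loads(body)
    # React only when something relevant happens, instead of polling OBS.
    print("request", event.get("number"), "changed state to", event.get("state"))


connection = pika.BlockingConnection(pika.URLParameters(AMQP_URL))
channel = connection.channel()
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange=EXCHANGE, queue=queue, routing_key=ROUTING_KEY)
channel.basic_consume(queue=queue, on_message_callback=on_event, auto_ack=True)
channel.start_consuming()
```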
14:22
That, again, would be nice. Authentication was another one that I mentioned earlier. So all the bots have to have their own accounts. Not all of this account information is typically in shared environments, and personally, if someone wanted to do something nefarious, it would possibly be harder to track down, because some of these bots have relatively elevated permissions.
14:41
So again, that's kind of a potential issue. And there was an incident with authentication where one of the bots was actually blocked because they were doing some migration and it just kept saying hello. So that whole, the whole issue of authentication would be nice if we could avoid it. So one of the other problems, modifying all this stuff,
15:00
again, can require creation of all these accounts. We have to deploy things, again, outside OBS. You can't just turn something on. And so some of this can be difficult to simulate based on the way it all interrelates. So again, for people that aren't necessarily familiar, this ends up being like maintaining lots of little services that run outside of OBS. So it's just a bunch of work
15:22
that hopefully we could avoid. As if that weren't enough, we do all this work and it only works for the main products. So a lot of these tools can't work for the devel projects, or if they did, we'd possibly have to have additional resources to run them. Again, it'd be nice if we could run them on the same workers as OBS
15:40
so that it scales a lot better. So some of the potential things that would be nice to apply to devel projects, possibly even the more involved devel projects, would be the staging process itself, or at least some of the review bots. The runtime checks have been requested a number of times by devel projects, because they can't see any of those problems without manually checking until they submit to Factory.
16:01
And again, any of those custom scripts that exist would be really nice to have run there. So enough of all the pain points; what can we boil this down to, what do we actually need to resolve a lot of this? I think the main piece would be some sort of general continuous-integration-style setup like you've seen elsewhere,
16:21
where basically a lot of these scripts, which aren't of interest to the OBS team to maintain, and I don't think they should be, could at least run on OBS so they scale properly. That would resolve a lot of the deployment problems, things like that. And the events and all that bootstrapping, I think a lot of that could be done away with, because basically you just have a simpler job that runs in one particular context and just gets triggered when it's supposed to run.
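A minimal sketch of what one of today's bots could look like as such a triggered job: no account, no cache, no bootstrapping, just the context the CI runner hands it. The environment variables follow GitLab CI's predefined names, and the rpmlint call is only a stand-in for whatever check a real bot would run.

```python
# A stateless check job: the CI system decides when to run it and records the
# log; the job itself only reads its context and returns a verdict.
import os
import subprocess
import sys

commit = os.environ.get("CI_COMMIT_SHA", "unknown")     # revision that triggered the job
package = os.environ.get("CI_PROJECT_NAME", "package")  # one job runs in one package context

# Stand-in check: lint the spec file shipped with the package sources.
result = subprocess.run(
    ["rpmlint", f"{package}.spec"], capture_output=True, text=True
)
print(result.stdout)
if result.returncode != 0:
    print(f"check failed for {package} at {commit}", file=sys.stderr)
    sys.exit(1)
```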
16:41
Another interesting thing: some of the bots have to manage artifacts outside of OBS. They have to deal with storing them somewhere, and again, it's all kind of a mess. There are other tools out there that would manage that sort of thing, kind of like the same way that OBS stores RPMs right now,
17:01
but basically being able to store arbitrary stuff attached to these CI jobs would be nice. Another nice thing would be, like I said, if everything lived in Git, especially the configs and meta and all that kind of stuff, so that it all lived in the same version control system as the package source. Then, for the same reason that I think it's been adopted elsewhere, things like changing your Travis config
17:21
when you add a new component to test more things. Being able to do that all together really makes a lot of sense rather than having them be separate. Obviously, if we're gonna use something like Git, we would need one of the large-file storage solutions to avoid having to store the actual upstream sources in the Git tree.
17:42
And back to what I said earlier, having some sort of package-level issue tracking where maintainers can very quickly go and see: okay, hey, overnight I submitted a bunch of packages, two of them have issues, one of them's not installable and the other one doesn't build in the staging, and very quickly being able to identify that.
18:01
So those are, I think, basically the three big things there that would resolve the majority of these problems. So, do those three things sound familiar? I think they do. I think it sounds like GitLab. So interestingly enough, if we used GitLab, I think we'd get some nice bonus features for doing this.
18:21
So for example, per-line reviews, things like that, where you can have threads, so it makes the review process a little easier by specifying what the issue is. Cross-referencing works, so you can reference between requests and issues and all that stuff. And when requests go in, you can have actions attached to them, so basically you can close out issues or all sorts of other stuff.
18:43
Something that'd be really nice for the release team would be being able to group all these things into stuff that's visible on the actual main tool. So for example, we have some custom dashboards that do some of this, but we get this all out of the box if we use something like labels or milestones to attach all of the requests.
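As an illustration of the kind of release-team overview this would give almost for free, here is a small query sketch using the python-gitlab library; the URL, token, group path, and label name are hypothetical placeholders, not anything that exists today.

```python
# List everything still open that carries a given staging label, across all
# package projects in a group. Group path and label name are made up here.
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.org", private_token="TOKEN")
group = gl.groups.get("factory/rpms")  # hypothetical group holding package projects

for mr in group.mergerequests.list(state="opened", labels=["staging:A"], all=True):
    print(f"{mr.title} -> {mr.web_url}")
```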
19:03
Interestingly enough, it also has a container registry, which is something that OBS has been adding recently, and the mobile interface works well. Lots of little stuff that we get as bonus features. But, as we're all probably thinking, we have to at least have the base features to be able to do builds.
19:22
So interestingly enough, the GitLab upstream is already dealing with one of the biggest problems, which is the interrelation between projects, so that you can basically have CI jobs that depend on each other across different repositories. So there's already a lot of work there, but I think generally it boils down to generating the repo metadata, so you actually have a repository
19:40
that you can install things out of, and a basic scheduling in the same way that OBS does it, which again, these are all kind of interrelated, the stacking and everything, like it has to be done in the repo checker, all that is basically tied together. So interestingly enough, I think if we were to do this, at least for the staging workflows, things like that, I think it could be achieved
20:01
with something like a source sync, similar to the way the OBS-IBS bridge works, where you basically sync all the sources back to OBS, and then additionally you could expose the binaries the same way the SLE binaries are done, where they weren't built on OBS, but they are accessible there.
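A rough sketch of what such a source sync could look like, driving the osc command line to push a package's files from a Git checkout into an OBS project; the project and package names are hypothetical placeholders.

```python
# Sync one package's sources from a Git working tree into an OBS working copy
# and commit a new revision there. Names below are placeholders.
import shutil
import subprocess
from pathlib import Path

OBS_PROJECT = "openSUSE:Factory:Staging:A"   # hypothetical target project
PACKAGE = "adminer"                          # hypothetical package
git_tree = Path("adminer")                   # Git checkout holding the sources

# Check out the package from OBS; osc creates OBS_PROJECT/PACKAGE locally.
subprocess.run(["osc", "checkout", OBS_PROJECT, PACKAGE], check=True)
osc_dir = Path(OBS_PROJECT) / PACKAGE

# Replace the OBS working copy's files with the ones tracked in Git.
for stale in osc_dir.iterdir():
    if stale.is_file():
        stale.unlink()
for src in git_tree.iterdir():
    if src.is_file() and not src.name.startswith("."):
        shutil.copy(src, osc_dir / src.name)

# Record added/removed files and commit the new revision back to OBS.
subprocess.run(["osc", "addremove"], cwd=osc_dir, check=True)
subprocess.run(["osc", "commit", "-m", "sync sources from Git"], cwd=osc_dir, check=True)
```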
20:23
So, I have a prototype of this, which I will show, just so you can see what I'm talking about. So basically here I'm building one of my packages that I maintain, Adminer, so you can see it building; this is GitLab, if you're not familiar. So you can see there on the right side you have previous builds that failed, you can click through it all, it has artifacts so you could download all of them, so I could download all the RPMs at once, or I can browse them.
20:41
When browsing them, it looks like I'm looking at the noarch directory, so you can see all the RPMs it produced. It already has a preview screen, which you could obviously extend to expose the RPM metadata if you wanted. I think the most exciting part of this whole prototype is that I built, well, you can serve the repo metadata,
21:00
so this is basically serving directly out of GitLab's artifact storage, so the publishing workflow doesn't require copying anything or moving files around. You can serve it directly out of GitLab, so you get all the features of GitLab keeping track of all the artifacts and cleaning them up for you. Interestingly enough, so this is just the same thing, it's showing you the packages that it's serving out, and the only thing to pay attention to here is the platform-sh and drush packages at the top.
21:23
If you notice the URL has the word latest at the end, I can actually serve revisions out of this, which I think is the most exciting part. So for example, this was the repository state before I added the platform-sh and drush packages, so basically it only has the Adminer packages. This already works, obviously.
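A sketch of the publish step such a CI job could run to make this possible: generate the repodata next to the built RPMs and keep the whole directory as a job artifact, so GitLab's artifact storage serves it directly. The paths, project ID, and job name in the comments are hypothetical, and the artifact-download URLs are only one way the "latest" and pinned revisions could be exposed, not a description of the actual prototype.

```python
# Collect the RPMs from the build step and generate repodata so the artifact
# directory is a complete, zypper-consumable repository. Paths are placeholders.
import pathlib
import shutil
import subprocess

rpms = pathlib.Path("rpms")   # RPMs produced by the build step
repo = pathlib.Path("repo")   # directory that will be kept as the job artifact
repo.mkdir(exist_ok=True)
for rpm in rpms.glob("*.rpm"):
    shutil.copy(rpm, repo / rpm.name)

# createrepo_c writes repodata/repomd.xml and friends into repo/.
subprocess.run(["createrepo_c", str(repo)], check=True)

# With repo/ declared as an artifact, GitLab can serve it straight out of
# artifact storage, e.g. (hypothetical project ID and job name):
#   latest: .../api/v4/projects/123/jobs/artifacts/master/raw/repo?job=publish
#   pinned: .../api/v4/projects/123/jobs/artifacts/<tag>/raw/repo?job=publish
# Tagging a commit then chooses which revision consumers point at, without
# copying any binaries around.
```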
21:40
If you plug it into zypper, it's happy to present those. So the benefits of having revisions of repositories are, I think, numerous. The biggest thing is that right now the whole workflow of having standard and to-test, where we're copying all the binaries around, could be unnecessary, because you basically just take a build and tag it as being to-test, and we use that revision until we move on.
22:02
Then, when we're done with it, we tag it as the standard, so basically everyone builds against those, and it just simply references a revision of the entire repository. Interestingly enough, this also means that if you extended this to home projects, things like the issue the OBS team ran into, where they deploy out of a repository and aren't able to easily go back because they updated a bunch of the dependencies,
22:21
things like that, if you had revisions of the repository, that would be trivial, so I think that'd be really cool. Something like Tumbleweed snapshots becomes built in: basically you just tag every release as a revision, and you keep however many you want, they're all there. I think the same thing applies to something like openQA, where they want revisions of the repositories, things like that,
22:41
so you can just do this all out of the main repository. Delta RPMs, same thing: if you have the previous revisions easily accessible, that becomes trivial. It might also be interesting that you could do parallel releases, because you could basically be running openQA on, say, you check in one staging, an hour later you check in another staging,
23:01
you could have both of those as revisions and be testing them independently of each other, which would be kind of neat, so you wouldn't have to block. So, the next thing to cover: this is, well, not a prototype, but a mockup I made half a year ago about possibly improving the transparency of the staging workflow. So as you can see,
23:21
it just kind of shows you the overall steps, has check marks for things that are done. Interestingly enough, I think you can achieve something very similar to this almost out of the box with GitLab. So if you look here, basically you have the reviews that need to be done, so the little icon there is basically representing the review team having approved it, but it still needs to be approved
23:40
by other maintainers of the package, and you can also communicate very clearly that it was staged there, using the deployments feature, so it was staged in A, and you can see that it's building and all that. So in this example, I basically have the factory-auto bot and the repo checker's delete check,
24:00
which don't require staging and run ahead of time by themselves. You can very easily click through those and see the page that I showed before, with the log output, so again this becomes very transparent; everyone can see what's going on. For the release team as well, you can use the deployments feature, which actually has an interface, so we can now do the entire staging process
24:21
through the web or the API itself, so it doesn't have to be done outside via tools, so that's kind of cool. And people again can see this, so basically if people were submitting to Factory, they could see this pipeline there, and they could see that, okay, I have to pass those two bots, and then I go to the staging process, and they can very clearly see, hey, it hasn't been staged yet.
24:42
So back to this, so if we look at the fact that it was staged, we can see the build results there, and the same thing with the pipeline, just some of you may be more familiar with this if you've used GitLab extensively. So this is basically what the build results look like, so you can download the artifacts or retrigger them,
25:02
so all the sort of base features you need there are there. So to wrap up, basically revisit our original goals, so we covered the workflow, identified the problem areas, fresh solution, so let's evaluate the solution, so I think it definitely resolves
25:21
a lot of the major pain points, we get a bunch of extra features with it, and it also means we don't have to maintain a bunch of code. Anyway, like I said, I think storing the revisions in general is really useful. So, questions or comments?
26:11
Thanks for the talk. So the build would happen in GitLab, right? Yeah. So, and for instance, how do you scale out, so if you need 100 workers or something,
26:23
so how would that work? Because I'm not familiar with GitLab. Well, I think it already has integration with things like Kubernetes and stuff like that that you can use, so it basically has a way to manage workers. So it'd be similar, I imagine, to the way OBS is deployed; you just need machines, basically.
26:42
Have you thought about at least moving some of the stuff into OBS services, in some new kind of way? Because then we could also use OBS workers for things. Yeah, so some of the things, like I said, having access to some sort of general CI that would run on OBS would be another way you could obviously achieve at least the main part of running all the tools alongside OBS,
27:04
but it's something I think that needs to be worked on. I think, also, one point that you missed: one key feature of OBS is interconnecting different instances,
27:22
so that you can build against Packman, or the internal OBS builds against the external one, so how do you imagine that would work? So I think there's no reason not to basically just use, well, GitLab basically already has mirroring, so you just mirror the sources themselves,
27:41
and you can expose the repositories basically the same way, so the repo metadata generation is just exposed, so it basically looks the same way as, what do you call it, the SLE binaries are exposed to OBS. So basically do the same thing, so you shouldn't need anything special there.
28:02
Anything else? Oh, that I don't know. Like I said, I have the prototype that I showed that has the basic features, but it's obviously missing, specifically the proper scheduling,
28:21
so it can only do kind of localized scheduling. But I don't know, it kinda depends, I guess, if people like this idea or not, and whether or not I work on it more, so.
28:40
So I found everything you presented very impressive and convincing, but to make it more clear, do you want to single-handedly replace OBS? Well, I guess that's what I threw out there: you could still sync the two. So basically I would imagine, if you were to use this,
29:00
that we at least target just the features we need to develop OBS, and that way you can still do image building and all the other stuff that I'm not covering at all here on OBS. Right, that makes sense. Or just use OBS if you like it better. Yeah, so I wonder in that model, let's say in two years from now maybe, what would be the main driving system,
29:21
would it be OBS or GitLab, and then the other one would be triggered by that or the other way around? Well, I imagine if we were doing the actual product development on GitLab, then it would probably be the one publishing the binaries back to OBS, and OBS would be triggering builds over there. I had something else I want to say, but I can't think of it.
29:43
Okay, thanks. Oh, I guess, so for some of the other things outside of the things we need to build the products, obviously if you have that generic CI, I don't see any reason why you couldn't port things over, like building images, because it would just simply be basically executing Kiwi the same way OBS does, so all that work could basically just be ported over.
30:10
Anything else? All right. Well, if you're interested to talk more, obviously I'll be around. Thanks for your time.