Re-Thinking Spec Files
Formal Metadata

Title: Re-Thinking Spec Files
Number of Parts: 40
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/54422 (DOI)
openSUSE Conference 2019, talk 15 of 40
Transcript: English (auto-generated)
00:06
Okay. Hello. Welcome to my talk. Great that some people, even more people actually, made it up here, even though the weather is great outside and there's beer somewhere. So we're talking
00:20
about spec files and RPM. Who am I? I've been working at Red Hat for quite a while now; I started in 2006. After about one year I got involved in yum, which is the updater and predecessor of DNF, basically the equivalent of zypper. And after a year of
00:43
optimizing it, I realized: well, I'm done here. If I want to do anything else, I need to go down one level. So I got involved in RPM around 2008 and have stuck there for some reason. This is actually my second visit to the openSUSE Conference. I've been
01:01
here before, but a lot of things have changed. Last time I was here was 2009, and those were very different times. I wasn't really at the conference; we were hiding in some rooms, having technical talks about how to merge the different patches that existed in openSUSE and in the Fedora tree at that point. So where
01:27
are we, RPM-wise? We've done a few large changes and features, but it's been a while now, like two, two and a half years since we rolled them out. File triggers and boolean dependencies were among the biggest. Especially file triggers are, at least in
01:44
Fedora, a big thing, because they've basically changed most of the scriptlets, so there's a huge amount of packages that have been changed. I've actually not looked into openSUSE and what the state of adoption is there, but maybe someone can enlighten us on this.
02:04
So we currently have a huge backlog of mostly smaller features that we are going to release. The idea is to have an alpha release out next week, hopefully. So over the next Fedora cycle, we want to get the release stabilized and then out. So traditionally
02:23
in RPM we've used Fedora as our test bench: we basically release an early version in the next Fedora release and then stabilize it throughout. And the thing is, we've been
02:45
pretty busy with this RHEL 8 thing on the side. So basically, now that that's done, we're thinking about what's the next big thing, what to do, where to go from here, and what is the most important thing to consider now. And the problem with RPM development
03:05
is always that RPM developers are developers and not really packagers. Yes, we do have a few packages that we have to take care of, but that's basically a side thing. The result is that for basically the last decade most changes, even if they have
03:22
huge impact for packaging, like the file triggers, are basically done from an RPM perspective and not from an rpmbuild perspective. So it's about how we want to have this installed on the system properly, and not so much about what's easiest for the packager to actually put something together. So that's basically something we want to do next and
03:47
to basically look into spec files and packaging from a packager's perspective, to see what can be improved and what can be made easier. Another thing, probably not that interesting for you, but I had the data lying around: the growth of
04:06
Fedora. I more or less assumed that the data for OpenSUSE looks basically the same. The exact numbers are not that important. It starts here with 2004 and goes to basically now. As you can see, it's basically a linear growth in number of packages and also in
04:24
the overall size of the distribution. That means the amount of work that has to be done each release gets bigger and bigger every time, and there's no sign of it slowing down or stopping at any point. So the only way around this
04:46
is either to get more and more and more people involved, which is surely an option, but there's only so much you can do there. And the other option is to basically lower the amount of work needed for each update to be able to keep up with that.
05:08
That's basically all I wanted to say about this. So the question is what can be removed for the packager, what steps can be removed from actually doing updates or creating new
05:20
packages. What can be automated and what can we remove from the manual work that's needed. So one big area has been scriptlets. That's kind of solved from an implementation point of view with file triggers. I don't know how far this works in OpenSUSE. Anyone
05:42
an idea? Is this used? Are they used on a broader scale? Okay. So that's something worth looking into: basically replacing the scriptlets in most packages
06:01
and centralize them. For those who don't know: with file triggers you can run a script based on a file name that's in another package. Probably SUSE doesn't run as many scriptlets as we do, but that's not what I heard. Ah, there's an expert.
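The mechanism just described can be sketched as a spec-file fragment. This is a minimal illustration of the file trigger syntax, using ldconfig as the classic centralization example; the owning package and paths are assumptions, not taken from the talk.

```spec
# In the package that owns the central script (e.g. glibc):
# run ldconfig once whenever any other package installs or removes
# files under these library directories, instead of every package
# carrying its own %post/%postun scriptlet.
%transfiletriggerin -- /usr/lib64 /usr/lib
/sbin/ldconfig
```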
06:22
So the idea is basically to put all the scriptlets in a central place and move them out of the packages, so the packages get simpler and don't have to care about it, and one central instance does all the work. The next big thing, coming in the next release, is automatic build dependencies
06:41
that comes from the Rust and Go folks. The problem here is that many of those new languages have their own package format and already have all that metadata, like the dependencies on other packages, and it's a pain to synchronize
07:04
that right now. There are tools that can read a Rust or Go package description and turn it into a spec file, which is great as a starting point, but it's not very helpful for a distribution that does updates, because you don't want
07:21
to overwrite your spec file. You want to keep that and you want to keep your history and your patches and everything. You don't want to copy over stuff that gets generated elsewhere all the time. The automatic build dependencies will solve this to some extent. It's basically a build scriptlet that is run after prep and will generate dependencies
07:43
for the build. That's going to be interesting for the OBS people that probably still sit down there not suspecting anything, as it breaks a lot of assumptions of the build, which
08:00
we have. Right now the assumption is that you can just build a source RPM without anything, basically with RPM only, and you can then start a build with the dependencies in there, being guaranteed that it's going to succeed if you have all the dependencies installed. That's going to break, but it's not as bad as I first thought; you basically have to do
08:23
another round, read in the new dependencies and restart the build. It's going to be okay, I promise. Maybe. So, another thing that's kind of RPM-ish, but not really, that's probably more
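In the upcoming release this takes the shape of a new spec section that runs after %prep. A minimal sketch for a Rust package; the helper macro name (%cargo_generate_buildrequires) is the one used in Fedora's Rust packaging and is an assumption here, as the exact helpers are ecosystem-specific.

```spec
# New section, executed after %prep: emit BuildRequires dynamically
# from the upstream metadata instead of hand-maintaining them.
%generate_buildrequires
%cargo_generate_buildrequires
```

When new dependencies come out of this section, rpmbuild stops so the build system can install them and restart the build, which is exactly the extra round described above.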
08:41
Fedora-specific and we have to see how that translates to other distributions, is I want to get some stuff out of the spec file and using the Git repositories we have the spec files in as data store. I'll elaborate on that, it's a bit more complicated.
09:00
And for the long term, what we want to do is have templates within the spec files that can be maintained centrally, so you can remove some of the boilerplate stuff and keep it in a central location. There are basically two directions to this. One is to have templates for building,
09:26
so you have prepared build templates for different languages that you can use. That's still very vague in my mind. The question here is what can be centralized, how many configurations you need, and what can actually be saved in complexity,
09:47
or if you need so many knobs that it's not worth doing. But we will look into this in more detail. Another thing is building sub-packages is currently kind of a pain.
10:01
The thing that's currently most complicated is debuginfo packages, which are currently done by some code somewhere in RPM, in C, which is not that beautiful. But there are a lot of other use cases where right now
10:21
those sub-packages have to be done by hand, but I will also go into details soon. So what we currently have, if you do an update, you have to create a patch somehow, you have to add it to the spec file, you have to pick a patch number, find out which is the next suitable,
10:42
then you have to apply that in prep using the number above. You have to increase the release, you have to add a changelog entry, you have to use the release number you just increased from above, you have to add your name and email, then you have to commit this. I don't know what you do in SUSE. Where do you store your spec files?
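Spelled out in a traditional spec file, the steps above look roughly like this; package name, patch, and changelog entry are hypothetical, but the shape of the manual work is the point.

```spec
Name:     foo
Version:  1.2
Release:  4%{?dist}                  # manually bumped for the update
Patch3:   foo-1.2-fix-crash.patch    # manually picked next free number

%prep
%setup -q
%patch3 -p1                          # must match the number chosen above

%changelog
* Fri May 24 2019 Jane Packager <jane@example.com> - 1.2-4
- Fix crash on startup (entry written by hand, release repeated again)
```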
11:02
Do you have...? You can integrate a VCS in the Build Service. Okay, so you put them somewhere else, but we could put it in Git,
11:23
and typically you'd even have to copy the changelog, so you have something in Git also. So it's a lot of steps. Yeah, we already removed this step here with %autosetup. The next release will allow you to not set a number, so we will auto-number the patches,
11:46
which makes sense if you use %autosetup, because who cares what number the patch has. This is more interesting for us because we have different branches for different releases, so we might want to cherry-pick stuff from one branch to another,
12:02
and this is a total nightmare, because there's nothing in here that doesn't give a conflict. Literally. So I will try to look at whether we can get rid of the changelog by basically generating it from the Git log,
12:23
and probably also calculate the release number from Git by just counting up, so that will reduce the number of things that can go wrong or be wrong. It will also basically remove all those things that create a merge conflict for us, because you basically just put in the patch. The patch adds one line up here, and if there's a conflict
12:44
that's not that bad; it's just one line somewhere that might momentarily be in the wrong order. Who cares? You put it in there, the commit message stays the same, and everything else just happens. That's something I want to look into for Fedora. It will probably take a while until it gets to a point where it's interesting for you guys,
13:06
but this will probably be some kind of white paper on how to do that or what can be done there.
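The idea of deriving both the release number and the changelog entry from Git can be sketched in plain shell. This is an illustration of the direction described, not existing tooling; it builds a throwaway repository so the commands are self-contained.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name="Jane Packager" -c user.email="jane@example.com" \
    commit -q --allow-empty -m "Fix crash on startup"
# Release number: just count the commits on the branch
release=$(git rev-list --count HEAD)
# Changelog entry: generated from the commit metadata instead of
# being written (and conflict-merged) by hand
git log -1 --date="format:%a %b %d %Y" \
    --format="* %ad %an <%ae> - 1.2-${release}%n- %s"
```

Cherry-picking a patch between branches then only touches the patch line itself; release number and changelog fall out of the history.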
13:22
So I will probably have a look at that. Even with the external file, we're still copying the OBS VCS, so it's still in two places, so they have the same problem anyway. Yes, but it's pre-given, so you commit and you get the whole text already pasted into it.
13:42
Or you could commit without having to change the file and have those entries in there and pull it from there. Depends. Yeah, I know. There's a lot of history to the whole process. Yes.
14:03
Well, the stuff that we do within RPM will of course be in upstream RPM. The problem with the other stuff is that it's all basically build system logic, so it depends on the integration in the build system or on the way you store your spec files.
14:21
So the auto setup stuff is done already and the patch number stuff is going to be the next release. So more or less, that's done. We just have to release it.
14:47
Yeah, we will see how... The thing is, the thing is, you're probably... So first, now I don't say anything about what we're doing.
15:03
Well, but we can talk about Fedora, and we've so far been hesitant to backport too much stuff. We'll see, but that's a topic for another rainy day.
15:23
That's basically what we're trying to do here. Then, dynamic build dependencies; I already said that. There will be a new section that will run after %prep. The main thing is that it will require build system integration.
15:42
You will also be able to do that on the command line, of course. So you have to expect that your build may fail on missing build dependencies, even though you already installed all of the ones you had in your sources. So that's basically the main
16:01
change that is in there. The other thing is that, as a packager, you can outsource generating the build dependencies for your package, so that's one section less that can go wrong when the package changes. I assume there will be tools for the typical candidates like Rust.
16:25
Igor is working on that. He's hiding in the background. And yeah, so that's on your doorstep.
16:41
And I hear the Go people are also interested in using this. The thing is, that's how it's going to start; I can imagine that in the long term even classical packages may be using this. I mean, it's not that great if you want to get requirements out of a configure file, but CMake maybe... Maybe one can even convince
17:12
upstream to basically ship a machine-readable file of dependencies at some point if we downstream are able to actually process that and make something useful with it.
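Rust crates already ship exactly such a machine-readable file. Here is a hypothetical crate's Cargo.toml; its [dependencies] table is what a generator can translate into BuildRequires at build time.

```toml
[package]
name = "hello"
version = "0.1.0"
edition = "2018"

# Already machine-readable upstream metadata: this is what a
# dependency generator can turn into BuildRequires entries.
[dependencies]
serde = "1.0"
rand = "0.6"
```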
17:23
So I see there's more impact this can have in the long term, but it will probably take a while until all the tooling gets in place and it actually gets adopted. Another thing I've been thinking about when looking at packages
17:45
is that there is a weird conflict about who's actually controlling what in RPM land. There are of course things that are RPM upstream, that we do, and then they are implemented and
18:04
everyone else has to follow, because if we change something, well, we changed it. Then there is of course the packager, who has the most control over the package itself. And there's a very weird in-between layer, the distribution, which has very little control
18:23
right now. So it's very difficult to actually centralize stuff out of the packages and bring it to a place where people think about the bigger picture. We've tried to do that with the file triggers, to be able to get the
18:42
scriptlets to a central place, but I think in the long term we need to think about whether there are more places where we can centralize things. Currently that's really difficult because there's no implementation layer for it. I don't know about openSUSE, but in Fedora we have huge amounts of
19:04
packaging guidelines, pages over pages over pages, on how things are to be done, how they should be named, what they should look like, what you can and can't do, what you shouldn't do, what you might do if you ask someone, and so on.
19:20
The worst thing about all those guidelines is: yes, there's a package review, but once your package is in, no one cares. So there are all those rules that may or may not be followed, and there's no way to actually put them into the world as an entity of their own. So I hope we can find ways to, for one, help the packager do the right thing,
19:47
and on the other hand give the distribution, or parts of the distribution, more control over a set of packages of their interest. I first wondered whether we can do something on the distribution
20:02
level, but I think that's not possible. The reason why there's so much control at the packager level is that the packages are so different. That's one of the reasons why RPM is so hard and so complicated: we have to cover basically every possible situation that might exist somewhere, and there's no way to fit them all into one solution.
20:26
But there are a lot of packages that look very much alike: all the Python packages, all the font packages, all the language packages. So there are a lot of packages that belong together, either by the way they are built, by what they contain, or by how they are related,
20:44
and so I think we need to look at whether we can find solutions that allow those packages to be maintained in a more controlled way, with a more centralized approach.
21:04
But that's going to be tricky, because those structures don't really exist. There's no point in me, say, having a script saying: well, that's what Python packages should look like. You basically need a group within a distribution that actually takes care of this.
21:24
This extends to RPM to some extent: a lot of the tools that we already have upstream are things that we don't really maintain well, because we don't know them and we don't really care. There are all kinds of dependency generators for different languages, and
21:46
I'm fine with Python: I know Python, I can look at a Python dependency generator and make some sense of it. Then there is a Go dependency generator, and I have no idea how that's even supposed to work, and there are all these other languages. So we've tried, over the last
22:06
year or two, without much success, to basically push them out and hand them over as separate projects that are maintained by different people who actually care about how those packages look. So I invite everyone: if you're taking care of one of those groups
22:24
of larger packages, talk to us if you want to get involved, so we can basically hand those adjacent areas over to people who actually care, because RPM upstream can't really get involved in the ever-growing number of package families that have special needs
22:47
and need special care. And one of the things that we want to look into, feature-wise, is how to make this easier and how to offer solutions to those groups that can actually be
23:02
done. Yes, you can write macros in RPM, but that's all kind of ugly right now. A lot of things can probably be done if you really want to, but you run into issues very quickly. So if you want to ship those macros as separate files
23:26
and automatically set dependencies on that stuff. So there are probably a lot of smaller features that we will look into over the next year to see if we can make this easier. And one of the goals is
23:43
to centralize that boilerplate code. It's not that interesting, but let's at least get it done. I've looked at Gentoo's ebuilds, which do something very similar to this. Those
24:00
interest groups are who we need to get in contact with. And this will of course be optional; we are not going to remove the other stuff. But that means, on the other hand, that packages will actually need to be moved, more or less by hand or by
24:23
script, to the new options. One problem this could solve: if you put those scripts or templates into separate versions, then for different releases you could get rid of all those %if lines that litter a lot of our packages, and I hear that's even worse
24:49
in openSUSE, not pointing fingers. But if you have centralized scripts, you can have different versions for different releases that do the right thing, without the
25:05
package even knowing the difference, hopefully. The other thing is dealing with sub-packages. The problem with sub-packages in RPM right now is the overall attitude that RPM has in spec files.
25:25
The spec file right now is basically a consistency check for the software package. You have the file list, and the file list is there for you to type in every file, to make sure that if some file pops up that doesn't belong there, it gets flagged. It creates an
25:44
error, and you as the packager are supposed to look up what went wrong and fix it, or fix the list, or whatever. And the same is true for sub-packages: as soon as something goes wrong there, you will get an error and the package will not build. And I think we might be able to loosen those
26:06
rules, basically with a switch, to be able to have template packages that build if everything is right. So you can basically have a devel template that will be used, and it will
26:23
swallow all the files that look the right way, so all the include files will just move there if there are any. And the behavior will be: if there are no files to be included, because it's not a C package but some documentation package, that package will just not be built
26:43
without generating an error. So you have those templates you can use, and they will fail gracefully and not bother you. I have some ideas how to do that, but it's still brewing in my mind how to do this in detail.
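For contrast, this is roughly what a packager writes today for a devel sub-package, the boilerplate such a template would absorb; package name and files here are hypothetical.

```spec
%package devel
Summary:  Development files for foo
Requires: %{name}%{?_isa} = %{version}-%{release}

%description devel
Headers and unversioned libraries for developing against foo.

%files devel
%{_includedir}/foo/
%{_libdir}/libfoo.so
```

If foo ships no headers, this section still has to exist or be removed by hand today; the proposed template would simply not emit the sub-package.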
27:08
In the end it's a question of philosophy: how much convenience you want to give the packager, and how tightly you want to pin down how the package actually should work.
27:22
There is of course the possibility to use the build templates from above to include those sub-package templates, so that even the sub-packages get basically generated automatically. And you could have distribution-level includes that would determine
27:46
which sub-packages actually get built for the packages using this. So you could say: well, we want all the non-binary files split out into a separate package, so you would only have the
28:02
binary stuff in the lib package, and everything else goes into a separate sub-package, or something like this. Or you would be able to split out all the language files and basically explode every application into like 50 language sub-packages, and you could switch this on and off
28:22
without even touching the package. So one interesting thing is what to do with files, and there are a couple of mechanisms we would need there, like one sub-package stealing files from another. The problem is that right now files are more or less
28:46
taken care of very carefully, but if you switch on a sub-package, you of course have to move the files over there without generating an error in the other package that may still list them. So we will need some syntax that will
29:05
allow doing that without generating errors. And we will also need something to basically append to packages; that's currently not possible, so you cannot have a second file list to add files to a package, for example coming from a template. So if you have a devel package
29:23
you might have other files you want in there too, so you would add something like this. So those are the things I'm thinking about at night. Questions, comments, scared faces? I do have RPM merch for the best comments.
29:49
Yeah, please. My main pain has always been dealing with new macros or new features in RPM. For us it's very easy to have backport packages, and I really hate that you then have to
30:02
do %if conditionals in the spec file too. So that's why I was asking whether you can make it so that you can easily backport those features, so that rebuilding RPMs on older systems still works fine. Yes, the question was about backporting and how to make it
30:27
easier to get those new features and new macros to actually work on all the releases that are built from the same spec file, and to avoid all the "if release version" conditionals. So in Fedora
30:42
we avoid this to some degree by actually having different Git branches, so they're not building from the same file. But not everyone is willing to split the spec file up into different versions, so we basically get the same problem. Backporting features is kind of
31:05
difficult; there's no magic here. There are two ways to do it. Either you update RPM in the old release, which is something a lot of people feel very uncomfortable with, or you backport the single patches, which is something we have done
31:25
but are trying to avoid, because it's of course a lot of work and may break stuff anyway. So there's no real solution here. The real solution is not to have too many different versions. The other thing we've actually done in the past is delay the
31:48
usage of those features for a release or two. So you basically drain out the old RPM versions that can't support it and only use it later on.
32:04
Yeah they've been waiting 15 years. The thing is we've done that in the past but not for this reason. We can no longer do that. We are trying to keep up and be faster and that's
32:24
a good thing, but it balances out: being slow sometimes has benefits, being fast has other benefits. At least from the macros perspective, one of the things that we did for Fedora was: if they're just macros that run in the macro engine, we just yoink them and
32:42
put them into another package and then just force that into the build root for all the older releases. That works for us for like 90 percent of feature backports. When it comes to changes in the way rpmbuild works, you're kind of screwed. Yeah, but that's something that can be
33:07
done outside of RPM, so you don't actually... Yeah, the missing numbers, not really. That would be a possibility. Yes.
33:23
Yeah, a comment: to me this looks like a lot of magic, and it scares me, because for me explicit is better than implicit. So, for example, my question is: you have generated BuildRequires. Why not bring it offline, like a tool that sits on the spec
33:43
and figures out the requires for you? What does it bring to do it with a macro instead of generating it beforehand, before committing? I mean, once I see the BuildRequires, and as a reviewer I read thousands of spec files, it's easy to see the difference, for example.
34:04
You might introduce a new BuildRequires and I might not catch it. Well, so the question was: this looks like a lot of magic, why not generate the BuildRequires beforehand, in another step? One of the reasons is that
34:26
first, you need the infrastructure to do that other step. The second thing is, with those new languages... yes, you could do it beforehand, but it's kind of
34:40
part of the build process, actually. They come pre-packaged with the information inside, and basically using it during the build process makes it harder for things to actually go wrong or break. You could do it outside, but then you need to
35:05
have all the tooling wrapped around it. If you do it in the package, you can actually have the process of extracting those dependencies as part of the package. Yeah, RPM is
35:20
all a tooling problem
35:45
So we are not hiding them. Having a scripted thingy in every spec file calling ldconfig is just too bad; you can just figure out that it's a library and that it has to be called, which will simplify spec files.
36:02
So this is a follow-up question: how do you do it in Koji? Koji sets up a new build environment for every build, so you need to know up front what to install into it. How do you do it in Koji: do you want to run the Rust extraction tool
36:23
on your Koji resolving thingy, which creates a build job that then... So Koji does it like this: we take the Git checkout, we create a source package, the source package gets passed to Mock, Mock runs dnf builddep, which installs those build deps, then runs rpmbuild
36:42
against that. rpmbuild bombs out with another source package, dnf builddep is run against that, which creates a new build root that runs the build a second time, and then it runs through, and that's the final bit. Well, at the end there'll be a final source
37:04
package, but there's an intermediary nosrc package that's created as part of this. The thing is, the way it's implemented in RPM, it's actually meant to restart the build, and you can actually restart the build even from the extracted prep,
37:21
from the extracted sources, if you want to. So the turnaround there is very small if you do it properly: it basically just creates a header with the dependencies in it, you install those into the existing build root, and then restart the build. That's all you need to do. Well, you copy the RPMs in, then start the
37:49
VM, and the VM only gets the RPMs, but they are pre-resolved, so there's no external interaction, no secondary resolution process at all. Yeah, this will be interesting.
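The restart loop described above can be sketched in plain shell. rpmbuild and dnf are replaced by stub functions here so the control flow is self-contained and runnable; that rpmbuild signals "new build dependencies generated" with exit code 11 is how RPM 4.15 does it, but treat that detail as an assumption.

```shell
# Sketch of the two-pass flow: run the build, and if it stops after
# %generate_buildrequires with new dependencies, install them and retry.
pass=0
rpmbuild_stub() {                 # pretend pass 1 emits new BuildRequires
  pass=$((pass + 1))
  [ "$pass" -eq 1 ] && return 11  # 11: new dynamic deps were generated
  return 0                        # later passes: build succeeds
}
dnf_builddep_stub() { echo "installing generated BuildRequires"; }

while true; do
  if rpmbuild_stub; then
    echo "build finished after $pass pass(es)"
    break
  else
    dnf_builddep_stub             # read the generated deps, install, retry
  fi
done
```

With the real tools, the "stub" calls are rpmbuild and dnf builddep against the intermediary nosrc package, and the loop normally terminates after the second pass.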
38:02
I know. So I've been talking to Michael Schroeder, and he said: we know the dependencies beforehand. I know, that's how the Build Service works. Yeah, we will see. I've been talking to Michael Schroeder; he said he thinks he can do
38:22
it somehow. But, well, Michael... so who am I to question him? I'm not repeating that.
39:08
There's a very simple answer, and the answer is no. We have people that do Java packaging, and they're not me.
39:27
There's a packaging guidelines page; the Java stuff applies to all Red Hat family distributions. Yeah, as we said, we have a lot of packaging guidelines. The problem is probably not that there are too few
39:43
guidelines; the problem is rather that there are too many.
40:00
Well, they mostly... okay, thank you. So, any other questions?
41:07
So I've not looked at Conary, but that's clearly something I will look into. Can someone... yeah, I'm kind of chained here. Okay, any other questions, remarks?
41:44
That's a good question, and it's easy to answer, because there are no comments in RPM. Yeah, no, that's the actual answer. The thing is, if you have a hash, that's a comment within the shell that's part of the
42:04
spec, and RPM is completely oblivious to the fact that you thought you were commenting something out. That's something we actually looked into like half a year ago, and I don't know if we did something about it, but I think we added a
42:26
warning for that... I added an error to Git master, so that a multi-line macro on a comment line will blow up instead of letting it through. Yeah, something like this. But that's in Git master, which means it'll be coming out sometime in the far future. No,
42:44
we could do... we're doing an alpha release next week, yeah, that's the plan. I have some packages which... because of macro resolution... So the
43:08
plan is to do an alpha release next week and refine it through the next Fedora release cycle, which ends in October or November, I think. Yeah, Panu just sent out
43:21
the change proposal for RPM 4.15. So feel free to grab the alpha release as soon as it's out; maybe wait a week or two before you push it anywhere. But of course, yeah,
43:43
feel free to play around with it. I mean, typically Fedora takes most of the heat of getting the really fresh stuff, but there's really no reason why other people shouldn't try it and feel a bit of the pain. But even if you don't put it in a distribution
44:12
right away, you can play around with it.
44:26
Can you just open a ticket with some test cases? Because for the macro engine, we have Pavlina, who has looked into the macro engine. I, for years, have
44:42
basically refused to even look at it, because it's scary on the outside, and maybe it's scary on the inside, how would I know? But yeah, there's probably still stuff that can be fixed,
45:01
even if it's code that's 20 years old. I will probably just merge it as soon as I get back from the conference, thank God. It's an epic
45:33
over like years, and... yeah, don't get me started. Okay, any other questions? I think we're
45:44
done here. Thank you.