Providing a Long-Term Support distribution with Gentoo Prefix
Formal Metadata
Title: Providing a Long-Term Support distribution with Gentoo Prefix
Series: FOSDEM 2015, part 140 of 150
License: CC Attribution 2.0 Belgium: You may use, change, and copy, distribute, and make the work or its content publicly available, in unchanged or changed form, for any legal purpose, provided that the name of the author/rights holder is given in the manner specified by them.
Identifiers: 10.5446/34348 (DOI)
Production year: 2015
Transcript: English (auto-generated)
00:28
Thanks. Thank you. Can you hear me? Microphone, okay. Yeah, welcome from me as well. My name is Mike Haubenwallner.
00:42
Just remember haubi, it's a little shorter. Yeah, I'm about to talk about providing long-term support with Gentoo Prefix. So let's see.
01:04
Well, it is a case study. There is of course a studied case with its own challenges. And next part, Gentoo. Who of you knows about Gentoo? Who even knows about Gentoo Prefix?
01:21
Oh, it's quite special. And finally, long-term support: what are the requirements and how do we implement them? For the studied case, I'm working at Salomon Automation, which is a member of the SSI Schäfer Group.
01:43
And the SSI Schäfer Group plans and manufactures warehouses of different kinds. And yeah, Salomon Automation provides the software to run these warehouses.
02:02
Here is an extract of references, you might recognize some of them; these are our customers. For the challenge: while SSI Schäfer manufactures the warehouse racks, Salomon Automation manufactures
02:27
the software to run these warehouses. So WAMAS is short for warehouse management system. It is our product, and it is highly customized
02:42
for each warehouse. The challenge I am talking about is long-term support, where long-term means up to 20 years, sometimes even more. WAMAS being software, it needs a server and an operating system.
03:03
Which operating system would you choose when you have to support it for 20 years? Any idea? Well, there is no single choice, there are lots of them.
03:25
Well, in 1986, Salomon Automation was founded. From now, 20 years ago, we were somewhere, well, here I think: AIX 4, HP-UX 11,
03:46
or 10; 11 I think came later. And actually, this is the time when I joined Salomon Automation. So for almost 20 years I have had the same job
04:02
and almost the identical work to do. Still interesting. But how would you design your software to run on so many different operating systems? It doesn't work without some abstraction layer in between.
04:25
But which abstraction layer would you choose? Server virtualization is of little use, because you don't want to virtualize your server, and you still have an operating system on top of the virtualization.
04:42
Fortunately, almost all software packages you find in a recent Linux distribution these days also do support compiling on whatever Unix system you have. Some of them even support compiling on Windows.
05:06
The problem now is that I hadn't found a package manager that is able to build these packages on top of whatever Unix system.
05:21
So I started writing one myself from scratch, using shell and GNU Make. But this got quite hard over time. And the solution for me is to have
05:42
a custom Gentoo distribution on top of whatever operating system. Because of these different kinds of systems, it doesn't help to have a binary distribution. So my goal is to have a source-based distribution.
06:01
So here I am at Gentoo. You know Gentoo Linux, you may know Gentoo Prefix. So what's the difference? Both are source-based distributions. They share almost the same set of package definitions; for Gentoo Linux, this is called the Gentoo tree.
06:22
And for Gentoo Prefix, it's called the prefix tree. And they share the same package definition format. So where's the difference? Let's have a look at the ebuild. An ebuild's content boils down to configure, make, make install.
06:41
This is what almost all open source packages support in one way or the other. The configure step determines which features the operating system it is compiled for does support. But have a deeper look at the configure line.
07:02
You see, here is a prefix. What if we make this prefix non-constant? Okay, I keep this /usr constant and add some variable before it.
07:21
What is the value of this variable, for our comparison? When an ebuild does support EPREFIX, for using it in Gentoo Linux, the EPREFIX is empty. But in Gentoo Prefix,
07:41
this EPREFIX can be whatever you like. You can install Gentoo even in your home directory. Also, this allows installing different instances of Gentoo on one single operating system without any virtualization, and running them at the same time.
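To make the EPREFIX idea concrete, here is a minimal sketch of what the configure step of a prefix-aware ebuild boils down to (a generic autotools package is assumed; this is not the speaker's actual slide):

    # EPREFIX is empty on Gentoo Linux, and the chosen installation
    # offset (e.g. /home/haubi/gentoo) on Gentoo Prefix.
    ./configure --prefix="${EPREFIX}/usr"
    make
    make install DESTDIR="${D}"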
08:05
But still, Gentoo Linux is a Linux meta-distribution. And Gentoo Prefix does not ship a kernel; it uses the host operating system's kernel. So for lack of a better name,
08:22
for now I call this a Gentoo Prefix meta-distribution. But as you know, Gentoo usually is a bleeding edge platform, or distribution, where there are stable and unstable keywords.
08:43
I have identified a few requirements for my long-term support distribution. Long-term support, as I said already, is up to 20 years. This requires traceable patch management. Tracing patches is one of the most important things
09:01
in software development. But still, over those 20 years and beyond, quality assurance has to be continuous. Major releases, well, sometimes they are necessary,
09:21
but every five years or so is enough, I'd say. Still, update releases happen every once in a while: once per year, twice per year, a little on demand. Additionally, production servers
09:41
are not necessarily connected to the internet, so it is necessary to provide some offline install. Of course, for the support team, it is necessary to be able to install hotfixes on demand. Major release upgrades, however, are useless.
10:07
For the long-term support implementation, the 20-year support, the necessary thing here is independence from the fast-forwarding Gentoo development. As a consequence, I need to self-host everything.
10:26
That includes a fork of the prefix tree, at least a subset of it. I call it the LTS tree. Of course, there are WAMAS-specific packages, ebuilds. And I have to mirror the source distfiles,
10:43
because no one guarantees that you can still download an old version 20 years later. This creates different responsibility levels. Okay, there is upstream, responsible for the package source
11:03
distributed as the source distfile. Next level is Gentoo Linux, responsible for providing the Gentoo tree. The Gentoo Prefix team, actually, is responsible for the prefix overlay,
11:21
which is combined with the Gentoo tree and shipped as the prefix tree. This is where the company border should be. Behind the company border, just because of responsibilities, the WAMAS long-term service team
11:43
is responsible for the prefix tree fork, and additionally for the WAMAS overlay; I call this the WAMAS LTS tree for now. And finally, there is a production team that is responsible for a hotfix overlay,
12:01
and this is where the binaries are created. Every level here is still source-based. Actually, I think binary distributions are not much more than a binary cache of the source.
12:20
But anyway, for the patch management, back to the responsibility table. Traceable patch management requires a proper versioning scheme. Okay, upstream has some versioning scheme,
12:42
which one they choose is their business. They provide a release. Gentoo Linux uses this release, and eventually may have an additional patch for this release. Ideally, the patch is reported upstream, and subsequent
13:05
responsibility teams use the incoming version as a release. Eventually, such a team may need to provide another patch, and they have to track this one patch in their own overlay or tree.
13:21
Ideally, they report this patch upstream, unless it is some default-configuration patch, and the next level again uses this as a release. This creates an innovator-derivator tree,
13:43
or graph, actually. But then, how about package manager support? Well, upstream and Gentoo are of course supported by the Gentoo package manager. Within Gentoo Prefix, we do have an additional patch,
14:04
not yet upstream, in the package manager. And for the further levels, the package manager does not support these levels of versioning. I have been able to manage without them for now,
14:22
because I can commit here, on Prefix, and eventually Linux as well. But indeed, I did miss patches for default download URLs or something like this. When importing a new prefix version into my LTS tree,
14:43
I forgot that I did have another patch for the previous version, because I was unable to add another level of release or of sub-versioning. So, this would be one thing I would love to see
15:04
in the upstream package manager, whatever package manager this is. For Gentoo, eventually, well, I'm not sure whether this would still be EAPI 0,
15:21
because the Gentoo Linux level doesn't have use for this sub-versioning. But of course, it would have to be specified in the Package Manager Specification. Okay, closing those versioning things.
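To illustrate the missing sub-versioning, a hypothetical version chain could look like this (package name and numbers are made up; Portage only supports the single -rN revision level today):

    foo-1.2.3         # upstream release
    foo-1.2.3-r1      # Gentoo Linux adds a patch: revision 1
    foo-1.2.3-r1.1    # Gentoo Prefix patch on top: not expressible in Portage
    foo-1.2.3-r1.1.1  # WAMAS LTS patch on top: the wished-for extra level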
15:40
Continuing with quality assurance. For each ebuild, or a whole bunch of them, I can define an ebuild quality, in Gentoo known as the keywords, and where this ebuild is distributed, that means where a binary is built from this ebuild.
16:05
When importing, I start with the unknown keyword. As the LTS tree developer, I do have my development box, or at least a prefix instance on whatever box I have; these are not just Linux boxes,
16:23
these are, well, AIX, Solaris, whatever. And when I am confident that this one ebuild does work for compilation so far,
16:41
I do have a special keyword named buildbot. Then I have buildbot instances on each supported platform, which build those buildbot-keyworded ebuilds for me.
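As an illustration of this keyword progression (buildbot is the speaker's internal keyword; the architecture keywords are just examples, and the exact values are this editor's assumption):

    KEYWORDS=""                      # just imported: unknown, untested
    KEYWORDS="-* ~buildbot"          # compiles on my box: queued for the build bots
    KEYWORDS="~x86-linux ~ppc-aix"   # built everywhere: unstable, enters QA
    KEYWORDS="x86-linux ppc-aix"     # passed the WAMAS QA: stable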
17:00
When this is successful, I mark these ebuilds as unstable, and they go to the next level of development and QA. The WAMAS development inside Salomon Automation is split into an innovation team that has its own development boxes, or LTS instances
17:24
in some prefix, and its own quality assurance builds. So when they test WAMAS in QA, of course they don't just test WAMAS, they test the LTS distribution as well.
17:41
And when they say, okay, this release of WAMAS does work, this also means it works with one specific version of my LTS distribution, and not one more patch from wherever.
18:03
If the QA team says okay, then a WAMAS release is done; the LTS release is not independent, but happens at the same time as a WAMAS release. Still, when in the WAMAS release
18:24
some patch or some bug fix has to be made, this one bug fix of the LTS distribution is retested for that specific release, and one single patch can go stable as well. The next level is the derivation team
18:44
of the WAMAS development, which is tightly coupled with the customer during development. They take this WAMAS release. The WAMAS release also is distributed as source, not binary.
19:00
So they can implement additional customer needs however they want. They do have access to the complete WAMAS development source. They do of course have their own development box, or LTS instance on whatever box.
19:20
So for now we have five, up to six, instances of the LTS distribution on one hardware machine without any virtualization. The customer's QA team sometimes is joined
19:43
with the QA team of the WAMAS derivation development. So these two may be one instance, on the customer side, on our side, whatever. But finally, this is where the final binaries are created that do run the warehouse.
20:01
There's one thing missing. Of course, again, the derivation team may provide, or should provide, patches to the innovation team. That doesn't mean patches are never innovative, I just haven't found another name. I know there is the word derivatives, but I just found innovation as the opposite of derivation.
20:27
For the major release cycle, I have identified some phases when creating a major release: preparation, declaration, implementation, and unbreaking the previous major release. And again, continuous quality assurance.
20:42
For preparing a new major release, again, this is the responsibility tree or table. First thing is, I make the current prefix tree work with the WAMAS overlay and merge these together.
21:02
No, not merging, but installing from the prefix tree, with additional packages from the WAMAS overlay. And I do create the temporary binaries I compile WAMAS against. So I can see what is necessary to be done in the prefix overlay, or eventually in the Gentoo tree.
21:25
So I get up-to-date packages in my binary LTS installation, and I can test WAMAS against up-to-date packages.
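A minimal sketch of how such a preparation instance could be wired up in Portage's make.conf (paths and overlay names are hypothetical, not from the talk):

    # the LTS tree (prefix tree fork) as the main tree
    PORTDIR="${EPREFIX}/usr/portage"
    # WAMAS-specific ebuilds stacked on top
    PORTDIR_OVERLAY="${EPREFIX}/usr/local/wamas-overlay"
    # where the temporary binaries to compile WAMAS against end up
    PKGDIR="${EPREFIX}/usr/portage/packages"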
21:44
Again, because this is one of the really important things: respecting the versioning scheme. And for the new major release, I need separate automatic build tests again.
22:03
So I have automatic build tests for the old and the new major release. Essentially, I could say this is necessary for each major release, for 20 years. There is one exception: the package manager itself
22:22
and the package definition libraries, the eclasses, which are unversioned in Gentoo. If I imported the new eclasses into my live LTS tree, for sure I would break the old release.
22:44
Thus, I have to use an integration branch of the LTS tree, with separate build tests again, to unbreak the currently stable previous major releases. When I can build the old major releases
23:06
and the WAMAS software against the old major release using the new eclasses, okay, then I say the eclasses aren't broken anymore for the old ebuild versions.
23:21
Then I can merge the integration branch of the LTS tree into the master branch. So the old major releases get the new package manager, because we want to be able to install new packages even in the old releases if necessary.
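A sketch of that integration flow, assuming git (the talk only ever says "VCS"; branch names are made up):

    git checkout -b eclass-integration master
    # import the new eclasses and package manager here, then run the
    # separate build tests for ALL supported major releases
    git checkout master
    git merge eclass-integration   # old releases now get the new package manager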
23:43
So I need the new package manager there. Fortunately, both the package manager and the eclasses do not provide any runtime parts for the WAMAS application. There is no library source code in the eclasses so far, and hopefully this will stay so.
24:05
For the declaration, again in the same direction, I use the current prefix profile and import it into my LTS tree. This becomes a new profile in the LTS tree
24:24
for the new major release. So major releases are nothing else but profiles, package profiles. And here, of course, I can make those tweaks necessary for the LTS needs permanent.
24:44
From the LTS tree point of view, declaring the major release looks like this picture. This is the new profile, open for each package version, because the implementation is quite similar.
25:05
Import working ebuilds, drop the existing ebuild tags, the keywords, and tag them again for the buildbot. So I can use the new ebuilds,
25:21
the new versions of packages combined with the WAMAS overlay, to test my WAMAS instance again. From the LTS point of view, importing ebuilds is something like this. For unbreaking the previous,
25:41
again, each previous major release profile needs the new, incompatible, and maybe dangerous package versions masked. This is the new profile, open for any version of packages,
26:01
and the old profile. If I did not change the old profile, I would get GCC 4.8 or Python 3.4 in the old major release, where I did have GCC 4.2 and Python 2.7.
26:20
For the old release I can accept upgrading from Bash 3 to Bash 4, but I cannot accept upgrading the Python and GCC releases, because the newer GCC is unlikely to be able to compile the old version of WAMAS. So I set the package mask and say, okay,
26:45
the old release accepts Bash 4 but does not accept Python 3 or GCC 4.8 or whatever version. And because the old release accepts Bash 4, Bash 3 can go away.
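As a sketch, the old major-release profile's package.mask could read like this (atom names follow today's Gentoo tree; reading the acceptable upgrade as "Bash 3 to 4" is this editor's interpretation of the talk):

    # profiles/wamas-lts-old/package.mask (path illustrative)
    >=sys-devel/gcc-4.3     # keep GCC 4.2: newer GCC won't build the old WAMAS
    >=dev-lang/python-3     # keep Python 2.7
    # app-shells/bash is deliberately NOT masked: upgrading 3 -> 4 is acceptable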
27:02
Again, for the quality assurance of the old release, it is enough so far to have the old WAMAS release remain usable with the old packages, which are compiled with the new eclasses,
27:23
the old LTS tree profile with the new eclasses, because the eclasses, as I said already, don't provide source code for libraries. When I'm able to compile with the new eclasses, I'm pretty sure the installed libraries still work,
27:43
as long as the WAMAS application itself is compiled as well. And now for the new release, again, continuous quality assurance from top to bottom,
28:00
because new customer instances, of course, use the new version, the new major release. So the complete QA process can be implemented. For update releases, an update release contains all major release profiles
28:23
that still need support. Over 20 years, well, actually, this is all major release profiles. To ease creation of update releases, I have automatic snapshot creation once a week,
28:42
and the outcome is the ebuild tree; I call it the WAMAS LTS tree, with the creation date. And when I say, okay, this snapshot is fine for a release, I set a VCS tag.
29:02
From the LTS tree point of view: back in 2010, the whole tree was used as the update release, as the ebuild tree tarball, and the VCS tag was set for that date. For the new, or the current, update release,
29:25
for the current release tarball, the current tree is used, with both major release profiles, with the package mask for the old release, and of course the VCS tag with the current date.
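A minimal sketch of such a weekly snapshot job (paths, tarball names, and the use of git are assumptions, not from the talk):

    #!/bin/sh
    DATE=$(date +%Y%m%d)
    # pack the current LTS tree as the dated ebuild tree tarball
    git -C /srv/wamas-lts archive --prefix="wamas-lts-${DATE}/" master \
        | gzip > "/srv/releases/wamas-lts-${DATE}.tar.gz"
    # once a snapshot is approved as an update release, tag it:
    #   git -C /srv/wamas-lts tag "release-${DATE}"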
29:46
For the offline install, in addition to the ebuild tree tarball, I also create a source distfiles tarball containing every package distfile necessary to bootstrap and install each major release
30:04
with the stable keywords. Additionally and finally, of course, an all-in-one setup script is crucial. This is also created by the automatic snapshot creation,
30:22
and pre-configured to use these two tarballs. The important thing here is an easy-to-use command line script, so the product installers on the customer side, or even on the WAMAS derivation team side,
30:43
are easily able to install an instance of the WAMAS LTS distribution. Hotfixes, we have already seen them:
31:00
the production team, the WAMAS derivation team, or the service team is responsible for the hotfix overlay. The hotfixes don't necessarily go through the whole QA process, because when they are through the QA process, they aren't hotfixes anymore. And for major release upgrades,
31:23
because of the prefix support, I can have multiple instances on one machine: I install the old major release in some directory and the new major release in some other directory. So there is no need to upgrade an installation to the new major release profile.
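On disk that could look as simple as this (directory names are made up; startprefix is the entry script a Gentoo Prefix bootstrap creates):

    /opt/wamas-lts-2010/startprefix   # old major release keeps running untouched
    /opt/wamas-lts-2015/startprefix   # new major release installed alongside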
31:46
Any more questions?
32:11
This is accessible only for me because, well, there is no real need,
32:21
or who else would have need for the old tarballs, because they don't have the build instructions anymore.
32:57
Well, usually upstream eventually will have the distfiles,
33:04
or should be able to recreate them from their VCS, because they really should have VCS tags when they distribute the tarball. So this eventually is a concern for dead upstreams.
33:21
Well, if upstream is dead, still the VCS could be accessible. If you have a specific need, you can ask if I have it in my repository, of course.
34:00
You mean why I don't build the binary beforehand?
34:23
Wait a little. Well, which kind of binary should I pre-build?
34:49
I don't know beforehand which version of Solaris the customer really has. What I do know, or what the requirement is
35:02
for the customer, is to provide some operating system with file system support, a C library, and a POSIX API. And of course, for installing the WAMAS LTS distribution,
35:20
a compiler is necessary, but this compiler from the operating system is only needed for bootstrapping my own compiler, my own GCC. And often, well, sometimes it happens that applying some patch
35:44
for the operating system may change some system header file. And because it doesn't have write access to the system header files on a target machine, GCC does copy the system header files.
36:01
And if I distributed GCC binaries that include the copy of my own system header files, the target system's patch would not be applied to the copied system header files distributed with the GCC binary.
36:22
So we really compile on the customer's production machine. Or if the customer really has more than one so-far-equal production server,
36:41
then we compile on the oldest patch release of this operating system, just for this one customer. And this one customer then copies the binaries across his production servers. But this then again is true for the WAMAS software as well.
37:03
You compile the whole Gentoo package stack, beginning from above the C library up to your final application binary,
37:20
on at least one single machine setup. Because when you have old binaries built with an older patch release of whatever operating system, subsequently compiling the WAMAS application may produce unexpected results.
37:42
So still, again, a real distribution only really works if it is source-based. Binaries can be done, yes, but they are just a cache.
38:18
Libtool, yeah, that's it.
38:26
Libtool, in the sense of how to correctly build shared libraries on AIX, to get soname support. And one challenge was the Windows thing.
38:45
As you see, the last Windows I have added here is 2K3, because this was the last one we were able to build this LTS distribution on. However, the build system on Windows Server 2003
39:02
was the POSIX subsystem, Interix, also known as Services for UNIX applications. On Windows 2000, it was Cygwin. Cygwin got unstable with Windows XP. So with Server 2K3, we chose Interix.
39:25
Because this was the only one we were able to get stable with some workarounds. Interix, again, got unstable on Windows Server 2008. It was deprecated with Windows Server 2012,
39:44
and it is not part of Windows Server 2012 R2 anymore. So right now, we are seeking some working, stable POSIX system on Windows.
40:02
Yes, we have tried using Linux in some virtual machine, Hyper-V, whatever, to call the native compiler. But yeah, this works in theory, but isn't stable.
40:21
It does produce object files, with the build system running on Linux using the Microsoft compiler toolchain running on Windows. But the file system synchronization, well, doesn't work for this workflow.
40:46
Anyone else? 45 minutes.