Do Linux Distributions Still Matter With Containers?
Formal Metadata
Title: Do Linux Distributions Still Matter With Containers?
Series: FOSDEM 2019
Number of parts: 561
License: CC Attribution 2.0 Belgium: You may use, change, and reproduce the work or its content for any legal purpose, and distribute and make it publicly available in unchanged or changed form, provided you credit the author/rights holder in the manner they specify.
Identifiers: 10.5446/44243 (DOI)
Publication year: 2019
Language: English
FOSDEM 2019, part 86 of 561
Transcript: English (automatically generated)
00:05
Thank you, Brian. So thank you very much, everybody, for waking up so early and coming to this session; I know it's very difficult, myself I had some problems waking up this morning because, you know, it's FOSDEM, so on Saturday night you generally do interesting stuff with friends, and then you remember that
00:21
you were called by Brian saying, hey, you need to give a talk tomorrow morning. Oh right, I proposed my name. So sorry if it's not exactly the same topic. By the way, we can talk about the original topic, because I have planned for that and it's a topic which is also interesting: the relevance of Linux distributions in the era of Docker containers. It's something that we can
00:45
address; hopefully I will cover it a bit. But if you have questions or concerns on those topics, raise your hands and we will do our best to answer those points. So, let's start. My name is Bruno Cornec. I'm working for a
01:01
hardware manufacturer; I won't give the name here because they didn't sponsor my travel this time, so I will use my association name to present the topic. I've been doing Linux and related stuff for the last 25 years, and I'm part of different upstream and downstream projects. For this talk I'm particularly
01:24
concerned because I'm a Mageia packager, and I had some development to do to be able to package more easily in the container era. Okay, so let's start with a few reminders so that everybody is on the same page.
01:44
Containers, compared to hypervisors or a bare-metal environment, are really, really close to the bare-metal infrastructure. The only thing which is different from the bare-metal infrastructure is that you have the engine, which is managing the notion of containers here,
02:02
and which is just a thin layer that you put on top of the application. So you put in place an environment which is suitable to isolate the execution of your application. You have namespaces and cgroups, which are set up by default on your operating system, and here, in a container context,
02:23
it's the container engine which creates those execution environments for the application before launching it. So there is no specific overhead, because the kernel does the same job whether you use it on bare metal or in a container environment, at the same cost
02:44
for launching the applications. Just the fact that you have an isolated environment is what is of interest to us in this context. So I will talk about Docker, because I started using Docker, and adding Docker to the Mageia distribution, three years ago or so.
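To make the isolation point concrete, here is a minimal sketch (not from the talk): the second command approximates, with raw kernel facilities, the kind of setup the engine does for you. The unshare invocation is only an illustration of namespaces, not what Docker literally runs:

    # A container is just a process in its own namespaces and cgroups.
    docker run --rm alpine sh -c 'echo "PID inside: $$"; hostname'
    # Roughly comparable isolation using the kernel facilities directly:
    sudo unshare --pid --uts --mount --fork --mount-proc \
        sh -c 'hostname demo; echo "PID inside: $$"; ps aux'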
03:03
It applies to other types of container engines too. Of course some things are specific to Docker; some other things are really generic to container environments. The idea is really to pack everything together, to give you a new way of delivering applications to your users.
03:24
Forty years ago you were using tar, with a script in the tar file, and you were delivering your application to your customers or users like that. After the Linux revolution you had a more clever way of doing that, called packages, and
03:41
we are in the distributions devroom, so people building distributions are really keen to make packages, to create a suitable environment for delivering applications, taking care not only of the application itself but also of the right place where it should be delivered on the system. There is a standard, the Linux Standard Base, which gives you all the directories in which you
04:04
need to install your software: the logs are in /var/log, the application is under /usr, you have /etc for the configuration files, and so on. So this is standardized; this is taken care of by the distribution's package management system, and that system is also providing
04:25
support for the dependencies, both the build dependencies of the software and the installation dependencies of the software, which is a big advantage. That's one of the elements of why using a distribution is still relevant compared to the Docker environment.
04:40
So Docker came and said, okay, there is a new way of packaging applications. You can have these apps in a box, as on the previous slide, and everything inside that box, inside that context, can be shipped easily to another platform and run easily on another platform. That's really the approach they had: bundle everything.
05:02
The approach is to say, okay, if you have one process you want to run, you will create one container. So if you have an application which is made of seven different daemons working together, then you would create seven different container images and seven different containers to host them.
05:22
It's working with layers, and I will detail that in the next slide. You have the notion of an image, and the notion of a container, which is an instantiation of the image: the container is read-write, you can work in it, whereas the image itself is read-only. You have additional features: if you want to share or distribute your images,
05:41
especially inside your environment or outside, you have the notion of a registry. That's what the people from Docker run on the Docker Hub. So when you do a docker search for an image, it interacts with the registry, trying to find an image which has the name that you gave, and it gives you a list of
06:01
dozens of different images corresponding to what you are looking for. You can do exactly the same internally: you can have a private registry as well to allow the sharing of images. It's like repositories in a distribution, where you manage packages inside the repository and share them through repositories.
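As a rough sketch of that registry workflow, assuming Docker's stock registry:2 image and placeholder image names:

    docker search mageia                          # query the Docker Hub registry
    docker run -d -p 5000:5000 --name registry registry:2   # a private registry
    docker tag mageia:latest localhost:5000/mageia:latest
    docker push localhost:5000/mageia:latest      # share inside your environment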
06:23
They created the notion of the Dockerfile, so this one is specific to Docker; it's the recipe to create the image. You have instructions that help you build the image on a regular basis; you can replay it. It's like the Makefile for building an application: a recipe to build the Docker image.
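A minimal Dockerfile of the kind being described might look like this; the package and file names are made up for illustration:

    FROM alpine:3.9                       # base layer
    RUN apk add --no-cache python3        # install dependencies reproducibly
    COPY app.py /usr/local/bin/app.py     # add your application
    CMD ["python3", "/usr/local/bin/app.py"]

Replaying it is just docker build -t myapp ., exactly like re-running make against a Makefile.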
06:44
Inside the image, everything is volatile. So when the container dies, you lose everything. Sort of: it's stored on your disk in the layer, but effectively you lose everything. So you want to have permanent information stored in volumes,
07:02
which can be mounted from the host inside the container, or you can use network-attached volumes if you need that. Inside the container you are completely isolated from the world, so you need to specify which ports you want to make available to the outside world.
07:21
So if you run one daemon inside an image, you want to expose the port of that network service to the outside so that it can communicate with the outside world. The goal, if you remember Java's promise of write once, run everywhere, is pretty much the same for a Docker container image.
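Both points in one hedged example (the image and volume names are illustrative): a named volume keeps the data when the container dies, and -p publishes the daemon's port to the outside:

    docker volume create pgdata
    docker run --rm -d --name db \
        -v pgdata:/var/lib/postgresql/data \
        -p 5432:5432 postgres:11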
07:43
Here it's create once, run everywhere, on a given OS; you cannot really mix and match between different OSes. One thing I've not written here on the slide is that you have a standard describing images, which is the OCI, and
08:03
it gives you the possibility to use the same image content with different implementations of container engines. So you can take a Docker image and run it with rkt or CRI-O. When you want to orchestrate stuff, you go a bit up in the stack.
08:21
You have the notion of composition. You can create YAML files to give your engine the information on how to launch the container, how to instantiate the container from the image: which volumes you want to attach, which ports you want to expose, which environment
08:40
variables you have, which networks you have, all that stuff that you can pass on the command line to the docker command. You can also store it in a YAML file and give that YAML file to someone else, and you have the Docker Compose upper layer, which will create all the containers from that description, based on the various images that you have.
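A small docker-compose.yml carrying exactly that information could look like this sketch; the service, image, and volume names are assumptions:

    version: "3"
    services:
      db:
        image: postgres:11
        environment:
          POSTGRES_PASSWORD: example    # environment variables
        ports:
          - "5432:5432"                 # ports to publish
        volumes:
          - pgdata:/var/lib/postgresql/data
    volumes:
      pgdata: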
09:01
At higher layers there are Swarm and Kubernetes that you can use. Everything is using a REST API, even the command-line interface tool; it always talks to the Docker daemon on your system through the REST API. It's developed in Go, the composition part is developed in Python, and it's licensed under the Apache License 2.0.
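You can observe that REST API yourself by querying the daemon's Unix socket directly, which is all the docker CLI does under the hood (the API version in the path may differ on your installation):

    curl --unix-socket /var/run/docker.sock http://localhost/v1.39/containers/json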
09:24
As I said on the previous slides, there is a layered approach; this is how it works. Everybody is using the same kernel as the host. There is no difference; there is no kernel inside the container image. If you go inside a container image and use the uname command to look at what is there,
09:45
it's the kernel running below on your host system, not a kernel coming with the image; that would not mean anything, because it's just an isolation of processes. So you have the same kernel, launching different stacks, different applications.
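A quick check of that shared-kernel point; both commands should print the same kernel release:

    uname -r                           # kernel on the host
    docker run --rm alpine uname -r    # same kernel, seen from a container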
10:01
It could be a process that you run directly on your system, or a process that you run in a container on your system; it's the same. Then on top of the kernel, once you have your cgroups and namespaces available as kernel features to enable those kinds of
10:20
working environments, you have the notion of the image. The image is the read-only part of the solution, and you can create as many layers as you need to reach a point where you are happy with your image. You can have something as simple as just a BusyBox binary; that could be your image.
10:42
So you execute Docker, you create an instance based on that image, and you will be in an environment where you just have the BusyBox binary, and the hundred or so commands that BusyBox provides, in a very small-footprint environment. That's one way to deal with it. Or you can use a very small distribution that the Docker people adopted, Alpine,
11:05
which is a very small Linux distribution, providing the strict minimum you need to have something which looks like a Linux environment, because BusyBox is really, really small. Or you can put in a full, normal distribution; the minimum set of that distribution.
11:22
For a Mageia distribution, that would be 200-something packages; for Fedora about the same; for Debian maybe a bit less. So you have the possibility to create a really small layer, which is the base of your distribution, and on top of it you can install packages using your normal mechanism.
11:42
If you are on Debian, you do apt-get install; if you are on Fedora, you do dnf install; if you are on Mageia, you do urpmi. It's the same approach: you just use the native tools that you have on your distribution to build the context that you want to have. You can add scripts, you can add binaries, you can do pip install, you can do npm install.
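So each distribution's native tool simply becomes a RUN line in the Dockerfile; a sketch with placeholder package names:

    FROM fedora:29
    RUN dnf install -y rpm-build python3-pip    # the distro's native tooling
    RUN pip3 install some-module                # non-packaged extras stay isolated here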
12:03
You can do whatever you want inside, and it's isolated. Once you are happy with the result of the build of the image and the running environment, then you can instantiate one version, which will be the container, in which you have the right to write and modify stuff,
12:26
and that's where you'll be running your application, on top of it. Any questions at that point? Is it obvious for everybody? Good. So, why do you want to use distributions, and packages inside the distribution, with containers and VMs?
12:46
That is probably what the original speaker was intending to cover more. So first, why do you want to run containers? Because you already have a distribution, and what you want is to not pollute your distribution with a ton of stuff that you want to test in a dedicated environment.
13:07
Especially when you do stuff like JavaScript and Node.js, where very few distributions have packaged the whole stack that you would need to develop. That does not exist in RPM or deb format, because it's really a moving target.
13:25
You would need thousands of dependencies; when you do npm install of something, you get ten thousand different modules installed through the network. That's a huge amount of work for a distribution, and most distributions just package Node.js itself.
13:41
So you can do npm install, and the rest is not packaged, because it's moving too fast; it's not possible to keep up with that. That's not the case for some other languages: you have a lot of Perl modules, Python modules, and Java modules as well which are available in a package format, so you can benefit from the work of the distribution people to have a clean set of packages.
14:02
But for moving targets like Node.js, it's really something you want to isolate from your native distribution, because you don't want to install on a distribution something which is not packaged. Why? Because it creates problems:
14:20
manual installation, compared to package installation, is a direct way to create problems in your environment, because you may have stuff in the standard place and, at the same time, the same stuff in a non-standard place under /usr/local. For example, if you just do configure, make, make install for a GNU software, it will by default land in /usr/local, and then you will have binaries in /usr/local/bin and binaries in /usr/bin,
14:44
and you don't know which version is which, and sometimes you don't point to the right configuration file because you have multiple instances. So if you want to have a serious execution environment, and also build environment, you need to be very clear about what you do, and clearly identify where you need to use non-packaged environments such as Node.js, compared to packaged environments.
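The shadowing problem is easy to reproduce; type -a lists every match on the PATH, with a hypothetical tool foo standing in for anything installed both ways:

    ./configure && make && sudo make install   # lands in /usr/local by default
    type -a foo     # may show /usr/local/bin/foo AND /usr/bin/foo
    foo --version   # which one runs depends on your PATH ordering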
15:05
The advantage that containers bring, like VMs, is that you're not polluting your running environment. You create an isolated place where you can do everything you want, and in the end it stays in that environment, and it's something you can send to someone else for testing or whatever.
15:24
So it should be something you are able to rebuild easily. Doing it with VMs, you should automate the creation of the VM, the operating system deployment in it, the installation of your application in it, and so on. It's easier with containers than with VMs, but it's the same approach.
15:40
You want to be able to easily scratch and redo your execution environment if you have problems. It's easier with a Dockerfile to do that, and to rebuild your Docker images on a regular basis so they are really up to date.
16:01
Containers also bring something which is useful, which I'm using, and which is in fact the goal of this talk: you can have, on a single Linux distribution, tons of other distributions available to make your tests. So you can automate the portability of your application across different running environments, different distributions. You can package your software for different distributions, so that it's installable natively for the people using packages from those distributions.
16:28
So it's a really easy way to distribute for other distributions than the one you have. I mean, I'm running the sixth version of Mageia, which is the latest stable version; it's more than one and a half years old now,
16:42
and we will issue version seven in a couple of months. But I don't want to run a non-stable distribution: I use my laptop to work, and working with a development distribution is prone to breaking my ability to work quite often. So I prefer to isolate the tests I do on the
17:03
development distribution in a specific environment such as a container; that's the goal, compared to using the development environment natively and having the compiler broken for a couple of weeks because of the things that need to be put in place when we move from one version to another, and so on.
17:21
Another advantage of containers with regard to VMs is that it's very easy to share your home directory with the container. When you launch your instance of a container from the image, you can say: attach my home directory and put it in the home directory of the container environment, so that I'm at home and I can use all the
17:46
files that I need to work in my environment. That means, for example, if you are on an RPM-based distribution, that you can share all the configuration files that are needed to build packages correctly, like the RPM macros and rpmrc files, and you can also keep your SSH keys at hand.
18:04
I have my SSH key in the Mageia build system, to be able to push packages and ask the build system to recreate the packages I've tested locally. You can do the same with other distributions as well; what I do is really generic, it's not tied to Mageia itself.
18:22
The only place where you really need a VM, compared to a container, to do that isolation, is if you need a different kernel between what you're running and what you want to test. That's the only place where it matters, and for people like me who are doing packaging most of the time,
18:40
I don't care: I can use the native kernel of my distribution to do the packaging. I'm packaging, for 120 different distributions, the software I'm upstream for, without any problem due to the difference of kernel on the host and inside the packaging environment. So that's really feasible.
19:02
So, how do you deal with that concretely? You have the Docker registry, or your own registry, or your own local images; you have a set of images that you can put in your own environment. You create those images using a Dockerfile.
19:23
I will show you the content of the Dockerfile just after. Every time you need to test something in an environment different from the one you're running, you instantiate a container for the distribution which is your target, and inside the container you build packages that you can then send to your distribution repository,
19:42
either using Subversion or git sources, depending on the distribution and the package management system of your distribution. Okay, so maybe I should turn around and show you the real file instead of the one on the slides, for people who want to have a look
20:03
after the presentation. But I should be able to show you something else here, which is a real one.
20:28
Okay, so I have a way to capture some parameters as input, which is not really important. I have a configuration file that I can use to pass some variables and have default variables
20:41
available in my environment here: the version of the Mageia distribution target, temporary directories, the mirror that I can use to download the dependencies, the working directory, and the architecture on which I'm working, because I can also build for different types
21:03
of architectures. There is a very convenient project in QEMU which allows you, for example, to run non-x86 binaries on an x86 machine, as if you were in a virtualization environment, except that you are not virtualized: you are not in a VM, you're emulating the instruction set.
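The QEMU feature referred to here is user-mode emulation via binfmt_misc. One common way to enable it for containers is the multiarch helper image; treat the image names here as assumptions for your own setup:

    # Register qemu-user-static handlers, then run a foreign-architecture image.
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    docker run --rm arm32v7/debian uname -m    # reports armv7l on an x86 host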
21:24
I used just a Raspberry Pi to make some tests with another architecture, to see if it was working; that seems to be very interesting. So I get my information, my UID and GID, because I want to map those inside the container, and then I generate a Dockerfile here.
21:47
I start from what I call a Mageia official repository, which is in fact local to my system. Those are my local root images for the distribution; I can show you, if we have time, how they are built.
22:03
The first thing I do is update my distribution inside the image: when I build the image, I say I want the latest version of every package that I need. Then, okay, that part is commented out; then you install all the dependencies that you need
22:21
in your environment. So I update the repositories, and then I install all the dependency packages that have been updated since the last time. And I install in that environment, because I'm building packages, the set of packages I need to build packages: there is the bm command, which does the build through rpmbuild;
22:43
Mageia is using Subversion for configuration files and things like that; there is the mgarepo command, which handles the interaction with the Mageia official repository and launches builds on the Mageia build system; and some other useful tools like colordiff and sudo, because I want to be able, in my build environment...
23:05
So, when you're building packages for a distribution, never build as root. If you take one thing away from this talk, it's: never build as root. Because when you're building a package, you don't know what you are launching; you're packaging a set of software which comes from upstream, and those guys may
23:24
remove files, and so on, and if you don't set up the right environment variable, you will remove files in a place where you don't expect to remove them. So never run as root; run as a normal user. That's why there is some magic here to create the user in the container image,
23:41
associating the right UID and GID (this line is not useful anymore), and giving that user the right to sudo in the container, without any password, to be able to launch some commands as root when you need them, but not when you don't, and you do that on purpose. So all the builds
24:02
are done as a normal user, but sometimes, if you want for example to install the built packages in your environment, then you need root access to be able to write into the package database. Then I create the home directory of that user, and I say that I will be in a WORKDIR, which is
24:22
the place where I have my Mageia environment. I run the container as a user, not as root, and I launch a bash command. The rest is just a small part to detect if there is already a container; if I force, I can remove the previous image to rebuild an image.
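Putting the steps just described together, the generated Dockerfile plausibly looks like the following reconstruction; the base image tag, user name, and UID are stand-ins for what the script actually substitutes:

    FROM mageia:6                      # local official root image
    RUN urpmi.update -a && urpmi --auto --auto-update       # refresh all packages
    RUN urpmi --auto bm mgarepo colordiff sudo              # packaging tools
    RUN useradd -m -u 1000 build \
     && echo 'build ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers  # passwordless sudo
    USER build                         # never build as root
    WORKDIR /home/build/workdir        # where the Mageia checkout lives
    CMD ["bash"]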
24:41
And then I just run. So this is the line which creates the instantiation from the image. Here we are building the image with that recipe; once the image is built, you instantiate an environment. You say: I want to remove that environment at the end of the run, I want to map my...
25:01
I want to be able to SSH correctly from my Docker container environment, so I need to do some things with the socket and set up the SSH socket inside the environment at the same place where it is outside, so I can communicate using my SSH agent, which is already running on my system.
25:22
I just want to mount my home directory onto my home directory inside the container, and I use the image which is tagged like that, which is the name of what we are creating here in the recipe. So, how does it work?
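The instantiation line being described is roughly this; the image tag is an assumption:

    # --rm removes the container at the end of the run; the ssh-agent socket is
    # mapped to the same path inside, and the home directory is mounted at the
    # same place, so the environment feels like home.
    docker run --rm -it \
        -v "$SSH_AUTH_SOCK:$SSH_AUTH_SOCK" -e SSH_AUTH_SOCK \
        -v "$HOME:$HOME" -w "$HOME" \
        mageia-build:cauldron bash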
25:41
So here I'm running on Mageia 6, and I can of course create a Mageia 6 environment as well. So by default... I didn't relaunch anything this morning, so I cannot...
26:04
Let me just restart the Docker daemon, because I may have changed some stuff since yesterday.
26:21
Let's try again. So I have a certain number of images; that's why it takes a bit of time. There are a couple of images, and you see there are a lot of different distributions that I use
26:41
to build, to have different environments to be able to build different software correctly; that's why it takes a bit of time. Is it better now? No, still not: no such host. So I may have lost my network. Which is here... Where is my mouse?
27:02
Here
27:52
Yeah, I have no... no LAN.
28:14
So can I use... the FOSDEM-legacy one here? Compared to the FOSDEM one, it should be better.
29:01
Oh, yeah, right
29:21
So the difference of
30:10
64 It should be before
30:35
I'm not pointing to the right image; I can see the architecture here.
30:41
Okay, so here, where am I? I am in a container which has been instantiated from the image which is here. You see the prompt changed, of course. From that perspective it's still a Mageia 6 environment, but this one
31:06
has 232 packages, whereas my native distribution has 3000 packages. So I'm in a completely different environment: a fresh Mageia 6 environment with a minimum set of packages.
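The package counts he is comparing come from something like:

    rpm -qa | wc -l    # 232 inside the fresh container, about 3000 on the host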
31:22
It has just the packages that you need to run the command which can install additional packages. That's what you want: a bare-minimum distribution on which you are able to use apt-get if it's a Debian distribution, urpmi if it's a Mageia, dnf if it's a Fedora.
31:43
That's just what you want to be able to do, because that, plus a correct network configuration so that you can reach the repositories and download content from them, is all you need. So, where are we here... The thing which is also not right is that I'm root in that environment.
32:02
I should not be root; I should be a normal user. Let me check... yeah, right.
32:21
So this is the image; this is the official image. This is the one I use as a base environment, so it's not the image I use to build my packages. I can do the same easily with another version: if I use a Cauldron version here, I will now be in a different environment,
32:45
which is a Mageia 7 version, and which has a different set of packages: only 219. So nice job for the guys working on that, because they reduced the size of the minimum distribution set from 232 to 219; we have fewer packages when we want to create
33:04
a small distribution with the Mageia 7 release. And here, if you look, all the packages which are installed are mga7, whereas of course all the packages which are installed on my native system are mga6. So I have a working Mageia environment here which is completely different: I'm pointing to the development distribution,
33:25
I have all the dependencies of the development distribution, and I can really do what I want in that environment easily. Let me just fix that, because there is something wrong here; you should never make changes...
33:51
Well, that's not very correct; in fact, that's not very correct the day before the presentation. Um, let me check,
34:04
because yesterday I was building some stuff. So I have in my environment, normally, this one for example; so if I go here into
34:20
this one, yeah, which has the architecture; that was the missing part of my script, the script had not been updated for that. So now I have an image which is based on the previous image. So this is still a Cauldron
34:41
version, a version 7, but this time I have a bit more packages, because in my recipe... So if we look at the Dockerfile that we have, for example...
35:01
Where is the presentation... So if we look at the Dockerfile that we are using here in the presentation, which is the same:
35:21
in addition to the standard distribution, which has 219 packages, I asked to add a couple of additional commands to be able to work. So for example I should have the bm and the mgarepo commands. So let's go back here.
35:41
So here, first, I am a normal user; I'm not root anymore. I changed the environment in which I want to run, and I have access to the bm command and to the mgarepo command, which were not there before. And I am placed in my directory where I have all
36:01
the packages I'm following for Mageia, which I can rebuild. So let's take, for example, something related to Docker: I have docker-compose. I will do a removal of everything which is not relevant, all the intermediate build stuff here; I just keep the sources and the spec file, which are the strict minimum I need to build packages.
36:27
For those not really familiar with that: the spec file is again a recipe which gives the RPM system instructions on how to build a package for the distribution I'm running. It gives you
36:42
some dependencies at build time that you need to satisfy to be able to build; as docker-compose is a Python script, it needs a certain number of Python modules to build correctly. Then it will also indicate some
37:01
installation dependencies: if you install the package on the distribution, you will need to satisfy those dependencies around the needed Python modules. And then you have the recipe to be able to build the software in your environment. For the Mageia distribution it is as simple as running bm.
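For readers who have never seen one, a heavily trimmed spec skeleton for a Python tool like docker-compose might look as follows; every name, version, and dependency here is illustrative, not Mageia's actual spec:

    Name:           docker-compose
    Version:        1.0
    Release:        1%{?dist}
    Summary:        Run multi-container applications with Docker
    License:        ASL 2.0
    Source0:        %{name}-%{version}.tar.gz
    BuildRequires:  python3-setuptools
    Requires:       python3-docker python3-websocket-client

    %description
    Tool for defining and running multi-container Docker applications.

    %prep
    %autosetup

    %build
    %py3_build

    %install
    %py3_install

    %files
    %{_bindir}/docker-compose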
37:22
And of course it does not work; not because it's a demo, but on purpose: I am missing all the dependencies. I showed you that there are build dependencies here, and I don't have those. If I do rpm -qa | grep python, I have a certain number of Python packages, typically
37:46
what is needed to build Python packages, and the Python 3 and 2 versions themselves, but very few other packages; setuptools is the only one I have, for example. I don't have, I think it needs, the docker package, the websocket-client, and so on. All those packages are not available yet. So I can say to my system: okay,
38:07
I need to be root, because I want to install additional packages, and I want to install the packages which are mentioned in the spec file that I need to have. So it says: okay, I will default to using BuildRequires. Which are the BuildRequires that you need?
38:24
Okay, you need a python-docker package; which one do you want? Let's take the first one. And you have recommended or optional packages, so I say: okay, I don't want to pollute my system too much, so do not
38:43
install the recommended packages, just the ones I really need to build. So the list of packages needed as dependencies in my environment are those, and I say: install that stuff.
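What he runs is approximately this; urpmi's --buildrequires takes the spec file and installs its build dependencies, and the flag spellings should be checked against your urpmi version:

    sudo urpmi --buildrequires --no-recommends docker-compose.spec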
39:01
Hopefully, if there is a bit of network, I should be able to download them... which does not seem to be the case.
39:25
Okay, so maybe the mirror is having an issue, because I have the network here. Let me check.
39:46
Distrib-coffee... yeah. So I cannot reach the mirror myself either, in the web browser, so I need another mirror. Let me try.
40:41
So let's say, okay, this mirror is broken and does not want to deliver stuff to me. That's not a big issue; well, it's a problem, but not a big issue. I will go into my configuration file and change the mirror.
41:11
Okay, let's do it the other way around: I will change the reference to the mirror here,
41:26
inside the configuration file, to something which is better: the kernel.org mirror, which should be working fine here.
41:57
Oh, there is to this trip
42:09
Okay, this time it should be a bit better, so let's try again. Okay: when you deal with a mirror which is up to date and
42:23
available, you can download the dependencies to build your software; it installs them for you. So now you can build your package, and this time, as all the dependency requirements are satisfied, you can
42:41
build the package, and you have in your environment again all the directories that have been created. For example, you have the new package which is available here, which has just been built in my environment, and which is clean because it has been built using the Mageia Cauldron tools and Mageia Cauldron dependencies,
43:00
creating an mga7 version. So everything is completely safe from a build-environment perspective. And now I can just try to install it, and again it looks at dependencies, at install time this time: okay, for installing that package you will need those packages as dependencies. So just say yes;
43:25
it will download some additional packages, and now you have the package which is here, and you can start testing it in your environment, checking that it works. So you have a strict minimum environment to be able to test the one package that you have built here,
43:41
which is exactly what you want to do, and I'm not polluting the rest of my system: it's completely isolated, and I can do that as many times as I want, with the different distributions available. Any questions?
44:09
Yes
44:21
So, generally, what happens with distro vendors is that they have a build system, and on the build system you have machines with all the targets that you need or want to support. Here I'm testing on my local system; I check that everything is working, and when I'm done I can use the mgarepo command to push
44:43
my content to the build system. Pushing my content is just pushing the Subversion set of files that are under version control. So in my case here... where is my mouse again... here, sorry:
45:04
so here is, on the build system, the Subversion tree that I mirrored locally. You can have a look at the different things that have been
45:24
done on the system, so you see what happened during the life of the package, during its development: you see when you modified the compose package, when there was a mass rebuild, for example for Mageia 7, which automatically changed a certain number of things.
45:42
Okay, and when you are happy with what you have: in your environment, what is important are the sources directory and the spec directory. The spec directory contains the spec file, which is mandatory to rebuild, and the sources directory contains the sources of the various versions I have had over time for that component, and a SHA1
46:09
file keeping the checksum of the source files. Those are the things that are in the Subversion repository, in the Mageia build environment. And when I launch a
46:22
build of the package, it will go to the build system, extract the right files from Subversion, and run the bm command, like I do, on all of the target systems that you need to support. So it will be for x86: i586, which is the 32-bit version,
46:41
and armv7hl. Because, I see you have a Debian t-shirt: we are not like Debian, maintaining as many architectures as you are maintaining, of course, and we have fewer packages than Debian as well, to be clear: only 30,000 when Debian has 50,000 or something like that. So that's the way it's done: you have your target systems on the build infrastructure, which are used to build the final packages.
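The local-test-then-submit loop he describes would look roughly like this with Mageia's tools; treat the exact subcommands and messages as approximate:

    mgarepo co docker-compose       # check out SPECS/ and SOURCES/ from Subversion
    bm                              # local build in the container, via rpmbuild
    mgarepo ci -m "Update package"  # commit the changes
    mgarepo submit docker-compose   # ask the build system to build all targets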
47:06
So you're building stuff and you're testing it. Of course, you may have software which works nicely on x86 and does not work on ARM, and you will not detect it through this process; you will detect it when your contributors say, hey, it's broken on my version, and you have Bugzilla, where you give the architecture on which it's not working,
47:24
and people will make tests on that version if they have not done so before. That's the way it's done. ... No, it's a dedicated system, which is, I think, using iurt; due to... I mean, you don't change build systems easily,
47:43
that's one of the problems. So yeah, that's the way it's done. Any other questions? How am I doing with time? One more question, okay.
48:03
So if there is no other question, I'll leave you a bit of time to change rooms and get another fantastic presentation, hopefully. Thank you very much.