Purely Functional Package Management
Formal Metadata
Title: Purely Functional Package Management
Series: All Systems Go! 2019 (11 / 44)
Number of Parts: 44
License: CC Attribution 3.0 Unported
DOI: 10.5446/46130
Transcript: English (auto-generated)
00:05
Okay, let's start. As already introduced, I'm a former Debian maintainer, and I've been contributing to NixOS since 2015, after I got fed up with how normal distributions do package management
00:23
and how configuration management is built on top of package management; you always run into trouble if you use tools like Puppet, Chef or Ansible. Since then I started looking into new things, and one of the things that I really like is NixOS.
00:43
Since then I've been a release manager and I'm on the security team, and in my day job I'm an infrastructure engineer at Mayflower. I've also been able to work on NixOS on company time at Mayflower, and we are providing
01:04
consultancy services for the NixOS ecosystem. So, about this talk: I've done a few Nix introduction talks, but this will not be a real Nix introduction talk. I want to showcase some Nix concepts and their value to the greater Linux distribution
01:22
community, so it will be on a technical level and I assume some Nix knowledge, but please do ask questions during the talk if something is not clear. Okay, so about the terms I'm going to use.
01:41
Nix is a package manager that was developed around 2005 as part of a Ph.D. thesis in the Netherlands by Eelco Dolstra. Nix is also used as a term for the Nix expression language. That's a purely functional programming language that was tailored to defining package builds.
02:03
And there's Nixpkgs, or Nix packages. That's a package set containing package build definitions for, like, 70,000 packages. For instance, we automatically import the Haskell package set, so all Haskell packages are by default available and tested for our environment, and it's just a GitHub repository,
02:25
a big GitHub repository with over, like, 1,000 contributors. And there's NixOS, which is built on top of Nix and on top of Nixpkgs, the package set. That builds a whole Linux distribution, with the kernel, with the bootloader.
02:42
There are service modules that basically replace configuration management, and a few other goodies. There's also NixOps, which is a deployment tool that fits into the whole ecosystem. With that, you can deploy NixOS systems to, like, EC2, to a local VirtualBox for testing,
03:02
or to just a host that has NixOS installed, via SSH. Okay, let's get started. The typical procedural approach to package management roughly works like this. You have some packaging instructions and metadata, and you have the state of the file system,
03:25
which includes headers, libraries, and all the dependencies you want to have. You take those and invoke the build system of the software you want to package, which then builds the software, and the files are then placed on the file system at a specific
03:41
prefix, normally /usr, and that in turn modifies the state of the file system, and after that you can build other software that depends on that new software that you just built. So what happens here? The build results depend on inherited state from previous builds or previous package
04:05
installs, and installing packages modifies the global file system state. Okay, so far so good. Now if you go to the functional approach, you have to understand one thing. What is purely functional in that context?
04:20
You maybe know functional programming languages like Haskell, Scheme, Lisp, but purely functional is quite a strict definition of a functional language, that is, functions are pure if and only if they always evaluate to the same results given the same arguments.
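As an illustration (not from the slides), a tiny pure function in the Nix expression language looks like this:

    # a pure function: the result depends only on the argument
    let
      greet = name: "Hello, " + name;
    in greet "world"   # always evaluates to "Hello, world"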
04:42
For instance, this might not be true for languages like Scheme or Lisp, because they are not purely functional, but languages like Haskell are purely functional. They have no side effects, and the evaluation of a function causes no semantically observable
05:00
side effects either in the system or the program itself. That means purely functional. So how do we do that in a package management environment? The functional approach also has packaging instructions and metadata as the input, and it will invoke the build system of the software package, but in these packaging
05:24
instructions and metadata, all the dependencies are also included. The output will then be stored in some unique prefix, so not /usr, but some other prefix, and that will later be used in the packaging instructions to refer to that
05:42
exact version of the program. Okay, but what do we gain from that? So one thing we gain is we have a conflict-free system. We can have multiple versions of different programs installed, or the same program installed
06:02
because they are installed in a completely different prefix. By definition, it has to be immutable because you always want to have the same software. It's also reproducible because you always define what exact dependencies you want to have. So not only the version, but all the transitive dependencies along the way.
06:27
And you can do atomic operations on them because you can assume that you have no conflicts, everything is immutable, and everything is reproducible. So how does this look in practice?
06:41
In practice, Nix has a concept called the Nix store. That is the custom prefix where all programs are installed. In this example, I have cryptsetup, which you probably all know. It will be installed in /nix/store, then a hash sum, and that hash sum is computed over all
07:05
inputs, so all dependencies, and the package build definition. So if you alter the package build definition, like increasing the version or adding a new command that does some fixing up, for example, you will get a new hash and a new store path.
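As an illustration, such a store path looks roughly like this (the hash part is a placeholder, not a real hash):

    /nix/store/<hash over all inputs and the build definition>-cryptsetup-<version>/

Change the definition or any input, and the hash changes, so the result lands in a new, independent store path.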
07:24
That, of course, means that if you change something, you need to rebuild that package and all packages depending on that package. So for instance, if you update glibc, you have to rebuild everything. You might now say that that's a bad thing, but actually it's a good thing, because whenever we
07:44
change some core library, we bootstrap everything again, and we make sure that it's buildable every time. For instance, in Debian, you have situations where, after some library bump, you have to recompile software, and in Debian,
08:03
it's a very concerted effort to build the libraries or the programs depending on the library and then upload them. We don't do that. We always rebuild everything. Okay, we also have a notion of multiple outputs. So that cryptsetup you saw has all the binaries
08:24
you can execute, the libraries, and some stuff in share/locale. We also have a development package; we call that an output, and appended to the store path, that's the dev output. And in the dev output, we have stuff like the headers or the pkg-config file
08:40
and some special Nix support files. Okay, so how do we do that? We lock every single dependency down to the Nix store path. So if you look at the libraries used by cryptsetup, the cryptsetup binary, you see that every single library has a fixed RPATH
09:03
to the exact version of the library that it was compiled with. So if I try to scroll, it doesn't work right now. As you can see, it's just the whole store path
09:21
and after that comes the library from that exact store path. So far, so good. This is for normal ELF executables. But what about other stuff? How do we lock down the dependencies there? So for instance, this is Borg, or BorgBackup. It's a Python program for doing backups.
09:43
And not only does Borg require some Python libraries, it also requires OpenSSH, to SSH into machines and copy backups there. So what we are doing is, this is the Borg executable. It has a shebang that is fixed to a specific version of bash.
10:02
This is just a wrapper script that appends the OpenSSH bin folder to the PATH and then executes the actual Python executable, or the Python script, of Borg. If you look at the wrapped Python script of Borg,
10:21
we see that the Python interpreter at the top, the shebang here, is also fixed to a specific Python version. And there is some special stuff inserted that adds the needed Python libraries to the Python search path. So for instance, Borg needs Cython,
10:43
and llfuse, and a few others. They are added and explicitly defined, and only those exact versions will be used. We do that not only for Python, not only for shell scripts, but for Haskell, for Ruby, for whatever language you can think of, even Node.js and npm.
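Roughly, such a wrapper is generated from the package definition; a sketch using the makeWrapper helper (the exact flags here are illustrative, not the real Borg expression):

    # sketch: wrap the borg entry point so OpenSSH is always on its PATH
    # (makeWrapper has to be in nativeBuildInputs for wrapProgram to be available)
    postInstall = ''
      wrapProgram $out/bin/borg \
        --prefix PATH : ${openssh}/bin
    '';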
11:00
There is another concept called profiles. These are basically versioned symlinks where you can have a set of packages, or a set of store paths, that you can put together into an environment and then version it and do updates or changes
11:24
and then roll back. What you see here is that there is a main symlink. In this case, it is the system profile and it links to a version of the system profile which then links to the specific system profile version. This works for the system profile in that case,
11:41
but we also have user profiles. And if you, for instance, look at the system profile, this is a NixOS operating system profile which includes stuff like the activation script to activate the system, /etc, which we also statically generate, some firmware,
12:01
the initrd, the kernel, the kernel modules, as well as the actual software available on the system, which is also like a sub-environment. And we also have systemd in there, which is kind of special because it's the init system. We also have user profiles
12:21
because we have multi-user package management, so every user can actually use Nix, if Nix is installed on a system in multi-user mode, and can build their own packages that are only available to them. User profiles are also versioned, and user environments also include their own prefix
12:45
where all the software that is defined in the environment is linked together, and it is made sure that all the software finds the exact versions of the dependencies that are needed. So for instance, if I look at my PATH variable
13:01
when I log in, I see that /nix/var/nix/profiles/per-user/fpletz/profile/bin is in my local PATH variable, so I can execute everything that I installed in my user profile by default.
13:26
Okay, so how do we achieve purity in Nix? So on the concept level, a package, or as we call it, a store path in the Nix store, is the result of a pure function that is written in Nix
13:42
and has, as said before, no side effects, and the result only depends on the function arguments, without side effects. Okay, so if we are in a Nix build environment, we have a sandbox in place. So on the one hand, we have that functional definition
14:00
which by itself is like functional. We use, of course, Linux namespaces and cgroups, just as any container technology nowadays would use. You can think of it like a lightweight container, but we don't use every namespace out there, just the ones that are useful to achieve
14:23
the level of isolation we want in the build environment. We set a fixed time. So every build will be executed with a fixed timestamp. All created files will have a fixed timestamp so that there's no deviation depending on when you build it
14:41
or on which machine you built it. There is no networking available, so you can't just do npm install to install dependencies whose real contents and versions you don't know, because other package managers just download stuff
15:01
depending on the branch or the version. And we also do stuff like automatically patching the shebangs of shell scripts or build scripts that are in the source tarball. And we have some patches to tooling to ensure reproducibility. We don't have many patches left
15:21
because we are also part of the greater reproducible builds effort, and a lot of stuff has already been upstreamed. So not many patches are left. So at some point we have to introduce impurity, and one of the impurities we have is
15:42
fetching source tarballs. So what we do is, if we have impurity, we always guard the package definition with a checksum, in particular the cryptographic checksum of the contents. So if we, for instance, define the package cryptsetup
16:01
and we fetch the sources from a kernel mirror, we always include the checksum of the contents so that we can be sure that we always get the same source tarball.
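A sketch of such a guarded fetch (the URL and checksum here are placeholders, not the real values):

    src = fetchurl {
      url = "https://www.kernel.org/pub/linux/utils/cryptsetup/v2.2/cryptsetup-${version}.tar.xz";
      # cryptographic checksum of the tarball; the build fails if it does not match
      sha256 = "0000000000000000000000000000000000000000000000000000";
    };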
16:21
There's, I think, a slide missing, but this is called a fixed-output derivation, where you already know beforehand what the output of that build is going to be. So you can think of this fetchurl call as another package that will be installed, but you already know what its contents will be.
16:42
So in that case, in a fixed-output derivation, we enable networking, and downloading is actually possible. This is the only way you can do impure stuff. You could, for instance, do an npm install in a fixed-output derivation,
17:00
but then the output always has to be the same, and it won't be. Okay, so how is the Nix package set structured? In Nix, there is a data type called an attribute set, which is basically a key-value pair, or like a key-value map of keys to values,
17:24
and here's a small snippet of some examples. For instance, we have cryptsetup, which calls the function callPackage with the definition of cryptsetup, which is in some other folder. And if you look at this other package here,
17:44
this is special because it doesn't compile with a recent version of OpenSSL, which we have by default. So we have to pin OpenSSL to an older version to make that package compile, in this case to OpenSSL 1.0.2.
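The top-level attribute set looks roughly like this (the paths and the second package name are made up for illustration):

    cryptsetup = callPackage ../os-specific/linux/cryptsetup { };

    # pin an older OpenSSL just for this one package
    somePackage = callPackage ../tools/security/some-package {
      openssl = openssl_1_0_2;
    };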
18:00
And if you look at OpenSSH, we do something similar for PAM, because the Nix package set can also be used on Darwin systems where no PAM is available. So if we are on Linux, then PAM is available; else PAM will be null, and PAM will not be used for the package build,
18:21
because on Darwin it would fail. We can do similar stuff in more complicated code for every package out there, so we get exactly the variation of dependencies that will build for that system that you're currently on
18:40
or for which you define the build. Okay, so on to defining a package build. This is actually what's in the directory where callPackage will go look for the source. And on the first line you see basically a function definition with some arguments:
19:02
stdenv, fetchurl and lvm2. These are the same attribute names that are present in the top-level attribute set. So callPackage just passes the right attribute names with their values to the package build,
19:22
and after that we have the first line of the actual definition, which says we want to make a derivation, which is the internal name in Nix for the intermediate format for defining package builds, and we use stdenv here, which is the standard environment,
19:43
which includes a lot of convenience wrappers for build tools. So for instance, if you have an Autotools project or a CMake project, we already have code in there that detects that and will automatically call the right build instructions
20:03
for your build system. Of course, not everything is supported, but most of it is. So if we continue, we define a name. We already looked at the source; it's just a fetchurl call. We can add patches, and define custom configure flags, because in this case we are dealing
20:23
with an Autotools project, so we have configure flags, and we can even do stuff like: if Python is enabled by that flag, we also add the enable-python flag to configure. Then we have the definition of the dependencies. We have on the one hand nativeBuildInputs,
20:43
which are dependencies that are to be run on the system where the software will be compiled, and buildInputs are the dependencies for the system on which the package is to be run.
21:00
Because we have native support for cross-compiling, like for instance, I can cross-compile on my machine, my x86 machine, a package for AArch64, but for that to work, I need the pkg-config for my system, because it has to run on my system, but I need the lvm2, for instance, for AArch64.
21:23
So there are even more options for that, because there are some more cases, but this is the normal and easy case. And we can define the outputs, which in this case is just the default output, the dev output, and the man output.
21:40
And the cool thing is, these outputs are also automatically generated, so if I define a dev output, the stdenv wrapper will automatically add configure flags so that the include files will only be installed to the dev output, and the man pages will all be installed
22:00
into the man output. You can always do that manually, and the whole package build is divided into phases. You can add hooks to the install phase, the build phase, the configure phase, and do your own thing, if that is needed for the package build, which is mostly the case.
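Putting it together, a package definition along those lines looks roughly like this (a sketch; the flags, version and checksum are illustrative, not the exact cryptsetup expression):

    { stdenv, fetchurl, pkgconfig, lvm2, python ? null }:

    stdenv.mkDerivation rec {
      pname = "cryptsetup";
      version = "2.2.1";

      src = fetchurl {
        url = "https://www.kernel.org/pub/linux/utils/cryptsetup/v2.2/${pname}-${version}.tar.xz";
        sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
      };

      configureFlags = [ "--enable-cryptsetup-reencrypt" ]
        ++ stdenv.lib.optional (python != null) "--enable-python";

      nativeBuildInputs = [ pkgconfig ];  # runs on the machine doing the build
      buildInputs = [ lvm2 ];             # linked into the resulting package

      outputs = [ "out" "dev" "man" ];
    }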
22:25
Okay, so on to Nixpkgs. You can override packages, so if you're a user of Nixpkgs, you can say, well, there's this cowsay package, but I want to execute it as moosey, so I can add a postInstall hook
22:40
that just adds a link to cowsay called moosey in the output. And if I then use anything that installs cowsay as a local user, I will get that version of cowsay that has the link to moosey. You can also do that in a NixOS configuration, but we will get to that later.
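A sketch of such an override via packageOverrides (the file location and attribute names are illustrative, not necessarily the slide's exact code):

    # ~/.config/nixpkgs/config.nix
    {
      packageOverrides = pkgs: {
        cowsay = pkgs.cowsay.overrideAttrs (old: {
          postInstall = (old.postInstall or "") + ''
            ln -s $out/bin/cowsay $out/bin/moosey
          '';
        });
      };
    }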
23:01
Now, you don't have to understand everything that's on this slide. Package overrides are an older concept, because you can only refer to packages after they've been fully evaluated. So Nixpkgs itself evaluates
23:23
the whole Nix package set to a fixed point, which basically means that the dependency resolution will be taken care of by the fixed-point combinator. And the cool thing about that is that you can hook into it and override packages,
23:42
but without having an infinite recursion if you override a package and you access something in that package again. So that's basically everything that happens behind it, with an example: because everything is a function,
24:01
like in this case, one that always takes itself as an argument, you can refer to attributes like foo and bar in the package set itself, because it will be recursively evaluated until there's nothing left that has to be evaluated. So this is implemented in pure Nix.
24:21
This is the fix function. There's nothing magic going on; it's available in every functional language. And if you evaluate this function, we get what we expect, so the concatenation of foo and bar as a string. And if you go one step further and create extensible attribute sets,
24:45
where you can define overrides, you can define an extends function that you don't have to understand. But the cool thing is that now we can define a new function called g that takes self and another super argument, and then you can redefine foo in terms of the previous version of foo,
25:07
and change it, for instance. In the normal case, when you define it like that above with just the fix function, it would result in an infinite recursion. But in this case, if you use fix and extends
25:20
with those two functions, you get what you expect. We override foo with foo plus, and if we evaluate foo bar again, which that function will do, we get foo plus bar instead of foo bar. So that is what's happening under the hood for every package you define.
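A sketch of those two functions and the example in Nix (a reconstruction of the mechanism, so details may differ from the slide):

    let
      # the fixed point of f: pass the result back into f itself
      fix = f: let self = f self; in self;

      # apply an overlay on top of a package function
      extends = overlay: f: self:
        let super = f self;
        in super // overlay self super;

      pkgs = self: {
        foo = "foo";
        bar = "bar";
        foobar = self.foo + self.bar;
      };

      g = self: super: {
        foo = super.foo + "+";
      };
    in {
      plain      = (fix pkgs).foobar;             # "foobar"
      overridden = (fix (extends g pkgs)).foobar; # "foo+bar"
    }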
25:41
And with that, we can define much more powerful overrides, called overlays. You see the same here: we are using self and super as arguments. And for instance, if we want to override boost, we just say we use the previous version of boost, like the version of boost before our overlay
26:03
was introduced and evaluated, and we override Python with Python 3, for instance, because we want to have the boost Python libraries built with Python 3, and now we have a boost built with Python 3. Also, you can do that with your own packages.
26:21
You can just call callPackage, like in the top-level Nixpkgs definition, with your local package definition here, and if, for instance, you want to compile it for 32-bit, no problem: there is a stdenv for 32-bit, and you get a 32-bit executable on your x86, x86 64-bit machine.
26:43
And to use it, when you instantiate Nixpkgs, you can just add it as a parameter, either in the Nix configuration, by environment variables, or at import time.
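An overlay along those lines, including the boost example and a local package, could look roughly like this (the file name and attribute names are illustrative):

    # my-overlay.nix
    self: super: {
      # build Boost against Python 3 instead of the default
      boost = super.boost.override { python = self.python3; };

      # a package of my own, picked up via callPackage
      myTool = self.callPackage ./pkgs/my-tool { };

      # the same package, built for 32-bit x86
      myTool32 = self.pkgsi686Linux.callPackage ./pkgs/my-tool { };
    }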
27:00
The next concept is development environments. The tool we use is called nix-shell. If you append -A and a package, you go into the build environment of that specific package. So if I do nix-shell -A cryptsetup, I'm in the build environment of cryptsetup, and all dependencies for cryptsetup are available.
27:23
If I were in a git checkout of cryptsetup, I could just call configure, and everything it needs is available. If I exit the shell, everything is gone again. And this works for C libraries, Python packages, Ruby packages, Haskell, whatever.
27:42
But not only that; all the phases are also available. So if I call the special genericBuild function, the whole package build will just be executed. And you can also invoke specific phases. So you could say, I just want to invoke the configure phase.
28:03
Then only configure, with the specific configure flags defined in your package definition, will be executed, or I just want to call the build phase. And so you can do incremental builds, changing stuff, patching stuff, oh yeah, and now it works. Create the patch, put it into Nixpkgs, yeah, it works.
28:24
You can also do that for your own development projects. On the one hand, you can do imperative environments. So in that example, I want to execute Python 3. Python 3 is not available in my normal environment.
28:40
So I open a nix-shell with -p to have the Python 3 package available. After that, I'm in a nix-shell, denoted by that special prompt. And if I then call python3, it's available. But not only that: if I want to use the requests Python package, which is not available because it's just a regular Python, I exit the shell
29:04
and open a new one with Python 3 plus the requests package, and after that, requests is available. If you want to use declarative environments, you can create a Nix file, for instance, in the project root of your software,
29:24
where you can define which packages are needed for building, developing, building some front-end stuff. And in this case, it's basically also a mkDerivation with buildInputs, and this is just another way to denote which Python packages you want.
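Such a file could look roughly like this (a sketch; the package choices are just examples):

    # default.nix in the project root
    with import <nixpkgs> { };

    stdenv.mkDerivation {
      name = "my-project-dev-env";
      buildInputs = [
        gnumake
        (python3.withPackages (ps: [ ps.requests ]))
      ];
    }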
29:44
After that, you can just execute nix-shell, because nix-shell uses the default.nix in the local directory by default, and you are in a nix-shell and you can do development. Okay, NixOS demo time.
30:00
I don't have that much time left, so I'll skip it for now, because I only lightly prepared for it. So key features of NixOS are: we have atomic upgrades and rollbacks, because everything is just one symlink. If I want to switch from one NixOS version to the next, do a package upgrade, do a configuration change or whatever,
30:22
it's just one symlink I have to change, and a shell script to execute, of course, but one symlink and the new system is active. Multi-user package management, I talked about that. You just have to write one configuration file and you can build it for a multitude of targets. You can build Docker images, ISOs, netboot images,
30:42
container tarballs, whatever. We have about 800 service modules available, like for nginx, Apache, Nextcloud, GitLab, I don't know, so much software, so many modules, which make it very easy to set up, for instance, Grafana, which I had prepared as a demo,
31:01
which is just a few lines of code, complete with a Let's Encrypt certificate and everything. We also have containers, we have systemd-nspawn, and a testing framework using VMs. We have some features coming up, for instance, to use Nix for Nix expressions
31:21
as a package manager itself; it's called Nix flakes. Or using systemd-networkd by default, for which we still have to bump some versions, and some more. Oops. And we have some challenges. For instance, we have a world-readable Nix store; that means you shouldn't put secrets into the Nix store,
31:42
this is a thing that we can't easily fix for now. Also, we don't manage application state; that's for the administrator to do. And our declarative containers don't support user namespaces, because systemd doesn't support
32:01
DynamicUser in systemd-nspawn containers with user namespacing enabled; that's an issue we still have. Okay, and that's it for now. If you don't have questions, I have the demo for you.
32:24
Everyone wants to see the demo? No, there's a question. I think I know what a bunch of the answer to this is, but I'll ask it of you: what's the current state of ARM64 support on NixOS?
32:41
Good question. We have first-class ARM64 support in NixOS, and actually the ARM64 builds are much quicker than our x86 builds, because Packet, which is also a sponsor of this conference, donated a lot of build machines to us. Because we have to build, as we've said,
33:02
we have to rebuild a lot of stuff, so we have a lot of powerful systems, and thanks to packet.net, we can do that for at least two architectures, our main release architectures, which are x86-64 and ARM64.
33:21
So I wanted to ask about the language used in the configuration files: how long is it going to take to learn it, or why not use an existing one? I don't know what it is, actually. Okay, I didn't introduce the Nix language itself because it would take too much time, but you saw a little bit of it,
33:42
so it takes a little getting used to, because some of the syntax elements you see, you maybe know from other languages, but they are not the same. So if you have already done some functional programming with any functional language, you might be quicker to learn it,
34:02
but if you have not used a functional language before, it might take a while. But the problem itself is not the Nix language, but all the concepts and all the abstractions we have for package builds. So you have to know the specifics. We have a lot of documentation by now,
34:21
but most of the time, you will have to read source code, and as a beginner, you should read the source code of, for example, Nixpkgs, to understand which Nix language constructs are useful and which you should actually use. So if you want to use it, there's a very steep learning curve ahead of you,
34:43
but in my opinion, it's worth it. Okay, how much time do I have left? Bob? Five minutes.
35:01
Then we can do a demo. Okay, so I have a machine on the internet. You see it on the right, and an SSH session on the left. You can see that not much is running on there.
35:21
I have a NixOS configuration, which is by default in /etc/nixos/configuration.nix, and this is basically what it looks like. So there's some bootloader configuration, there's some networking configuration for IP addresses, default routes, DNS servers. I enable bash completion, I make some tools available,
35:43
I add some public SSH keys, I enable OpenSSH, I open the firewall, and if you now look here,
36:03
that's the domain, nothing is available, ports 80 and 443 are closed. And now I go back to the configuration and enable these two services. So I enable nginx, where I enable a virtual host,
36:23
and enable ACME, which automatically fetches a Let's Encrypt certificate for me. I use forceSSL to add an HTTPS redirect, and proxy to localhost:3000, where Grafana is running if I enable it and don't configure anything, by default.
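The two service blocks I just enabled look roughly like this in configuration.nix (the domain name is a placeholder):

    services.grafana.enable = true;  # listens on localhost:3000 by default

    services.nginx = {
      enable = true;
      virtualHosts."example.org" = {
        enableACME = true;   # fetch a Let's Encrypt certificate automatically
        forceSSL = true;     # redirect HTTP to HTTPS
        locations."/".proxyPass = "http://127.0.0.1:3000";
      };
    };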
36:42
And if I want to apply that, I just call nixos-rebuild switch, which will build the configuration and download the software packages as needed, which it's doing now, and rebuild some stuff like our users and groups, some systemd units,
37:02
the whole user environment, or rather the system environment where all the software is installed, and when it's finished, it should start the nginx service, the Grafana service, the services to fetch the ACME certificates,
37:20
and that should be it, let's see. It's a VM running on a virtualization host somewhere, so it's not that fast, unfortunately. And I think my time is up. Okay, now you see, systemd services were restarted or started,
37:41
and if I go into the browser and refresh, we see Bad Gateway, because Grafana is still starting up. The favicon is already there, because I was on that page before, but no Grafana yet.
38:03
Come on, there it is.