
Yomi - an openSUSE installer based on SaltStack


Formal Metadata

Title
Yomi - an openSUSE installer based on SaltStack
Number of Parts
44
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
We will present Yomi, a new proposal for installing Linux using SaltStack. This installer is designed to be used in heterogeneous clusters, where you need a bit of intelligence during the installation and want it integrated as one more step in the provisioning process. Yomi is a new kind of installer for the (open)SUSE family, based on SaltStack and independent of AutoYaST. The goal of this project is to support the installation of Linux (currently openSUSE) when:
* You have a cluster of heterogeneous nodes (different profiles of memory, storage, CPU and network configurations)
* The installation needs to be unattended
* The installer needs to make decisions based on local profiles and external data
* The installation process needs to be integrated, as one more step, into a more complicated provisioning workflow.
The dependencies of Yomi are minimal, as only Salt and very few CLI tools are required, which makes it ideal to be deployed and booted from PXE Boot.
Transcript: English (auto-generated)
Hi, so I am Alberto, I work at SUSE. At SUSE we know how to make installers, it's like a tradition there. And this is one more installer, so we are going to talk about Yomi, which is a new kind of installer.
So what is Yomi? Yomi is a new kind of installer, for now oriented towards the SUSE family, so we have MicroOS, Tumbleweed; it's very oriented towards the SUSE family. And it's kind of similar to AutoYaST. AutoYaST basically is the kind of installer where you provide a profile, generally an XML document, and it's able to make local decisions and produce an installation based on this profile document. So one of the goals that we have for Yomi is that it needs to be usable for parallel installations in networks where we have nodes with very different hardware configurations. So networks where some nodes are going to have a lot of CPU, memory or hard disk, and we want deployments that differ in the partitioning and in the software and the services that are going to be installed there.
Of course it needs to be unattended, so if you have multiple nodes, you don't want to take care of them one by one. That means that you need some kind of freedom, and the system needs to make some choices for you when you are not very explicit about what you want. Also we want something that is simple to manage.
One of the problems with AutoYaST is that the XML is not easy to manage, it's not very DevOps oriented. So basically it's hard to provide logic inside an XML document. And if you want to integrate that in something that is Git based, it's maybe not optimal. We need something that is easy to orchestrate, and this is really an innovation in relation to AutoYaST. Orchestration is not part of AutoYaST, and we are talking about installations where orchestration is a key component.
For example, some installations need to happen before others, and some services need to be running before other services are able to connect to them. So we need something that can orchestrate. Something that is not a requirement, but that I put here, is idempotency. We want something that is not going to break our system, so if we have something that is working and we make a mistake (we are admins, we make mistakes), we don't want it to break because I tried to reapply the installer there. So this is a property that we are looking for.
And of course, we need something that can work alone, because maybe installation is the single problem that you have: you have a big network, and you only want to take care of installation. But we also want something that can be integrated into a bigger solution, so that you can put this piece of code as one step in a bigger workflow.
A typical use case of Yomi: if you have experience with OpenStack or Kubernetes, you have a good candidate for this installer. Because in OpenStack, you have a lot of nodes. Those nodes are different, with different hardware, because there are different roles involved in OpenStack.
We have, for example, the control plane. The control plane is usually a very big machine with a lot of memory and very good networking, because it's going to be the connection point with the rest of the network. We have some nodes that are going to be computation nodes. So basically, the memory is going to be big, but the hard disk is not going to be super beefy; the CPU is going to be, of course, the feature that defines this kind of node. And we are going to have storage nodes. For a storage node we don't care much about the memory, maybe we don't care much about the CPU, but we really care about the storage that we install there. Basically, we want to create LVM, or we are going to have specific hardware for resolving this problem of storage. We have different nodes, and of course, we want different kinds of installation there. That means different partitioning, different services, and different users.
Yeah, so, as I commented, we want something that can be integrated into the usual workflow that the company or the client is using for provisioning. Provisioning is something more complicated than installing and setting up services. Sometimes it's a very big chain of dependencies, and it's very easy to neglect the installation part of this kind of workflow.
So, let's try to draw a picture of how a normal installer works. I mean, if you have experience with Gentoo or Arch, this is going to be very familiar to you. But normally you take YaST, or whatever installer your distribution has, and next, next, next, and you have an installed system.
And basically, all the installers, whether you do it manually or automatically, have this kind of steps. The first step you are going to do is the partitioning of your devices, maybe taking care of RAID or LVM. Eventually, you need to provide a file system in the volumes or subvolumes that you are going to have there. If you have Btrfs, probably you are going to create subvolumes to take advantage of one of the features of Btrfs. Eventually, you are going to install the software inside. I don't know how; maybe you are going to copy the system from a different source, or install via a repository. Eventually, you are going to create users: root, for example, but maybe also some admin users that you can provide at installation time.
Of course, you need to configure certain services, like the time, the time zone that you live in, the network, the different services that you want to run, like SSH. One of the last steps is the bootloader, so GRUB needs to be taken care of, to be placed in the correct device, in order for the kernel to be in memory at the proper moment during the boot process. And maybe there are some post-installation tasks. If you have snapper, you need to take care of some bits, and that is not something you see in many installers. If you have Btrfs with read-only volumes, this is the moment when you need to set the flags and do whatever post-installation work you need. This is extremely easy. I mean, this is very well known, and it's very easy to do with a CLI.
So you have a device, you boot there, and we are going to see how this can be done, but it's very easy. It's also very easy to express in a shell script. You can take the shell script and grow it until you have an installer, something more complicated, more feature complete. Eventually, in one of your iterations, you are going to build abstractions on top of that. For example, in YaST, we have libstorage, which completely abstracts the problem of partitioning, volumes and file systems. This is a very complicated problem. It's something that we can abstract, but abstractions usually put limitations on the things that you can do, and this is not very nice, because when you have a different kind of installation, which is something that at SUSE we have, those abstractions break. So let's talk a bit about a typical installation. In this case, imagine that we have a server with two devices. We know that we have two devices, sda and sdb.
The first thing that I need is, of course, a USB stick or a DVD where I boot my operating system, and somehow I have a CLI and I'm able to see my devices. So the first thing that I need to do is to create the partition label. In this case, we are going to create GPT. It can be a different one, msdos or any other.
So we have GPT. The next step usually is to create partitions. We are not going to take care of RAID, because RAID can sometimes live without a partition; sometimes it needs to be the framework that takes care of the setup of the device, but this is not a problem for us, for now. So we are going to create a first partition that is going to allocate some space for GRUB. You know, GRUB maybe needs some more space, so let's create a partition for it and set the proper flag on this initial partition. We are going to create the swap. We are going to create the root file system. And on the second device, one single partition, and we know that we are going to use that for Btrfs. The next step usually is the file systems. We have four partitions, but only three need a file system: one for the swap, and for the root file system today we are going to choose ext4, and Btrfs. Eventually, because we are using Btrfs, I decide that I'm going to create the first subvolume. Maybe later I'm going to create a different one, but I'm going to create the first one. So I need to mount the device, create the subvolume, and I'm going to be a good citizen and unmount the device.
The next step usually is to create the fstab. And this is something very tricky. The creation of the fstab is not a single step in the installation process; it's something that is going to be touched multiple times during the installation. So we are going to create the file. Of course, first we need to mount the device, create the /etc directory, create the file, and make sure that the proper lines are living there. Of course, taking care of the UUID for the Btrfs, because it can be very picky. Now we have the initial prototype of the fstab.
Now is the moment of the software installation. I decide that today I'm going to use zypper, the package manager. Whether you use Debian's APT, DNF, yum or zypper, there is a way in which the package manager is going to create a chroot for you inside the device, via, in this case, the --root parameter that we can see here. If you have a repository registered, which is one of the first operations that we do, it's going to take the packages, take care of the chroot environment, and install the software inside the chroot. For today, we are going to install a pattern. A pattern, in the SUSE parlance, is a collection of packages.
We are going to take the enhanced base system pattern, and add the kernel and GRUB to it, so it's going to be very minimal. We are very close, because now we can install the bootloader. The bootloader is going to be very tricky, because now we really need the chroot. So the first thing that we are going to do is the proper mounts, to provide something that we can leverage via chroot. Now we can create the initrd. I know that the package from SUSE is going to create the initrd, but I'm not sure about the state that it is in, so I'm going to create now the initrd that I want. I'm going to make some small configuration in GRUB, /etc/default/grub, which is a file that grub2-mkconfig is going to read to generate the configuration file, and the last step is to do the proper installation. You can see that every command is prefixed with chroot, and this is what makes this step a bit tricky. Now the reboot, to have the proper system running. chroot in this case is used to make sure that the right target is loaded via systemd, and systemctl reboot.
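To make the narration concrete, here is a rough shell sketch of the sequence described above. The device names, sizes, repository URL and pattern name are illustrative assumptions, not the exact commands from the slides:

    # Partition label and partitions (BIOS boot area for GRUB, swap, root, data)
    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart primary 1MiB 3MiB
    parted -s /dev/sda set 1 bios_grub on
    parted -s /dev/sda mkpart primary 3MiB 2GiB    # swap
    parted -s /dev/sda mkpart primary 2GiB 100%    # root
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 1MiB 100%    # Btrfs data

    # File systems: only three of the four partitions need one
    mkswap /dev/sda2
    mkfs.ext4 /dev/sda3
    mkfs.btrfs /dev/sdb1

    # First Btrfs subvolume: mount, create, and unmount again
    mount /dev/sdb1 /mnt
    btrfs subvolume create /mnt/data
    umount /mnt

    # Initial fstab (UUIDs would normally be used, especially for Btrfs)
    mount /dev/sda3 /mnt
    mkdir -p /mnt/etc
    echo '/dev/sda3  /     ext4  defaults  0 1' >> /mnt/etc/fstab
    echo '/dev/sda2  swap  swap  defaults  0 0' >> /mnt/etc/fstab

    # Software installation into the chroot via the --root parameter
    zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/oss repo-oss
    zypper --root /mnt --gpg-auto-import-keys refresh
    zypper --root /mnt install -y -t pattern enhanced_base
    zypper --root /mnt install -y kernel-default grub2

    # Bootloader: bind mounts, initrd, GRUB configuration and installation
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt dracut --force
    chroot /mnt grub2-mkconfig -o /boot/grub2/grub.cfg
    chroot /mnt grub2-install /dev/sda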
The funny thing is that if you take those steps and copy and paste them, you are going to have a bootable image, so it's not very complicated. But you can see that there are a lot of hard-coded commands, and this is something that you need to take care of when your system is going to change, when you are going to have something a bit different. The thing that we miss, of course, is that we don't have a way to parameterize anything. We know that we have two devices, so okay, I act according to what I have. We know the kind of partitions that we want, but we don't take care of the sizes, we don't take care of the ordering. So if we have different kinds of installation, we need a way to specify, to indicate or discover the disk partitions, the kind of file systems. If we have LVM, the situation gets more complicated, and you need a way to parameterize, to indicate what this profile is going to look like. If we are going to use Btrfs, you need to indicate more subvolumes, and maybe a prefix for the subvolumes. If you are going to use snapper, you also need to indicate a default subvolume that is going to be used when you mount this device. We completely forgot about Secure Boot and UEFI, we forgot about users, and of course services; we only took care of one target, but that's all. And we took care of the chroot environment manually. All those components are missing, and you can provide that with a better script, using maybe a real programming language like Python, Ruby, or whatever.
So the proposal of Yomi is: let's do the installation in a different way. We are going to use a configuration management system, which in this case is Salt. I don't know if you know Salt. It is a configuration management system, kind of like Chef and Ansible, at least in principle. They are all taking care of the same problem space, but the architecture is completely different. It uses a push model, you can play a lot with it, and you can have very, very advanced configuration options there. The default option is to have a master and a minion. So we are going to have a node that is going to run a service that is called the master, salt-master. And it's going to take care of controlling the different minions that are living on the different devices of your network. There are some advantages there, but you can optionally remove the minions, or optionally remove the master; you eventually need one of them.
And you can have very funny and crazy architectures based on reactors, where you can listen for some kinds of events. If one of the nodes is changing the configuration or doing something crazy, the master can capture an event and react based on the kind of event that is happening. Maybe you want to shut down a service and restart it, or you want to change some parameter of the configuration. So it's a nice piece of technology that is able to do great stuff. Of course, it has its own concepts, words and jargon that are only alive in the Salt realm.
So there are the concepts of grains, pillars, execution modules, state modules, and Salt states. But maybe it's easier to see that here. This is the typical master-minion configuration. We have a node that is going to have the master, and we have three nodes that contain the minion. On the minion side, at your right, you have the grains. Grains are the minimal data that the minion is going to export to the master. It's going to publish: okay, this minion currently has this CPU, this number of hard disks, this amount of memory, this network, MAC address, whatever. It's very basic information that the minion is publishing. On the other side, we have execution modules. An execution module is like a small action, a single action, basically written in Python,
that is going to do something. This something is whatever you can think of. For example, you have a module to start a service. You have a module to remove a file. You have a module to create directories. You have modules for whatever elements. And it's only doing that. If it manages to do that, it's going to return true. And if it fails, maybe you get an exception or some nice report about why it failed.
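For example, calling execution modules directly from the master looks like this (a minimal sketch; the minion ID is an assumption):

    salt 'node1' service.start sshd         # start a service
    salt 'node1' file.remove /tmp/old.cfg   # remove a file
    salt 'node1' file.mkdir /srv/data       # create a directory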
On top of this basic element that is the execution module, you have the state module, which is again Python code that is going to leverage several execution modules in order to reach a state. And this is super cool, because the concept of a state is what is going to make Yomi quite different from the rest of the installers. A state is going to guarantee that a certain configuration is already in place. If not, it's going to make decisions in order to reach this final state that we want.
We also have states, which should not be confused with the state modules, because the states are something declarative. A state is a YAML document where the user or the admin describes how to put different state modules in order. So you are first going to make sure that Apache, for example, is installed. There is another state where you make sure that the Apache server is going to be running, that the server directory is going to be in place, and that the index file is going to be there with the proper owner and permissions. So those declarative documents are the ones that orchestrate those state and execution modules.
Then we have the pillars. The pillar is the data that the states are going to use in order to have a proper configuration. So for example, we can have a state that makes sure that Apache is going to be installed, but Apache can have different names in different distributions. Via a pillar, I can provide all the names that this package has in my different distributions.
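As a sketch of what such a state file could look like (the state IDs and the pillar key are illustrative assumptions, not taken from the slides):

    {# apache.sls: ensure Apache is installed, running, and serving a file #}
    apache_installed:
      pkg.installed:
        - name: {{ pillar['apache_pkg'] }}   # e.g. 'apache2' on SUSE, 'httpd' on Red Hat

    apache_running:
      service.running:
        - name: apache2
        - enable: True
        - require:
          - pkg: apache_installed

    index_in_place:
      file.managed:
        - name: /srv/www/htdocs/index.html
        - user: wwwrun
        - group: www
        - mode: '0644'
        - contents: 'It works'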
So we have on one side the state, which makes sure that a certain configuration is going to be there, and the pillar, which is the data that the state is going to use. And between both sides, we have a bus. This bus is the channel of communication between the minions and the master, and it can be used to deliver events. So if one of the nodes is doing something, I can fire an event that is going to be collected by the master, which acts accordingly. So now we know what an execution module is: the unit element that is going to take care of one single problem.
The state module is the piece of code that is going to guarantee that a state is reached. So it's going to validate the input and check the current status. It's going to make decisions about the actions that need to be done in order to reach the final state. If you don't provide the test parameter, those actions are actually going to be executed. And later, it's going to recheck the final status and compare it with the plan; if there is some kind of difference, okay, maybe some problem happened. Something very nice about this proposal, if you are able to express all your problems in this kind of semantics, is that you can fix a wrong configuration by reapplying your state. And as a side effect, that means that if your states are already in place, you can reapply the same state several times and nothing is going to change. You don't break anything; it is idempotent.
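In practice, assuming a hypothetical state file apache and a minion named node1, the test parameter looks like this:

    salt 'node1' state.apply apache test=True   # dry run: report the planned actions only
    salt 'node1' state.apply apache             # apply; a second run should report no changes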
Again, Salt states are the declarative documents, and they have this kind of shape. In the first example, we have a way to express that a device is mounted. You can see that we put mount.mounted. So we are going to guarantee that the mount point is going to be in place with the proper permissions, that the device is present, that the partition can be mounted, and that the file system matches the current state that you have. If everything is in place, this state is satisfied. If not, it's going to try to put the missing elements in place in order to guarantee that, again, you have this device mounted in the proper place. You can have something more complicated, like a prepare-kexec state. You have this cmd.run, which is actually an execution module, and you can see that you can put a random shell script there. This is very bad behavior, but you can do that. And the check of whether the state is met or not can live on a different line of your declaration.
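A minimal sketch of those two styles; the device, mount point and kexec command line are assumptions for illustration:

    /mnt:
      mount.mounted:
        - device: /dev/sda3
        - fstype: ext4
        - persist: False    # do not write this mount into fstab

    prepare_kexec:
      cmd.run:
        - name: kexec -l /boot/vmlinuz --initrd=/boot/initrd --reuse-cmdline
        - unless: test -e /run/kexec-loaded   # the 'is the state met?' check on its own line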
And the very nice thing is that this declarative document can be enriched with a template. So you can provide logic on top of your YAML. It's like a macro, like if you were a Lisp developer and you could provide macros there. And you can make decisions based on the pillars. The data that is in the pillars can be read from the state, and you can make decisions inside the YAML document to hide certain elements of your document or to show other ones. So, using the pillars and the grains, you can provide a different description of the state.
So the final element is the pillar. And something very cool about the pillar is that it's also intelligent. You can enrich a traditional YAML document, which is something tree-like, declarative, flat and plain, with a template. That means that you can dynamically change your data based on the description that the minion provides to the master. So if a minion has a different ID (every minion has an ID), you can change, for example, the file system that you are going to apply. And this is extremely powerful, because you are able to inject your own grains. A grain is basically Python code that is executed and leaves a result in a namespace, and you can query that from your data.
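A tiny sketch of such a templated pillar, assuming two hypothetical minion IDs:

    {# Pick the root file system based on the minion ID #}
    {% if grains['id'] == 'node1' %}
    root_filesystem: btrfs
    {% else %}
    root_filesystem: ext4
    {% endif %}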
So now the idea is super clear. We are going to make an installer that is a Salt state. So this is like my master plan here. For each basic action, we are going to create an execution module. Because Salt is a very big project, probably we have the action already in place in the repository on GitHub. If not, we are going to implement the missing parts, or fix bugs, or whatever. The next step is that for every high-level action, like, for example, mounting a device, partitioning a device, creating a user, we are going to make sure that we have a state module that is going to properly do the action that we want, or reach the state that we want to be reached. So again, we reuse the ones from upstream. If not, we extend the ones already living in the Salt repository; this is an open source project, and we are going to contribute the missing parts that we need to the main repo. And if there is nothing there that we need, we are going to implement something from scratch and try to upstream it. After that, we are going to take care of what is really Yomi: the SLS, the YAML file that is going to order the different states. We are going to provide a way to parameterize all the data. The data, of course, is the responsibility of the user, but we can provide some examples for it, and then we are done.
So I have a small demo. This demo is a two-node installation, and I tried to make the two kinds of installation wildly different.
On one node, we are going to have a BIOS machine, and we are going to install MicroOS. If you know MicroOS, it's a transactional-update operating system, so that means that we are going to have Btrfs in a read-only way. And we have a second node. In that case, it's not a BIOS node, it's a UEFI node, no Secure Boot; it's going to have two hard disks, and we are going to use LVM, Btrfs for the root and XFS for home. This is like how the traditional Tumbleweed was installed with this configuration, though now it's everything but the reference. So let's try to do that here.
I have the demo in my repo. Basically what I did here is that I have a nice script that is going to boot two VMs. So we have two VMs. On those VMs, the hard disk is completely clean; there's nothing there. And I boot the VMs using a small Tumbleweed image, or I don't know if you can say image. It's really minimal, it's a live image, and the only thing that is different from a normal one is that it contains the salt-minion. Actually, this is not even what makes it different, because Tumbleweed provides the salt-minion by default. So we have the salt-minion, and we have a salt-master somewhere. So we have the salt-master here. And we make sure that from here we can see our minions. So we have two nodes, and both answer the ping with True. So yeah, we can see that they are there.
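On the master, that check is roughly the following (standard Salt commands; the exact minion names are not shown here):

    salt-key -A -y       # accept the pending minion keys
    salt '*' test.ping   # both nodes should answer True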
We have pillars, and we have two pillars. One is microos.sls; this data is going to be read by only one node. And we have different data that is going to be read only by the second node. So you can see here that we have a section for configuration. Obviously, we need a way to define the partitions. So we describe the devices that we have and the partitions that are living there. We have this type field, in this case; it's about how this partition is going to be used. In this case, it is an EFI partition or LVM. Because we have an LVM device, we are also going to provide all the information for the logical volumes that we expect there. We can use the same kind of parameters that are expected by the CLI tools that LVM is going to use. We have a different section for the file systems. So we can refer to a partition or to a logical volume, which from the pillar point of view is not different, it's only data. We have a very complicated schema for subvolumes with a prefix. We have crazy stuff here. We have an XFS file system for home. For the bootloader, we require that it's going to be in one of these devices, and we want these partitions there.
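An illustrative pillar in the spirit of the one shown in the demo; the key names and values here are assumptions, not necessarily Yomi's exact schema:

    partitions:
      devices:
        /dev/sda:
          label: gpt
          partitions:
            - number: 1
              size: 512MB
              type: efi
            - number: 2
              size: rest
              type: lvm

    lvm:
      system:
        devices:
          - /dev/sda2
        volumes:
          - name: root
            size: 20G
          - name: home
            size: 10G

    filesystems:
      /dev/system/root:
        filesystem: btrfs
        mountpoint: /
        subvolumes:
          prefix: '@'
          subvolume:
            - path: var
            - path: tmp
      /dev/system/home:
        filesystem: xfs
        mountpoint: /home

    bootloader:
      device: /dev/sda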
And it's something kind of similar for MicroOS. Again, we have a single device here. The file systems, those are subvolumes specific to the MicroOS installation. We have parameters like copy-on-write. And, yeah, the bootloader, maybe, is a bit more complicated; we need to provide some parameters that the bootloader is going to require. The pattern, of course, is going to be different. And we want, of course, one service that is going to be running, which is the salt-minion. The nice thing is that when the machine is installed and rebooted, the minion that is running inside the new machine is able to connect to the master. We saw that Yomi is taking care of copying the certificates and everything in order to make this process transparent. So we can apply a highstate. This is going to use the network, and we have a monitoring tool that is going to capture those events that I was talking about and show the status of the installation.
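Kicking off the installation and watching it is then roughly (standard Salt commands; the targeting is an assumption):

    salt '*' state.highstate           # apply the full Yomi state to both nodes
    salt-run state.event pretty=True   # tail the event bus to monitor the progress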
So meanwhile, because this is an installation running, let's finish the presentation. So now everything is clearer. We have Python code that is living in the Salt space, and we have a YAML declaration file, and that is what the Yomi part of the problem is.
We upstream all the code. The current status is that all the key components are living now in Salt, at least in the develop branch. So you can take this branch, and Yomi is going to work. We took care of fixing all the bugs and missing features for parted, for zypper. We plan to do that for different package managers, like the Debian or Red Hat ones. We provide very nice new tools for hardware information. Of course, the Btrfs support needs a big revamp upstream; we use it quite a bit, so it's normal that we require more from it than other distributions do. We provide a very nice chroot module for taking care of all the dirty details of chroot. We have a very nice module, freezer, that is able to clean your chroot for you. So you are able to take a snapshot of the chroot, install garbage, and recover the previous state of the chroot using the package manager.
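A sketch of that workflow with the upstream modules just mentioned (the minion ID and chroot path are assumptions):

    salt 'node1' freezer.freeze               # snapshot the set of installed packages
    salt 'node1' chroot.call /mnt test.ping   # run a Salt function inside the chroot
    salt 'node1' freezer.restore              # bring the chroot back to the snapshot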
More crazy stuff that we do: eventually, as you provide more features, the tree of those state declarations is going to go deeper and wider. This is a picture from two or three months ago; now the tree is bigger. But it's taking care of all the steps that we saw in the first slides about the installation: the bootloader, the partitioning, et cetera. And much more. The partitioning, which is obviously the most complicated part, has different modes of operation. The one that is using linear programming is able to make decisions for you about the sizes of the partitions. So we have very crazy stuff already in place. One of the tricky elements here is that Yomi is a state that is composed of different states. And composing, even for a state, is a bit tricky. So there are some challenges in composing states, but we managed to resolve them.
And the current status is that we can do crazy stuff. We install openSUSE like crazy: Leap, Tumbleweed, MicroOS. We have Kubic. It's still not at the same level as YaST; for example, a missing part is the resizing of partitions, and there is a plan to provide this feature. Maybe the network configuration can be better. But we already have Yomi integrated in different solutions, for example, in Kubic and SUSE Manager. The Kubic one is extremely cool, because you have a nice CLI tool where you can say: okay, we have a new node in the network, please install Tumbleweed and use kubeadm for the rest of the actions that need to be done in order to install Kubernetes.
Everything is upstream, everything is open source. All my contributions to Salt are reachable from SaltStack. Yomi is living in the openSUSE namespace on GitHub. And yes, all the packages and all the images are in OBS at your disposal. So, this is everything I have to say. I don't know if anyone was using Salt before, or if you have any question or comment. So, thank you.
Yeah, maybe I have a question. So, when you use this configuration to install the machine, are you also able to use it for configuration? Like, for long-term configuration?
I don't get the question, sorry. Once you put all the configuration to install the machine, are you also able to modify it later? Yes, of course. This is a state, so if the state changes, for example, if you provide new patterns that you want to be there, new users, yes, this is going to change in proportion to your change. There are always limitations. For example, today we are not supporting resizing of partitions. There are complications with resizing; you can break stuff. So, this part needs to be taken care of. But everything else can be done during installation or post-installation. Something very neat about Yomi is that, because we have a way to control the chroot and we can inject states inside the chroot, you can go super far in the configuration, in the provisioning of your node, before you have the reboot. So, imagine that you are starting services inside the chroot, sending commands inside the chroot; I don't know, like almost a full OpenStack or Kubernetes installation before you have the reboot. You can go very far there. So, that means that if you decide to reapply the changes on the running installation, those changes are going to be propagated; in that case, not inside the chroot, but inside the real machine.
Okay, question. Maybe I missed something, but this is like a chicken-and-egg problem. You have to have the Salt agent first to make this all work. So, how do you work around this? Yeah. So, we have the salt-master, no? The salt-master is a node that is going to be there; it can even be your laptop. So, imagine that you have PXE boot in your network. You can inject the kernel, an initrd, a root file system that is going to contain the salt-minion. You have another option, which is the one that you saw here: you have a DVD or USB stick that you plug into the machine and boot from there. And you are going to have the salt-minion. The only thing that you really need to break this cycle is a kernel, a RAM image, and a minion. That's the only thing that you need. You can provide that however you want: SD card, USB, PXE boot, via firmware, whatever you want. So, you break the cycle injecting a minion with whatever mechanism you have.
No more questions, then? Thank you, Alberto. Thank you. And we have a short break before the next talk, which will be about NixOS.