
openSUSE MicroOS

Formal Metadata

Title
openSUSE MicroOS
Subtitle
A new distro for a new age
Number of Parts
40
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
As the world moves more and more towards containerised solutions, a number of real questions start to appear:
- What is the perfect platform on which to run containers?
- How can this platform be used as part of a flexible, scalable, highly available infrastructure fabric?
- How can the maintenance and administration of this platform be minimised at scale?
Many of these problems are well answered by enterprise container offerings, but for developers more interested in the state of containers and Kubernetes upstream, new issues start to appear. With such fast-moving upstreams, developers and enthusiasts need a platform that can keep up and is closely involved with those upstream developments. This platform needs to be able to run containers not only at scale but also on a single machine, all the while preserving the attributes of low maintenance so the focus can be on the containers, not the base system beneath them. And then the question becomes "What is so special about containers anyway?" In more and more cases, people are deploying Linux VMs, cloud instances, or bare metal to do 'just one job', with other jobs being handled by other machines. Can we simplify the operating system and make it easier to live with if we optimise it for these 'single-purpose' deployments? This talk introduces openSUSE MicroOS and explains how it addresses the above, being the perfect distribution for this modern age. The session will explore in some detail how MicroOS is developed in lockstep with the Tumbleweed rolling release and can be used for a wide variety of single-purpose systems. This talk will also discuss openSUSE Kubic, the MicroOS variant focused on containers, and will share how Kubic collaborates with various upstreams including kubeadm and CRI-O. Transactional Updates, Kubic's system update stack, will be demonstrated and the benefits of such an atomic update approach discussed in some detail.
Finally, the kubicctl Kubernetes cluster bootstrapping tool will be discussed and some future plans shared for consideration and feedback.
Transcript: English (auto-generated)
Okay, then. Thank you all for coming. I'm Richard Brown, you know who I am, and I'm going to be talking about MicroOS. So to kind of start, who here has heard the term MicroOS or Kubic? Raise your hands, please. Cool. I want you to forget everything you think
you know about MicroOS or Kubic. When I was doing this presentation, I realised I could turn this into a history lesson of everything we've tried and what we were thinking a year ago and what we were thinking a year before that, and then
I realised that would make a really boring presentation. So I'm doing my best here to describe what MicroOS is today, where we're going today, and I'm therefore likely to say things which don't make any sense with your previous understanding. So kind of, yeah, do your best to forget it, go with me, I will do my best to leave room for questions at the end so we can kind of bridge any
gaps between then, now, where we're going. So the story of MicroOS for me kind of actually starts with my story of computing. You know, when we all started, when I started with computing, a Commodore 64 was my first machine, and it was a machine that could do one thing at a time. You know, one cassette
tape, yeah. Put one cassette tape in, wait 20 minutes for it to load. If you want to do more than one thing, you need to have more than one Commodore 64 next to each other. And this is sort of where computing started, before networking even, and that's where things started getting interesting later
on. You know, with the PC, with networking, what did we all start doing? We started plugging them all together and, you know, building networks and using PCs and using laptops. And as we started doing that, the story in many respects became less about that one thing running on that computer, but, you
know, what can all the things do when they're connected together? You know, this became the era of the internet, and we had, you know, networking first and then the internet. And when you look at that, you know, what comes as baggage with networked computing, you realize that, you know, you end up
with a sort of a certain pile of complexity. You know, the more computers you have on a network, be it a WAN or LAN or the internet, you know, you need more infrastructure, more networking, more switches, you know, more air conditioning. You know, in businesses or even at home, you know, the more
hardware you have, you know, it's harder to get that money, especially when it's big expenses. And inside companies, you always have the issue of, like, capital expenditure versus operational expenditure. The more machines you have, the harder time you have with configuration management, you know, keeping those machines running, you know, and you want to keep them all
running kind of the same way. So, you know, we all end up with these wonderful shell scripts on our laptops or whatever to, you know, set up the machine exactly the way we want to do it. And, of course, you then have to spend all of that time patching. And so if we're thinking sort of now, you know, 90s, early 2000s, I used to be a system administrator, like, what was,
like, lesson number one of system administration? Try and have as few machines as possible. You know, always try and minimize the amount of hardware you have in the data center. Always try and minimize the amount of that additional complexity with your network, because, you know, if you just keep on throwing machines at the problem, you're just going to, you know, bulk up that kind of pile of baggage at the end. And so you end up
with servers in particular running more than one service. You know, the traditional kind of SUSE Linux enterprise, or in many cases, the traditional sort of open SUSE server doesn't just do one job. You know, it's a mail server and a web server and a database server and
something else, because that helps cut down that infrastructure baggage, that sort of the connection tax sort of side of things. But that in itself ends up bringing more complexity. You know, you may have less machines, but you still have this nasty configuration management
problem, because you've got to worry about the configuration of 20 different services on this one box. And, you know, they might be incompatible with each other, you know, try and run two versions of PostgreSQL on the same server at the same time. It's not going to be easy. Those machines are going to need to have more hardware, more RAM, more CPU individually. And a problem that I
used to have a lot as a sysadmin is what's described here as problem pooling. Everything individually works fine, and then one student does something really stupid on that Apache server with PHP, and your entire infrastructure is broken because Apache ended up eating all the CPU, which then meant your
database server stopped working, which then meant the cluster crashed, which then meant the HR system doesn't work anymore. And, yeah, the whole thing cascaded because you dumped it all on this one machine. So you couldn't just bundle everything onto less servers. And then, of course, you know, the world's
changed, and we stopped talking about servers and data centres as much, and started talking more about cloud. And part of the cloud story is this idea of making IT infrastructure more modular, of, you know, splitting as much as
possible, splitting those various services into the smallest sensible chunk, managing them in that chunk, and therefore ideally, hopefully, minimising that problem of pooling problems together or complexity on an individual system. And this is, you know, the new world we're actually living in
today, you know, and, you know, it's not just a case of cloud. You know, you could say virtualisation is part of this story, you know, and generally speaking, with virtualisation, how many of you are doing lots of stuff with VMs in data centres, for example? Yeah. So when you have a new service, what do you do? Do you add another service to an
existing VM, or do you just spin up a new VM? Both. Okay. But yeah, you know, but more and more, you're probably spinning up more and more VMs, especially with cloud, unless you're trying to avoid having to spend too much money. Containers live this life, IoT live this life. And so more and more, you end up with systems that are being deployed
to just do one job, a single-purpose system, containing the minimum amount of services, the minimum amount of binaries, it needs to do that one job. In some cases, totally ignoring patching, you know, just deploy the thing in the cloud, run the thing, destroy it, deploy a new thing. And when
you need to add more services, you just add more VMs, you add more containers, you add more cloud instances, whichever poison you're using in this new world. The model is kind of one that just encourages more and more installations of an operating system, each individually doing less and less. And that solves a
little bit of the problem. You know, the the incompatibilities of running multiple versions of the same thing on the same machine goes away because you're not running multiple versions there anymore. The problem pooling goes away as well. But you're still left with the
hardware requirements getting higher and higher, the more you're putting on the bare metal, and you're left with configuration management, which is probably getting even worse, the more VMs you have, the more variant, the more various installations you have around there. So to really solve the problem of the perfect operating system
for this new world, for containers, for single-purpose systems, it needs to have an answer for the configuration management problem, basically minimizing the possibility of the configuration of an operating system drifting, changing, and ideally have as little on there to be configured as possible. Because then if
there's nothing there to configure, there's less to go wrong. Then there's patching: you need security updates, you need to be running the latest version of the right thing. And as much as possible, that should be totally automated. For obvious reasons, if it's automated, you don't have to worry about doing anything about it. And the hardware
requirements of that operating system should be kind of minimized or optimized as much as possible to do that job. And I just realized I talked about all of that without changing the slides. When we've been looking at that in our
team, you know, we kind of ended up focusing on the configuration management and the patching side of things. Trying to, you know, as operating system engineers, you know, we're looking at the best way of minimizing the problems and mitigating the issues. And at SUSE, as you know, we've been doing stuff with BTRFS since like forever. And in SLE, we have snapshot and rollback. And with that
kind of in context, we realized that the solution to those two problems really can be answered by this one champion concept in between: transactional administration. You want to, of course, minimize the configuration
management requirements, you want to minimize the amount of patching you need to do. But you're still going to have to do something, you're still going to have to change some config on a machine, you're still going to have to patch it. So you need to be in a position that if it's worth doing, if you're actually changing the configuration of a system or changing the state of a system, you can undo it, that it can easily be rolled back to the
last known working state. So any change should be transactionally applied, in a way that's sort of totally reliable, totally reproducible, and totally reversible, because anytime there's changes, there's a chance something will go wrong. And of course, we also realized at this point that every sysadmin has this sort
of almost secret rule that they never want to touch a running system. It's there, it's working, you know, don't touch it, especially on a Friday night, because if you deploy on a Friday night, you're going to work on Saturday. And so, two years ago now, we introduced into sort of the openSUSE ecosystem this
idea of transactional updates, which is a way of updating a system using Btrfs and Snapper, but in a different way than you normally see in openSUSE and Leap: it's totally atomic, you know, basically, the change to the system happens in
one single atomic operation, it either entirely happens, or none of it happens. When it does happen, it happens in a way that isn't influencing the running system, you know, the system is currently running, you're updating the file system in the background, but the files currently in use don't get touched, and
then you flip from the current system to the new system on a reboot. Because it's all happening in one single atomic operation, and that's captured in a snapshot in BTRFS, it can be easily rolled back. And because it's actually happening at a reboot, it's also trivial at that point to test has the reboot happened properly, you know, have all the services
started up, right? Is everything working the way it's meant to work? So if it's not working the way it's meant to work, it's incredibly easy to just throw that snapshot away, reboot again, and get back to where you were. If you want to know more about transactional updates, there's another talk in here, 12 o'clock tomorrow, Ignaz is doing it, talking
about the state of transactional updates after the last couple of years of it being in OpenSUSE, where it is, how we're using it. So not going too much detail here. And admittedly, it's not all wonderful. We have some areas where we're trying to improve things. So on Sunday at 10 o'clock, also in here,
Torsten is talking about some of the ideas we have for improving the situation of transactional updates with /etc, so the whole configuration management side of things. Things are getting better, we're minimizing it, minimizing the problems, but there's some still there, and we could do with some suggestions or hearing some of our ideas there. So with this combination of basically
using Salt and a read-only root file system, we're doing our best to solve that configuration drift problem. The idea with MicroOS, what we're basically doing is minimizing what you can change on the system at the same time using Salt, so
when you do change it, it's standardized across all of your machines as possible. On top of that, we're using transactional updates, and on top of that, we're optimizing the footprint, not installing too much, not bundling a million files on there, so of course there's less things on the system, there's less things to go wrong,
and when we originally started with MicroOS, we've always talked about it in the context of containers, but when you think about it, it's actually far more generic than that. It's a perfect operating system now for any sort of single-purpose deployment. Containers is one example, but a VM that's just doing one thing, or an IoT device, or something
like that. MicroOS perfectly fits that niche. It's a rolling release based on Tumbleweed. In fact, we're building it totally as part of the Tumbleweed project, so we advertise it and talk about it as if
it's a different distribution, but it's not a different code base. It's tested in the Tumbleweed project, it's built in the Tumbleweed project in OBS, and it's actually part of the Tumbleweed release process, so if Tumbleweed breaks in a way, or Tumbleweed changes something that breaks MicroOS, then neither of them get shipped, and vice versa.
If MicroOS breaks something, which I admit I do probably a bit more often than I should, then I'm the reason Tumbleweed doesn't have a snapshot that day. But that means, of course, if you're using Tumbleweed, you know that quality, you know what we're doing there, and this is sort of part of that at that same level of always usable.
But of course, with the additional benefit, because we have transactional updates as the only way of doing it, in some respects, it's a safer version of Tumbleweed to use, because it can roll itself back if it all goes wrong. But I'm getting ahead of myself, I've got another talk about that later today.
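The atomic flip being described here can be sketched in plain shell. This is a toy model of my own, not MicroOS code: it doesn't use Btrfs or Snapper at all, it just imitates the pattern of preparing the new system state off to the side and switching to it with a single atomic rename, which is why the running system never sees a half-applied update and why rollback stays cheap.

```shell
# Toy model of a transactional update: build the new "snapshot" out of
# band, then flip one pointer to it. Not MicroOS code; just the pattern.
set -e
work=$(mktemp -d)

# The currently "running" system: a pointer to snapshot-1.
mkdir "$work/snapshot-1"
echo "v1" > "$work/snapshot-1/release"
ln -s snapshot-1 "$work/current"

# Prepare the update entirely off to the side; "current" is untouched.
mkdir "$work/snapshot-2"
echo "v2" > "$work/snapshot-2/release"

# The "reboot": replace the pointer in one atomic rename(2).
ln -s snapshot-2 "$work/current.new"
mv -T "$work/current.new" "$work/current"

# After the flip we run v2, and snapshot-1 survives for rollback.
cat "$work/current/release"
cat "$work/snapshot-1/release"
```

In the real stack the snapshots are Btrfs snapshots managed by Snapper and the flip happens at reboot, but the atomicity argument is the same: either the pointer moved, or it didn't, and the old snapshot is still there if you need to go back.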
We've got various deployment options for MicroOS available now, so we have a fully working, tested DVD and NetISO with YaST, so you can download it, boot up a system with it, and you get a slightly optimized YaST workflow compared to the typical Tumbleweed installation, so there's fewer screens,
fewer steps, fewer things to ask for. But we've still kept a lot of options there on the summary screen at the end, so you can really dig in, customize, add extra stuff, because it's YaST, what's the point if we didn't give you that option? We also have a bunch of these other things, and kind of most of these are in some state of development.
We have VM, cloud images, and Pi images, which, based on what Fabian was talking about, are there, but still need a little bit more testing before they're officially part of the release process. We have Yomi, which is a method of installing MicroOS directly from SaltStack; we'll have that very soon.
And for all of these images and ways of deploying, we're using, at the moment, a combination of either cloud-init or, soon, Ignition, for configuring the MicroOS system on first boot, handling things like the network configuration, SSH keys, et cetera, so the idea being, you just deploy it, it boots up,
it's ready to go, it's already running, you don't have to do anything else, just put your workload on top of it. Yomi, the salt-based installer, is a really exciting part of that. You can come here, come to the gallery, so the other room, tomorrow at three o'clock, and Alberto's talking about that.
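To give a flavour of that first-boot configuration, here is a minimal cloud-init user-data sketch. The hostname, SSH key, and runcmd line are placeholders of my own, not anything MicroOS ships, and exactly which cloud-init modules a given image enables may differ.

```shell
# Write a minimal cloud-init user-data file. Every value here is a
# placeholder for illustration, not a MicroOS default.
userdata=$(mktemp)
cat > "$userdata" <<'EOF'
#cloud-config
hostname: microos-demo
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3...example user@laptop
runcmd:
  - systemctl enable --now sshd
EOF

# cloud-init only treats this as cloud-config if the very first line
# is the "#cloud-config" marker.
head -1 "$userdata"
```

Feed a file like this to the instance (how depends on the platform: a NoCloud seed, an attached drive, or the cloud provider's user-data field) and the machine comes up with its network, hostname, and keys already in place, which is the "just put your workload on top" experience described above.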
So I don't have to go into more detail here, which is nice, because otherwise I'll run out of time. And with all of that put together, the question then becomes, what are you gonna use MicroOS for? It's not just a container operating system now. So some examples, sort of the obvious five,
obviously, containers, it's where we started. We started this MicroOS stuff playing around as a container operating system. But anything that's hosting a single service is a perfect use case for this. So things like single service VMs, cluster nodes, hardware appliances, Raspberry Pis, IoT, the idea with it,
if you're running it in a container or you're just putting an extra RPM package on top, MicroOS should be the perfect openSUSE for that kind of use. In my case, I've become a complete MicroOS addict. So I'm obviously chairman of openSUSE, I've been using Leap, I've been using Tumbleweed.
At the moment right now, I have one Leap machine left. Everything else in my life is either pure Tumbleweed or MicroOS, including all of my personal infrastructure. So I have a NextCloud server, that's running on MicroOS as a container host using the NextCloud container. There's my blog, in fact, there's my blog,
there's a Kubic blog, there's pretty much every blog that I'm involved in somewhere. That's running on MicroOS, running Jekyll on top. In those cases, I'm not using a container, I'm just using plain Jekyll RPM packages and running that one service there to deploy the website. I have a retro gaming machine
that's plugged into the back of my TV using a combination of MicroOS with RetroArch and Emulation Station. And that's been plugged into the back of my TV now for about a better part of a year and a half. And I haven't actually looked at the console for that
for that whole time. It's just been plugged into the back, it's been updating Tumbleweed every single time, rebooting itself in the middle of the night. And whenever I feel like playing old retro games, I just flick to that on my TV, and it's there, and it's working, and it's running the latest version of Emulation Station based on what we have on OBS, and it's never gone wrong.
I didn't bring it with me today, I might bring it with me tomorrow. I take it to conferences, we shove it on the booth when everybody's bored and play a few games, and it's never gone wrong. Every time it boots up, it's there, working fine. And I assume at some point, Tumbleweed has had a bad day, but it automatically rolled back, so as a dumb user, I don't notice.
It's just there, always working with the newest stuff. And yeah, my Minecraft server, which me and my friends use, same again there, another MicroOS machine. In this case, I think I'm running it on Hetzner, so running it in the cloud, and that's just running a container on top of it, and it keeps on patching itself, and I don't pay any attention to it.
It's just there and working. So after I'm done rambling on about this stuff, Ish is talking about how he's using MicroOS in production at 4.15 in here, and in fact, if you want to hear me ramble on about this stuff a bit more, I have a crazy idea about using MicroOS as a desktop.
I messed around with this in the past in a Hack Week project, and I'll be talking about it more at three o'clock in here as well. If you're interested in playing with this, it's part of Tumbleweed, so go to download.opensuse.org/tumbleweed, in the appliances folder, in the ISO folder.
You can download this now. We don't have a website for MicroOS yet. Volunteers are welcome, please. It's all new, we're moving stuff around, so if you're interested in working on a website for that, please find me around the conference. Let's talk. You know, we needed to obviously start spreading this and just having ISO sitting on a download server
isn't going to get everybody using it, but at least technically speaking, we can say, it works, it's awesome, we're building it, we're testing it, it's good quality. So all the hard part's done, now we just need to spread it around the whole world. So that's MicroOS. What about Kubic?
With MicroOS now defined as this general-purpose, single-purpose operating system, you can use it for anything, but we expect it to be deployed for just one thing at a time. Kubic is now a MicroOS derivative. Basically it's a showcase of what you can do with MicroOS when it comes to containers or Kubernetes. So we're still using the name Kubic
because people know it, and it's part of the Kubernetes ecosystem, it's known by the Cloud Native Computing Foundation and the like, but from a technical perspective, it's just a MicroOS variant, and just like MicroOS, it's built as part of Tumbleweed, tested as part of Tumbleweed,
shipped as part of Tumbleweed, and so yeah, it all works as part of that family. So we have three distributions, all in one code base now. Containers in particular are fun. They do a really good job of trying to solve that kind of problem-pooling problem.
Problem-pooling problem? Well, I should've thought about that name. They do it by separating the service or the application from the operating system. And I've realized more and more, as a distro engineer, as a Linux geek, I don't necessarily care about that, because I'm the kind of user who is perfectly happy doing everything in RPMs,
because I care about the base system and I care about the application, I worry about all this stuff. But most users don't. They don't want to worry about the operating system, they just want to worry about that one thing that they care about. Their web server, their Minecraft server, whatever.
And containers give a really nice model of reflecting that, technically speaking. Because the developer, the user, can just worry about that service they want to deploy. And they can micromanage that. They can really take care about what's in that container, where they pull that container, how they configure it. And that's the bit of the story they want to worry about
and they just want something underneath that they can just leave and forget about and not do anything with. And now we have MicroOS, and like in my examples, I've just deployed it and I don't look at it anymore. It takes care of itself, it patches itself. So marrying these two things together works really, really nicely.
But you need to, of course, have something to run those containers inside. So we're huge fans of Podman in the Kubic project. Podman is an alternative to that other container runtime, beginning with D, that people like. One of the reasons Podman is a really interesting project
is from an architectural perspective, it's more interesting. It doesn't have a single daemon. So with Docker, there's this one big Docker daemon, which is a nightmare to secure. It's a nightmare to manage. And if that Docker daemon dies, all of your containers become impossible to manage. With Podman, it's acting actually
like an old-fashioned Unix application in the sense of it starts a container as a process, you can manage the process, you can stop the process, and yeah, there's no daemon to die. You can still manage it if things go a little bit weird. And it supports all of the same containers that Docker does, and in fact, it uses all the same commands that Docker does
and some fun extra ones as well. So basically, for a lot of people, if you want to transition from Docker to Podman, just alias docker to podman and the commands will mostly work the same. In the case of Kubic, we don't install Docker by default anymore. In fact, we don't offer Docker anywhere in the installation options anymore. So you're gonna get Podman by default.
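A minimal shell sketch of that "alias docker to podman" transition, assuming nothing beyond a POSIX shell: it picks whichever runtime is installed, since the two CLIs are largely command-compatible. This is a hypothetical helper, not anything Kubic ships.

```shell
# Pick podman if present, fall back to docker; the CLIs are largely
# command-compatible, so scripts can use either via one variable.
if command -v podman >/dev/null 2>&1; then
  CTR=podman
elif command -v docker >/dev/null 2>&1; then
  CTR=docker
else
  CTR=""
fi
echo "container runtime: ${CTR:-none found}"
# With a runtime present, the familiar commands then work unchanged, e.g.:
#   $CTR pull registry.opensuse.org/opensuse/tumbleweed
#   $CTR run -it --rm registry.opensuse.org/opensuse/tumbleweed /bin/bash
```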
If you don't like that, you can install Docker from Tumbleweed. It'll work as well. But yeah, please, if you're interested in containers at all, try it, play with it, it's awesome. When I wrote this slide, I forgot that Fabian was going before me. But yes, we have registry.openSUSE.org now. It's building containers directly from OBS.
It's rebuilding those containers as part of OBS, rebuilding the packages, so the containers are always fresh, they're signed, they're notarized. And with Podman, it's nice and simple. You can just run a single command to download the latest official Tumbleweed or Leap container, and more containers as we add them.
So if you want to know more about that and you weren't here half an hour ago, just watch the video, because Fabian did a really good job of explaining how we can build those containers and how you can contribute to that. Who here has heard the word Kubernetes before? Hey, cool.
So yeah, Kubernetes is special. Containers, running them at scale, and when I say special, I mean it in the positive and the negative way of special. It's designed to run hundreds of containers across dozens of machines. And when you look at that from a distro engineer's perspective,
it's an absolute nightmare. You know, like Dr. T talking about CaaSP and Kubic this morning, you know, part of the reason why things have kind of gone a little awry there is there are just an infinite amount of moving parts. No matter which layer of the stack you look at, you know, from the user's point of view, they always want to have the latest containers.
So you have the containers always moving really quickly. And then, of course, the latest containers probably require the latest Kubernetes, so there's this need to have Kubernetes moving really quickly. And that, of course, then has an impact on your container runtime if you're using CRI-O or Docker, and therefore that needs to move really quickly. And then that, of course, means the base operating system
has to move really quickly, and somehow all these different parts all have to move really, really quickly, and at the same time actually work. So it's the problems we were talking about earlier, you know, configuration management, patching, hardware, just kind of amped up to 11 and then some.
But with Kubic, because it's based on MicroOS, because it's adopting this principle of sort of single-purpose operating systems, it's basically, in my mind, the perfect Kubernetes operating system. Because with the Tumbleweed base, the moving quickly part is totally solved. You know, we can move as fast as all of the upstreams without worrying about things much at all.
It means we can also integrate the latest stuff from upstream right away, like kubeadm. In Kubic, for Kubernetes, we don't use Podman, because Podman is kind of more designed for your single host. Instead, we're using CRI-O,
which is basically the same thing, but kind of optimized for Kubernetes rather than a single host. And coming soon, well, in fact, technically, oh, yeah, sorry, I just realized my pointer was aimed slightly off, there we go. Kubic, yeah, everything I was just saying.
And yeah, coming soon, we have Kured, which is a service running on your Kubernetes cluster to kind of help orchestrate the rebooting aspect of patching Kubic systems. So because we have Kubic, where it patches itself and then it needs a reboot for the patching to take effect,
and with Kubernetes, you have a large cluster with hundreds of different machines all doing different things, you don't necessarily want to have those machines randomly rebooting when it's really inappropriate, like, you know, when it's busy. So with Kured, you have a service sitting on Kubernetes that's aware of what your cluster is doing. Kured stands for Kubernetes reboot daemon, by the way.
And so Kured, which is now integrated with transactional updates, can be aware, okay, these machines are ready for a reboot, and then it'll trigger the reboot when the time is appropriate. And we also have a new tool called kubicctl, which helps streamline and bootstrap
sort of the whole Kubic Kubernetes story. So we're using kubeadm to actually start the cluster, build the cluster, but with kubicctl, we kind of wrap around that, and help set up a Salt master and configure Salt all at the same time. But unfortunately, I don't have time to talk about that today. Setting up a Kubernetes cluster on Kubic
is incredibly easy. The documentation's on the wiki. You need to have at least two machines. You set up the installer at the moment using YaST, or you can use the images we're working on. Basically, install it, and SSH is automatically configured. In fact, in addition to SSH,
we also have this really cool tool called tallow, which sets up basically something like fail2ban. So tallow is listening to your systemd journal, figuring out who's trying to access your SSH connections, and if it's getting too many failed attempts to guess the root password, it just blocks them, sets up the iptables rules,
and yeah, nice and secure. So that's another one of those nice things with Kubic: you don't have to worry about configuring it much after the deployment. It's already there, already taking care of itself. But yeah, once you've got your first Kubic node installed, all it takes to set up that node in the cluster is one command.
And in fact, if you've seen previous versions of this slide, that command used to be really long. Thanks to working with upstream, we've managed to solve most of the issues. So it's now just one command and a string for setting up the network. And in fact, there's a talk tomorrow about Cilium, and there's an alternative way of doing this
using what you will learn about Cilium tomorrow. When that's finished, kubeadm gives you a nice command, which is way too small to read here. Basically, that's the command you need to run on the other nodes in your cluster. So they will join the cluster, automatically have their keys configured
and trust established between the other nodes in the cluster and the master that you just configured. Then you need to configure a client so you can manage the system. That's nice and easy, because kubeadm has already made the config files for that, so you basically just copy the config files to the right location.
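A pseudocode-style sketch of that bootstrap sequence, not runnable as-is: the placeholder values in angle brackets come from your own cluster, and the exact flags printed by kubeadm vary by version, so treat the Kubic wiki as the authority.

```shell
# On the first (master) node, bootstrap the cluster:
kubeadm init

# kubeadm init prints a join command; run it on every other node, e.g.:
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Configure a client by copying the config kubeadm already generated:
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
```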
Then you need to have a network. Again, nice and easy, because we're in this wonderful container world. So it's a single command there, and that will deploy your container network to your cluster. And after those few commands, you're done. After that, you just add all the additional nodes
using that command you were given at the beginning, and you end up with a Kubernetes cluster. So you can then start deploying your containers, have the containers automatically moving around multiple machines, taking care of themselves. And that combination means Kubic underneath, patching itself,
rebooting itself, and the containers on top, moving around the cluster, so everything is always working all the time. To know more about Kubic, yeah, there's just so many talks about all this stuff, so I didn't have to fit it all in here. Dennis is talking here today about using Kubic with Ceph and with Rook.
Yeah, just before five o'clock. And then after him, there's a talk about Kubic and OpenSDS as well. And with that, I'm done, with 10 minutes left for questions. So does anybody have any? And if you do, I think I have the speaking microphone, so I'm afraid you're gonna have to go to the back,
because I broke the other microphone. Hello, Joe. Hi, so you've really advertised using MicroOS. Are there any reasons not to use it? I mean, it looks like, why wouldn't we just change everything to MicroOS?
Why isn't it the perfect solution for everyone? So if the machine, if the deployment of the machine, so the VM or whatever, is just going to do one job, I think it might be the perfect answer for everything. That's what I want to explore a little more in my talk an hour from now.
I'm not quite sure on that. But if you're the kind of person that wants to tinker around with the machine once it's deployed, like, let's say, for example, me as a typical Tumbleweed user, where I'm installing packages and removing packages and really messing around with the innards of the system, MicroOS is not a friendly system for that,
because you're going to be rebooting every time you're making a change to that part of the operating system. So if, yeah, tinkering and playing around is your thing, MicroOS isn't the best for that. But if it's more of a case of you want to deploy it, have it just do one job,
and once it's deployed, pretty much forget about it, I really think there's a place to use MicroOS in tons of places, yes. Yeah, so Adam, question. What I'm hearing from some users is that the reboots are just too frequent, because every update has to go through the reboot.
Would there be a way of doing it in two stages? Like, if you have updates that are safe to apply, like non-kernel updates, we could still do those, and then kind of make the file system read-only, doing the copy only if you have to do the reboot, with the big ones every three months,
where you do the kernel updates, the major security updates that do need a reboot. Would you like the answer that SUSE management would probably like me to say, or my personal answer? Personal one. I don't trust maintenance updates. They have a habit of breaking more than Tumbleweed updates do.
Partly because there's a legitimate technical reason for that. It's incredibly hard when you're just trying to change that one thing in a complex system. That kind of desire to minimize the change
brings with it certain risks, that in Tumbleweed, we just managed to blast right past, because we can change everything, we can always, if that one tiny change needs us to change 20 libraries, we change those 20 libraries, we test everything, we ship everything. And so, the MicroOS patching model is kind of a reflection of that philosophy.
Maybe there is room for a hybrid. If there is, I'm not probably the best person to find it, because I'm very much on the rolling everything side of things. Good? Okay, yeah, so by the way, I'm running my laptop on Tumbleweed, so I know what you're talking about. Yeah.
When you say doing one thing, what do you mean? In so much as, would something like OBS class as doing one thing, or is that, which part of OBS?
Are you talking about workers, et cetera, and all? Yeah, it's a bit of a nebulous term, kind of on purpose, because that kind of scope of one thing can vary around. So, the typical kind of one thing would be like a container host.
So, in that case, it's MicroOS plus Podman. That's the scope of MicroOS. The one thing is Podman. The fact is, Podman might be running 20 different things in a container. That's out of scope of the operating system. We're not gonna reboot because you deployed a new container. The system state is changing because Podman's getting an update.
So, with MicroOS, there's nothing stopping somebody installing MicroOS today, running transactional-update pkg install for 20 different packages, turning all of them on, and having those 20 things be the one thing for MicroOS, but you bring with that more of, like Joe mentioned,
the patching problem. The more things you deploy there, the more things are gonna move, the more things you're gonna need to reboot because, and so if you try to make this one machine do everything, you kind of lose the benefit of MicroOS being this kind of just simple deploy,
forget about it option. So, there's a balancing act in between. With something like OBS, I think OBS is smartly designed enough that in fact many of those parts are kind of already built in that way, like the workers. Workers would probably make perfectly good MicroOS use cases because you deploy MicroOS, you have the worker software on there,
and then everything else is VMs for the build part. That would work. My talk later about the MicroOS desktop, well, a desktop is kind of stretching the one thing a little bit, but that's where I want to be exploring with that.
Yeah, well, we're exploring that, of whether it makes sense to install Wayland and X and GNOME and kind of define that as the one thing, and see where that goes. So, I don't want to strictly define it down to, like, oh, it has to be one package or whatever. It's openSUSE, we want to figure out where that line perfectly is,
but if someone's gonna file a bug on MicroOS and say, I installed these 20 things and one of them doesn't work, I'm probably not gonna be that sympathetic; I'm gonna suggest they probably should use something else other than MicroOS. Any more questions?
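The transactional-update workflow mentioned in that answer can be sketched like this. It's a hedged sketch that only does anything on a system where the tool exists, and the package names are just examples, not anything MicroOS requires.

```shell
# Sketch of the MicroOS update workflow: transactional-update stages
# changes in a new Btrfs snapshot, so the running system is untouched
# until the next reboot. Package names below are examples only.
if command -v transactional-update >/dev/null 2>&1; then
  TU=present
  # Install packages into a new snapshot:
  sudo transactional-update pkg install vim tmux
  # The new snapshot only becomes the running system after a reboot:
  sudo systemctl reboot
else
  TU=absent
  echo "transactional-update not found (not a MicroOS/Tumbleweed system)"
fi
```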
Yep, oh, cool. Hi, Brian. I have one question about the file system, because I think now you can kind of create a snapshot for the system and go back to a previous version, but still, the file system sometimes may
break, or it just fills up and you cannot write anything to the disk. Does MicroOS have something to solve the problem that the file system itself may fail
and we can kind of recover it? So, Btrfs has a bit of a reputation for being a bit of a hard beast to live with. In my opinion, it's actually mostly an unfair reputation. And I'll try and answer your question
kind of in both parts. In terms of the case of Btrfs filling up because of snapshots, which is something in openSUSE we've had a ton of, and part of that is at least partially my fault, there is a balancing act of making sure that the root file system is big enough for the snapshots caused by the root file system changing.
Until recently, I don't think we got that balance right. And currently in Leap 15.1, in MicroOS, in Tumbleweed, I really strongly believe we've solved that problem now because I spent a really hard time trying to get the libstorage-ng sizing rules for all of those things to be far more accurate
for the real world. So we generally have YaST automatically making the root file system bigger, so it has more space for those snapshots. Plus, Arvin on the YaST team has done a lot of work with Snapper, so it tidies up after itself better. So those two things together mean
Snapper shouldn't be filling up the disk anymore. Full stop, that should be fixed. The other part of the reputation, of Btrfs being a bit fragile, I talked about it actually at OSC last year. There's a lightning talk I did on it. The biggest problem with Btrfs is it's aware of what's going on with the disk.
It's smart, it's got its data, it's got its metadata, and it's constantly checking that those things are in sync. And when something goes wrong, it takes the action of mounting everything as read-only. So people think it's broken. It's not broken, it's just taking care of itself. Unfortunately, when that happens, most people have used something like ext4,
and what's the first thing we all do when ext4 is misbehaving? We run fsck. If you run fsck on Btrfs, especially with --repair, you're probably gonna break Btrfs. It's why the documentation says this is the last thing you should ever do. But nobody reads the documentation.
On the wiki for openSUSE, we have a 14-step guide on basically what to do when Btrfs misbehaves. For 99.9% of people, you don't get past step four before it's fixed. And actually running fsck is the last step. So when you do the right things with Btrfs,
it's perfectly reliable. SUSE are using it in the enterprise. So it just, yeah. Read the manual, read the wiki, and don't panic when something goes wrong. I've never had a Btrfs system in the last four years I haven't been able to fix. And I've had a lot of broken systems, so.
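A pseudocode-style sketch of the spirit of that advice. This is not the openSUSE wiki's actual 14-step list, just the general shape: read-only diagnostics first, repairs that use Btrfs's own redundancy next, and --repair last of all. Device and mountpoint names are placeholders.

```shell
# 1. Look at what Btrfs itself says went wrong:
dmesg | grep -i btrfs

# 2. Check device-level error counters (read-only, safe):
btrfs device stats /mountpoint

# 3. Scrub: verify checksums and repair from redundant copies where possible:
btrfs scrub start /mountpoint

# 4. If the filesystem won't mount, try the backup root, still read-only:
mount -o ro,usebackuproot /dev/<device> /mnt

# Last resort only, after everything else, and ideally with a backup made:
# btrfs check --repair /dev/<device>
```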
Thanks. Cool. Good, I think I'm out of time. Thank you very much.