
Jetpack, a container runtime for FreeBSD (part 1 of 2)


Formal Metadata

Title
Jetpack, a container runtime for FreeBSD (part 1 of 2)
Subtitle
Breaking the Linux/Docker Monoculture
Series Title
Number of Parts
41
Author
License
CC Attribution - ShareAlike 3.0 Unported:
You may use, change, and copy, distribute, and make the work or its content publicly accessible, in unchanged or changed form, for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or content, including in changed form, only under the terms of this license.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
Jetpack brings application containers, popularized by Docker on Linux, to FreeBSD. Application containers are a new approach to virtualization, popularized in the last two years by Docker, a Linux implementation that has all but monopolized the market. Jetpack is an application container runtime for FreeBSD that implements the App Container Specification using jails and ZFS. I will speak about how the container paradigm differs from existing jail management solutions, how Jetpack fits into the general landscape of container runtimes, and about Jetpack's inner workings and implementation challenges. A quick demo is not unlikely.
Transcript: English (automatically generated)
Thank you for having me here. I'm pretty excited about talking here, nervous actually, so please bear with me. My name is Maciej. I'm a developer and system administrator, I do the DevOps thing, and I will be talking about a container runtime for FreeBSD.
I will start by talking about the technology involved and how to place it in the existing landscape; the point is that the technology here is not new. Then I will expand a bit on the container mindset, which is what is new about Docker and the Rocket implementation. Then I will say a few words about the App Container Specification, which Jetpack implements, and I will finish by talking about the Jetpack implementation itself.
Containers are a form of operating-system-level virtualization, which is when a single host kernel runs multiple isolated guest instances. FreeBSD jails and OpenVZ are examples. It's old. The difference from plain old virtualization, the hypervisor-type virtualization we usually think about when we hear the word, is that in hypervisor-type virtualization the host runs a hypervisor and completely independent guest operating systems. Each guest operating system runs its own kernel, has its own virtualized hardware, and is completely isolated from the remaining guests; each guest believes it has all the hardware to itself. OS-level virtualization is when the kernel isolates multiple parts of the OS so that they believe they are a whole operating system, but they are isolated at the host level while actually running in the host's user space: they are visible in the same process tree and they use parts of the same host file system.
The difference between OS-level virtualization and the hypervisor approach is that, on one hand, there is less isolation: the guest and host operating systems must be the same or at least binary compatible, because we can run Linux guests in FreeBSD jails only as far as FreeBSD's Linux system call emulation allows. On the other hand, it has much lower overhead than full virtualization, because the system doesn't need to emulate whole hardware, it just needs to enforce access rules; there are no multiple kernels, no multiple operating systems to run. The isolation level is adjustable, and it is possible to share resources.
It is possible to cross-mount parts of the file system via nullfs or bind mounts, it is possible to share buffers for loaded files, and so on and so on.
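To make the sharing concrete, here is a minimal Go sketch, assuming a FreeBSD host and made-up paths, that exposes a host directory read-only inside a jail tree through a nullfs mount:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Expose the host's ports tree read-only inside a jail's file system.
        // Both paths are hypothetical and must already exist; run as root.
        cmd := exec.Command("mount", "-t", "nullfs", "-o", "ro",
            "/usr/ports", "/jails/web/usr/ports")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("nullfs mount failed: %v\n%s", err, out)
        }
    }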
The technology isn't new: it started in 1982, so it's as old as I am. chroot was introduced into Unix in that year; it is the system call that allows a process and its children to switch to a selected directory and see it as the root of the file system. Then in 1998 FreeBSD got jails, and soon other operating systems followed. These technologies add an extra level of separation, additional restrictions on top of chroot.
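As a rough illustration of the chroot call itself (a sketch only: the directory name is made up, it must contain a usable userland, and root privileges are required):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Make /jails/demo the root directory for this process and its children.
        if err := syscall.Chroot("/jails/demo"); err != nil {
            log.Fatal(err)
        }
        // Move the working directory inside the new root so nothing outside leaks in.
        if err := os.Chdir("/"); err != nil {
            log.Fatal(err)
        }
        // Any child process now resolves paths inside the chroot only.
        out, err := exec.Command("/bin/ls", "/").CombinedOutput()
        if err != nil {
            log.Fatal(err)
        }
        os.Stdout.Write(out)
    }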
The newest are Linux cgroups and LXC, which are what the modern container systems, Docker and Rocket, are based on. These technologies isolate the file system; additionally, they isolate the process tree, so guests can't see processes of other guests or of the host, there is additional isolation between environments, and administrative system calls are restricted. Basically, these are technologies that make chroot behave like a more isolated, more separate system.
But the tooling around these technologies is still in a virtual machine mindset. It treats guests as a complete system that is managed from the inside: you open a console in a FreeBSD jail or SSH into the jail, you start services, and FreeBSD jails have their own rc.d and rc system and everything that goes with it. The jails are usually long-running and mutable; they can change state and they can be managed like any server. So they also carry the management overhead of a whole server: you need to manage access, user accounts, backups, and so on and so forth. Then, in 2013, Docker showed up and brought a new mindset,
the container mindset. This is what people had been doing before as well, in closed source, in platform-as-a-service companies, in-house; Docker was the first open implementation of this approach. The difference is that containers are service-oriented: each container is a single service. It is not a system, it is not an Ubuntu machine or a Debian machine; it is a Redis database, it is an nginx web server, it is a Rails application server.
The guest is managed from the outside by an API, and you don't normally log into the containers; you call the API to start and stop them, and if you need something changed, you destroy the container and create a new one. The images are immutable and can be distributed and shared. Provisioning is fast and copy-on-write: you can almost immediately clone a new container from a pre-made image. The main points that distinguish the container mindset are layered storage; explicitly defined interaction points, meaning a limited number of places where the container interacts with the rest of the world; immutable images; volatile containers; and, as I said, service orientation. I will expand on these on the next slides. So at the beginning we have an image.
It is just a base root file system of an Ubuntu long-term-support version. It is read-only: once it has been written, you cannot change it, it is set. To prepare a containerized application, we create two child images. One is the Redis server, and the arrow means inheritance: only the difference is actually remembered, one image is built on top of another. So one image has the Redis server, another hosts the Ruby language runtime, and from the Ruby image we make another child image with the Rails application.
So let's say Bob wants to start the Rails application. He starts a container. A container's root file system is just a writable layer on top of the image, and it's volatile: you don't care what happens to it. If you stop the container, it can disappear; you are not supposed to care about that layer's state. And it's blazingly fast to start, because you already have the Rails application, you already have the image directory. Jetpack uses a ZFS clone; in Docker you just put an AUFS layer on top of it. You don't copy anything. But the application has precious data, so for that we have volumes, which are persistent directories shared with containers. We need to explicitly say: this directory we want to keep on the host, we want to keep that data, it is important, because these are user uploads. And the app wants to talk to a database, so it's linked with a second container that hosts the Redis we already discussed, with its own volume for persistence.
Now let's move that arrow a bit, because when Alice wants to run a copy of the same app, she can just clone it. She doesn't need copies of anything that's already in the images; she just has her own content: a small container, the thin read-write layer, and the volume. If we want to host another app, we can add it to the same hierarchy; nothing is repeated unnecessarily. And if Bob wants to scale his app, he can just start a second container to scale out; it will share the same volume and the same Redis link, and it will just work. That's how it looks; I hope it's not as confusing as it seems now.
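A minimal sketch, assuming a ZFS-backed layout with invented dataset names, of how a runtime along these lines can clone a container root file system from a sealed image snapshot:

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes a command and aborts on failure, printing its output.
    func run(args ...string) {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        if err != nil {
            log.Fatalf("%v failed: %v\n%s", args, err, out)
        }
    }

    func main() {
        // Seal the image once; every container is cloned from this snapshot.
        run("zfs", "snapshot", "zroot/images/rails@sealed")
        // The clone is copy-on-write: the container only stores what it changes,
        // so creating it is nearly instant and costs almost no space.
        run("zfs", "clone", "zroot/images/rails@sealed", "zroot/containers/bob-rails")
    }

Destroying the container is then just destroying the clone; the image snapshot stays untouched.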
As for the explicit interaction points of containers: you interact through the command-line arguments and environment variables that you start the container with. You define network ports, you define shared volumes, and you are absolutely not supposed to care about anything that's not in a volume. You get standard input and output and the exit status. You don't get to interact in any other way.
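Purely as an illustration of how small that surface is, those interaction points fit into a tiny structure; every name below is invented rather than taken from Jetpack or Docker:

    package main

    import "fmt"

    // ContainerSpec is a hypothetical description of everything the outside
    // world hands to a container. Nothing else crosses the boundary except
    // stdin/stdout/stderr and the exit status.
    type ContainerSpec struct {
        Image   string            // immutable image to start from
        Args    []string          // command-line arguments
        Env     map[string]string // environment variables
        Ports   map[int]int       // published network ports (host -> container)
        Volumes map[string]string // persistent data (host path -> mount point)
    }

    func main() {
        spec := ContainerSpec{
            Image:   "example.com/rails-app",
            Args:    []string{"bundle", "exec", "puma"},
            Env:     map[string]string{"RAILS_ENV": "production"},
            Ports:   map[int]int{8080: 3000},
            Volumes: map[string]string{"/data/uploads": "/app/uploads"},
        }
        fmt.Printf("%+v\n", spec)
    }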
Immutability is very important. Images, once built, are read-only; the container's read-write layer is throwaway, volatile; and volumes are the place where persistent, mutable data lives. Because of that, images are reusable, uniquely identified, and verifiable. Once an image is built, it is set: it is one single set of files that can be identified by a checksum, by a cryptographic signature, and you can verify that it is still the same. You can share it, publish it, and reuse it multiple times, because it is a read-only layer. You can safely clone multiple containers and multiple child images out of it, because the container's read-write layer is throwaway. You can easily exchange containers: if you want to upgrade software that is running in a container, you just shut down the old one and start the new one, just like that, or the other way around: you first start the new one, verify it works, then shut down the old one and redirect traffic. And you are forced to clearly declare where the data you care about lives. I believe this is a good thing, because you always know what to back up and where you can write.
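A small sketch, with a made-up archive name, of how a read-only image can be identified by a checksum and later verified to be exactly the same set of files:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        // Hash the image archive; the digest is a stable identity for the image.
        f, err := os.Open("rails-app-image.tar.gz") // hypothetical image archive
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            log.Fatal(err)
        }
        // Anyone who downloads the image can recompute this and compare;
        // a signature over the digest extends the check to authorship.
        fmt.Printf("sha256-%x\n", h.Sum(nil))
    }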
The net effect is that, beyond the stable, read-only images, the management overhead of running a container is that of a single service. You get the benefits of jail isolation and of the fact that a containerized application is enclosed and self-sufficient, including all its dependencies; but those dependencies are not copied, not repeated, because through the image hierarchy they are actually shared, and you manage the container as a single service. Docker was started in 2013, and it's actually pretty impressive, because this is two-and-a-half-year-old software that is so popular and so widely deployed.
I don't know if I've heard about any other software that has been so widely accepted so fast. It's the first free container runtime, and note the word "free", because platform-as-a-service companies must have been doing this before, and other companies or some administrators must have been doing it in-house. Docker was the first tool to actually formalize that approach: it defined the approach, it defined the paradigm. It was adopted extremely quickly, and because it was defining the paradigm, it was implementation-driven. But this has a lot of drawbacks.
It was the only free container runtime for a long time, so it basically started to develop a monoculture and didn't need to care that much, didn't need to care about the details, because people would use Docker anyway: it works, it exists, there is no competition. It prototyped the container product. It was the first version, the first approach, but because of this extremely fast and wide adoption it was locked into its early design decisions: people were already using it, people were using it in production, there was already a lot of pre-made images, and they had to stay compatible because of that success. With that process, it ended up being implementation-defined. With all due respect, Docker is awesome, but it's got its drawbacks; there is no software that doesn't have faults. And with this whole quick success with the new approach, a quote comes to mind from the classic on project management: you will always throw the first version away, you will always re-implement. Docker, because of its success, didn't get an opportunity to re-implement. I sincerely hope to see a Docker 2.0 and to see what they come up with at that point. But right now,
there are some design decisions, like running a huge binary blob as root, as a daemon that listens on HTTP, that are kind of unfortunate. So, people from CoreOS: this is a Linux distribution that started soon after Docker got popular, a distribution that focuses on Docker and on containers, where the host distribution is just a thin layer to run systemd and Docker and any actual service should be containerized. At some point they figured out that they wanted to implement their own container runtime, because they could not agree with, and, as they said, could not defend with a straight face to their clients, some design decisions of Docker. So in December last year they started their own project called Rocket, which is the first implementation of the App Container Specification. I will talk about the specification a bit more later. It is designed for composability, security, and speed, and it breaks the Docker monoculture on Linux. It heavily uses systemd, so it's pretty much tied to Linux. What is