vmd: a virtual machine daemon for OpenBSD
Formal Metadata

Title: vmd: a virtual machine daemon for OpenBSD
Number of Parts: 31
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/45270 (DOI)
Transcript: English(auto-generated)
00:05
Is it two releases now, or maybe longer? My talk is specifically about the user-facing part of it. So I'm not going into the details of the VMM kernel implementation, because that's mostly Mike Larkin's
00:23
work anyway, and I only barely understand what he does there. So when I made this slide, I realized I had given these past talks about the cloud networking stack in OpenBSD.
00:44
And now, with VMD, I can talk about the cloud stack in OpenBSD. So it's actually part three of a series unintentionally. So what is VMD? VMD is a daemon responsible for the
01:03
execution of virtual machines. That's the definition in the manual page. It is a process that runs, that reads the configuration, executes VMs, interfaces with the kernel driver, the VMM
01:22
driver, to set everything up to have the CPU-assisted virtualization of Intel and AMD CPUs, and all that. So VMD basically does all the maintenance and the device
01:40
layer of these virtual machines. And VMD is something that you can actually compare in some ways with QEMU. The model of VMD is very similar to QEMU and KVM,
02:00
where you have a kernel driver and then a device layer that runs in userland. But we wanted to provide something that is very simple to use, but also, from the code and the functionality, easier, cleaner, something that can be in the base system.
02:23
So it's also a license issue, but it's definitely not everything. And it's designed in a way that fits into OpenBSD. And actually, we also wanted to have our own implementation, because OpenBSD is a research project as
02:41
well, and it's sometimes good to try something new to get experience. So it's not always, well, we have to do one solution or the other. We use VMD to experiment, to implement our ideas, so we can implement a hypervisor in OpenBSD.
03:06
So what's the history of it? Mike Larkin wrote the VMM driver. Actually, he started some years ago. I don't quite remember, but it was a hackathon we had in
03:21
Berlin, and we had beer. And he showed me a few lines of a dmesg from a virtual machine from the host, where it said, VMM monitor attaching, initializing EPT, and so on. And I was like, wow, you're running this on OpenBSD?
03:42
What is that? Did you port anything? And then he said, well, I'm working on this. But he didn't want to share it with anyone at this time. Only a few people even knew that something was going on there. I promised to help him with this, with the userland part.
04:07
And I really wanted to have this on OpenBSD, because I use virtualization all the time. And it is so painfully annoying that the hosts are not running on OpenBSD.
04:24
Every time I run a VM that runs on something else, it just reminds me how bad it is to run on anything else. And QEMU on OpenBSD without hardware acceleration is not really an option for me. It might be useful for testing, but not for me, for two reasons. I don't like the QEMU configuration,
04:42
I never get used to it, and it's just slow. So I was really excited about this work. And some time later, it might be, I don't know, one or two years or something like this. It was a long time. Thank you, Henning.
05:01
Cigarettes? Not here, please. Small coffee break. So some time later, Mike Larkin showed up with the first code that he shared. And he had an initial implementation of a very simple
05:21
userland vmd. And with his permission, I basically jumped on it and turned it into a privilege-separated daemon. I added a parse.y, like a configuration file, and so on. And I started working on this. And I added a tool, VMCTL.
05:43
There was something, but the parts that were hardware-specific stayed with Mike, and I basically did the configuration, the privsep and all that, and
06:00
we did most of this. So an overview of what I'm talking about today is VMD, its tool VMCTL, VMMCI, the control interface, metadata, something that's not in the OpenBSD base, but also useful. And the VMM itself is all out of scope.
06:23
So the virtual machine daemon. That is the typical privsep diagram that we tend to show for our daemons. So VMD itself consists of multiple processes.
06:41
You run it. You run VMD, /usr/sbin/vmd. And then it executes initially three processes. The one is called control, another one PRIV, and a third
07:01
one VMM. Actually, we used to fork these processes, but in the last release, we changed the model of our privsep to actually execute these processes. So it is a whole different story, but actually it
07:23
improves the protection against certain attacks. The address space is randomized again when you execute, you're not sharing anything with the parent. And with VMD, it was very easy because it basically was designed from the beginning to support this model.
07:42
So it starts up, it has VMD and these three other processes. The VMCTL tool is an external tool that can talk to the control process via the Unix socket. VMD itself, the parent process has a few privileges
08:04
to open files on the disk, like the configuration, but also tap network interfaces and disk images and so on. And once it has opened these disk images, it can send them
08:21
via the imsg socket to the VMM process. That's not the kernel part, that's the name of the process in VMD as well. And the VMM process itself is unprivileged, is pledged, and doesn't have access to the file system.
08:41
So it cannot open anything, it doesn't have the permissions to open disk images and so on. But it can run virtual machines. Each virtual machine is a process of its own, and so VMM runs the virtual machines and passes it all the
09:05
necessary file descriptors, like for the disk, the console, and the network interfaces. So one nice model that you see here already is that the virtual machines themselves cannot open any files on the disk, because they run in this chrooted, pledged environment.
09:27
But there's another process that does it and passes the file descriptor. VMM, the parent, and each virtual machine communicate with the kernel via ioctls.
09:43
And this is done all the time; a virtual machine handles its exits via ioctl. So it runs something, when there's an exit, the kernel triggers it, the virtual machine runs it and handles
10:00
like the device I/O, for example. So the interface between kernel and virtual machines is in this, I'm not sure if you can read it, this part between VMs and the VMM. The VMM only has the responsibility of starting, stopping, and listing VMs, maybe, if you
10:24
have to do it as well. And the VMs themselves, they do all the I/O and run the actual machines. One very important thing of VMD is that it is designed in
10:44
a way that we really, well, I would say mitigates. Of course, we cannot be 100% sure, but the model is really, really sane. We avoid the possibility of so-called VM escapes.
11:02
So a very popular example is the QEMU Venom attack. Well, there are multiple like this, but a bug in the floppy driver allowed to execute code on the host side
11:21
from the virtual machine. So the virtual machine could trigger the bug. Then suddenly, you're still in one of these VM processes. Let's say in QEMU, it's, of course, a different design, but they share something here. So you're in the VM, and one part is the actual virtual
11:42
machine, the guest side, and one part is the host side that is running the device layer, the host emulation, and all that. And with the Venom attack, there was a bug in the floppy driver so that you could execute code on the host side of the VM, or you could get status from other VMs, inject
12:00
something into other VMs. And actually, in VMD, each VM process runs in a very restricted, pledged environment. So pledges are, how would you pronounce it?
12:22
If I say capability system, people here would get angry. Pledges are something on their own. So a POSIX subset, basically a restriction subsystem.
12:41
When you specify pledge "stdio vmm", then from this point on, this process is only allowed to do the most basic libc functions. Like, it limits the syscalls that the process can do. stdio allows you to do read, write, malloc, very
13:04
basic things, but stdio doesn't allow you to open files or to send traffic or whatever. There's many, many pledge options. vmm is something that restricts the process to do
13:22
the VMM ioctls, but it cannot do any other ioctl, actually. And even the syscalls, they're limited to what stdio and vmm allow. The VMM process, the one that creates the virtual machines, needs a little bit more.
13:41
It has recvfd, so it can get the file descriptors via imsg from the parent process, like, as I said, the disk images and so on. But for example, it cannot send a file descriptor. And proc means it can execute other processes or
14:01
fork processes. And then there's one trick that is not visible here. In the kernel, the kernel detects if the process is the VMM master process, the one that creates the virtual machines for that one, or it knows if it's
14:22
like an actual VM. And then the semantics for VMM are different. So that means the VM itself is only allowed to do VMM ioctls on its own VM. It cannot even request something from other VMs. It cannot get status. It cannot create new VMs.
14:43
So this is something that hopefully restricts the process from doing these side effects that the Venom attack, for example, did. And of course, the process is chrooted, running as an unprivileged user.
15:04
So whenever there is such an advisory of another escape, I look at this and say, well, I think it's a good way, it's a good design. And we'll see; if you're able to break it, let us know and we'll improve it. But it's a very interesting model.
15:25
So the VMD device layer implements several really old legacy devices. VMD is very limited. So we only have a subset of the most important hardware devices.
15:42
So like a timer, interrupt controller, a clock, a serial card, console, PCI is there, and then we're using virtio. But this is enough. And then a few other, of course, even smaller devices. But this is enough to run most operating systems that don't
16:05
need actual graphics, VGA. So by now, we can run OpenBSD and Linux. I tried Plan 9.
16:21
I didn't try Solaris yet. But NetBSD works. FreeBSD is work in progress. FreeBSD had some assumptions that we didn't match, but it will work. And I guess in the next days, Mike is currently working on
16:41
fixing FreeBSD support. So that's all we need. And if you say, oh, but I also need a graphics adapter, I need, I don't know, something special, USB, then use something else.
17:01
We might add a few more devices to VMD that we need to support, like SMP, for example. So we're thinking about adding some ACPI support and so on. But it is not like other hypervisors where you have 10
17:22
different network interfaces and so on. So it's simplified. You get a request and you send a response, and there's no timeout. So it was very simple to implement. And the IP address is auto-generated from a prefix.
17:40
I'm currently using the CGN prefix, 100.64.0.0/10. It's because I didn't want to conflict with RFC 1918 here. And so when you have enabled forwarding, the effect will be your host tap interface has an IP address.
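The host side of this setup can be sketched with two small configuration fragments. This is an illustration of the "enable forwarding, then standard NAT" step described here, not vmd's generated configuration; the egress interface group and file layout are standard OpenBSD conventions.

```conf
# /etc/sysctl.conf: let the host forward guest packets
net.inet.ip.forwarding=1

# /etc/pf.conf: NAT the 100.64.0.0/10 guest prefix out the uplink
match out on egress inet from 100.64.0.0/10 to any nat-to (egress)
```

With this in place the guest simply uses the host's tap address as its default gateway, and everything beyond that is ordinary routing.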
18:00
The guest has one. The guest uses the host as a default gateway. And then everything else is standard NAT and routing. Are you doing auto-install? No, it's just that BOOTP has a limited subset of options,
18:23
actually. Yeah, but that's a standard option. The next server, right? Yeah, that was in BootP already. Yeah, next server, but auto-install, so the auto-install. That is currently not done.
18:42
Yeah, it could probably work. I would consider it as a feature. But on the other side, we have auto-install in OpenBSD. So yeah, it could make sense. Yeah, but it's not just triggered by. No, ah, it's triggered by the.
19:05
If you send a DHCP reply containing the filename option set to auto_install, the auto-installer will be started automatically. OK. That's not good. So all we would need is a way to turn this auto-install
19:22
on, basically. Well, it should not be on by default, but if you need it, then yeah. Not yet. Yeah, if you turn it on by default, I don't know how a random Linux would behave.
19:42
But yeah, it could be useful, definitely. We found one case where BootP was not supported, and I didn't try it myself, but Mike tried to boot an Android image on it. But Android is too cool for BootP, so.
20:06
VMD has users and groups. It's also a very interesting thing. Is it useful? I don't know. I use it. Some people experiment with it, and over time, we will see how much sense it makes.
20:20
There are definitely a few parts missing right now to make it even better. But you can pre-configure a VM, which doesn't work from the command line, and set an owner. So it's either a user or a group, and then this owner can start and stop the VM or attach to the console.
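A pre-configured VM with an owner, as described here, would look roughly like this in vm.conf. The VM name, user, and paths are made up for illustration; check vm.conf(5) on your release for the exact grammar.

```conf
# /etc/vm.conf: pre-configure a VM and delegate it to a user
vm "devbox" {
	owner jdoe
	memory 512M
	disk "/home/jdoe/devbox.img"
}
```

The user jdoe can then start, stop, and attach to the console of this VM with VMCTL without needing root.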
20:42
So this makes sense if you have, I don't know, you want to give someone access to the console on your machine without giving root access, right? So you pre-configure the VM, and then these commands are allowed. Usually, a user cannot access your VM's console,
21:03
by default. The missing part here is basically a way to provide a default disk image, let's say, like a system-wide disk image,
21:20
and then allow any user to run it, and then maybe do some kind of copy-on-write that you have this master disk image, and the user just starts and then has changes on this, or something like this. So the disk image is the part that is the limitation here. BIOS, actually, in the abstract,
21:41
that is on the BSDCan website, I still wrote, VMD doesn't support a BIOS, and we only support booting OpenBSD. This has been changed. We do support a BIOS now. That means that we can boot all the other operating systems, so you can run Linux on OpenBSD, for example.
22:03
There's no problem, and there are different approaches to handle this, as there's no BSD-licensed BIOS out there. Our solution was quite practical. We considered it as firmware, so we created a port, we compile the SeaBIOS and SGABIOS combination,
22:25
and it creates a binary blob, and then we already have this mechanism in OpenBSD of /etc/firmware that basically means some device drivers need a firmware that cannot be distributed with the base system,
22:40
but when you install OpenBSD, it runs fw_update, and fw_update detects which devices are there that need a firmware and fetches them as a package from a separate FTP server. So we ship the BIOS as a firmware, actually,
23:02
and it's a single file. In practice, you will always have this. If you install OpenBSD on an internet-connected machine, it's installing this automatically. If you have a machine that is somehow not connected to the internet, you have to make sure that you get the VMM firmware package and install it,
23:23
and then it boots, and the BIOS does everything that we need to run the standard boot loaders and other operating systems. SGABIOS is only useful for the boot loader. Yeah, it emulates a VGA adapter on the serial console.
23:46
So, as soon as the kernel takes over and you're not doing BIOS calls anymore, this stops working. VMCTL. So VMCTL is the control tool where you can start virtual machines,
24:03
stop them, list the status, and there are several different subcommands, in difference to the other ctl tools in OpenBSD. This is not in the Cisco CLI style. My first version was, and some people,
24:21
and I think Theo as well, complained, and I changed it to a simple getopt style. But I will insist that the getopt command line options here remain simple, and I don't want long options, right?
24:41
I think there's even a man page for getsubopt, or long options. Sub-options are really nice. You see it in QEMU here. I think bhyve has it as well, where you have a comma-separated list, like you're working with CSV, so something you usually do in Excel
25:01
and not in a configuration. So I'm resisting adding any more complicated settings, but okay, a simple flag, and then something here. 512 megs of memory, three interfaces, the disk image, and -c is like connect to the console when it starts up.
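The command described above would look roughly like this. Flag names follow vmctl(8); the VM name and image path are placeholders, and the exact argument order has changed between releases, so check the man page on your system.

```console
# 512M of memory, 3 interfaces, one disk image,
# and -c to attach to the console at startup
vmctl start "myvm" -c -m 512M -i 3 -d disk.img
```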
25:25
The status output, of course, is going to change. We already decided, and I just didn't have time yet. For most users, the PID of the VM process is not that important, so we'll change it a little bit,
25:43
and in a second step, when we do VMCTL status myvm, you will get some more information, maybe like its assigned IP address and all that. This is the current status. Like this, but it will be improved before the next release.
26:04
Here, you can attach to the console of a running machine. The VMCTL create command just creates a sparse disk image at the moment. We're thinking about renaming it from create to something else because people say create
26:25
means create a new virtual machine, but it actually just creates a sparse disk image. So it's something that you could theoretically do with dd and some options. That's what VMCTL create does. And some other commands: load, reload, reset, log.
26:45
Send and receive is not in VMD yet, but it's work in progress. It's already working. I tested it. It has some bugs, but it's working. So we will get migration and possibly even live migration in VMD.
27:03
It has been designed to support this from the beginning. And what you can do at the moment is like these commands here. You send the state of the running VM and just dump it to standard out so you can pipe it into a file.
27:22
Then the VM is stopped, and then later you can receive. I mean, you just pipe in the file and the VM will continue running. So whatever you did before, it's basically like hibernating. The nice thing is you can also,
27:40
if you have two identical VMD hosts at the moment, you can pipe it over an SSH connection. It will send over the VM. Currently, it's not a live migration, so it stops the VM while it's copying, and then once it's done, the VM starts on the other side.
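The send and receive workflow described here would look roughly like the following. These subcommands were in-progress patches at the time of the talk, not a released interface, so treat the names as a sketch.

```console
# Dump the stopped VM's state to a file...
vmctl send myvm > myvm.state
# ...and resume it later, or on another host:
vmctl receive myvm < myvm.state

# Or pipe it directly over SSH to a second, identical VMD host:
vmctl send myvm | ssh host2 vmctl receive myvm
```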
28:02
This is something very useful, actually. Of course, there are some more things to consider. For example, how do we handle the effect when there are two incompatible CPUs on the two hosts? Another thing is like the live migration
28:22
is just an algorithm, the way how you send the memory to the other side. Currently, we just halt the VM, dump the memory, and then start once it's done, but you can copy the pages while it's running and then have an algorithm to basically narrow it down,
28:41
and when you reach a certain threshold, you flip over. So this is something that you have in vMotion, for example, where it just feels like you're using it uninterrupted, maybe there's a short blip, and then it's running on the other side,
29:01
and this is just an algorithm. It's nothing much more complicated than the current dumping. VMMCI is something that I wrote.
29:20
At the moment, since we don't have ACPI support, we don't have a way to shut down VMs gracefully. So all we can do is like turn off the power, basically, and when you didn't do a shutdown on the VM,
29:42
then you have a problem. So that was the main reason that VMMCI also allows us to add some other features that you find in VMware tools, for example, things like that. So it's a different protocol, but the concept is the same.
30:01
So it adds a communication channel between the host and the guest, and then the host can request, oh, please shut down gracefully now, and then the VM starts the shutdown scripts and everything. Or it provides a time counter, for example,
30:20
that's all at the moment, but we could add a few more things that you find in Xen, Hyper-V, and VMware, for example. Hyper-V also has a heartbeat. I'm not sure if this is useful, but things like that. So that's why we added this,
30:41
but it's only available for OpenBSD guests at the moment. So if somebody wants to use this on Linux to improve the shutdown behavior on Linux, have fun and implement a Linux driver for it. Other stuff, metadata is something
31:03
that is not in OpenBSD, I put it on GitHub because I was deploying virtual machines to AWS. So as a story, we did all the work
31:21
to make OpenBSD run in Amazon, right? And in AWS or in the cloud, you have this auto-provisioning, and they don't use auto-install. They have pre-installed VMs, but when they start up this pre-installed default VM image,
31:40
you somehow have to give it your SSH public key, for example. So once it's started, you want to log in. And user data is a way to pass it some configuration. Ubuntu, for example, uses cloud-init, and cloud-init has kind of a YAML configuration file, and then you can configure the system with it.
32:02
And this is kind of a standard. It's not just in AWS, it's also in OpenStack and CloudStack and all these stacks that are there. Only Azure works a little bit differently. So I said, okay, I want to test the same images
32:22
that I deployed to AWS. I want to test them with VMM. And for that, I need something that provides this cloud-init functionality. And cloud-init itself, or this, we call it ec2-init, specifically: the VM starts up and makes an HTTP request to a pre-defined IP address
32:43
in the 169.254 range. And then it simply fetches 169.254-something, the IP address, slash latest, slash meta-data, slash the OpenSSH key. And then the web server returns the OpenSSH key.
33:01
So this is a defined key-value set based on a URL on a fixed server. And of course, it's a bit tricky because the server, when it gets a request, it has to know the related VM. So metadata is written in C with kcgi.
33:24
And when it gets a request, it gets the IP address and then looks up the VM by doing some ioctls and so on on the host machine. And then it has a directory where it just finds a VM
33:41
by name and its values, and it returns them. So with this metadata server, I can run these cloud images from Ubuntu, for example, and our own stuff as well. Yeah, we will probably make a package for it. As I said, it depends on kcgi from Kristaps.
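From inside a guest, the fetch described above looks roughly like this. The address and path follow the EC2 metadata convention that cloud-init uses; whether this metadata daemon serves this exact path may vary, so treat it as an illustration.

```console
# Inside the guest: fetch the SSH key from the fixed link-local
# metadata address, the same way cloud-init does on AWS or OpenStack
ftp -o - http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
```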
34:01
So it's something that would live in ports. Actually, when I started working on VMD and this, people kept on mentioning Firefox, Firefox and Firefox,
34:22
because we did some efforts in OpenBSD to maintain Firefox as a port, to make it more secure, to handle like W^X and all this, but Firefox keeps on being like the risk, right? So of course, running Firefox in a VM
34:42
is something that people like. I experimented a little bit with it. So I installed like two VMs. You could probably also just use one; I use two. One is the Firefox VM, one is the firewall VM. It's just a separate, tiny OpenBSD with pf and all that.
35:01
You could also do it on the host directly. And then when I started, while I experimented, so either using X forwarding or VNC. VNC is a bit faster, but the VNC server is scary. x11vnc, right? So there's no clean solution for that yet,
35:21
but it works actually. So we see there's more experimenting happening, and if you have ideas, just try it. But we will think about this a little bit further. Now you can run Docker on OpenBSD.
35:43
And it sounds funny, but somebody really did it and wrote an article about this. It's somehow, there's more in between. Isn't it the same way how Docker ported Docker
36:02
to the Mac, to OS X? They used a little shim, which is like a minimal Linux, and this OCaml kernel, what's the name? Oh, I don't know. So, and of course instead of using like Alpine Linux,
36:23
you could probably also use LinuxKit, which is like a very limited Linux layer, and run it and run Docker. I don't know why you want to do that, but people have a strong need. At least now you can tell your management, yes, of course, Docker runs on OpenBSD, and then you get your OpenBSD boxes approved at work.
36:42
So, yeah, quite frankly,
37:02
you can run anything that is cool and hot at the moment on OpenBSD now. Yeah, we don't do that. Okay, not cool and hot, right? Fair.
37:21
To-do. I didn't fill in the list because, as you see, I posted an abstract and most of what I wrote in the abstract is obsolete, because we have a BIOS, we can run Linux and so on, but there's definitely a long to-do list. So my most important to-do at the moment is to make it more stable. There are certain conditions where VMCTL start and stop
37:44
don't work and your VM doesn't terminate gracefully or something like that. So for me, the features are fine, but it has to be stable and, yeah,
38:02
awesome OpenBSD quality. And there's a bit more work, but this evolves quickly, actually. And there's lots of to-do on the VMM side. Mike is making changes.
38:21
It's hard to follow them. So we have working AMD support, for example; now he just fixed FreeBSD support. SMP is a big part for me. So I have a few, let's say, semi-production servers
38:44
that run on virtual machines, but the host is not OpenBSD. And as I said, it's a pain and I want to replace them as soon as possible, but I need multiple CPUs to replace them. The performance is something that is also being worked on;
39:02
the device layer of the networking and the block interfaces is being reimplemented in a way that it runs asynchronously and so on. So it will be much faster, hopefully. There are many other things. And we're working on getting the send and receive patches ready and in.
39:23
As I mentioned, this is a group of students and Mike Larkin is actually their professor and he's advising them. Where would you go?
39:41
There's no priority, no. No, no. I think zero, actually. But since my challenge after a few years or so, but it's a monolith that we don't really need this now.
40:08
You yourself said you needed like a USB key for the Windows software for your whole system, but I guess without the support, that's a little bit difficult. But what's the use case provided by you for the USB passthrough? Yes, we, yeah, that's a horrible thing,
40:23
but it's a German accounting software that you have to use as a company that runs only on Windows with this USB dongle. But anyway, we're not planning to support Windows
40:40
because Windows needs more devices. It needs a graphics emulation and so on. But here the trick is that you have the kernel interface, VMM, and that's the /dev/vmm interface.
41:00
And there's more, actually people started and stopped, nobody really finished it, but it's possible and that's what we want to do. So we make it possible to run QEMU instead of VMD. So if you don't care about like VM escapes or security or the configuration and all that,
41:23
but you need Windows, you can run QEMU or maybe whatever, like bhyve or something like that. It doesn't really matter as long as somebody makes it work. We don't really care because we will not put this into the base system, but the idea is, for example, that you can do a package of QEMU, it supports VMM,
41:41
and then you just run it instead of VMD. That's true, that's true. So if you need these fancy devices, then you run QEMU. Some people still use QEMU on OpenBSD, for example, because it provides more ways to test certain things.
42:03
I'm not talking about the ARM emulation or things like that, but the real QEMU has more options. So QEMU will still be used, and we're not aiming to be feature-compatible with QEMU. That's not our goal. The goal is basically that you run VMs, OpenBSD VMs,
42:24
maybe Linux VMs; you run some servers, or you do some interesting things on your local system, say a Firefox VM. And we're using it to experiment with certain features.
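A minimal sketch of what such a local VM looks like in vm.conf(5) — the switch, VM name, sizes, and paths here are illustrative examples, not taken from the talk:

```
# /etc/vm.conf -- minimal example; names, sizes and paths are illustrative
switch "local" {
	interface bridge0
}

vm "openbsd-test" {
	memory 1G
	boot "/bsd.rd"
	disk "/home/vm/openbsd-test.qcow2"
	interface { switch "local" }
}
```

With this in place, `vmctl start openbsd-test` (or starting vmd at boot) brings the VM up with a network interface attached to the local bridge.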
42:40
In the last weeks, there were some crazy ideas about experimenting with CPU extensions and so on, to protect the memory. So there are many things that we can do in vmd. And yeah, somebody needs to finish the QEMU port. I'm not sure if anyone is actively working on it at the moment, but it was started before.
43:03
More questions? Have you run any really exotic operating systems, like OS/2 or...? Plan 9. Oh, Plan 9. I gave up, and I didn't know how to log in, but it worked well.
43:28
It's just very slow. I don't know how to make it fast. I rarely use it. But if you're on... There's a little change. Do what? A little change compared to natively. In other words, it's slower.
43:40
Yeah, yeah. But it's even slower. Now, as developers, we use whatever, and when something doesn't work, I can say: oh great, I don't have to upgrade my main machine to -current, I can just do a VM. But is that helpful, or worse?
44:02
In the sense of doing bug reports, yeah. So, do you prefer bare metal, versus possibly introducing vmm bugs in the middle of that reporting? Should I add two more?
44:25
Yes. Well, it's hard to answer. Let's just say, maybe I'm answering something different. But in general, okay, we all agree: virtual machines carry a risk, and bare metal is the safest thing to do.
44:41
But on the other hand, on bare metal these days you're most likely running on some kind of proprietary hypervisor anyway, right? So a VM gives us, and that's the amazing thing I realized recently after these discussions that I heard from Theo and so on,
45:02
the ability to experiment with things that bare metal wouldn't allow. For example, as the host, we can add certain hardware restrictions, or emulate them, and even protect the VMs in a way that real hardware could not. So, I never really thought about this before,
45:22
but it gives us the opportunity to even strengthen the security of the virtual machines. And for testing VMs: what if you need to restart your host? I hope that send/receive works soon, so that you can store your state
45:43
and then switch the host and upgrade, for example, or copy the VMs over and then continue. You would probably need to use iSCSI or NFS to store the disk images in shared storage. And we need to make it more stable, actually.
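The send/receive workflow described here later shipped as vmctl subcommands; as a rough sketch of migrating a paused VM between hosts, assuming the disk image lives on shared NFS or iSCSI storage (the VM name and file names are examples — check vmctl(8) on your release):

```
# On the old host: serialize the VM's state to a file
vmctl send myvm > myvm.state

# Copy myvm.state to the new host. The disk image sits on shared
# storage, so only the state file needs to travel. Then, on the
# new host, restore the VM under the same name:
vmctl receive myvm < myvm.state
```

This is exactly the host-upgrade scenario from the talk: pause and dump the VM, reboot or replace the host, then resume the VM where it left off.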
46:02
One more thing: vmm(4) will also support nested operation. Mike had it running already up to, I don't know, I think he tested four levels of vmm inside vmm inside vmm, and so on. But it's currently not enabled.
46:22
I think there was a bug that shows up after the third or fourth level. Any other questions?
46:57
Yeah. Is the code insane and all that?
47:01
Maybe, yeah. Who's using vagrant?
47:33
vmm or vmd as a vagrant provider, so you can use vagrant on OpenBSD with vmd. That's why we're not using vagrant,
47:41
even in the next report. See you next year. Too soon. Okay, thanks. I would say I use Packer, but not vagrant. I thought it was easier just to wire in a whole bunch of TPM shell scripts. Yeah, but that's not the point here. See, I'm a purist.
48:01
I prefer it if we use the native tools: vmctl, vm.conf and all that. But as Philipp convinced me, there's definitely a point for vagrant. For example, if you want to have one VM that works on multiple platforms and all that. And so we... Questioning how it often works? Yes, yes. One way to do it is as simple as this metadata thing,
48:23
but there are much more sophisticated things, actually. Yeah. I tried to convince Amazon: why don't you support nested virtualization? Then we could test vmm running in AWS. But maybe someday we can do that.
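As a sketch of what driving vmd through vagrant could look like, a Vagrantfile for such a provider might resemble the following — the provider name `:vmm` and the box name are pure assumptions here, not confirmed by the talk; check the actual plugin's documentation:

```ruby
# Hypothetical Vagrantfile -- provider and box names are assumed
Vagrant.configure("2") do |config|
  config.vm.box = "openbsd/current"       # assumed box name
  config.vm.provider :vmm do |vmm|        # assumed provider name
    vmm.memory = "1G"
  end
end
```

The appeal, as mentioned above, is portability: the same Vagrantfile can describe one VM that runs on multiple platforms and providers.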
48:47
Okay, any other questions? Then I think we're out of time. Almost, I think.
49:02
One more thing: I know some of you are happy to support us in the evening when we go to the bar, but please also donate to the project.