"Enlightening" KVM
Formal metadata
Title: "Enlightening" KVM
Series title: FOSDEM 2019
Number of parts: 561
License: CC Attribution 2.0 Belgium: You may use, modify, copy, distribute and make the work or its content publicly accessible, in unchanged or changed form, for any legal purpose, as long as you credit the author/rights holder in the manner specified by them.
Identifiers: 10.5446/44117 (DOI)
FOSDEM 2019, part 211 of 561
Transcript: English (automatically generated)
00:09
I hope it is. Okay. So, hello everyone. Let me welcome you to FOSDEM and to our virtualization and IaaS devroom,
00:21
and I'm the first speaker of the day, and in my presentation, I'm going to talk about how you can run Windows guests on KVM efficiently. So, in your infrastructure, you're running virtual machines, and some of these virtual machines are Linux VMs, some of them are probably Windows VMs.
00:43
So, does it make any difference from the virtualization stack's point of view which operating system you are running in your guests? The answer is: well, it depends. So, in theory, it doesn't, because with QEMU and KVM, we are actually trying to emulate some existing physical hardware
01:01
by building a virtual machine, right? But then, if you boot your Linux guests on KVM, and take a look in the log, you will see something like this, right? You will realize that your guest knows pretty much everything about the fact that it's running virtualized.
01:21
It knows that it's running on KVM, and it's actually using some features. So, why do we do that? Well, the thing is that when we are trying to emulate physical hardware in software, some interfaces were not designed for that, and it can actually be slow in some cases.
01:42
So, how do we solve these problems usually? Well, if the hardware interface we need to emulate is slow, and we cannot make it fast, we come up with our own solution, and we invent a so-called paravirtualized interface, which is fast and which is software friendly, right? But then, when we have our own interface,
02:02
we have to put support for this interface in the guest operating system, right? Because it doesn't know anything about it. But the question is, what do we do about proprietary operating systems like Windows? How do we put these interfaces there, right? We don't have the source code.
02:20
Well, we can probably try writing drivers, and that's actually what we do, for example, with VirtIO devices, right? But the thing is that not everything is a device from Windows' point of view, and some very core features of it, like interrupt handling or the clock source, are actually not devices, not drivers.
02:41
They're in the core of the operating system. So, you may have a hard time writing these drivers for your proprietary operating system. Moreover, there are multiple different Windows versions, and you basically have to check that this solution works for every one of these. So, what else can we do? Well, we know that
03:01
KVM is not the only hypervisor out there. There are other proprietary hypervisors, and the thing is that these hypervisors have to solve the exact same issues. Because, well, for them, these hardware interfaces are also slow, and they also have to come up with their own interface. So, in Windows world,
03:21
this hypervisor is called Hyper-V, and we do emulate Hyper-V in both KVM and QEMU, and there are basically two different types of emulation there. We emulate these core features, which in the Hyper-V world are called enlightenments,
03:43
and that's why my talk is called "Enlightening" KVM. I'm going to talk about the first part. Device drivers are something which would make it possible to replace, for example, VirtIO. So, if we write VM bus device drivers, then we won't need VirtIO drivers for Windows.
04:01
And there is such an effort, and there is a company currently working on it, but it's not currently upstream, and I'm not going to talk much about it in my presentation. So, Hyper-V features which we emulate. Where can you read some documentation about them? There is none in QEMU and KVM for you as a user,
04:25
and in libvirt, you get this. That's basically it. Probably not much. You may or may not understand what these features are, and if you want to know more,
04:40
you can go and basically read the specification. The Hyper-V folks were generous enough to publish their spec on the Microsoft website, or you can listen to me now. So, what features do we have in KVM, and what are they needed for?
05:00
So, I'll be showing you both the QEMU syntax and the libvirt syntax, how you can enable the feature, and I'll tell you a few words about what each feature does. So, let's start with this one. It's called relaxed timing. It's enabled by hv_relaxed in QEMU, and by the relaxed element under the hyperv features in libvirt, and mostly these Hyper-V enlightenments in libvirt
05:22
are enabled like that in features, but there are some notable exceptions; I will show them to you. And this feature basically tells your Windows that it's running virtualized, so it should disable all hard watchdogs on different events, because different operations can take different time
05:43
when you're running virtualized, right? So, if you put some hard watchdog there, your Windows can crash. Actually, modern Windows versions don't require this; they will detect the hypervisor CPUID flag and enable it automatically, but for older Windows versions, it makes sense to enable it.
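As an illustration, enabling a few of these enlightenments on the QEMU command line might look like this. This is a sketch only: the disk path and sizing are invented, and the exact flag spelling varies between QEMU versions (newer releases also accept hyphenated forms such as hv-relaxed).

```shell
# Hypothetical invocation: enable a handful of Hyper-V enlightenments
# for a Windows guest. Adjust paths, memory and flags to your setup.
qemu-system-x86_64 -enable-kvm -m 4096 \
  -cpu host,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_time \
  -drive file=windows.qcow2,if=virtio
```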
06:03
Paravirtualized APIC. It's enabled by hv_vapic, and it basically provides a shared page for each CPU to assist with dealing with the APIC, and the notable feature here is paravirtualized end-of-interrupt.
06:24
So, here is a good example of when emulating a hardware interface is slow. When you have an interrupt, right, a level-triggered interrupt pending, your hypervisor will stop your guest, inject the interrupt there, and resume your guest. Your guest will notice the interrupt
06:41
and probably will start doing something about it, launching an interrupt service routine, but when it's done, it needs to somehow signal the fact that it's done with the interrupt and it's ready to receive the next one, right? And in hardware, like in a physical APIC, you basically write to a register, and the operation is pretty fast, right?
07:02
So, you write to the register, it resets a bit, and then you can receive a next interrupt, but if you do it under the Hypervisor, you will get a VM exit, right? So, your guest will be stopped, you will drop in the Hypervisor, and Hypervisor will basically mark that the interrupt is not pending anymore and resume your guest.
07:21
It takes time. So, so-called PV end of interrupt was invented. It's basically like the guest is just clearing one bit in the shared page, and the Hypervisor will periodically look at this bit and when it's not pending anymore, we are ready to inject next interrupt.
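The shared-page protocol just described can be sketched as a toy model. All names below are invented for illustration; the real protocol lives in the guest kernel and in KVM, and uses a bit in a per-vCPU shared page instead of a write to the APIC EOI register.

```python
# Toy model of paravirtualized end-of-interrupt (PV EOI).

class SharedPage:
    def __init__(self):
        self.eoi_pending = 0   # set by hypervisor on injection, cleared by guest

def inject_interrupt(page):
    """Hypervisor side: inject only if the previous interrupt was acknowledged."""
    if page.eoi_pending:
        return False           # guest is still handling the previous one
    page.eoi_pending = 1
    return True

def guest_isr(page):
    """Guest side: handle the interrupt, then signal EOI with a plain
    memory write, with no VM exit (unlike a write to the hardware APIC)."""
    page.eoi_pending = 0

page = SharedPage()
delivered = 0
for _ in range(3):
    if inject_interrupt(page):
        delivered += 1
        guest_isr(page)

print(delivered)  # 3: each EOI cost only a memory write
```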
07:41
We don't need to do it synchronously most of the time. And there is a side effect that this feature is also required for the enlightened VMCS feature; I will tell you about that feature later. Paravirtualized spinlocks, enabled by hv_spinlocks, and you can tell it how many spin attempts to do before giving up.
08:01
to you how many attempts to do before giving up. The thing is, there is a core concept of a SpinLock, right? When two CPUs are trying to get the same resource, they may do this, like, cheapest possible locking. It's basically like checking a variable in memory and seeing if somebody else is doing something with the shared resource,
08:22
and you set, like, basically one there, you do something, you reset it, right? The other CPU looks at it. Oh, it's busy by someone else is doing the job, and it just spins. It doesn't do anything. It constantly checks the state of this indicator to see if it can do something. In virtualized world, it can take significantly longer because your virtual CPU,
08:42
which actually took the resource, may not be running at this moment, right? It can happen that it took the resource, and then it was offloaded, right? And some other guest is running there. So your CPU, which is trying to get the lock, will have to wait for quite some time. Instead, we can basically give up
09:00
and give a chance for other CPUs or other guests on the same physical CPUs to run, right? And that's what the feature does. We also have a counterpart in KVM, but Windows cannot use this KVM feature, so we can enable this Hyper-V feature. Next one is a simple one.
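The give-up-and-yield behaviour of the spinlock enlightenment described a moment ago can be sketched like this. This is a toy model: the threshold plays the role of the value passed as hv_spinlocks, and the notification callback stands in for the real long-spin notification to the hypervisor.

```python
# Toy model: spin a bounded number of times, then tell the hypervisor
# we are spinning for too long so it can run the lock holder instead.

SPIN_RETRIES = 4  # real guests use the value given as hv_spinlocks=..., e.g. 0x1fff

def acquire(lock, notify_long_spin):
    """Spin up to SPIN_RETRIES times; past that, notify the hypervisor."""
    notifications = 0
    spins = 0
    while lock["held"]:
        spins += 1
        if spins >= SPIN_RETRIES:
            notify_long_spin()   # hypervisor can now schedule the lock holder
            notifications += 1
            spins = 0
    lock["held"] = True
    return notifications

lock = {"held": True}
def hypervisor_runs_lock_holder():
    # In this toy model, the descheduled holder finally runs and unlocks.
    lock["held"] = False

n = acquire(lock, hypervisor_runs_lock_holder)
print(n)  # 1: one long-spin notification was enough
```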
09:20
It's the VP index, the Virtual Processor index, enabled by hv_vpindex. It basically creates a virtual model-specific register where each CPU can read its own number. And in QEMU, they almost always match the order in which they were created: CPU one will get one, CPU two will get two. But the thing is that we need this model-specific register
09:40
for some features I'm going to tell you about. And if Windows doesn't see this feature, it won't use PV TLB flush and PV IPIs, for example, because in these hypercalls, CPUs are actually specified in these VP index terms. Runtime information, right?
10:00
So you have a virtual CPU, and sometimes it runs, sometimes it doesn't, and some other virtual CPU or the host is doing something on this physical CPU. And if you want to do some fair scheduling, for example, you may want to give your tasks same slices of time to run. But the thing is you think that your task is running,
10:22
but actually it's not. And something else is running there. And how can you know that, right? So there is a protocol, basically, again, like a shared, like a model-specific register, where Windows can read the information for how long the vCPU was running
10:40
and for how long something else was running there. But the thing is how it's done in Hyper-V, it's done through a model-specific register. It's not a shared memory page. So reading it will trap in the hypervisor. So it's kind of slow. And Windows, as far as I know, doesn't do that for scheduling by default, because it would be really slow to switch between tasks.
11:01
And I'm not exactly sure when it actually does use the feature, but maybe sometimes it does. Crash information, that's quite interesting. So your Windows crashes. Everybody knows that, right? So you will get the blue screen of death. But the thing is that not all of them are the same.
11:21
So you may want to know, especially if you're running VMs on a larger scale, you may want to know if you're actually seeing same crashes on different hosts, or these are different crashes, or how many different crashes do you have? So you can analyze them. And Windows can provide some information, basically like five registers,
11:42
I think it's five, on crash. And you can get this information. If you enable the feature, then in libbert log, if you're running through libbert, in QEMU you can get this information too, but I think you need to do a QMP common, so it's not easy to get this information from libbert.
12:01
You will get it by default in the log, I think. Windows will tell you basically where it crashed and some parameters like registers. So by comparing these in the logs, you can see like if you're seeing same crashes or different crashes, it can come handy in some situations.
12:21
Clock source. It's actually one of the most important enlightenments. And the thing is that in some workloads, we need to get timestamps pretty frequently. For example, we are trying to timestamp records in the database or network packets.
12:42
So your operating system will constantly be reading from the clock source it has, right? But the thing is, what is the clock source it's trying to access? And on physical hardware, it's usually, nowadays it's TSC, it's a register in your CPU, which is usually good, but in virtualized environment, you cannot do that
13:03
because your VM can actually, for example, migrate, and there's gonna be a jump in the TSC value, and the jump can actually be backwards, so not nice. And virtual machines came up with this concept of a paravirtualized clock source. And in the KVM world, it's called kvmclock,
13:21
but Windows is not gonna use your kvmclock by itself, right? So we emulate the Hyper-V clock, which is basically the same concept. It's a shared memory page with two values. And to get the timestamp, it reads the TSC register from the processor, multiplies by a scale and adds an offset. If your VM migrates,
13:41
the hypervisor will update these values and the reading will stay persistent, so it won't jump anywhere. So it's quite useful, and it speeds up Windows a lot, so if I have some time, I will show you some benchmarks at the very end of the talk. Synthetic interrupt controller. That's the core component for building VM bus.
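The Hyper-V reference clock read described above (a shared page with a scale and an offset) boils down to one formula from the Hyper-V specification: reference_time = ((tsc * scale) >> 64) + offset, where the scale is a 64.64 fixed-point multiplier. A minimal sketch:

```python
# Sketch of a Hyper-V reference-TSC read: two values from a shared page
# (scale, offset) turn the raw TSC into a stable reference time.
# On migration the hypervisor rewrites both so readings never jump back.

def read_reference_time(tsc, scale, offset):
    # scale is 64.64 fixed point: multiply, then shift the fraction away
    return ((tsc * scale) >> 64) + offset

SCALE_HALF = 1 << 63          # 0.5 in 64.64 fixed point: reference = TSC / 2
print(read_reference_time(1000, SCALE_HALF, 0))    # 500
print(read_reference_time(1000, SCALE_HALF, 42))   # 542
```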
14:04
VM bus is the key component of how you can create these PV devices, which I'm not gonna talk about, but that's how you create PV devices in Hyper-V. So it allows you basically to,
14:21
it's something like a communication protocol between the guest and the host. You can basically post messages and signal events. And it's not interesting by itself unless you have some VM bus devices which are not yet implemented, but this enlightenment is required for Windows to use synthetic timers.
14:42
And synthetic timers, so synthetic timers is something like an alarm clock, right? You want to get an event in like one second, say, right, so you set a timer, you get an event. And Windows does this pretty frequently.
15:01
So again, in hardware world, you can use something like TSC deadline timer now, right? So you set next TSC value and you will get an interrupt when it happens. It's gonna be quite slow because you will have to program this every time there is an event.
15:21
And again, it means that you will be exited to the hypervisor for each event. You can set a periodic timer with this enlightenment. And actually there was an update of Windows 10 and Windows 2016 last year when they changed the frequency of basically setting up these timers. And there was like a huge performance regression
15:43
for Windows guests under KVM. Users were seeing their guests constantly spinning, like consuming 30% of the CPU, even when they're idle. You enable this and this goes away because Windows sets this timer once and gets this event when it needs it without any hassle.
16:02
TLB shootdown. Again, as you know, when you map something in memory, you may want to flush the TLB, which is like a fast translation cache from virtual to physical. And in the x86 world, if you wanna flush this buffer on other CPUs, you send IPIs there.
16:22
So basically you send interrupts and you wait for them to perform the shootdown. In the virtualized world, it may happen that these virtual CPUs you want to flush are not actually running. So it's kind of pointless to flush the buffer there in the first place. And second, you will spend quite some time waiting for this to happen.
16:43
So they came up with this concept of a paravirtualized shootdown. So you tell the hypervisor to do the shootdown operation on your behalf. And the hypervisor actually knows which vCPUs it needs to flush and which are not running and don't require flushing. So this speeds up overcommitted environments significantly, when you have more virtual CPUs
17:02
than physical CPUs. A pretty similar concept with paravirtualized IPIs, but here we cannot just drop the IPI, because these inter-processor interrupts have to happen. The only thing is that we can send IPIs to, for example,
17:20
more than 64 CPUs at a time with this. And in hardware, you'll have to do a VM exit for every 64 CPUs you wanna send to. So it becomes cheaper. Yeah, there are also a couple of useless things you can do. Like, you can set a Hyper-V vendor ID.
17:42
Microsoft Windows doesn't care about what you put there. You can put 'KVM' or 'Microsoft Hv' there, it doesn't really matter. The other one is paravirtualized reset. So, another model-specific register, which allows your guest to reset itself. And the thing is that even genuine Hyper-V doesn't recommend using it.
18:01
So the feature is there, but for no particular reason at this moment. But maybe for some very old Windows guests, it was required. For modern guests, it's not required. So there are also a couple of features which are required if you are running nested guests. If you're running Hyper-V on KVM,
18:20
or if you're enabling some security features in Windows which actually enable Hyper-V underneath, there are such features there. And the first is if you wanna get a stable clock source, and I just told you how important it is to have a stable clock source. If you're running nested, you will need a couple of additional enlightenments. One of them tells your level one hypervisor
18:43
about your TSC and APIC frequencies. The other one tells it when they change, for example, when you migrate your level one guest with all its guests somewhere else. So it actually needs to know that the frequency changed. And that's how you do that. It's not currently fully supported in KVM, so actually it doesn't send these re-enlightenment events.
19:03
So if your CPU is modern enough and you have TSC scaling, it's not an issue. But if you're running on older CPUs, your clock may start ticking at the wrong frequency. It can happen, so we know about it. Enlightened VMCS, I was telling,
19:21
yeah, giving a talk about it last year. It's a pretty complex feature, but the thing is that to run virtualized guests, you're dealing with the so-called VMCS state on Intel, and you're using specific CPU instructions, which are, first, not very fast.
19:40
And second, I mean, if your level one guest is building this state for its level two guest, you don't know what it's actually doing there, because it runs on the CPU natively. So you basically have to read the whole state. There is a PV protocol for that, which speeds things up. So, we have more features in the works.
20:03
And this one is already on the mailing list, and that's why I put it on the slides. If you are running Hyper-V on KVM, it would also like to see synthetic timers there. But it cannot use synthetic timers in their current shape,
20:22
in the shape in which Windows uses them. So because it doesn't set up like full this infrastructure, the Hyper-V is like a very minimal hypervisor there. It wants like a simplified mode and a simplified mode is getting an interrupt instead of a VM bus message.
20:40
And for that, there is a timer direct enlightenment which is already implemented in KVM and which will land in QEMU shortly, I believe. So as I promised, some benchmarks so you understand how important these enlightenments are.
21:00
So this is the Hyper-V clock source. What we do in the test, we basically spin and we do clock_gettime, which is basically asking what's the time right now, right, in the operating system. So if you run it with and without hv_time, you will see a tremendous difference, because with hv_time, it's basically reading from memory.
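The shape of that micro-benchmark is roughly the following sketch. In a guest you would compare the reads-per-window number with and without hv_time; time.monotonic here merely stands in for whatever clock-source read the guest OS performs.

```python
# Rough shape of the clock-source micro-benchmark: spin reading the
# clock and count how many reads fit into a fixed window of time.
import time

def clock_reads_per_window(window_s=0.05):
    deadline = time.monotonic() + window_s
    reads = 0
    while time.monotonic() < deadline:
        reads += 1
    return reads

# With a fast (paravirtualized or bare-metal) clock source this number
# is large; with a clock source that traps on every read it collapses.
print(clock_reads_per_window() > 0)  # True
```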
21:20
So it's not very different from actually reading the TSC register from the processor on bare hardware. Without hv_time, it means a VM exit to the hypervisor every time. So the speed-up is great here. Enlightened VMCS. If you're running a nested guest
21:41
and you do some operation which actually traps into the hypervisor, and CPUID, as you know, gives you the CPU features you have, but it always needs to trap into the hypervisor. So you will see that with hv_evmcs, we achieve like a 10% difference here.
22:02
PV TLB shootdown. The test case is quite complex here, and this one is just part of it. But the thing is, we are doing mmap and munmap of some big file in chunks. And this operation is known to cause TLB flushes on other CPUs. And then what we do,
22:20
we are running the same test on the same host, but we are just adding more and more virtual CPUs to our guest. And as you can see, when the number of like virtual CPUs matches, there is almost no benefit in the feature. It's the same, right? As sending these IPIs and doing flush natively. But as we go over committed, like more and more CPUs we have,
22:42
with PV TLB flush, on the right, the number stays more or less the same, because we don't really need to flush these CPUs which are not running, and they cannot be running at the same time. But without PV TLB flush, you will see the slowdown of the same test case on the same physical host.
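The deferred-flush behaviour behind these numbers can be sketched as a toy model. All names are invented for illustration; the real logic sits in the hypervisor's handling of the Hyper-V flush hypercall.

```python
# Toy model of PV TLB flush: flush running vCPUs immediately, and only
# mark descheduled vCPUs so they get flushed when next scheduled in.

def pv_tlb_flush(vcpus):
    """Return how many immediate flush IPIs were needed."""
    ipis = 0
    for v in vcpus:
        if v["running"]:
            v["tlb_dirty"] = False   # flush now (would be an IPI plus a wait)
            ipis += 1
        else:
            v["needs_flush"] = True  # defer: flush before it runs again
    return ipis

def schedule_in(vcpu):
    # Hypervisor hook: honour any deferred flush before the vCPU runs.
    if vcpu.pop("needs_flush", False):
        vcpu["tlb_dirty"] = False
    vcpu["running"] = True

vcpus = [{"running": True, "tlb_dirty": True},
         {"running": False, "tlb_dirty": True}]
ipis = pv_tlb_flush(vcpus)
print(ipis)                    # 1: only the running vCPU needed an IPI
schedule_in(vcpus[1])          # the deferred flush happens here
print(vcpus[1]["tlb_dirty"])   # False
```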
23:02
So that was it from me. Thank you for listening. Any questions? Yes. Just regarding the features you mentioned on which versions we can expect to have them and to make use of them.
23:21
The question is on which versions we expect to see these features. I'm guessing that you're asking about both like KVM versions and QEMU versions. Right. So everything I was telling you about today is already upstream in KVM, including the synthetic timer's direct mode.
23:42
In QEMU, I don't actually remember off the top of my head, but I think that everything except for PV TLB flush, PV IPI, and enlightened VMCS is there in like 2.12 or something.
24:02
In 3.0 we were adding PV TLB flush and enlightened VMCS, something like that. So if you grab current QEMU, it has everything but the synthetic timers' direct mode; the RFC is on the mailing list. I'm also trying to come up with a simplification
24:22
which would be called something like hv-all, which will enable all Hyper-V features for you. It's a little bit controversial, because the question is what happens when you migrate such a VM, right? Your other host may have different Hyper-V enlightenment support, like if you have different KVM versions. So the libvirt folks prefer to have
24:40
all these enlightenments listed there, so they prefer to keep them fine-grained. And they may not support it, but in QEMU it may actually come in handy for development, test cases, single-host usage and stuff like that. So expect to see this feature in the near future.
25:01
More questions? Yes, oh, so many. At the back you were the first to raise your hand so please go ahead.
25:23
Yeah, the question is why are these features not enabled by default, and what's the cost of enabling them for the guest operating system? So the cost is basically zero, with the notable exception of enlightened VMCS, because enlightened VMCS comes with a penalty.
25:41
For example, you will have your posted interrupts disabled, and for some workloads, when you have, for example, some physical hardware which is actually able to deliver posted interrupts, that's gonna be a slowdown. In other cases, when you don't have such hardware, it will be a speed-up. So this feature we cannot enable by default.
26:00
The rest, the cost is zero even if your guest operating system is not using them. You can enable them for a KVM guest and you won't notice anything. Why we don't enable them by default? Probably because of how the virtualization stack is designed and the most important thing there is migration, right?
26:20
So if you don't need these features but you enable them all anyway, later you cannot migrate this VM to some host which doesn't have this feature, because from the hypervisor's point of view, we don't know if the guest is using the feature or not, or we would have to come up with an interface: is the guest using this feature or not, can we disable it? We don't have this in either QEMU or KVM.
26:43
So yeah, thank you guys very much. We're out of time. So I will take your questions here in the corridor. Yes.