Unikraft: Unikernels Made Easy
Formal metadata
Title: Unikraft: Unikernels Made Easy
Number of parts: 561
License: CC Attribution 2.0 Belgium: You may use, modify, and reproduce the work or its content in unmodified or modified form for any legal purpose, and distribute and make it publicly available, provided you name the author/rights holder in the manner they specify.
Identifiers: 10.5446/44627 (DOI)
FOSDEM 2019, 467 / 561
Transcript: English (automatically generated)
00:05
So, Unikraft is actually an open source project that we started at NEC while we were experimenting and playing around with unikernels in the past. I'm from that research lab, which is in Heidelberg
00:21
based in Germany; I'm a senior researcher there and also the lead maintainer of that open source project. So you're probably aware that VMs have been around for a while and that they were really good at features
00:42
like consolidation, migration, and isolation. And then this hype of containers happened, and they became much more popular; people use them now and they're actually pretty great. But what you hear from these people is that containers are much easier to use, because I have
01:01
this Dockerfile and off I go; my containers are much smaller than your VMs, my VM is usually 10 gigabytes but my container is just a hundred megabytes; and they're also much faster to bring up: the VM takes minutes to boot but the container is up in a few seconds.
01:20
And then we usually say: wait, wait, wait, that may all be correct, but did you actually hear about unikernels? Because you should be aware that VMs still have some advantages that you don't necessarily get from container environments, and most important is strong isolation.
01:43
So let me give you a really short overview of what a unikernel is. Let's take this example: on the left side you see a cloud service deployed with virtual machines, and each service entity is one application running in its own isolation box, so in its
02:03
own virtual machine. You then have a standard operating system underneath, meaning most of the time it's a Linux kernel, and you run on a hypervisor; it might be KVM, Xen or VMware or whatever you deploy. And yeah, it's quite heavyweight; the stack is quite big.
02:22
So what do we do in unikernels? First of all, we want to keep the same service as before, so we take that application and keep it inside the isolation boundary, but we replace that general-purpose kernel and put a purpose-built kernel, tailored to that application, underneath.
02:42
So you see that service A has a different kernel underneath than application B. That whole thing is a monolithic binary that contains just a few kernel layers and the application and also only features that the application needs.
03:02
You don't need internal isolation anymore if you assume that you have just one application in one virtual machine anyway, so you no longer separate user space and kernel space, which also gives you further freedom in specializing the kernel towards your application, right?
03:23
You can tweak and tune it so that it performs that task quite well. So the gains that we found with our previous research doing that, mainly coming from a
03:42
network function virtualization space, are fast instantiation, destruction, and migration times on the order of tens of milliseconds, and a really low memory footprint of a few megabytes of RAM. You can achieve extremely high density of these services: we were able, on a single
04:01
hardware x86 server, to run 10,000 guests, and then we ran out of RAM. We could achieve high performance with just a single guest CPU, so we were easily able to cope with 10 or 40 gigabits of network throughput at the time; that number is already two, three years old, but even then I demonstrated
04:24
how fast this stuff can go. And last but not least, you also have a reduced attack surface, because of the argument that you have much fewer components in a unikernel, and next to it the strong isolation is provided by your hypervisor environment.
04:42
So what I want to do is give you just a few graphs to demonstrate what we found in our research work. Let's talk about the instantiation times. On the horizontal axis you see the number of simultaneously running instances in the system.
05:04
What we take here, because it's a baseline measurement, is an application that actually does nothing: it just comes up, says "okay, now I'm here", and then stays in the system. And on the vertical axis you see the instantiation time of the nth guest that was created.
05:24
And please note, this is a logarithmic scale. So we start with that application as a standard Linux process, and we get numbers like 0.7 to 10 milliseconds of creation time if we create around 1,000 on the machine.
05:41
If you put that into Docker, that is still good, but it already increases to 150 to 550 milliseconds. But if you now take a standard VM, let's say you just debootstrap Debian and then wait until the application is up, especially if there are lots of VMs in the system,
06:03
that time goes up to 82 seconds. And then let's take a unikernel doing the same thing, also running as a VM, and we are at around 0.63 to 1.4 seconds. And I want to add here as well that this measurement didn't modify anything in the
06:24
toolstack, which we also did in our research; if you're interested, I can point you to some papers where we replaced the toolstack with something written from scratch that is much more lightweight than standard Xen, and then we could boot that unikernel in even 30 milliseconds or less.
06:43
But so far, so good. On performance, let me show you a purpose-built HTTP web server; it's completely purpose-built, nothing ported. In terms of throughput, you have here a Debian virtual machine running nginx, so this is
07:02
the bar labelled D-N. You have a Debian virtual machine running lighttpd. The T here means Tinyx, which is actually a small compiled-down Linux kernel, then just running the process directly from the init ramdisk. They all get one CPU assigned, and the same amount of RAM; I think
07:26
it was 512 megabytes or something, and the file system is from RAM disk, if there's any. So in terms of throughput, you see here not that much of an advantage as soon as we go
07:41
further up in parallel connections, because the bottleneck here is the actual hardware NIC, since NICs have so many offload features these days, like segmentation offload and so forth. The interesting part is dealing with requests per second; the yellow bar is actually our unikernel, and we are six times faster with the same resources in that virtual
08:03
machine environment, just by being extremely purpose-built for that use case. So, application domains. You can actually apply unikernels in a wide range of areas and fields, and it's also
08:21
something we found: each use case makes use of a different subset of the properties that unikernels give you. So what do we have? We have fast migration and destroy times. There we would go into something like reactive NFV: imagine web servers that just pop up when a request comes into your server, or serverless, Amazon Lambda, and
08:45
you know these kinds of things. We have extremely high resource efficiency, also good for serverless if you consider you have high consolidation, lots of serverless tasks on a machine. IoT and mobile edge computing, where you go into an area where you have more resource-constrained
09:06
devices that host your services on the edge of the network. High performance, really important for network function virtualization, also mobile edge computing. And then mission critical, because we have a lower attack surface, potentially we would
09:24
have a cheaper verification, which is then getting interesting for even industrial IoT cases or even automotive. And you may ask here now, so this is all great, so we have similar speed and size as containers, or even less.
09:42
We then even have strong isolation and security, so why isn't everybody actually using it? That's a bit weird. And the problem is actually the development of these unikernels. Each of these highly optimized unikernels has until now been a manual task.
10:01
It really takes months or even longer. So let's say your target is to create a web server as a unikernel: you start developing here, then a driver there, and then you choose a hypervisor you want to support, and then you make use of some specialization features so that it runs quite well on that platform.
10:24
And then somebody else comes along: cool, but now I want to run it on KVM, and you're like, okay, I have to start from the beginning, because you have different drivers, a different virtualization environment and so forth; so it's thrown away in the end. Then imagine you come along with a different application, like a database server or
10:42
something, you start the whole process again and again, and that's in fact not something you actually want in a more production environment. So this is where we come along with Unikraft, where we actually want to provide a bit like
11:00
a unikernel build framework. The motivation we set ourselves: we want to support a really wide range of use cases, meaning also supporting a wide range of specialization techniques, or whatever you want to do in your unikernel environment; meaning also that we probably don't know what the end users, the unikernel developers, are actually using for optimizing
11:26
their use case. So we should be open to that and not dictate any design decisions. We want to simplify the building and optimizing process, and simplify porting of existing applications; luckily, most applications use something like, for instance, the POSIX API, so that's
11:46
a good shared anchor point. And then also, for a lot of these unikernel projects, to get rid of this throwaway problem, we want to have a more common and shared code base for all these unikernel
12:04
projects that they can just reuse. And with one compile, we also want to support different hypervisors and CPU architectures. And this is actually Unikraft. We use a quite well-known concept where
12:21
we say everything is a library, but in our case OS functionalities are libraries too, and we provide multiple implementations of schedulers, for instance, or memory allocators. Unikraft actually consists of two main components: one is the library pool and the other is the build tool itself.
12:45
So let me give you an overview. In the library pool, we actually distinguish three types of libraries. First, the main libraries, which are largely independent of any target execution environment.
13:00
This could be network stacks, this could be file systems, scheduler implementations, libc's, drivers, and so forth and so forth. Then we have the libraries that are specialized for a hypervisor execution environment like Xen or KVM or VMware or whatever you want.
13:24
And then architecture libraries, which are the last missing piece, implementing the CPU requirements for your unikernel. So when you create or build your application, you select and configure the
13:43
libraries you want to use in that case, type make, and the build system then creates multiple unikernel images for you, each fitted and specialized to the target platform you want to run it on. Also, what I need to add here is that the system is also built so that you can come
14:04
and replace libraries in that pool, or even add your own libraries to it as well. Say someone thinks lwIP is a nice network stack because it's small, but it doesn't give them all the TCP features they need for their application, so they'd rather go for something big: they would like to run a ported BSD network stack.
14:26
Then they would select a different network stack, or maybe they have a network stack they wrote themselves, so they would just plug in their own library instead. As an example system, imagine you have a Python script that you want to unikernelize.
14:44
You would just select a language environment, in this case MicroPython, coming from the embedded world, plus a network stack, a VFS and whatever else you need, and you get your unikernel running, executing your Python script.
15:04
The build tool is quite close to what you're used to from Linux. It's Kconfig-based and also has a lot of Makefile magic behind it. The workflow is actually: make menuconfig.
15:20
Then you see the different options, where you can start selecting the libraries that you need, configure them, and choose your target platforms. Afterwards, it's just a simple make and you have your images. To give you some numbers as a baseline example,
15:42
(well, I will actually show you a more real-world demo after the slides, with a small network stack that replies to HTTP requests): when we started the project, we could compile a small unikernel
16:04
that does nothing else than come up, say "here I am" and shut down afterwards, at 32.7 kilobytes. And you only had to assign 208 kilobytes of RAM to get that thing running, although
16:21
we had to modify the toolstack, because the hypervisor toolstack builders thought you would never have a guest with less than four megabytes, right? So even there, we had to remove these hard-coded limits. What is going to happen soon? We've been around for one year now; I actually announced
16:42
the start of the project at last year's FOSDEM. We now have an upcoming release in the coming week, which gets the new version tag 0.3. What you will already have in there is support for Xen, KVM, Linux, and even a somewhat experimental
17:05
bare-metal port for various architectures. As core functionality, we provide a cooperative scheduler library, although a preemptive scheduler library is currently in the works; then a binary-buddy managed heap allocator, although you can also replace that one
17:24
if you don't want that. We have pretty new in that release networking, which is where we introduced an API, which is pretty close, what you may know from Intel libdk. It's still interrupt-driven, but it provides you specialization features like you can
17:45
batch a number of packets and so forth. lwIP was added as a first TCP/IP stack to the system. We have a VFS implementation, so we can later move on to add file systems underneath that we can mount in there. And then the two libcs that we have for now: one is nolibc, which is one written within the
18:07
Unikraft ecosystem to support more minimalistic builds; but most applications use some fancier POSIX functions, so for that purpose we also have newlib available.
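The batch-oriented, DPDK-style receive API described a moment ago can be sketched in a few lines of C. All names here (struct netdev, netdev_rx_burst, and so on) are hypothetical illustrations of the idea, not Unikraft's actual interface; the "driver" is just an in-memory ring standing in for a real device queue:

```c
#include <stddef.h>

/* Hypothetical packet descriptor and device handle. */
struct pkt {
    const char *data;
    size_t len;
};

#define QUEUE_CAP 16
struct netdev {
    struct pkt ring[QUEUE_CAP];
    int head;
    int count;
};

/* Driver/test helper: a packet "arrives" by being placed on the ring. */
int netdev_queue_pkt(struct netdev *dev, struct pkt p)
{
    if (dev->count == QUEUE_CAP)
        return -1;
    dev->ring[(dev->head + dev->count) % QUEUE_CAP] = p;
    dev->count++;
    return 0;
}

/* Burst receive: dequeue up to 'max' packets in one call, returning how
 * many were actually pending. One API crossing per burst, not per packet. */
int netdev_rx_burst(struct netdev *dev, struct pkt *out, int max)
{
    int n = 0;
    while (n < max && dev->count > 0) {
        out[n++] = dev->ring[dev->head];
        dev->head = (dev->head + 1) % QUEUE_CAP;
        dev->count--;
    }
    return n;
}
```

The point of the burst call is that one call can return many packets, amortizing per-call overhead; a driver for a real virtio device would fill the ring from the device's descriptor queue instead.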
18:21
On the roadmap, we want to concentrate our effort on getting more complete ARM64 support. Actually, it's the ARM folks themselves providing us with the ARM64 architecture and platform
18:40
support for KVM. We have started internally playing around with more libraries like musl, libuv, zlib, OpenSSL and so forth, which are more for cloud environments, more standard components that you need in your software stack. Since we have a focus on serverless, we're also looking into language runtimes like JavaScript,
19:07
Python, Ruby, C++ and so forth. We want to come up with an OCI container target support so that you can even build a container image that you could just launch in your container environment instead of just
19:23
having a virtual machine image. File systems, we've come up with, first of all, with an in RAM file system, but then also with block drivers to support actually reading something from virtual disks or 9PFS, actually.
19:41
Then network drivers: since we only have virtio for now, we have Xen netfront in the pipeline, and for the Linux target a tap driver. And then we also want to support frameworks like Node.js, PyTorch for maybe machine-learning tasks in the network, and Intel DPDK; we would actually like to port the whole framework to
20:05
Unikraft so that you could build unikernel VNF boxes directly. And it's open source, and we actually still need support, because we have quite a lot of stuff to do.
20:22
So as I said, we actually started in December 2017 as an incubator project of the Xen Project. It's also covered by the Linux Foundation, and we actually get quite nice support from them. The community has grown since then: we started with two contributors and are now at 23.
20:47
To mention the big contributions: from Romania, we got networking and scheduling support from a professor and students of the university in Bucharest. From Israel, we had someone looking into bare-metal support who provided
21:05
a VGA driver, so that you can actually run Unikraft directly on hardware, without any hypervisor underneath. And from China, there is a lab from ARM that actively works on and contributes to the 64-bit ARM support, which is quite nice.
21:25
Our project is actually mailing-list-based; we actually hijacked the minios-devel mailing list from Xen, maybe you heard about that. The idea is that maybe in the longer term we are able to replace that Mini-OS unikernel
21:44
base there, so that the Xen folks also have something to build their stub domains or something like that with. Then we have an IRC channel on Freenode called #unikraft. And, yeah, I'm probably rushing you now; let's go for a bit of demo time, then you
22:05
get some more references and points and we can also then do a question round. So what I'm going to show you is actually, probably you all had a textbook that explained
22:24
how to program a socket on a Unix system: you need a socket call, you need to set up addresses and bind them; and this is now a socket server that is going to listen on a port, and you have a simple while loop that will just, as soon as there
22:47
is a connection and the first byte arrives, send a static string, which by chance is something HTTP/1.1-compatible, and so it sends out a web page.
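The textbook server described here can be sketched roughly as follows; the response string and function names are illustrative, not the literal demo source:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Static reply: by chance a valid HTTP/1.1 response carrying a tiny page. */
const char RESPONSE[] =
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Connection: close\r\n"
    "\r\n"
    "<html><body>Hello from a unikernel!</body></html>\r\n";

/* Create a TCP socket listening on the given port; returns the fd or -1. */
int open_listener(int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons((unsigned short)port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 8) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Accept one client, wait for the first byte, send the static string. */
void handle_one(int listen_fd)
{
    int client = accept(listen_fd, NULL, NULL);
    if (client < 0)
        return;

    char first;
    if (recv(client, &first, 1, 0) > 0)
        send(client, RESPONSE, sizeof(RESPONSE) - 1, 0);
    close(client);
}
```

A main() would then just open the listener on a port and call handle_one() in a while loop, which is exactly the structure described in the talk: the same C source compiles with plain GCC as a Linux a.out or, linked against Unikraft's libraries, as a unikernel image.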
23:02
This should demonstrate a bit one of our targets: we want you to develop a unikernel in much the same way you would develop an application for a standard operating system. So let's go here.
23:25
You have here main.c, this is the program; you have a Makefile that I can show you, which looks quite similar to the one for external Linux kernel modules, so another make call gets
23:41
invoked that then kicks in the Unikraft build system; and then you have a Makefile.uk, which describes for Unikraft which modules, or actually which source files, you have and so forth. It just adds main.c to the build and registers a library, which we call here app-httpreply.
24:06
So you type make; let's do a clean first so that I can prove to you that it builds. It's building now,
24:26
and at last, okay, the scheduling library, and then you have the final KVM image. In the menuconfig, oops, ah, yes, yes, that's a funny Kconfig thing.
24:58
So you see here you have a menu for architecture selection; I built it for x86, but you
25:03
can also choose other architectures. Platform is like Xen, KVM and so forth, although we have networking support only on KVM for now. Under libraries, you see here lwIP; we could go in and select features in there, we could even build the network stack without TCP support; it's actually funny
25:23
to still call it a TCP/IP stack then, right? You see it's quite a set of libraries in there; we have some build options, and that's it, right? And if I now run that as a KVM guest, so, do you see that line, or maybe I'll move up the
25:45
window a little bit. I load the kernel image, the httpreply KVM image, and because it has networking I need to attach it to a network bridge. Ah-hah, wonderful, I have mistyped here,
26:09
so now it's up already. You saw it going through the virtual BIOS and loading the image, this is still QEMU, and from this point on it was the unikernel: it found its network device,
26:21
brought it up, and then the DHCP server in the background replied with this address. So what we can do here now: we can ping the host to see the response. So that you believe me, I will now kill the guest; it's not responding anymore, nothing happens,
26:48
I reboot it and here it's back again, right, and then if you want you can see also the web page served, oops, oh my god, I think next time I will clone the screen so I see the console here too,
27:15
ah, proxy configuration, yes, so it came, got served and it's that string that got sent, right,
27:43
so the kernel image itself or let's say the unikernel is just 222 kilobytes in size, so quite small actually, and now to show you that it's the same program you can just go and
28:00
you know take GCC and build it as a Linux application, now we have a.out right, which is of course smaller because we don't have a virtual driver in here and so forth, but it does the same thing, right, right, so open for questions if we have a bit of time, yeah.
28:39
I understand the like isolation guarantees and stuff, I'm wondering if just sort of
28:44
operationally there are additional challenges to monitoring and operationalizing the kernel. So yeah that's actually a good question because the thing is it depends what you built into your unikernel, right, you could say do it as minimalistic as
29:00
possible, which is actually our main target, but then you don't have a shell or anything in there anymore, so you can't SSH into it; you're kind of forced to use the tools that the hypervisor environment provides you. But at the same time we could still say, and it's also
29:20
something we have a bit in mind, on the agenda but not actually written down, to provide a library that gives you a kind of remote access, maybe with a REST API or whatever, so that you can even look at what's happening inside the unikernel. Yes.
29:48
You substitute the POSIX API, so instead of system calls, it's function calls? Right, yeah, I should probably repeat what he said for the recording: so
30:00
the question was, or actually more a confirmation, that we run everything in supervisor mode, so from our perspective it's all kernel space, and calling something from the POSIX API is for us just a function call; there's no system call. So, one more question, yeah.
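That point can be illustrated with a tiny C sketch. In a unikernel, application and kernel share one address space and privilege level, so a POSIX-looking entry point resolves to a plain function call at link time, with no trap instruction and no mode switch. All names below (unikernel_write, console_out) are hypothetical, and the "driver" just records into a buffer:

```c
#include <stddef.h>
#include <string.h>

/* "Kernel-side" console driver. In a unikernel it lives in the same
 * address space and privilege level as the application; for this sketch
 * it just records whatever is written into a buffer. */
char console_buf[256];
size_t console_len;

void console_out(const char *buf, size_t len)
{
    if (console_len + len > sizeof(console_buf))
        len = sizeof(console_buf) - console_len;
    memcpy(console_buf + console_len, buf, len);
    console_len += len;
}

/* POSIX-looking entry point: no trap, no mode switch. The "system call"
 * resolves to a plain C function call at link time. */
long unikernel_write(int fd, const void *buf, size_t len)
{
    (void)fd;                 /* single console in this sketch */
    console_out(buf, len);
    return (long)len;
}
```

On Linux, the equivalent write() would execute a syscall instruction and cross into the kernel; here the whole path is an ordinary call the compiler can even inline.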
30:23
So, the question is: how hard is it to port a driver from the Linux kernel? Here I need to say it's pretty hard, because we have a different license. We use BSD, actually, to also allow you to build unikernels
30:44
that use non-GPL software or a library that is non-GPL. So, yeah, Linux is for as a no-go, but actually we would look into BSD, these OSes to port something. That kind of depends what you want to port. So, for now we have just a networking subsystem with, you know,
31:07
defined APIs for the drivers, and we looked on purpose at DPDK, because there everything runs in user space, they have kind of a similar library system there, and we thought, okay, maybe we can reuse those network drivers. How this will look for other drivers,
31:25
let's see; it depends on which APIs we come up with. But on the other hand, since we target virtualized environments, it's not like you need tons of drivers. What you need to support is the virtual driver model of your hypervisor environment. Yeah.
31:43
Yes. I'm curious about your experience with lwIP, because I can see this would be the TCP/IP stack that you picked for the first prototype of the implementation, but my experience is that if you want performance, especially in the presence of
32:04
multi-processing, lwIP, being focused on embedded systems, likely won't cut it. Yeah. What are your experiences, and do you have a roadmap or plan for the future? So, the question is: what is our experience with lwIP? There I need to agree,
32:22
it's quite limited in that sense: if you run multiple threads or whatever, this can quickly become a bottleneck. Also, features that are still missing include segmentation offload support, for instance, to make use of 64-kilobyte packets that you can send to the NIC driver that runs on the
32:44
hypervisor host, where the NIC then chops the TCP segment into smaller packets. So, for that we actually have on the roadmap that we want to port a network stack from BSD.
33:01
Let's see; we can probably also take the shortcut of going through OSv. OSv is also a unikernel project and they already ported it, so maybe we have a somewhat newer environment there, so that the extraction from an existing kernel environment is a bit easier for us. So, the next question is about language support, for components of the system or also
33:35
applications and this is what we are actually after that release really trying to focus on,
33:42
especially languages that are really popular in cloud environments, like JavaScript and so forth. Let's say someone probably still has to look a bit deeper in order to make even system libraries compilable with the whole system. I'm not sure yet whether there are
34:02
pitfalls or not, but I could still imagine, at least for C++, that it's quite easy to bind it to C code, so that some of the system libraries could be in C++; and then you need this small extra code for exception handling and so forth that comes with C++. But
34:20
I could also imagine something like Go, which is also a compiled language and is easy to link with C code. We would then need a garbage collector, which I usually don't like, but that language requires it. Yeah, so I guess this can be better answered when we have some more language environments ported; then it's trying out and seeing what's missing. So we just started with
34:50
Python. We actually took the MicroPython project that's out there and got a nice unikernel running Python programs. We also looked into V8, which is the Node.js
35:06
JavaScript engine. There we're still missing some POSIX functionality to actually get it working, but I think that probably this year we might be able to reach that point. And also Ruby: we have a student who is interested in porting Ruby to it, so
35:25
we need developers. Okay, thanks.