The HIPPEROS RTOS
Formal Metadata
Title: The HIPPEROS RTOS
Series: FOSDEM 2020 (talk 212 of 490)
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifier: 10.5446/47392 (DOI)
Transcript: English(auto-generated)
00:06
OK, the next speaker is Antonio, who's now working at Huawei. He's going to give an overview of HIPPEROS, which was his project at his previous job. Yes, exactly.
00:20
Thanks for the introduction. I'll start with a little disclaimer: this talk only reflects my views, not those of my current or previous employer. That's done, so we can start. You may have noticed the strong reference in the title; it's totally intended, of course. Like that show, I think the HIPPEROS project
00:41
was very promising, had an amazing cast, and also had a somewhat controversial last season. But we'll get into that. First, a few highlights from my own background. I did a master's degree and a PhD at this university, and during my PhD I joined a spin-off project
01:02
called HIPPEROS, until 2019, when I joined Huawei's research center, which is new, to work as an operating system researcher. A few words about this research center: it's a microkernel and operating system lab
01:21
where we do research on things such as scalable kernel architectures, design exploration, formal verification, virtualization techniques, and these kinds of things. The lab started in February 2019, and we are more than 20 researchers now. If you're interested, we can have a chat about it.
01:43
So, the agenda. There are a lot of points and not a lot of time, so I will go through it quickly. First, I want to start with the human story behind the RTOS. What's the story of this project? It all started here at the university. There are plenty of research labs at ULB.
02:04
We usually identify three missions or roles for a university: teaching, that is, getting students through their courses to their diploma; research; and valorization, which means taking things out of the university into the real world.
02:20
In our case, HIPPEROS falls in that latter category. HIPPEROS was a spin-off: taking science from a laboratory into a commercial product. It started with two different research labs, one focusing on real-time scheduling theory in the computer science department,
02:41
another one in the engineering department working on digital system design. Basically, researchers and entrepreneurs met and brainstormed new ideas about operating system architecture, how to create a new real-time operating system, how to apply the research results
03:03
from the real-time literature to modern platforms. And this became HIPPEROS, a very ambitious acronym meaning High Performance Parallel Embedded Real-time Operating Systems. As you may have noticed, there is an S at the end of Systems, meaning it's not one single operating system.
03:20
It's a family of operating systems. The basic idea was to create a company selling products, all revolving around the business of real-time operating systems, including creating a new microkernel for embedded systems, applying new software architecture
03:40
paradigms, scheduling policies and so on, and, within the business, keeping strong links with academia and actually performing research on the operating system. It was this very ambitious, crazy idea to get the OS into any device imaginable,
04:01
from small IoT devices to complex, certifiable products such as planes or cars. In 2006 the project started with the two research labs I mentioned; funding was secured and the first design ideas were laid out.
04:20
Around 2012, the first developers, young researchers, joined the project, mainly funded by European projects. That's when we laid the foundations of the new kernel. It was basically embedded development, everybody
04:41
with a board on their desk, trying to get it working on different architectures. Between 2013 and 2015 we started the code base. We were very proud and very inexperienced; we were very happy when the code, for the first time,
05:01
jumped into user mode. In 2014, the actual company was created. A nice anecdote I'd like to tell: we met Andrew Tanenbaum, because we wanted to build a microkernel, so we had a conference with him. At some point he said: OK, to create a new kernel,
05:20
you take three students, you make them work for three years, and that's only the beginning. And that's exactly what we lived through. It's a very long process to get it working. From 2016 to 2019, we secured further funding from European projects and landed the first customer.
05:40
Around that time we were between 4 and 15 people, depending on when and how you count the staff: employees, people working with us, collaborators, and so on. It was also the time when I finished my PhD, but that's not relevant here.
06:01
The question I get asked most often is: why create yet another real-time operating system, yet another kernel? The basic answers are the following. We wanted to facilitate the development of high-performance, high-end embedded systems, because it's sometimes hard to enter this domain
06:21
as a developer. We wanted to use modern hardware, meaning multi-core, SMP, and even heterogeneous platforms, efficiently and safely. And we wanted a low-footprint operating system that still has a rich set of features. And this whole set of requirements,
06:41
multi-threading, power management, real-time scheduling with hard real-time guarantees, parallelism in the kernel, and support for modern heterogeneous platforms, possibly including certification of all these aspects: at that time, we were not able to find an OS combining them, though maybe that was not entirely true.
07:00
But having all of that combined was not common in 2006. Now I will go through the high-level features of the operating system. The vision is an operating system for high-end embedded systems that must be performant, reliable, efficient.
07:21
And it targets very demanding modern applications like computer vision, embedded artificial intelligence, robotics, these kinds of very autonomous systems that still need safety. We support the C standard library with a POSIX-compliant API.
07:41
We also have some exotic system calls available to do native things like managing interrupt service routines in user mode, and IPC. These are the tools we support: GCC, LLVM, and also the Xilinx tools for high-level synthesis.
08:01
We had some POSIX compliance, mainly what is usually targeted for embedded systems. That was required to support large frameworks like OpenCV for image processing. Architectures: we supported ARMv7 mainly, ARMv8 also, x86 on emulators, and PowerPC and ARC
08:23
which we discontinued at the end, but at some point we supported them. Device drivers: one important feature that we really loved was being able, from the operating system, to reconfigure the FPGA on Xilinx platforms,
08:41
where you have an FPGA together with CPUs. The operating system runs on the CPUs and reconfigures the FPGA when the user needs this feature. Also Ethernet, SDIO with file systems, and the other usual drivers. We supported the network stack by basically porting
09:01
an existing open-source stack, lwIP; the only thing we had to do was develop the Ethernet drivers for the different platforms we supported. So what's the runtime model, what's the build environment of HIPPEROS? It basically revolves around the concepts of tasks, processes and threads.
09:21
A task is basically the offline information about the program: the code, but also real-time requirements that you can add, like a deadline, a period, or an execution time budget. At runtime a task becomes a process,
09:42
and this process has all the dynamic information to make it run. While the process is running, the operating system enforces the real-time properties documented in the task. A process can also be multi-threaded,
10:02
with a number of threads running in parallel, with processor affinities. So how do you create an application with HIPPEROS? You have your tasks, which you can configure with CMake, which is the recommended option,
10:20
and you have this file, the task set, which contains all the runtime information about the tasks, like timing behavior and these kinds of things. You link it with the HIPPEROS package, which comes as a binary, to create the final image running on the system. In practice, tasks are plain C or C++ programs,
10:44
like here a Hello World example that uses the native HIPPEROS printf, and that's pretty much it. And this is a task set, an XML file where you give all the information about your tasks. So this is the idea of a somewhat static system:
11:02
it's not like a general-purpose operating system where you spawn threads dynamically; as an embedded one, you have a predefined number of processes in the system, all documented in an offline XML file, where you have the name of the file, the size of the stack in bytes,
11:20
how it behaves regarding recurrence, and other information as well: timing information like offset, worst-case execution time, deadline and period, all the things you add to describe the timing behavior. Also the core affinities: if you want the process to start on a specific core, you can specify it here.
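A task set with the fields just listed might look roughly like this. The element and attribute names here are illustrative guesses, not the actual HIPPEROS schema, which was never published:

```xml
<!-- Hypothetical task set: tag names are illustrative, not the real schema -->
<taskSet>
  <task name="hello">
    <binary>hello.elf</binary>
    <stackSize>4096</stackSize>     <!-- stack size in bytes -->
    <recurrence>periodic</recurrence>
    <offset>0</offset>              <!-- release offset, ms -->
    <wcet>2</wcet>                  <!-- worst-case execution time, ms -->
    <deadline>10</deadline>         <!-- relative deadline, ms -->
    <period>10</period>             <!-- activation period, ms -->
    <affinity core="1"/>            <!-- start on a specific core -->
  </task>
</taskSet>
```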
11:43
And this is some configuration; this slide is here to say that you can use CMake to build your HIPPEROS application, and some basic CMake primitives are shipped to be able
12:00
to automate the configuration. Yeah, okay, this is not really important. What about the build environment? We had this idea that we wanted a family of operating systems, so there is a very large number of options that can be configured at build time.
12:22
You can, for example, select your scheduling policy, the way the memory model is laid out, or whether you want independent applications running as ELF files, or a statically linked approach where you link the operating system and the different tasks together and you have only one final image.
12:42
Depending on the use case, we supported different approaches. Like I said, HIPPEROS is distributed with a CMake-based build environment and an SDK, which runs inside a Docker container image, to be able to set all the dependencies
13:01
with the right versions without interfering with your system. This, of course, is totally optional: if you want to install all the dependencies on your machine, that's always possible. Now, the architecture overview. When you have strong objectives, strong targets like embedded systems
13:21
that must be safe and efficient, you usually have these criteria: reliability, predictability, performance, and security. I will now go through them and explain what we chose to try to accommodate these different requirements. So what are the HIPPEROS features for reliability?
13:41
Basically, it was a microkernel-based operating system; in this room, I don't have to explain what a microkernel is. We push as many components as possible into user space, especially drivers. Then memory virtualization: again, you can choose the layout that you want.
14:01
You can have one page table per process with fully isolated paging, or a single page table with protection attributes, which is basically a shared virtual address space for the different processes in which the kernel is protected.
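The single-page-table variant amounts to every mapping carrying protection attributes checked against the running process, with kernel pages never accessible in user mode. A toy model of that idea, not HIPPEROS code:

```c
/* Toy model of a shared address space with per-entry protection:
 * every process sees the same table, but each entry is tagged with
 * an owner, and kernel entries are never accessible from user mode.
 * This illustrates the idea only; real page tables are hardware-defined. */
#include <stdbool.h>

enum { OWNER_KERNEL = 0 };  /* 0 = kernel, >0 = a process id */

struct pte {
    unsigned owner;   /* which process (or the kernel) owns the page */
    bool     shared;  /* readable by every process */
};

/* Can process `pid`, running in user mode, touch this entry? */
bool user_can_access(const struct pte *e, unsigned pid)
{
    if (e->owner == OWNER_KERNEL)
        return false;                 /* kernel pages are protected */
    return e->shared || e->owner == pid;
}
```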
14:22
About predictability: it was basically the implementation of research results from the real-time literature, implementing the different hard real-time schedulers, such as the classics, rate monotonic and earliest deadline first, but also some more recent results that are proven to be more efficient in resource usage,
14:42
such as U-EDF, for example. We had some prototypes, some monitoring of the real-time behavior, and also a master-slave architecture that I will explain in detail a little later.
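As an aside, the classic rate-monotonic policy mentioned above simply means fixed priorities ordered by period: the shorter the period, the higher the priority. An illustrative sketch, not HIPPEROS scheduler code:

```c
/* Rate-monotonic priority assignment: shorter period => higher priority
 * (0 is the highest priority here). Illustrative sketch only. */
#include <stddef.h>
#include <stdlib.h>

struct rt_task {
    const char *name;
    unsigned    period_ms;  /* activation period */
    unsigned    priority;   /* assigned below: 0 = highest */
};

static int by_period(const void *a, const void *b)
{
    const struct rt_task *x = a, *y = b;
    return (x->period_ms > y->period_ms) - (x->period_ms < y->period_ms);
}

/* Sort tasks by period and hand out fixed priorities in that order. */
void assign_rm_priorities(struct rt_task *tasks, size_t n)
{
    qsort(tasks, n, sizeof *tasks, by_period);
    for (size_t i = 0; i < n; i++)
        tasks[i].priority = (unsigned)i;
}
```

Rate monotonic is attractive for certification because the priorities are fixed offline; EDF-family schedulers instead pick, at runtime, the job whose deadline is nearest.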
15:00
Okay, and also a mechanism of priority inheritance when a process sends an IPC to a device driver: the device driver can take the priority of the sending process, to ensure that there is no priority inversion. About performance, one of the main things
15:20
is using a performant scheduler, fast communication mechanisms such as zero-copy IPC, and trying to have fast user-space abstractions, such as lightweight tracing, multi-threading support, and mapping the I/O to userland. And exploiting modern hardware, such as the FPGA,
15:43
and also power management, like dynamic voltage and frequency scaling. Security is perhaps the weakest part of the kernel, because we are not at all security experts, but some things were enforced
16:02
that may protect the system by design, such as: all the system's tasks are known at compile time, so there are no hidden processes or things that you could attack. You can disable the network by default, but then you cannot contact your target.
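The priority-inheritance rule mentioned a moment ago can be stated in a few lines: while serving a request, the driver runs at least at the client's priority, then drops back on reply. A sketch with hypothetical names; the real HIPPEROS API is not public:

```c
/* Priority inheritance across IPC, sketched with hypothetical names.
 * Convention here: a higher number means a higher priority. */
struct endpoint {
    unsigned base_prio;   /* the driver's own priority */
    unsigned cur_prio;    /* what it currently runs at */
};

/* Client sends a request: the driver inherits the client's priority
 * if it is higher, so no medium-priority task can delay the reply
 * (this is what prevents priority inversion). */
void ipc_send(struct endpoint *driver, unsigned client_prio)
{
    driver->cur_prio = client_prio > driver->base_prio
                     ? client_prio : driver->base_prio;
}

/* Reply sent: the driver falls back to its base priority. */
void ipc_reply(struct endpoint *driver)
{
    driver->cur_prio = driver->base_prio;
}
```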
16:21
And the memory isolation also serves security purposes. We had this separation of the kernel and the hardware abstraction layer into different modules, allowing us to test them independently, which really eased the development of the system. For each board or architecture that we supported,
16:41
we only had to change the hardware abstraction layer. These are the mechanisms that are in the kernel: memory management is not in user space, it's done in kernel space; the scheduler is also in kernel space, of course IPC, and the different system calls,
17:03
and all the things you see on this slide. The kernel is written in C, and there are several variants for the different modules. You can replace, for example, the scheduler, because all HIPPEROS schedulers respect the same API, and you can just select the one you want at build time. The HAL has the really low-level hardware-specific mechanisms,
17:26
such as configuring the interrupts, memory management, caches, clocks, and so on. The advantage is that it was easy to port to new architectures, and it was also easy
17:41
to test the different components independently: the kernel was tested without the HAL, and the HAL was tested directly with unit tests running on the targets. Now about the master-slave architecture, which is one of the innovations of this project, I think. The idea was to implement
18:02
SMP support with an asymmetric architecture: the master kernel is actually a big piece of code, and the slave kernel is as thin as possible. Here is a more detailed scheme. The master kernel is basically responsible
18:21
for managing the transitions between states for the processes. Each time you have to call the scheduler, you need to do a remote system call. For example, if you want to exit, or create a new task dynamically, you go through what we call a remote system call, which is basically an inter-processor interrupt waking up the master core to say: okay, you need to enter kernel mode
18:44
to update the structures, the internal image of the kernel. Context switches also go the other way: an inter-processor interrupt asks a core to execute a context switch. The slave kernel basically manages context switches,
19:02
and the master manages everything else, except for some very critical parts of the kernel that need to be very performant. To avoid overloading the master, the idea was that, for example, executing an IPC, copying the contents of an IPC buffer,
19:22
was done without intervention of the master. So there were remote system calls and IPI context switches; and for very simple things, like getting your PID or asking the kernel for read-only information, this was only a local system call
19:41
without calling the master. This approach has several advantages. The first one is that it becomes really easy to design kernel features, because you don't have to mess with different levels of locking granularity and so on. The only place in the kernel where there was locking
20:02
was in the system call protocols, for the exchanges between master and slaves, but that's only in one specific point of the code base. Everything else is a simple sequential design. It has also proven to be a scalable solution
20:22
when you increase the number of cores; it's in the paper, but I need to speed up, because I have five minutes left. Okay, so like I said, there are different memory models depending on whether or not you enable the MMU and whether you want to isolate the different tasks. There are two different kinds of IPC,
20:42
one which is synchronous, where you share memory and you basically have a handshaking protocol that prevents writing to pages when you are not allowed to. This is useful when you have
21:01
big data exchanges; otherwise you have the kernel copy. This I will skip. Then we had some research results: we used the kernel to publish some papers, some results about our architecture, some about power management or memory scheduling.
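The handshaked shared-memory IPC just described reduces to a tiny state machine: the pages belong to exactly one side at a time, and ownership flips when a message is published or consumed. A single-threaded toy model, illustrative only and not the real zero-copy implementation:

```c
/* Toy model of handshaked shared-memory IPC: the buffer belongs to the
 * sender until it is published, then to the receiver until it is consumed.
 * Writes are refused while the other side owns the pages. */
#include <stdbool.h>
#include <string.h>

enum side { SENDER, RECEIVER };

struct shm_channel {
    enum side owner;     /* who may touch the pages right now */
    char      page[64];  /* the shared "page" */
};

bool shm_write(struct shm_channel *c, const char *msg)
{
    if (c->owner != SENDER)
        return false;                    /* pages handed over: no write */
    strncpy(c->page, msg, sizeof c->page - 1);
    c->page[sizeof c->page - 1] = '\0';
    c->owner = RECEIVER;                 /* publish: hand pages over */
    return true;
}

bool shm_consume(struct shm_channel *c)
{
    if (c->owner != RECEIVER)
        return false;
    c->owner = SENDER;                   /* hand the pages back */
    return true;
}
```

In a real kernel the ownership flip would be paired with remapping or changing the page protections, so a write on the wrong side faults instead of returning false.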
21:21
These are power management measurements, with the board connected to an oscilloscope. Mixed-criticality scheduling, where different tasks have different certification requirements; we also tested it directly on the platform, along with some scheduling policies
21:42
to avoid memory interference. All these results are published, and we can discuss them afterwards if you're interested. These are the kinds of use cases we had in the different European projects. TULIPP was about creating a platform for image processing, and we had to deliver
22:02
the operating system components. This is a drone doing obstacle avoidance by computing a depth map, a depth image; it was basically doing real-time image processing to avoid obstacles.
22:21
The same with an automotive pedestrian-detection camera, and finally X-ray filtering, which also requires real-time video for the surgeon to operate. These are the different kinds of use cases where we deployed HIPPEROS in research projects. So, the conclusion: HIPPEROS was a very big idea,
22:43
of a real-time operating system managing different devices with modern heterogeneous parallelism. And I think it was a nice fit for the Adaptive AUTOSAR standard.
23:00
One of the points of doing this talk today is that initially this operating system was fully proprietary, but recently the company went bankrupt for lack of funding, and so we figured that open source is actually a nice opportunity to keep the project alive.
23:23
As future work, I would like to open the code base, and I have the authorization to do it, but we still need to decide, for example, what kind of license we want, and we also need a dual-licensing scheme,
23:42
because it will not be completely open source; there will be some proprietary parts. By open-sourcing it, the idea is that the kernel can become a playground for new ideas, algorithms and policies for the real-time community, for researchers or students.
24:00
But I'm no IP expert, so I'm open to discussion if you have ideas or suggestions. I would also like to have a model that does not scare external contributors away and that can include them. It's quite hard to find a middle ground between fully open source and proprietary, to find the right way to go.
24:23
So my agenda for future work is to first do a cleanup of the code base, define the license and the contribution model, create a landing page, and then finally open-source the code base. Okay, thank you.
24:46
We have time for maybe one very quick question. Your choice. Okay. This is just about the asymmetric design.
25:01
As I understand, typically in high-performance computing you try to keep data structures per CPU and avoid remote calls, but as I understand it, you mostly rely on remote calls to one CPU. So how does it improve your performance,
25:21
or what is the implication for your performance? You mean the fact that you need to wait for the master? Yeah, so the advantage is that, when you do that, the whole image of the kernel,
25:42
for example, stays in the cache, and then you have less cache bouncing between cores. That's one of the advantages. The question is whether there is a performance implication, whether performance is worse than it could be. It can be. Okay, we can take that offline, maybe.
26:02
Okay, thanks.