ELISA - Advancing Open Source Safety-Critical Systems
Formal Metadata
Number of Parts: 637
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/53379 (DOI)
Transcript: English (auto-generated)
00:05
Hello, everybody. My name is Shuah Khan. Today I'm going to talk about advancing open source in safety-critical systems. Thanks for joining me. Let's go ahead and start looking at what we're going to cover today.
00:25
Linux and safety-critical systems: what does that mean? Let's look first at what it takes to assess safety in a system. Assessing whether a system is safe or not requires understanding the system sufficiently.
00:44
You have to have a good understanding of what's happening in the system in terms of the interactions between the different modules and components of that system. And if you are using Linux in that system, then you have to understand how Linux interacts with the different pieces in your
01:06
system, whether that be the hardware components, the kernel itself, the various kernel modules, or the user space running on top of Linux. All of these have to come together, and you have to understand how these different modules interact inside the system.
01:31
What are the challenges involved in running Linux, in building products and safety-critical systems based on Linux? What does that mean?
01:48
We have to select Linux components and features that can be evaluated for safety, because you have to have a good understanding of the system itself. And then we have to identify the gaps that exist, where more work needs to be done to evaluate the safety of those systems.
02:12
So let's talk a little bit about ELISA and what we are doing. This is the challenge we have taken on in the ELISA project: to make it easier for companies to build and certify Linux-based safety-critical applications.
02:27
What are we doing? Our mission is to define and maintain a common set of elements, processes, and tools that can be incorporated into specific Linux-based safety-critical systems.
02:44
Linux, as you know, is an open source project; the project has its own processes for how development happens, how patch reviews happen, and how content gets accepted into Linux releases.
03:01
That is a part that doesn't change. We have to take those development processes as they are, see how we can do safety analysis on them, and decide how we can come up with a safety model for them.
03:26
Another thing we're trying to do is understand the limits; understanding the limits is another component of the whole process. We cannot engineer your system to be safe for you.
03:41
We have to understand, and ensure, how to apply the described processes and methods to a safety-critical system. We cannot create an out-of-tree Linux kernel for safety-critical applications, because of the way Linux kernel development works.
04:06
Linux kernel development continues to make progress; there's a new release coming out roughly every three months, and as a result the kernel is continuously moving forward.
04:22
And we cannot relieve you of your responsibilities, legal obligations, and liabilities. So what we have to do is provide a path forward for us and our peers to collaborate,
04:40
to be able to use Linux in safety-critical systems. Now, a bit of an overview of our project. We have a technical mailing list you can engage with, and we have various working groups. I will talk more
05:01
about these working groups in detail a little later on, but this is a quick snapshot. We have an automotive working group, a medical devices working group, a safety architecture subgroup, and a kernel development process subgroup. So we are focusing on various aspects of putting together a safety-critical system based on Linux with
05:31
these working groups, and the different groups have different focuses: automotive and medical focus on use cases, and then the rest of them
05:40
bring together the rest of the story. So let's talk a little bit about technical strategy. What we are looking to do in ELISA is develop example qualitative analyses for the automotive and medical device use cases. We want to take an automotive use case and a medical use case, and keep the Linux kernel as the
06:15
common element. The analysis is twofold, qualitative and quantitative. In ELISA, what we're focusing on is using Common Weakness
06:28
Enumerations (CWEs) as a base for identifying hazards for the two use cases. All of this data will be available for system integrators, who can use ELISA's work to analyze
06:46
their own systems. The context here is that we in ELISA do not know the full picture of those systems themselves. So what we are trying to do is take Linux and provide enough safety resources and analysis
07:06
on these two use cases, as examples for system integrators to use when doing analysis on their own systems. Let's talk a little bit about how we are doing this.
07:23
We have the automotive working group; we'll see a little bit about its use case later on, but we have a use case from the automotive working group. Then we have a use case from the medical working group. Both of those use cases then get into the development-focused working
07:44
groups in ELISA. The safety architecture working group is looking at taking these use cases and asking: what are all the different components that would make them up? Maybe a watchdog, or, you know, the memory subsystem, and the various pieces and parts that make up the safety-critical system.
08:06
In all of these working groups we have experts from various different areas of the kernel, we have safety experts, and we also have industry members that are looking to build safety-critical systems.
08:29
So we have participation from experts from all of these areas, interacting and collaborating to come up with the deliverables we just talked about on the previous slide. The
08:46
automotive working group, for example, is working on putting together a use case that gets into the rest of the working groups in this bubble here: the safety architecture working group, the kernel development working group, and tools investigation.
09:00
So we look at that use case, and for that specific use case we identify the different kernel modules that are necessary; then we look at those kernel modules, identify hazards, and make a safety argument for them.
09:21
Next, I'm going to move on to the next slide to talk more about our strategy. We are continuously refining our strategy based on the new use cases that we look at; it's a continuous improvement loop of taking feedback from the different working groups and then defining our strategy.
09:42
First of all, we have to identify the hazards, and then work out how to represent those hazards in a way that makes sense to system integrators. For example, we might talk about a use-after-free, but what does that mean
10:01
to a system integrator? How does a common weakness that says "use-after-free" translate into an impact on the system itself? Those are the kinds of things we are trying to do: take the hazard, define the system state that you could get into because of that hazard, and work out how to avoid it.
10:29
Looking at the kernel: what kinds of options, features, and configurations do we have in the kernel to be able to detect that hazard, and to find a way to mitigate it?
10:50
We are also looking to define clearly what ELISA isn't. We're not trying to certify anything, and we're not trying to provide a new
11:03
distro; it is really about providing resources for system integrators to build their safety systems. We're also looking at the next step: once we have a technical strategy, we
11:24
are going to execute that strategy by defining the hazards, coming up with configurations, making recommendations on which configurations would work well for your safety-critical system, and providing resources on how to enable them and what the best options are. So if
11:44
you have a kernel configuration that has debug options: do you want to use a debug option at all, and if so, which level of debug option would you want to run on your system? We are going to provide those kinds of resources, and then cover how you validate them: what kind of
12:03
testing makes sense for that particular kernel module, and so on. And then, after coming up with all of these deliverables, there is publication: how do we make them available to system integrators? We are also figuring out the best ways to do that.
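To make the debug-option trade-off concrete, here is an illustrative fragment of the kind of kernel configuration decisions involved. The option names are real Kconfig symbols, but the selection is only a sketch of the decision space, not an ELISA recommendation; several of these (KASAN, lockdep) carry significant runtime cost, which is exactly the "which level of debug" question raised above:

```
# Illustrative only -- not an ELISA-endorsed configuration.
CONFIG_KASAN=y             # dynamic detector for use-after-free and
                           # out-of-bounds accesses (large overhead)
CONFIG_PROVE_LOCKING=y     # lockdep: reports lock-ordering violations
                           # at runtime (debug builds, not production)
CONFIG_DEBUG_LIST=y        # cheap integrity checks on kernel linked lists
CONFIG_PANIC_ON_OOPS=y     # fail-stop behavior: turn an oops into a
                           # deterministic panic a watchdog can catch
```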
12:31
Currently, our scope is the automotive and medical use cases. Other use cases are welcome.
12:42
So let's talk a bit about what we are doing in the medical devices working group. We are using the OpenAPS analysis, and we are looking to review that analysis with STPA experts.
13:04
We will present the results of our analysis to the OpenAPS community for feedback, and we will publish the OpenAPS STPA analysis on GitHub and put it under version control. That's what we are planning to do in the medical devices working group.
13:21
There will be a use case coming out of this working group that will help guide the other working groups in focusing on this particular use case and developing a safety analysis model. Here is a detailed view of the OpenAPS safety analysis; I won't go into too much detail here.
13:51
This is for you to look at. The next working group is the automotive working group, which is collaborating with AGL, Automotive Grade Linux.
14:05
This working group is developing the automotive use case, the telltale use case, and we're also working on consolidation of the concept and of the demo application. And in this working group we do the
14:28
architectural refinement, which will feed into the rest of the working groups for this use case definition. So here is an example: telltale display and monitoring. This is the safety-critical demo application that we are using
14:50
to be able to do analysis on. We take all of these use cases, and the safety architecture working group is the one that is going to look at them.
15:12
The working groups focus on different aspects of the use cases. The safety architecture working group will look at these use cases and come up with a complete definition of top-level safety
15:23
requirements for the kernel, because for this particular use case we have to identify the modules in the kernel, and the configurations, that would make sense for it. So we do that, and we start the safety analysis for the telltale application we talked about in the automotive use case. Currently, the
15:49
focus in this working group is automotive, so you will see some of the telltale safety application, along with coming up with a kernel configuration that would serve that use case well.
16:04
We concentrate on and refine the qualification methodology using the telltale use case as a driver, and then start on freedom from interference, considering also the non-safety-critical
16:22
parts of the kernel, as well as the user space. In a safety-critical system you have safety-critical resources and also non-safety-critical resources. So, for example, if you are thinking about
16:47
a car, the safety-critical pieces would be the ones that drive the drive train, the actual gear-shifting kinds of things, and power management. Whereas
17:02
if you're thinking about non-safety-critical things, it could be the infotainment applications. So we are coming up with a separation, a partitioning in a way, saying these are the safety-critical areas and these are the non-safety-critical areas, and what would make sense for each.
17:21
The plan is to expand the focus to the medical working group use cases, with all of these activities happening in parallel; right now the focus is automotive. So, on to the kernel development working group. In the kernel development process working group, what
17:43
we are doing is assessing the Linux kernel development process. When you are thinking about safety-critical applications, there is a particular way safety analysis has to be done.
18:03
We'll have to look at the kernel development process itself, look at that process closely, and see where we can derive evidence for safety, like, for example, from the review process that's happening.
18:21
How do the kernel releases happen? What kind of testing gets done on the kernel? What kinds of evidence can we gather from the kernel development activity that happens on the mailing lists, and so on? That's an example of what we are planning to derive. And as we go do
18:45
this analysis, in some cases we are also looking to improve the kernel documentation: to improve it, and to add to it when we find gaps.
19:01
That's part of the process as well, within the limited resources we have in ELISA, keeping those resources in mind. And, like I mentioned earlier, we are leveraging Common Weakness Enumerations.
19:23
We take the CWEs and look at which CWEs make sense for a Linux kernel that is running in a safety-critical system. As an example, concurrency and locking errors, memory-related errors, and pointer
19:44
errors would be something we look at closely, along with the CWEs related to them. We take those, and we define system characteristics: what kind of system characteristics do we need for the safety-critical application, in terms of responsiveness, and of being able to handle memory-related errors and timing-related errors?
20:11
Some of these come out of FFI, freedom from interference, and related analyses. We take those and define the high-level system characteristics that we
20:23
need to be able to build and safely operate safety-critical systems, and then we identify failure modes and attributes of the Linux kernel. Our primary focus is the Linux kernel; looking at that, we identify the kernel configurations and features important to safety-critical systems.
20:46
And we are heavily focused on the automotive and medical use cases for this work. A subgroup of the kernel development process working group is the tools subgroup. What that
21:05
subgroup is doing is looking at the various tools available, some within the kernel and some outside it, that can be used to evaluate the kernel. We do static analysis on the kernel, and static analysis is one of the important parts of a safety evaluation of
21:32
a system. So we are looking to provide resources and define the tool chain, to say: hey, these are the tools you can use.
21:43
For example, the kernel does have several static analysis tools available; there are several within the kernel repository itself.
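As a concrete sketch of those in-tree entry points (usage illustration only; these are run from the root of a kernel source checkout, and each depends on the corresponding tool, such as sparse or Coccinelle, being installed):

```shell
# Illustrative invocations of analysis tooling shipped with the kernel tree.
make C=1                        # run sparse on files being recompiled
make C=2                        # run sparse on all source files
make coccicheck MODE=report     # Coccinelle semantic-patch checks
./scripts/checkpatch.pl --strict -g HEAD   # style/API checks on the top commit
```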
22:02
In addition to that, we are looking at additional code checkers that are available, and we are looking at how to investigate the results as well. Our goal is to have these continuously running on
22:20
kernels as releases come out, and to be able to provide the static analysis results through a CI run by the tools investigation subgroup, an ELISA CI. That's kind of a stretch goal that we're working toward, so you will see that happening over the next six months or so.
22:44
This group is also assisting newcomers with onboarding: familiarizing anybody interested with how the kernel development process works, how you send patches, and what it means to be part of the process
23:13
improvement subgroup. In addition to that, we also have an ambassador program: a group of
23:24
ELISA members that have volunteered to represent and speak about ELISA at various conferences. We're also working on putting together material so we can share our progress, and what we are all about, with the Linux kernel community and the wider community.
23:50
Like, for example, I'm speaking to you now at FOSDEM. We will continue to do that, to engage. Our goal is multi-fold: we want more people coming in and collaborating with us.
24:04
That is part of the reason why we are looking to speak to you at various places as well. And we welcome you to join us for the collaboration.
24:21
Another important piece of work we are sponsoring under ELISA is mentorship projects. We have 2020 projects that are wrapping up; these are links you can follow to check out what those projects are all about. For example, like the code checkers we talked about on the previous slide: one project is integrating kernel development tools with the CodeChecker web UI.
24:47
That is one of the activities happening in the mentorship projects, and it will then become a part of the ELISA CI. So we are continuously looking to create projects through which we can also train new developers under
25:06
the mentorship program, and we will be considering future projects for the summer session that's coming up. In addition to that, we're also doing white papers and outreach. We would like you to come join us
25:37
and help define the safety architecture, especially for the medical use cases, where we are a little short on
25:49
volunteers who can help us with the safety work for the medical use case. We are currently focusing on the automotive use case; we
26:01
have good coverage there, but we are looking for help on the medical use case. So, how are we doing all of this? It is not possible without support from our ELISA members. We have our members listed here, and without their help we wouldn't be able to do
26:25
what we have been doing, and what we would like to achieve in the coming years. Thank you very much. Please feel free to ask any questions; I'll be happy to answer them.