
7th HLF – Lecture: Can We Trust Autonomous Systems? Boundaries and Risks


Formal Metadata

Title
7th HLF – Lecture: Can We Trust Autonomous Systems? Boundaries and Risks
Number of Parts
24
License
No Open Access License:
German copyright law applies. This film may be used for your own use but it may not be distributed via the internet or passed on to external parties.

Content Metadata

Abstract
Can we trust autonomous systems? This question arises urgently with the prospect of massive use of AI-enabled techniques in autonomous systems, critical systems intended to replace humans in complex organizations. We propose a framework for tackling this question and bringing reasoned and principled answers. First, we discuss a classification of different types of knowledge according to their truthfulness and generality. We show basic differences and similarities between knowledge produced and managed by humans and computers, respectively. In particular, we discuss how differences in the development process of knowledge affect its truthfulness. To determine whether we can trust a system to perform a given task, we study the interplay between two main factors: 1) the degree of trustworthiness achievable by a system performing the task; and 2) the degree of criticality of the task. Simple automated systems can be trusted if their trustworthiness can match the desired degree of criticality. Nonetheless, the acceptance of autonomous systems to perform complex critical tasks will additionally depend on their ability to exhibit symbiotic behavior and allow harmonious collaboration with human operators. We discuss how objective and subjective factors determine the balance in the division of work between autonomous systems and human operators. We conclude by emphasizing that the role of autonomous systems will depend on decisions about when we can trust them and when we cannot. Making these choices wisely goes hand in hand with compliance with principles promulgated by policy-makers and regulators, rooted in both ethical and technical criteria.
This video is also available on another stream: https://hitsmediaweb.h-its.org/Mediasite/Play/e1dbc878bf6b4df6b47236c56cc0b6241d?autoStart=false&popout=true
The opinions expressed in this video do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the video.
More information on the Heidelberg Laureate Forum:
Website: http://www.heidelberg-laureate-forum.org/
Facebook: https://www.facebook.com/HeidelbergLaureateForum
Twitter: https://twitter.com/hlforum
Flickr: https://www.flickr.com/hlforum
More videos from the HLF: https://www.youtube.com/user/LaureateForum
Blog: https://scilogs.spektrum.de/hlf/
Transcript: English(auto-generated)
Good morning everybody. We are ready for this, the second day's morning session.
My name is Helge Holden. I'm the Secretary General of the International Mathematical Union, and I will be the chair of this session. Our first speaker is Joseph Sifakis. He's a senior researcher at the Vérimag laboratory in Grenoble, France.
He received the Turing Award in 2007 and he's asking the question that everybody wants the answer to. We can see it on the screen. Can we trust autonomous systems? So please, Joseph.
Do you hear me? Yes. Okay. So yesterday evening we had over dinner an entertaining talk about dilemmas of autonomous systems, whether the autonomous car should decide to kill the old lady rather than the young baby, things like that.
In this talk I will discuss this issue of trust of autonomous systems. And without delay I will show this. It's the IoT vision where autonomous systems are central. And, okay, so the key idea is that you integrate services, devices to provide global services and address global challenges.
I'm not going to say more about that, except that the Internet of Things hides two different challenges of uneven difficulty.
One that is the human Internet of Things, that is a mere improvement of the Internet as we know it today. The basic interaction model here is a client server. You ask for a service and you eventually get an answer to that. And the industrial Internet of Things, which is a more challenging issue, how to build systems that are autonomous,
services that are autonomous, that replace humans in their missions in complex organizations. So here I'm listing autonomous transport systems, Industry 4.0, smart grids, and the list could be longer.
So the idea of autonomy is that the system works without any human intervention and humans just may change some parameters. So what I call next generation autonomous systems emerge from this need to further automate existing organizations
by progressive and incremental replacement of human operators by autonomous agents. So this is my definition. And you understand that these systems are critical. They require today massive use of AI-enabled components and they should exhibit some broad intelligence.
These are the main characteristics you see there. To manage dynamically changing sets of possibly conflicting goals, so many goals, and this idea is in line also with the idea of transitioning from narrow or weak artificial intelligence to strong artificial intelligence.
To cope with uncertainty of complex unpredictable cyber-physical environments and also to collaborate harmoniously with human agents, what we call symbiotic autonomy. So these are the main characteristics of the autonomous systems I'm interested in.
And today everybody, I mean anyone who understands the technical issues, agrees that we are quite far from this industrial vision,
just because of serious limitations. One is that we cannot provide trustworthiness assurance techniques for learning-enabled components; this is quite obvious. You also have poor trustworthiness guarantees for all the networking infrastructure. And there are other issues that are equally important, such as the impossibility of guaranteeing response times in large networks.
So in my talk I will consider as an illustrative example self-driving cars, autonomous driving cars. This is an example that is very well understood because also it's the object of a lot of discussion in media.
And it's really emblematic, and of course it raises a lot of challenges and economic stakes, and of course its application will have a deep societal impact. Okay, I would like to emphasize that despite the limitations that we know, and in contrast to standard practice for systems in aerospace or other industries, autonomous vehicle manufacturers have not followed standard design approaches, that is, safety-by-design approaches, just because these cannot be applied. It's very simple: in particular because of the existence of machine-learning-enabled components. So they have applied end-to-end, machine-learning-enabled design approaches.
And also, something else very important is that these same manufacturers rely only on statistical evidence to say that their systems are correct. They say: I have driven my car so many millions of miles, so I'm happy with that, and that's fine.
And also that public authorities allow self-certification, which is funny because self-certification is applied by the car manufacturer. So Mr. Tesla says my cars are good enough, you can drive them.
And there are other practices like the fact that you update critical software every month and these are practices that are against standard systems engineering practice. Let me remind you that for aircraft, personally I have been involved in many avionics projects, aircraft are certified as products.
So once an aircraft is certified, then you cannot change anything, not only a line of software, but even a component of hardware. So aircraft manufacturers buy all the hardware components for 50 years of production of a particular aircraft.
So we have this deviation from standard practice, and it is also interesting to note that people exhibit a kind of optimism and ignore all the problems. So you have people who say, okay, let's go ahead and accept the risks, the stakes are huge. Other people say rigorous methods do not help at all; these are complex problems and they can be solved only by empirical methods. And other people say, okay, we have what we need, we will do it, self-driving cars are just around the corner, something like that. So as an academic and a researcher and an engineer, I am very much concerned by this situation.
It's a pity also that the press and the media give a large echo to these opinions. And personally, I think that we should seriously consider the issue of trust in autonomous systems. But in order to discuss this, we need some new scientific and engineering foundations, and this is something I am going to explain.
And okay, what does an engineering foundation mean? It means, first of all, that we can understand the difference between automation and autonomy, and also what it means, exactly, to make a system more autonomous than another. Also, I think we should relate trustworthiness to knowledge truthfulness, and this is something very important. Because in this debate people are considering the knowledge that is produced: we have scientific knowledge, of course, but we also have knowledge produced by neural systems. And do these have the same quality? I don't think so. And my message here is that we should of course not close our eyes to all this evolution; I think the advent of AI and machine learning is something very, very important.
But we should try to take the best from model-based approaches and AI-enabled approaches, and so develop what I call hybrid design flows.
So this is an outline of my talk. I will try to explain what autonomous systems are, and then that knowledge comes by degrees, so you have different degrees of truthfulness in knowledge. Then I will explain my ideas about what it means to develop hybrid design flows and how to validate systems in that case, and eventually finish with a discussion. So let me try to illustrate this difference between autonomous and automated systems with these five examples.
So these five examples have something in common: they are systems that interact with an environment. You have very simple systems like a thermostat; a thermostat has to maintain the temperature of the environment between some bounds. An automatic train shuttle, like the shuttles you see in airports today. A chess-playing robot, a soccer-playing robot, and a robocar. Okay, so all these systems have agents, that is controllers, that interact with their environment.
And each agent is pursuing specific goals, but if you have many agents then the system also has global goals. And the problem is how the specific goals of the agents can be composed to meet the global goals of the system; at least sometimes this is a problem. So, in order to fix some vocabulary, I would like to introduce some concepts here. In an autonomous system you have agents; here you see I have two agents. These are the objects that are controlled.
And then you have other objects that are not controlled, or you can say that they are not controlled by the system. Each agent has an internal environment that is strictly controlled by itself, and an external environment that is shared with other agents. And of course the question is how to change the situation in the external environment so that each agent reaches its own goals. And then you have a global system, here a traffic system, that also has specific goals: safety, how to have fluid traffic, and things like that.
So these are the basic definitions. And you understand also that these systems are dynamically changing. They are mobile. So you have new agents or objects that come and go. And it's the hardest kind of system to understand and to analyze.
I am working on that and it is very hard to do this. And now I have introduced the concepts. Let me try to find the separation line between automation and autonomy in these five examples. So for the thermostat it's clear.
The environment is very static, the stimuli are just temperature, and meeting the goals is very simple, trivial: you have explicit control. Also for the shuttle, the environment is a bit more complicated, but the control here is static. When I was very young I worked on such projects in France.
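The thermostat's explicit, static control just described can be sketched in a few lines; the bounds, readings, and function name are made-up illustrations, not any real controller:

```python
# Minimal sketch of the thermostat example: a fully automated reactive
# system with explicit, static control logic (hypothetical bounds).
def thermostat_step(temperature, low=19.0, high=21.0, heating=False):
    """Return the new heater command for one control step."""
    if temperature < low:
        return True          # below the lower bound: switch heating on
    if temperature > high:
        return False         # above the upper bound: switch heating off
    return heating           # inside the bounds: keep the current command

# One simulated run over a few (invented) temperature readings:
state = False
commands = []
for t in [18.0, 19.5, 22.0, 20.0]:
    state = thermostat_step(t, heating=state)
    commands.append(state)
# commands == [True, True, False, False]
```

The point of the sketch is that the whole behavior is fixed in advance: no perception, no model of the environment, no goal management, which is why the thermostat sits firmly on the automation side of the line.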
And then you come to the chess robot, and here you have a very important change, because here the planning is online. You cannot define it statically, because the space of configurations is so huge that you have to do it online. And then, of course, for the soccer robot and the robocar you have big differences. For the chess robot the environment is static. For the soccer robot it's dynamic, but you have a given number of players. And for the robocar the game becomes very, very hard to understand.
So where is the separation line between automation and autonomy? It's clearly here, and I think what I'm going to show will perhaps make the distinction clearer. So what does an autonomous agent do? An autonomous agent is a reactive system that interacts with some environment: it receives sensory information from the environment and sends commands to the environment. This is the definition of a reactive system. And in order to achieve its goals it needs, first, to exhibit what I call situation awareness, that is, to understand what happens in the environment, and second, to have some adaptive decision process. So how is situation awareness achieved? By combining two functions: perception and reflection. Perception is typically done today using machine learning techniques; it is about receiving, say, frames and finding concepts in the frames. And reflection is something very important, because it means that you are able to build a model of the external environment. You cannot control your external environment without having a model of it.
So you have this model, which is a combination of a model of the external environment and a model of the internal environment. And then, what does adaptive decision mean? What does adaptivity mean? Adaptivity means that I have many possible goals; these goals may be conflicting, and I have to choose a subset of non-conflicting goals. And of course, how I choose these goals may be an optimization problem, if I refer to the talk of yesterday evening. Then, for each set of compatible goals, you should plan, and a plan is a sequence of actions, or rather something more complicated: an adaptive sequence of actions. You generate commands, and that's it. Now, if you are more ambitious, you want your autonomous agent to exhibit self-awareness and self-adaptation,
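The perceive-reflect-decide-plan pipeline just described can be sketched as one agent step; every function body here is an invented placeholder (the obstacle logic, goal names, and thresholds are not from the talk), meant only to show how the functions compose:

```python
# Hedged sketch of the agent architecture: perception, reflection
# (model building), goal management, planning, then commands.
def perceive(frame):
    # Perception: map raw sensory input (a "frame") to concepts.
    return {"obstacle_ahead": frame.get("distance", 100) < 10}

def reflect(model, percepts):
    # Reflection: update the model of the external environment.
    model.update(percepts)
    return model

def manage_goals(model, goals):
    # Goal management: "keep_speed" and "avoid_collision" conflict,
    # so keep only the subset compatible with the current situation.
    if model.get("obstacle_ahead"):
        return [g for g in goals if g != "keep_speed"]
    return [g for g in goals if g != "avoid_collision"]

def plan(goals):
    # Planning: map the selected goals to a sequence of actions.
    return ["brake"] if "avoid_collision" in goals else ["cruise"]

def step(frame, model, goals):
    percepts = perceive(frame)
    model = reflect(model, percepts)
    active = manage_goals(model, goals)
    return plan(active)

commands = step({"distance": 5}, {}, ["avoid_collision", "keep_speed"])
# commands == ["brake"]
```

Note how the decomposition is technology-neutral: `perceive` could be a neural network and `plan` a symbolic planner without changing the loop, which is exactly the point made about defining the functions independently of their implementation.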
because these are also important terms in the jargon of autonomous systems. What does self-awareness mean? It means that the agent can recognize situations that the perception module has not been trained to recognize. And self-adaptation means that it can create new goals that it did not have initially; it can create more knowledge than its innate knowledge, the knowledge it had at the start. This is a principle I already explained last year, so I have to go through it fast.
So, to summarize, for me autonomy is characterized by these five functions: perception, reflection, goal management, planning, and self-awareness/self-adaptation. And the definition of these functions can be made independently of the technology you use. Perhaps you want to implement perception using neural networks, but perhaps not, I don't know. This is a very abstract definition of what an autonomous agent does, and it also allows us to better understand the separation line between automation and autonomy. For instance, clearly a thermostat is an automated system, just because it does not need any of these functions.
And also this division between human-assisted and human-empowered autonomy. Now let me talk about something else. That is, what are the dominant criteria to decide whether I should trust the system?
I think there are two basic criteria to decide whether I will trust a system to perform a task. Each task has a degree of criticality, and this is independent of the system that will perform it. For instance, you have tasks like a flight control task, or controlling a nuclear plant.
These are systems that are safety critical. Now the problem is how to find a system that has adequate trustworthiness. So you have safety critical systems, you have mission critical systems, security critical systems, business critical systems, or best effort systems.
So most of the systems you use on the web are best-effort systems: they try to do their best, but without guarantees. And then trustworthiness is not only about functional correctness, but about the global correctness of the system with its hardware and its peripherals. So trustworthiness is measured by probabilities: the probability of failure, the probability of availability, and things like that. So there are techniques to characterize system trustworthiness.
And there are also recommendations. They say, for instance, that for this degree of criticality, you need this kind of trustworthiness. So typically, for a civilian aircraft, you need the probability of a single failure per hour of flight to be of the order of 10^-9. Don't ask me how this is computed, but when you apply for certification, you give arguments that corroborate this. This is the magic number, okay. So there is a correspondence between task criticality and system trustworthiness.
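The 10^-9 figure can be made concrete with a back-of-the-envelope calculation; the fleet size below is a made-up assumption, not a statistic from the talk:

```python
# Rough sense of the 1e-9-failures-per-flight-hour requirement:
# expected number of catastrophic failures across a hypothetical fleet.
p_failure_per_hour = 1e-9
fleet_flight_hours = 10_000_000   # hypothetical: total fleet hours in a year

expected_failures = p_failure_per_hour * fleet_flight_hours
# expected_failures == 0.01: about one expected failure per century
# of such fleet-years, which is why 1e-9 is the certification target.
```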
Here, for the sake of simplicity, I will assume that I have normalized both in the interval [0, 1]. So trustworthiness 0 means no guarantee at all, and 1 means full guarantees. Task criticality 1 means that failing will have a very severe impact on the environment, and 0 means no risk. So you understand that this bisector, what people call the automation frontier, separates the space into two regions. The green region is the region where you can trust systems; typically, here you find all the automated systems.
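The automation frontier just described reduces to a one-line decision rule; this is a simplification of the picture on the slide, and the example values are invented:

```python
# Sketch of the "automation frontier": a task of criticality c can be
# entrusted to a system of trustworthiness t when t >= c, with both
# quantities normalized to [0, 1] as in the talk.
def can_trust(trustworthiness, criticality):
    assert 0.0 <= trustworthiness <= 1.0
    assert 0.0 <= criticality <= 1.0
    return trustworthiness >= criticality

# A best-effort web service (low trustworthiness) on a low-criticality task:
print(can_trust(0.3, 0.1))   # True
# The same service on a safety-critical task:
print(can_trust(0.3, 0.9))   # False
```

Points on or above the bisector t = c fall in the "green" trustable region; everything below it does not.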
So here, these systems exhibit trustworthiness, or we know how to build systems that exhibit trustworthiness, matching the desired degree of criticality. And then, of course, you can also have human-trusted tasks.
Okay, like these ones, you can find others. And, of course, the question is what will happen for autonomous systems. And I think that for autonomous systems, we will go progressively. Remember, autonomous systems are called to replace humans in their mission, in complex organizations.
So my take is that we will do that progressively. Here is, so for autonomous cars, we talk about autonomy levels. You go from level 0 to level 5. 5 is full automation, so full autonomy of cars.
Level 4 means high automation, which means that the car will be able to drive in limited areas, geo-fenced areas, or under special circumstances. And then you go down to level 0. Okay, so what will be the situation for autonomous systems?
I think something like this: we should proceed progressively, so that for complex tasks the human operator is assisted by systems in his operations. And this is something that some people call symbiotic autonomy.
And for me, it's a very big challenge; I think it's the way to go. So what is the challenge? The challenge is the harmonious collaboration between humans and machines. This is not a matter of having a fancy human-machine interface; it's much deeper, because machines have a very explicit representation of knowledge, while humans need a very synthetic representation of knowledge. You know, there are famous cases of accidents with aircraft where there was this mismatch in the communication between machines and human operators.
So, we need protocols for humans to be able to safely override machines' decisions, to do that at the right moment, but also for machines to solicit the human agent's intervention. And this is a very delicate situation; I know a lot of stories about this happening in aircraft. Okay, it's delicate; we need theory about that. And this slide also explains this idea of symbiotic autonomy: for the five functions I have presented, a part can be ensured by machines and a part by humans.
So, you have a kind of division of work that should be carefully defined. And you have probably heard that recently a very fashionable term is the tele-operated autonomous vehicle. Seeing that fully autonomous vehicles are hopeless today, some companies try this idea of tele-operated autonomous vehicles. The idea is that you have autonomous vehicles, or partially autonomous vehicles under some conditions, and when the vehicle is in a situation it cannot manage, it calls the human operator.
So, it stops safely and calls the human operator, and one human operator can manage many, many vehicles remotely. That's the idea. Okay, now let me say that trustworthiness also has a subjective dimension, where institutions play a very important role.
And, of course, you know the famous case of Galileo. Some institutions say, oh, this guy is wrong, should be burned. And a hundred years later, some other institutions, academic institutions say, oh, he's right, that's great. Also, especially in modern societies, institutions elaborate the public perceptions about what is good,
what is bad, what is true, what is false, et cetera. And, in particular, now talking about systems and artifacts in general, you have agencies that are defining standards, rules, and are monitoring that they are applied in the right manner.
So, typically, in the United States you have the FDA, the FAA, the NHTSA for cars, et cetera. And I would also like to note that critical-system standards always enforce rigorous design techniques.
And, of course, it is the first time that we see, with Tesla cars or other cars, that these techniques are not applied. These rigorous design techniques are applied to any artifact: you buy a toaster, and it's certified; it's certified that it will not kill you if you use it in a proper manner. For toasters, for fridges, for whatever you buy, and of course for bridges and aircraft. So this is something new in the history of humanity. Let me finish this part by saying that other factors can distort this ideal automation frontier.
In some cases, for low criticality, performance plays a role, so we don't care if trustworthiness is not good enough. And the frontier is also distorted in the opposite manner, because usually humans have a bias, I think, against machines. I mean, if a human has an accident, you can justify why; but if a Tesla car has an accident, people discuss it a lot and are very sensitive to that. Okay, let me move on to talk about knowledge truthfulness.
So, yesterday in the first talk, Professor Bengio discussed this difference between system one and system two thinking. Just to say that already for humans we make this distinction between, say, intuitive thinking and rational thinking. And I think there is an interesting analogy here between neural networks and conventional computers. Neural networks are closer to fast thinking, because they produce knowledge that is empirical, based on empirical evidence.
For computers that execute programs, you are model-based, and this is quite different. So let me also present my pyramid of knowledge, as Professor Bengio did; mine is more refined. So, knowledge truthfulness.
At the bottom you have facts and syllogisms; syllogisms here are just Boolean combinations of facts. A fact is, for instance, that the temperature today is 20 degrees. Then you have implicit empirical knowledge, which is a generalization of facts; it can be statistical knowledge. Here we are introducing logic, some kind of quantification, some kind of generalization.
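The bottom layer of the pyramid can be illustrated in a few lines; the facts below are invented for the example, with only the 20-degree one taken from the talk:

```python
# Facts are atomic observations; "syllogisms" in the talk's sense are
# just Boolean combinations of facts.
facts = {
    "temperature_is_20_degrees": True,   # the fact quoted in the talk
    "it_is_raining": False,              # an invented extra fact
}

# Two Boolean combinations of facts:
mild_and_dry = facts["temperature_is_20_degrees"] and not facts["it_is_raining"]
raining_or_mild = facts["it_is_raining"] or facts["temperature_is_20_degrees"]
# mild_and_dry == True, raining_or_mild == True
```

Everything above this layer in the pyramid adds some form of generalization that goes beyond combining already-observed facts.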
And this is, of course, knowledge that we acquire through our senses. Then you have scientific and technical knowledge, and this is model-based knowledge. Why? Because it's justified by models. Remember Newton: when he discovered his laws, he also had to develop
the mathematical theory, the calculus, that can be used to explain his laws. And then you have mathematical knowledge. This, of course, is eternal knowledge: if you take the axioms as true, then the Pythagorean theorem is an eternal truth.
And then you have meta-knowledge. So, just to introduce some vocabulary: data-based versus model-based, this is well understood. And then some vocabulary about the evidence that is provided; this vocabulary I have taken in particular from standards.
You say that you have sufficient evidence when you can pass some tests. Conclusive evidence, which is provided by scientific and technical knowledge, means that it cannot be falsified by any other experimental evidence.
And irrefutable evidence means that, if you accept the mathematics, it cannot be refuted at all.
Now, with the advent of data analytics and machine learning, you have this new pyramid. And here it is clearly understood that for machine learning and data analytics you have predictability, but you don't have explainability.
Of course, this is standard terminology I'm using, but if you have objections, we can discuss that. Now, let me finish this part by giving a comparison between the way scientific knowledge is produced and the way machine-learning knowledge is produced.
So, in the first experiment, I am Galileo: I'm applying forces, and I am observing that the acceleration is proportional to the force. In the second experiment, which is a mental experiment, I look at many images and I distinguish between cats and dogs. So, images of cats and dogs.
Now, Mr. Galileo will observe this and say: okay, I will generalize, there is a proportionality. And what can you say about the neural network? In principle, nothing, okay? Simply because you cannot formalize, say formally, what the difference between a cat and a dog is.
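The contrast between the two experiments can be made concrete with a minimal sketch (not from the lecture; the data values are invented): Galileo's observations admit an explicit, falsifiable generalization, a proportionality law, whereas a cat/dog classifier yields no comparable formal statement.

```python
def fit_proportionality(forces, accelerations):
    """Least-squares fit of a = k * F through the origin:
    an explicit law that can be tested against new observations."""
    num = sum(f * a for f, a in zip(forces, accelerations))
    den = sum(f * f for f in forces)
    return num / den

# Noiseless observations for a body of mass 2 kg (a = F / m).
forces = [1.0, 2.0, 3.0, 4.0]
accelerations = [0.5, 1.0, 1.5, 2.0]

k = fit_proportionality(forces, accelerations)
print(k)  # 0.5, i.e. 1/mass: a formula we can state, justify, and falsify
```

A trained neural network separating cat images from dog images offers no such recoverable formula; its generalization stays implicit in the weights.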
It's as simple as that. Now, let me move on and explain my ideas about design for trustworthiness and performance. So, the principle, first of all. When I was younger, I worked on avionics projects, on how to develop systems with this model-based approach.
So, you write the software, you try to prove that it is more or less correct, and there are techniques for that. And then you generate an implementation, and there are techniques for that too. These are standard techniques that are applied today in critical systems engineering.
On the other hand, you have these guys from Waymo, say, and they come and say: oh, this is too complicated to apply. And I agree with them; you cannot apply this type of approach to self-driving cars today. We don't have the know-how. So they say: let's take a huge neural network and train it.
The input here is just, say, images from cameras, and the output is simply acceleration, deceleration, or the steering angle. So, very simple. It's an end-to-end system.
And you train it, you train it for millions and millions of kilometers, and then you say it's good enough, so let's deploy it. My opinion here is that we should try to combine the two approaches, by taking advantage of the fact that some functions
we can implement model-based, trustworthy enough, and some others we will have data-based. So, this is my idea. And, of course, I don't have full answers about how to do that; I give some elements of an answer.
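The combination just described can be sketched as follows (all names and categories invented for illustration): a data-based perception component maps sensor input to one of a fixed set of environment configurations, and a model-based decision layer, an explicit table that can be reviewed and verified, selects the maneuver.

```python
# Data-based part would be a learned classifier; here a stub stands in.
CONFIGURATIONS = {"free_road", "vehicle_ahead", "intersection", "obstacle"}

# Model-based part: an explicit, reviewable configuration -> maneuver table.
MANEUVER = {
    "free_road": "cruise",
    "vehicle_ahead": "follow_or_overtake",
    "intersection": "yield_and_cross",
    "obstacle": "brake_and_avoid",
}

def perceive(sensor_input):
    """Stand-in for the learned perception function. The strong assumption
    is that it always returns a member of CONFIGURATIONS (completeness)."""
    config = sensor_input["label"]  # a real system would run a classifier
    assert config in CONFIGURATIONS, "perception must cover all configurations"
    return config

def decide(sensor_input):
    """Model-based decision on top of data-based perception."""
    return MANEUVER[perceive(sensor_input)]

print(decide({"label": "vehicle_ahead"}))  # follow_or_overtake
```

The point of the split is that only the perception function remains opaque; the decision logic stays amenable to the standard verification techniques mentioned above.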
Something else that is very important, and often ignored, is fault detection, isolation, and recovery. I am going to explain why this is hard, and why its implementation is so important: because trustworthiness is a global property of the system.
So, let me say a few words about that. When I apply the model-based approach, I have a clear distinction between the trustworthy states and the non-trustworthy states of the system, and this is characterized by a set of predicates. The nominal behavior, corresponding to the behavior of the software, should stay within the trustworthy states.
Then there is another important phase to guarantee global trustworthiness, which consists in doing some risk analysis and guaranteeing that when some hardware fails, some device fails, the system does not go directly to a fatal state, but to some non-fatal state.
This is achieved by using different techniques: coding, modular redundancy, and things like that. So, you go to a non-fatal state, and then you use some detection, isolation, and recovery mechanism to eventually bring the system back into a trustworthy state. This is standard practice, applied across the critical systems industry, and we know how to do it.
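The standard practice just described can be sketched in a few lines (a hypothetical example; the state variables, predicates, and thresholds are invented): predicates partition the state space, redundancy keeps a failure from reaching a fatal state, and a detection/isolation/recovery step drives the system back into the trustworthy region.

```python
FATAL_TEMP = 120.0    # above this, the state is fatal (invented threshold)
NOMINAL_TEMP = 90.0   # at or below this, and no failed sensor: trustworthy

def classify(state):
    """Predicates characterizing trustworthy / non-fatal / fatal states."""
    if state["temp"] > FATAL_TEMP:
        return "fatal"
    if state["temp"] > NOMINAL_TEMP or state["sensor_failed"]:
        return "non-fatal"  # degraded but recoverable
    return "trustworthy"

def detect_isolate_recover(state):
    """Detect the failure, isolate it, and recover toward nominal."""
    if state["sensor_failed"]:
        state["sensor_failed"] = False  # switch to a redundant sensor
    if state["temp"] > NOMINAL_TEMP:
        state["temp"] = NOMINAL_TEMP    # recovery action
    return state

state = {"temp": 105.0, "sensor_failed": True}  # after a device failure
assert classify(state) == "non-fatal"           # redundancy avoided a fatal state
state = detect_isolate_recover(state)
assert classify(state) == "trustworthy"
```

The key property is global: every reachable state must be classified, and every non-fatal state must have a recovery path back to a trustworthy one.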
Now, the problem is that this is not practicable today, and it's not practicable because of the presence of machine-learning components, but also because the complexity of the causes of failure in autonomous systems is huge.
I let you contemplate this; it is from a document provided by the DOT. Try to read and analyze it.
I've done this type of exercise for avionics systems and, believe me, it's much, much, much simpler. So now people say: we will find some assurance techniques, and this is a new buzzword for non-certified systems. The idea here is that we have monitors that will be actively monitoring the system,
and if they detect some deviation, some abnormal behavior, they try to stop the system or do something to mitigate it. And then, another important idea in model-based approaches is that you use mathematics to provide guarantees.
So, this is an example of the reasoning you can find in a paper by Mobileye, which is a subsidiary of Intel. They show how to compute the safe distance between two vehicles; the mathematics here is at high-school level.
I show this because it's important that companies like Mobileye, and now also NVIDIA, are advocating model-based approaches, because they say that fully AI-enabled autonomy is not realistic. So, this is the reason I'm showing it. But what I would like to say is that all this is just the beginning,
and safety cannot be dissociated from performance. It cannot be dissociated from performance for the simple reason that, okay, in a car you are safe when you stop,
but it's not wise to stop in just any position. For instance, if I am overtaking on a two-way road, it's not wise to stop, okay? So, the problem is much, much more complicated; what they are saying is very simplistic.
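The safe-distance reasoning mentioned above can be sketched with a formula in the style of Mobileye's Responsibility-Sensitive Safety model: the rear car may accelerate during its response time before braking, while the front car brakes hard. The parameter values below are invented for illustration.

```python
def rss_safe_distance(v_rear, v_front, rho, a_max, b_min, b_max):
    """Minimum safe longitudinal distance, RSS-style (all values >= 0):
    the rear car accelerates at a_max during the response time rho, then
    brakes at b_min, while the front car brakes at b_max."""
    d = (v_rear * rho
         + 0.5 * a_max * rho ** 2
         + (v_rear + rho * a_max) ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(d, 0.0)  # a negative result means any gap is already safe

# Invented example: both cars at 20 m/s, 1 s response time; the rear car
# may accelerate at 2 m/s^2 and brakes at 4 m/s^2; the front brakes at 8 m/s^2.
d = rss_safe_distance(20.0, 20.0, 1.0, 2.0, 4.0, 8.0)
print(d)  # 56.5 metres
```

This is exactly the point about performance: the formula tells you when stopping keeps you safe, but not whether stopping, here and now, is a wise maneuver.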
So, how to solve the problem? Let me say a few words about this hybrid autopilot design I am working on. The idea is, first of all, to dissociate situation awareness, which can be machine-learning enabled, from adaptive decision, which I want to have all model-based if I can.
And then I assume, and this is a strong assumption, that the perception function can recognize a well-defined and complete set of environment configurations. So, what are the important environment configurations? Configurations that require a specific maneuver, and there are documents about
what the important maneuvers for cars are. So, my decision process will be hierarchically structured. For lack of time I am not going to explain this, but I am working on it; if you have questions, I can explain. And then, validation is very important, and here I have a few slides about validation;
I don't have time to explain all this. I think that modeling and simulation are of paramount importance. Even if you have a monolithic autopilot that is fully AI-enabled,
you have to validate it through simulation. And here I give a few requirements that simulation systems should meet. They should be realistic, of course, but they should also allow semantic awareness, which is a very important concept in simulation. What does semantic awareness mean? Your simulator should allow you to understand what the state of the system is,
to set the system to a given state, to have repeatability, and to have coverage criteria. Most of the industrially available simulation systems do not meet these requirements. And, okay, I am wondering what the value is of the results that are reported by Waymo.
Technically, this claim is completely void; it does not mean anything. Because if you test, you have to provide coverage criteria: okay, how many cases have you covered? That's what you do when you test software modules, for instance.
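The simulator requirements just listed can be sketched as an interface (entirely invented names; a hypothetical design, not an existing tool): semantic awareness means the state can be read and set, repeatability means the same seed yields the same run, and coverage is tracked over the configurations of interest.

```python
import random

class Simulator:
    """Minimal sketch of a semantically aware, repeatable simulator."""

    def __init__(self, seed):
        self.rng = random.Random(seed)  # repeatability: seeded randomness
        self.state = {"scenario": None}
        self.covered = set()            # coverage bookkeeping

    def get_state(self):                # semantic awareness: read the state
        return dict(self.state)

    def set_state(self, state):         # semantic awareness: set the state
        self.state = dict(state)

    def step(self, scenarios):
        scenario = self.rng.choice(scenarios)
        self.state["scenario"] = scenario
        self.covered.add(scenario)

    def coverage(self, scenarios):      # fraction of scenarios exercised
        return len(self.covered) / len(scenarios)

scenarios = ["cut-in", "pedestrian", "merge"]
sim_a, sim_b = Simulator(42), Simulator(42)
for _ in range(10):
    sim_a.step(scenarios)
    sim_b.step(scenarios)
assert sim_a.get_state() == sim_b.get_state()  # same seed, same run
```

With such an interface, a claim like "we simulated N kilometers" can be replaced by a coverage figure over an explicit set of scenarios, which is what makes test results meaningful.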
And then I have some slides about validation and verification; I think I'll skip these, because this is standard. This slide is about formalizing requirements. It is interesting to understand that if we want to validate systems, we should understand the requirements, and today the requirements are a mess.
So this is a list of requirements for self-driving cars from a project, the California PATH project, which was started in Berkeley. I was in Berkeley with Varaiya when the project started, at the beginning of the 90s. I suppose the project is still active.
And here I let you contemplate how hard it is to formalize these requirements; it's almost impossible. So the message here is that we should do something: we should understand how much of the formal approach is applicable and what the limitations are. The challenge is huge.
Now, just to finish the discussion, I have two slides about the future. One thing is what the role of authorities will be: what they will allow, and what they will not allow. I have explained that today, in the United States,
authorities are very permissive. Some people consider that this is a provisional situation, but it can become permanent, okay? And today they require only sufficient evidence, so only testing, not the conclusive evidence that is provided only by model-based techniques. Then another important issue is social awareness: what the public says,
how people feel about self-driving cars and all these systems. So let me note that in Europe we apply a precautionary principle; this is at the basis of European policies and legislation.
It says that if something is harmful and we cannot precisely assess the harm it can cause, then we don't use it, whatever the benefit may be. And this is not applied everywhere on the planet. Another crucial question is whether people will accept granting the power of decision to systems.
And just to ask a provocative question here: if you had a system that could solve any problem you submit to it, any problem, okay, would you be happy? You know that it can solve it, okay? Would you be happy, as a human, to have a system that solves everything,
that fulfills all your desires? Would you be happy? Personally, I would not feel very comfortable with that. So, just to conclude, I think that we should go beyond the debate that opposes data-based and model-based approaches
and prepare the way for a smooth and progressive transition through the different levels of autonomy. I will stop here. Thank you for your attention. Thank you.