
Virtual HLF 2020 – Talk: Joseph Sifakis


Formal Metadata

Title
Virtual HLF 2020 – Talk: Joseph Sifakis
Subtitle
Why is it so hard to make self-driving cars? (Trustworthy autonomous systems)
Series Title
Number of Parts
19
Author
License
No Open Access License:
German copyright law applies. The film may be used free of charge for personal use, but may not be made available on the internet or passed on to third parties.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
Why is self-driving so hard? Despite the enthusiastic involvement of big technological companies and the massive investment of many billions of dollars, all the optimistic predictions about self-driving cars “being around the corner” went utterly wrong. I argue that these difficulties emblematically illustrate the challenges raised by the vision for trustworthy autonomous systems. These are critical systems intended to replace human operators in complex organizations, very different from other intelligent systems such as game-playing robots or intelligent personal assistants. They have to understand dynamically changing situations in unpredictable dynamically changing environments. They have to manage many different potentially conflicting goals and plan actions for achieving them. Last but not least, they have to interact safely with human operators. I discuss complexity limitations inherent to autonomic behavior but also to integration in complex cyber-physical and human environments. I argue that traditional model-based critical systems engineering techniques fall short of meeting the complexity challenge. I also argue that emerging end-to-end AI-enabled solutions currently developed by industry fail to provide the required strong trustworthiness guarantees. I conclude that building trustworthy autonomous systems goes far beyond the current AI vision and advocate a new scientific and engineering foundation addressing this unique and groundbreaking challenge.
Transcript: English (auto-generated)
introduce to you the next speaker, Joseph Sifakis. He received the ACM Turing Award in 2007 together with Edmund Clarke and E. Allen Emerson for their role in developing model checking into a highly effective verification technology that is widely adopted in the hardware and
software industries. And he will talk about why is it so hard to make self-driving cars or any trustworthy autonomous systems. Joseph, the floor is yours. Yes, thank you very much. So I'm going to talk about autonomous systems. These are systems that emerge from the needs
to further automate existing organizations by replacing humans completely. And these are essential in the so-called IoT vision, the Internet of Things vision; you have probably heard about that. So these systems are very different from game-playing robots or intelligent personal assistants
because they are critical systems that should exhibit broad intelligence by handling possibly conflicting goals. Also, they have to deal with uncertainty and interact with complex
cyber-physical environments. And finally, they have to harmoniously collaborate with human agents. This is something I am going to explain in this talk. So I think that autonomous vehicles
are a very interesting example, because it is an application that can be easily understood. And of course, the underlying societal and economic stakes are huge. So I've chosen this example. And also, I think that building autonomous transport systems
would be a huge step toward closing the gap between machine and human intelligence. So let me say that a few years ago, we had a lot of expectations regarding autonomous
vehicles. Big tech companies have invested massively in that area, more than 100 billion dollars. And people have been very optimistic, overoptimistic, I would say, about the possibility
to have self-driving cars everywhere in the near future. And now, car manufacturers have to revise their ambitions. And also, I would like to say that this overoptimism
led to some misconceptions that show that we lack a good understanding about the nature of the problem and of the underlying technical difficulties. So let me say that today, there are two different technical avenues for building autonomous
cars. One is the traditional model-based approach of critical systems engineering, which has been successfully applied to aircraft and production systems. But this technique has proved to be inadequate because of the overwhelming complexity of autonomous systems. The other approach, taken by industrial players,
consists in developing end-to-end AI-enabled solutions. We have some systems like, for instance, the Waymo driver, but they fail to provide strong trustworthiness guarantees.
So in my talk, I will try to explain why it is so hard to build autonomous systems. Then I will discuss trustworthiness, when we trust a system to perform a task, and then I will discuss some system design issues. So let me try to explain what an autonomous agent is, and to what extent an autonomous agent is different from an automated system. An autonomous agent is a reactive system. It interacts with an external environment and an internal environment through sensors and actuators. As you see in this slide, it receives sensor
information and provides commands. And in fact, it uses knowledge; here you see a knowledge repository. And in this architecture, it has three modules: one for situation awareness, to understand what happens in the environment, one for making decisions, and one for managing knowledge. So let me show how the information flows. It receives sensor information and it has a perception module that analyzes, say, frames and detects concepts, objects in the external environment of the car, for instance. Then this information is used by another function I call reflection, and reflection is used to build the models of the external and internal environment. And these models are used by the adaptive decision process. And
this integrates two interactive functions: one for goal management, since you can have many different goals in a self-driving car, and, for each goal, planners that will determine the commands to be sent to the actuators. So I hope you understand this model. Additionally, there is another function that I call here self-learning, which is not so much developed for artificial agents. This is for the discovery of new concepts and new situations, and for self-adaptation, the creation of new goals. So this is an architecture that explains what an autonomous system is. And I hope you also understand the big difference that exists between autonomous systems and automated systems. For instance,
if you consider a thermostat, a thermostat has a very well-defined external environment and very well-defined goals. So now you can also understand why it's hard to build autonomous agents: perception can be a very, very hard problem, and typically the perception problem is solved today by using neural nets. And you also have to deal with the uncertainty of the external environment. I hope you understand the concept of uncertainty; it implies lack of predictability. So the problem is how to build a faithful model of the external environment, based on which we will make decisions. And, of course, you also have another type of complexity, the complexity of decision. And this has to do with the type of goals
we have to deal with. You can have short-term goals and long-term goals. Say I'm here and I'm driving to Paris: this is a long-term goal, while a short-term goal is a safety goal or a security goal. And then you have mid-term goals. And of course, the planning problem is also very, very hard. So I hope this explains why autonomous agents are hard to build. And I think this slide also helps you understand the difficulties. These are the so-called autonomy
levels for self-driving cars. So you have from level zero to level five, five means full autonomy,
level zero means no automation. The first three levels are about automation: you have what we call ADAS, advanced driver assistance systems. And from level three on, the autopilot controls the movement of the vehicle, and you have these three levels of autonomy.
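The levels just described can be sketched as a small classification. The names and the two predicates below are an illustrative paraphrase, not the exact wording of any standard:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Driving autonomy levels 0-5, paraphrased for illustration."""
    NO_AUTOMATION = 0         # the human driver does everything
    DRIVER_ASSISTANCE = 1     # ADAS: e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2    # ADAS: steering and speed, driver monitors
    CONDITIONAL_AUTONOMY = 3  # autopilot drives; driver must take over on request
    HIGH_AUTONOMY = 4         # no driver attention needed in defined conditions
    FULL_AUTONOMY = 5         # no human driver at all

def autopilot_controls_vehicle(level: AutonomyLevel) -> bool:
    # From level three on, the autopilot controls the movement of the vehicle.
    return level >= AutonomyLevel.CONDITIONAL_AUTONOMY

def human_takeover_expected(level: AutonomyLevel) -> bool:
    # Supervised autonomy: the autopilot drives, but the driver
    # should take over in critical situations.
    return level == AutonomyLevel.CONDITIONAL_AUTONOMY
```

Note that at level three both predicates hold at once: the machine drives, yet the human remains the fallback. That is exactly the supervised-autonomy tension discussed next.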
I'm not going to comment more on that. There is a critical level, supervised autonomy, where the autopilot is driving the car and the driver should take over in critical situations. And this is very hard to achieve. Now, let me say that once you have an agent, the problem is not solved at all, because you also have systems engineering issues that are very often ignored. One important problem is what we call the reactive complexity of agents. You can classify agents according to the complexity of their interaction with the environment. And this, of course, has nothing to do with space complexity or time complexity. So you have transformational systems that take data and provide
data. Or you have streamers, which receive sequences of data and produce sequences of data. Then you have embedded agents: these agents interact continuously with some physical environment, so they have to produce responses depending on the stimuli they receive. And then cyber-physical systems are systems that integrate embedded components; each component has an embedded system inside. This type of component is very hard to design and to compose, and this is the type of component we need for self-driving cars. Now, another aspect of systems engineering complexity is architectural complexity. I have agents, I have self-driving cars, and now I have to coordinate their movement, and for this I need architectures. So what I call architectural complexity characterizes the intricacy of the evolution of the coordination of the agents in time and
space. Of course, an architecture can be static, as in hardware, or it can be parametric: you have coordination that is parameterized and can work for any number of components. Then you can have dynamic architectures, where the coordination rules depend on time. And then you can have mobile architectures, where coordination rules depend both on time and space. Then you have self-organizing architectures, where you additionally have clusters of agents that are coordinating; that's what you need, in fact. To summarize, this slide shows systems engineering complexity and why, in order to build autonomous systems, there are a lot of difficulties:
you have cyber-physical components and self-organizing architectures. So now, let me talk about trustworthiness: when do I decide that I trust a system to perform a task? Here, in fact, there are two factors. One is the task criticality and the other is the system trustworthiness. In systems engineering, there is a correspondence between task criticality and the required level of trustworthiness. Trustworthiness can be measured as the probability that the system behaves as it is expected to. Just to explain the idea, here I have normalized both levels in the interval zero to one. So trustworthiness one means that I trust my system fully, and criticality one means that the system is absolutely critical. You can imagine now that the bisector, which we call the automation frontier, defines two regions: the green region, where I will trust the system, and the red region, where I cannot trust the system, so the task will be performed by humans. So here
I'm giving some examples for automated systems: for instance, for an aircraft landing system, the task criticality is very high, but I have the technology for solving this problem. And there are other tasks where I cannot trust the systems, for instance investing or teaching. Okay, so that's the situation for automated systems. And it is expected that for autonomous systems, we will go progressively from zero autonomy to full autonomy. And
at some point, we will have to develop systems where humans and the systems collaborate, and we will have a kind of division of work between humans and machines. And this raises very, very interesting problems. You should define the appropriate autonomy level, and you should define protocols and rules about how humans and machines can interact. For instance, a human agent should not suddenly override the machine's decision without respecting the rules, because something bad may happen. And conversely, when the machine asks a human agent to intervene, there should be some rules for that, because otherwise, we know that there may be very serious problems. Okay, now, let me finish
by saying a few words about design and the problems that autonomous system design raises. I said that we have very good technology for building critical systems like aircraft and production systems. These systems are developed according to rules prescribed by standards, and we have very well-defined design flows, as depicted in this picture. This is the so-called V-model: you start from requirements, you refine the requirements, you decompose the system into modules, and you code the modules; and then you have a flow that goes upstream, where you test the modules and then integrate the modules, etc. So this is a very well-established technology, and we know how to develop very reliable systems. These flows are model-based, and you can of course evaluate the probability that the system fails: for avionics systems, civilian avionics, you allow ten to the minus nine failures per hour of flight. But as I said, these techniques are not applicable to autonomous systems. And this explains the fact that industry
has adopted machine-learning-enabled techniques that are end-to-end techniques. So the situation today is the following. On the one hand, we know how to build automated systems that are very trustworthy. And on the other hand, some companies have developed end-to-end solutions: here you have, say, a huge neural network that receives frames and produces acceleration and deceleration signals and commands for the steering wheel.
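Schematically, the end-to-end approach collapses the whole driving pipeline into a single learned function from camera frames to low-level commands. The sketch below is purely illustrative, a toy linear "network" rather than any vendor's actual system; its point is the interface: there are no inspectable modules between input and output.

```python
import math
import random

def end_to_end_policy(frame, weights):
    """Toy stand-in for a huge neural network: pixels in, commands out.

    frame:   flat list of pixel intensities in 0..255
    weights: two weight rows (acceleration, steering), same length as frame
    """
    features = [p / 255.0 for p in frame]                  # normalize pixels
    raw = [sum(w * f for w, f in zip(row, features)) for row in weights]
    acceleration, steering = (math.tanh(v) for v in raw)   # bounded commands
    return acceleration, steering

random.seed(0)
frame = [random.randrange(256) for _ in range(16)]         # toy 4x4 "camera frame"
weights = [[random.gauss(0, 1) for _ in range(16)] for _ in range(2)]
acceleration, steering = end_to_end_policy(frame, weights)
# Nothing between input and output is a model we can inspect or verify;
# trust in such a pipeline rests entirely on test statistics.
```

This opacity is exactly why, as argued next, such solutions struggle to provide strong trustworthiness guarantees.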
So I think that, although commercially available solutions like these already exist, they will not be accepted, just because we don't understand how these systems work, and they will not be accepted by certification authorities. So I am advocating a hybrid approach, where we will build the autonomous system by combining modules that are model-based and others that are enabled by AI techniques. So typically, perception will be machine-learning-based. I think this is a very, very challenging idea, and personally I'm working on that. It raises many, many interesting questions about how to combine model-based and data-based techniques. Now, to finish, I would like to
come to another issue, the issue of validation of systems. An important question is when an autonomous system is safe, given that autonomous systems are very complex and cannot be validated like software; for software, you can apply verification techniques, and we have well-established testing techniques. And here there are some companies that say: I have a simulator, I have driven with my autopilot for 10 billion miles, and this is good enough. They also give other arguments, because we have very good statistics about the rate of failure per mile driven. Based on these statistics, you can compute how many miles you need to drive without failure for a given level of confidence. So if the level of confidence is 95%, you should drive 291 million miles without any fatality. Let me say that this argument is not technically tenable; I mean, it's flawed. And the reason is very simple. The question is: what is the correspondence between the simulated miles and the real miles? For this, you should give a technical answer, because, you see, you can drive your car on a highway under uniform conditions, and this has no value. Okay. So the point is that we need evidence that simulation covers a good deal of the real situations. And in industry today, you have
simulators that are based on game engines. So you have realism, but a realistic simulation is not necessarily faithful, because you need to have an underlying semantic model; it's not enough to have fancy simulation systems. They should respect the laws of geometry and physics, and it is perfectly possible not to respect them. So we need theory. I'm working on that: validation theories that are based on a fairly complex semantic simulation model that is also used by simulators. We have notions of coverage, like the notion of coverage we have for testing software systems. Also, I think a very interesting idea is how to control complex transport systems to create critical situations and explore corner cases. And then, I think, the diagnosis should also be more refined: not just that there is an incident, but also analyzing the factors of the incidents. And for
this, of course, we need theory, not only technology. Now, to finish my discussion, I would like to say something that is often neglected: full autonomy will not come just like that, because humans are still much, much superior to machines in understanding real-life situations, because humans have common-sense knowledge and reasoning. In fact, our mind is equipped with a semantic model of the world. We don't know how this semantic model is built, but this is the way we understand the world, and it has been updated since our birth. So the big difference between machine learning systems and human understanding is that human understanding combines bottom-up reasoning, from the sensory level to the semantic model, with top-down reasoning, from the semantic model to perception. So, for instance, if I see, say, a stop sign with some snow on it, I can understand that this is a stop sign. If I see this aircraft, then without knowing anything about the laws of physics or whatever, I can understand that this is a bad situation. Or if I show you this photo of a father with a child, there is no need to explain who is the father and who is the child. So it would be very, very hard to match human intelligence using machines if we cannot
build, in my opinion, such a semantic model. Now, just to conclude, because time is pressing: I tried to explain that the autonomous system challenge is not only about intelligent agents; it is not about systems playing chess or something like that. Also, I think it's very important that we combine AI solutions with the well-established technology and results that exist for critical systems. And I think that nobody would believe that your system is good enough if validation is not model-based; I have explained this. Also, I would like to say that there are still some misconceptions. People do not understand the huge gap that exists between automated and autonomous systems, so they believe that we will go gradually from automated systems to self-driving systems. This is a completely wrong idea. And also, I think that supervising autonomous cars on autopilot is a very hazardous idea; we have
results that show this. So, to summarize, I think that to reach the vision, we need to develop a new scientific and engineering foundation. This would probably take some time, probably a few decades. Thank you very much for your attention. Thank you very much for your interesting talk. So we have some time for one or two very short questions. I would like to start with one that arises, I mean, where you emphasized the difference between automated and autonomous systems in the end. So, for example, if we think of self-driving cars, we also have the problem of this transition period where we have self-driving cars as well as human-driven cars. Yes. So do you think this transition would be particularly challenging? Or do you think it will?
How should we deal with that? Yeah, so the transition from automated to autonomous would be very, very challenging, because in one case you have very well-determined tasks, in ADAS, and the driver has the overall responsibility. So everything happens under the driver's responsibility: you know when the tasks start, and the driver can stop a task at any time. Now, if you have an autonomous system, the machine will have the control of the car. Then, if you want to take over, and this also happens in many even automated systems, for instance in aircraft on autopilot, if the pilot wants to take control, you should make sure that you understand the dynamic situation of the aircraft or of the car. And also, if the autopilot says "now you take over," then it should be possible for the human to react. But humans notoriously fall asleep or play with their phones if they have to supervise; okay, we know the story. That's a very dangerous situation. Frankly speaking, I think that we will go directly to full autonomy someday, okay;
there will be no transition, because, I mean, that's a very challenging problem. And this is not a human-machine interaction problem; okay, it's not a problem of defining the right interface, it is a problem of mutual understanding. Machines are much more analytical, while humans have much more synthetic reasoning. I mean, I have experienced this with verification systems: if you try to do a verification by hand, you want to be assisted by the machine, but when the machine gives you back the hand and says, oh, this is my diagnosis, it's very hard to make sense of it. I can't say a lot of things because I'm passionate about this stuff. Okay, so perhaps let's pass to another question if we have time.
Yeah, so there's one other one from one of the young researchers. So do you think it will be more challenging to convince people to use self-driving cars than to build them? Yes, okay, yes. Because today we have self-driving cars, or at least we have the technology, but people are not convinced about that. Also, of course, you should convince the authorities, the certification authorities, but you know, there is a big, big difference in the attitude of certification authorities. In the United States, they allow what they call self-certification, so the car manufacturer can say, okay, my car is good enough. But this is not an idea that would be accepted in Germany, or, I mean, in Europe, okay, because the criteria are much more demanding. This is a very good thing. So I think it will take some time for public opinion to get convinced, and people are also very careful.
Even from a psychological point of view, I think that when there is an accident and the accident is a human's responsibility, okay, a human can say, oh, I did this, okay. But now, if a machine causes an accident, and this has happened in the United States, people do not accept the idea. And the machine cannot say anything about that; it does not even have any responsibility. So the situations are very, very different. Thank you very much. So we are at the end of our time, unfortunately.
Okay, thank you very much. Thank you very much.