9th HLF - Laureate Discussion
Formal Metadata
Title: 9th HLF - Laureate Discussion
Number of Parts: 11
License: No Open Access License: German copyright law applies. This film may be used for your own use but it may not be distributed via the internet or passed on to external parties.
Identifiers: 10.5446/59606 (DOI)
Transcript: English (auto-generated)
00:28
to a format which we call Laureate Discussions, a format we actually introduced during the two digital years, the virtual HLF and the 8th HLF, which we held online
00:42
last year. Many people liked it, so we decided to also try this in person: to have a group of laureates on stage discussing. And it's my great pleasure to introduce the moderator of our first Laureate Discussion, Vicki Hansen. Vicki Hansen was the president of the Association for Computing Machinery from 2016 to 2018.
01:06
And then in 2018, she took on the role of CEO of the Association for Computing Machinery, actually the first woman to serve in this role. She has been a regular visitor of the HLF, and it's great to have her back here in person in Heidelberg.
01:24
And Vicki, the stage is yours to introduce the Laureates and moderate the discussion. Hello, everyone.
01:40
Yeah, I'm absolutely thrilled to be here today, but you don't have to listen to me for a long time. I'm here to moderate a panel with three of our very distinguished ACM laureates. So first, let me introduce Vint Cerf. Oh, you're close. Good for you.
02:00
Vint is coming up. So Vint, well, I think you all know him. He won the 2007 Turing Award for... no, that's not right, I'm already off to a poor start: the 2004 Turing Award for his work on the design and implementation of the internet's basic communications protocols, TCP/IP, right?
02:24
That's it. That's it. Okay. The second Turing laureate up here is Leslie Lamport, from Microsoft, who was the 2013 Turing Award recipient for fundamental contributions to the theory and practice of distributed and concurrent systems.
02:42
And the third laureate is Joseph Sifakis. He was the 2007 Turing Award recipient for his work on model checking and its use in verifying many important systems. So thank you all for being here.
03:04
Okay. So I'm going to start with Leslie here. Yeah. You look shocked. Okay. So I intentionally gave everybody an incredibly quick introduction because I wanted each of you to talk about yourself, how you perceive your work and the implications of your work,
03:21
what has happened since you received the award, okay? What implications has it had for the larger world? Maybe what surprises have come along the way that you had not anticipated? That's a lot to say. Explain the universe in 25 minutes or less.
03:41
Give three examples. Yeah, you only have about 10 minutes for this one. Well, let's see, what did I win it for? I think for having a lot of fun thinking up cute problems and finding solutions, and somehow a bunch of them turned out to be useful in practice, which is not actually accidental.
04:14
I think one thing that distinguished me from a lot of contemporaries who were working on similar things is that they thought of a lot of these problems as mathematical problems.
04:28
Things like mutual exclusion: keeping two processes from bumping into each other at the same time. And I thought of it as a physical problem.
04:41
There are these two guys who are trying to do something, and you don't want them to try to do it at the same time, and time is a real physical entity. And I think if you look at my work on distributed algorithms, you will see that there's a notion of physicality to them, and some of them turned
05:07
out to be important because computers are physical things, and I was lucky to happen to be interested in them at the right time, before there
05:23
was a lot of concurrent computing going on, but the idea was there. The most useful algorithm of mine, I guess the most widely used one, was actually invented by Barbara Liskov and a student of hers a little
05:45
bit earlier, but independently; I didn't know about it at the time. And because I gave it a cute name, it's known as the Paxos algorithm rather than the Liskov algorithm.
06:01
And that's basically used when you're doing some kind of interaction on the web and there are a bunch of computers involved in doing that, and one of those computers could fail at any time.
06:22
And if they've implemented the algorithm correctly, then you won't notice; the system will just keep going and do the right thing. So did that cover the questions, or was there part of the universe that I left out? We can consider that part A. I had another question.
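To make the idea concrete, here is a minimal sketch of single-decree Paxos in Python. It illustrates the consensus mechanism Lamport describes, not his published algorithm in full: all names are invented for this example, acceptors are plain in-process objects, and real deployments add leader election, persistence, and networking.

# A minimal, illustrative sketch of single-decree Paxos.
class Acceptor:
    def __init__(self):
        self.promised = -1      # highest ballot this acceptor has promised
        self.accepted = None    # (ballot, value) last accepted, if any

    def prepare(self, ballot):
        # Phase 1b: promise not to accept lower-numbered proposals.
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        # Phase 2b: accept unless a higher ballot has been promised.
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """Try to get `value` chosen; returns the chosen value or None."""
    # Phase 1a: ask a majority to promise.
    promises = [a.prepare(ballot) for a in acceptors]
    granted = [acc for ok, acc in promises if ok]
    if len(granted) <= len(acceptors) // 2:
        return None                  # no majority; retry with a higher ballot
    # If any acceptor already accepted a value, we must re-propose it.
    prior = [acc for acc in granted if acc is not None]
    if prior:
        value = max(prior)[1]        # value of the highest accepted ballot
    # Phase 2a: ask the acceptors to accept.
    votes = sum(a.accept(ballot, value) for a in acceptors)
    return value if votes > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(5)]           # tolerates 2 failures
print(propose(acceptors, ballot=1, value="commit"))  # -> "commit"

The key design point is visible in propose: any value already accepted by a member of the majority must be re-proposed, which is what keeps the system consistent even when some of the computers fail.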
06:43
So I had a feeling you weren't going to talk about this. You also created LaTeX, which you don't talk about. And I remember a few years ago when you were here, you said something about being somewhat surprised that people were using it at all. So I was just wondering if you could tell us a little bit about what led to your
07:01
development of it, and why were you surprised that people were using it? Oh no, I was not surprised that people were using it at the start. I was surprised that 10 years later they were still using it and nobody had come up with something better. Don Knuth had released what was then the current version of TeX, and
07:25
he was working on the final version at that time. And I realized, well, I was writing a book and I needed a set of macros to use for producing the book. And I figured that for a little extra effort, I could make those macros usable for
07:44
other people. And it turned out to be a little bit more effort than I expected, but not a whole lot more. And I realized that, you know, TeX is a wonderful thing, and Don did a fantastic job of typesetting, but there's something at a higher level
08:04
than typesetting, which is document production. And he didn't spend a lot of time on that. And so that was what I had to add to TeX for writing a book. And that's what other people have to add to TeX for writing their books
08:22
and papers. So it was the first one there. So I'm not surprised that it was used and that it, you know, succeeded. But I'm amazed that it's still going strong these days. Okay. Thanks. So, Joe, question for you.
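As a hedged illustration of the distinction Lamport draws, here is a minimal LaTeX document (in LaTeX itself, since that is the subject): the author writes structure such as sections and cross-references, and LaTeX's macros translate that structure into Knuth's TeX typesetting layer underneath.

% A minimal LaTeX document, illustrating "document production":
% the author states structure, and the macros decide the typesetting.
\documentclass{article}
\begin{document}
\section{Introduction}\label{sec:intro}
Structure, not typesetting: numbering, spacing, and fonts
are chosen by the macros, not written out by hand.
\section{Discussion}
As noted in Section~\ref{sec:intro}, the author never
specifies glue, boxes, or penalties directly.
\end{document}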
08:43
Could you say something about model checking? Okay. So, I was in Grenoble in the seventies, and I was interested in verification. At the time it was a very hot topic, and the dominant approaches were axiomatic, as you remember, Leslie. Of course, axiomatic approaches
09:06
are not applicable; I mean, you need a human who is inventive enough to apply the axioms, as you probably know. So I had the idea to take a very elementary approach: build models and enumerate
09:24
the states of the models. At that time I was also working on temporal logics, as Leslie did. I remember you visited me in '82, with Amir and other people. So we devised the first model checking algorithm, and we developed the first model
09:42
checker in '82. And at that time I knew that Ed Clarke and Allen Emerson at CMU were doing similar work, so we federated our efforts with Ed.
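As a concrete illustration of the "build models and enumerate the states" approach, here is a toy explicit-state model checker in Python. The modeled system, two processes sharing a lock, and all names are invented for this sketch; real model checkers handle temporal logic properties and vastly larger state spaces.

# A toy explicit-state model checker: enumerate all reachable states
# of a model and check an invariant in each one.
from collections import deque

def successors(state):
    # state = (pc0, pc1, lock); pc in {"idle", "trying", "critical"}
    pc, lock = list(state[:2]), state[2]
    for i in (0, 1):
        if pc[i] == "idle":
            yield tuple(pc[:i] + ["trying"] + pc[i+1:]) + (lock,)
        elif pc[i] == "trying" and lock is None:
            yield tuple(pc[:i] + ["critical"] + pc[i+1:]) + (i,)
        elif pc[i] == "critical":
            yield tuple(pc[:i] + ["idle"] + pc[i+1:]) + (None,)

def check(initial, invariant):
    """Breadth-first enumeration of all reachable states."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state               # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                        # invariant holds in every state

mutual_exclusion = lambda s: not (s[0] == "critical" == s[1])
print(check(("idle", "idle", None), mutual_exclusion))  # -> None (holds)

If the invariant were violated, check would return the offending state as a counterexample, the diagnostic power Lamport says later in this discussion he was blown away by.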
10:00
We tried to publish at that time, and we were not very successful, I should confess. So we created our own conference. If you want to be successful, create your own conference. It's better; it's easier to publish. This was the CAV conference. We started it in Grenoble in '89.
10:21
And then of course, with faster machines, and also because industry had the problems (you probably remember the famous Intel bug at the beginning of the nineties), companies hired engineers and tried to apply
10:40
the techniques at industrial scale, with some success. So this is the success story of model checking. Anything surprising about it, though? Sorry? Any impact from it that you hadn't actually expected at the time?
11:00
The impact? What do you mean, the impact? The impact was that we've been successful. And I should say that I did not stay in the same area; I changed. I was interested in system design, and currently I work on autonomous systems with a focus on self-driving cars.
11:22
So I consider system design a very challenging problem, and it's a pity that theory people are not so much interested in design, because, I mean, this cannot be formalized easily and it requires the combination of many
11:41
different techniques and their integration in a framework. Can I say something here? When Joseph and the others were developing model checking, I was very much uninterested. I was interested in proofs, and I thought, what good is it, you know,
12:02
to try this? Because you can really only try it on a rather small model of the system, and, you know, I wanted proofs. But around the year 2000, a colleague built a model checker for a language that I'd developed, and I started using it and I was blown away.
12:24
It is marvelous. It's a wonderful idea. It's a brilliant idea. I never had the foggiest idea of how far you could go in finding incorrectness with the tiniest model. So, a very belated thank you, 15 years later, 15, 20 years later.
12:46
So Vint, I think we've all heard a lot about what you're doing, but from your perspective, how did you go about starting this, and what do you see as the main impact? The first point I want to make is that I didn't start this.
13:01
The project that gave impetus to the internet was called the ARPANET. It was a predecessor. The problem that was posed was an economic one. The Defense Advanced Research Projects Agency was funding research at a dozen universities 50 years ago in artificial intelligence and computer science.
13:20
And every year the computer science principal investigators would say, you need to buy us another world-class computer every year so we can continue to do world-class research. And even ARPA couldn't afford to do that for a dozen universities. So they said, we're going to build a network and you can share.
13:41
And everybody hated that idea, and they said, we're going to build it anyway. So they decided to do something heretical: they decided to build a packet-switched net. Well, back in the late sixties, everybody knew the way you build a network is you use circuit switching. That's how the telephone system works. But can you imagine: a computer dials up another computer and, you know, gets
14:02
an answer and sends the data, hangs up, calls another one. It takes too long. Instead, we used packet switching, which is kind of like electronic postcards that run about a hundred million times faster than the post office does. So we tried out the packet switching idea, and then we ran into the next problem. The computers that the computer science departments were using were all different brands.
14:25
They were from IBM and Digital Equipment Corporation and Xerox Data Systems and a whole bunch of others, totally incompatible: different word sizes, different operating systems, different character encodings. And so the question then was, even if we had this uniform packet-switched net, how do
14:45
we get the computers to interact with each other in a useful way? So Steve Crocker, one of my best friends, ran the network working group for the ARPANET project and led all of the development work on all the applications and the host-to-
15:01
host protocols. So this is all very successful, and I leave UCLA, I go to Stanford and start working on more network research, and Robert Kahn, who worked on the ARPANET but then went to ARPA, shows up in my office and says, we have a problem.
15:21
And I said, what do you mean we? And he said, well, the ARPANET is very successful. The defense department wants to use computers in command and control. Well, the implication of that is that some of the computers are going to be in ships at sea and some of them are going to be on airplanes and some of them are going to be in mobile vehicles.
15:40
Well, the ARPANET was designed to hook the computers together with dedicated telephone circuits in air conditioned rooms. The computers didn't get up and move around. So now we have a problem. We have to figure out how do we handle these mobile things. Can't use wires because the tanks run over the wires and they break and the ships get
16:00
all tangled up and the airplanes never make it off the tarmac. So we had to use radio and so he shows up in my office and he says, we have a packet radio network and we have a packet satellite network and we have the original ARPANET. How are we going to hook them all together and make it look uniform? And we had a more lossy environment than we had with the dedicated ARPANET connections.
16:25
And so we had to develop a protocol that would be more resilient in the face of various kinds of loss and variable latency, different length packets and the like. So it took us about six months to figure out how to do the TCP protocols and then
16:43
we started implementing what we had invented, and we discovered we'd gotten it wrong, and it took us about four cycles to get to the TCP/IP that you use today. And even then the original TCP/IP protocols suffered from problems with regard to flow control, for example, and managing congestion.
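To illustrate the resilience idea in miniature, here is a hedged Python sketch of the core mechanism: sequence numbers, acknowledgements, and retransmission over a lossy channel. Everything here is invented for illustration; real TCP adds flow control, congestion management, connection state, and much more, which is exactly the refinement Cerf describes next.

import random

def send_reliably(message, loss_rate=0.3, seed=42):
    # Each (seq, byte) pair is an "electronic postcard"; the simulated
    # network below loses about 30% of them (data or ack, combined here).
    rng = random.Random(seed)
    received = {}                        # receiver's buffer: seq -> byte
    unacked = dict(enumerate(message))   # sender's unacknowledged packets
    attempts = 0
    while unacked:                       # retransmit until everything is acked
        for seq, byte in list(unacked.items()):
            attempts += 1
            if rng.random() < loss_rate:
                continue                 # lost in the network; will resend
            received[seq] = byte         # delivered, possibly out of order
            del unacked[seq]             # acknowledgement reached the sender
    # Sequence numbers let the receiver reassemble in order.
    return "".join(received[i] for i in range(len(message))), attempts

data, tries = send_reliably("hello, heidelberg")
print(data, tries)                       # message recovered despite the loss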
17:02
So the cool thing about the design is that it allowed for evolution. And so there's been quite a bit of refinement over the course of 50 years to make the system work better and better at scale. The system has grown by a factor of about seven orders of magnitude from the time that
17:23
we first turned it on in 1983. Here it is, whatever it is, 39 years later. So the cool thing about all this is, first of all, if you want to do anything big, lesson number one: get some help, especially from people who are smarter than you are, which wasn't hard. There were a lot of really smart people around.
17:41
And the second thing is be prepared for change and be receptive to it. And the third thing is that eventually people will come up with better ideas. Now, they didn't come up with a better idea for LaTeX apparently, but there are better ideas for the internet. So as an example, there are new protocols.
18:02
One of them is called QUIC, which is from my company, Google, which does the end-to-end connections in a somewhat different way with more efficiency with regard to security and recovery from a failure of a connection. So there is plenty of opportunity for improvement.
18:20
And that's been the fun thing about the internet design. Now in terms of surprises, two things. First of all, yesterday when we played the game, I was thinking, holy moly, think what they did. In real time, everybody had to go and connect to a new application, get logged in and then play the game.
18:40
And it all happened in real time. And we all assumed it would work. And the fact that we've got to that point is pretty amazing. Also the fact that only 60 or so percent of the world has access to the internet says there's still a lot of work to do. And finally, those of you who use social media, which is probably a lot of us, have discovered that this neutral platform is capable of amplifying everything,
19:06
good stuff and bad stuff, because it doesn't know the difference. And so the result now is not a technical problem. The result is a social, economic and regulatory problem. How do we keep this medium useful while we're trying to deal
19:22
with abusive behaviors in the online environment? And that extends all the way from misinformation and disinformation and various kinds of phishing and other foolishness, to malware attacks and very direct attacks against the infrastructure of the system.
19:42
So we now have this gigantic and extremely useful environment, but it needs to be made more safe and more secure and more reliable than it is today. So there's plenty of work still to be done for those of you who are looking for dissertation topics. So, you know, that leads in perfectly to something I was going to ask you,
20:01
because we haven't mentioned this before. Several years ago, you were interviewed on the Stephen Colbert show. Right. Great interview. If you haven't seen it, it's still on YouTube, so you can find it. But I remember one of the questions he asked you that you kind of took pause at, he said, why did you make the internet so unsafe? Right. Why did I make it so unsafe? Well, hang on.
20:24
This is a Stephen Colbert question, not my question. Right. Okay. You want to blame Stephen for that. Fair enough. So first of all, here's an interesting little factoid. The security of the system was of very high importance, because it was
20:43
funded by the Defense Department. We knew that. Can I reformulate the question? Yes. Can you make the internet secure? We can make it less... To what extent can you make the internet secure? I can't make it perfectly secure. I cannot make it perfectly secure even if I wanted to.
21:02
Can you explain why you cannot make the internet secure? Well, if you want something really secure, I'll give you a brick. It just sits there and it doesn't do anything, but it's very secure. Make sure somebody doesn't pick it up and throw it. Well, there's that too. So thank you. That's the proof that nothing is secure.
21:23
Let me give you an example of why it is hard to make it secure. Think about how a browser works. Okay. What does a browser do? Well, first of all, it's a piece of software that's running in a computer, probably hosted by some operating system. So it's a process that's running. What is the first thing it does?
21:41
It goes out to some computer somewhere on the internet after doing a DNS lookup, and it pulls in a bunch of HTML or HTML5 or XML, and then it interprets it. Okay, this is great. We just sucked in a piece of software from some random place on the internet, and we're running it inside our computer with whatever privileges the browser happens to have. Holy moly. So that's a big problem all by itself.
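The hazard can be shown in miniature. This Python sketch, with an invented hardcoded string standing in for fetched content, contrasts treating remote content as data with executing it under your own privileges, which is essentially what a browser does with every page's scripts.

fetched = "__import__('os').getcwd()"   # imagine this arrived off the network

# Treating it as data is harmless: it is just a short string of text.
print(len(fetched))

# Interpreting it executes someone else's code with our privileges,
# which is what a browser does, by design, with every page's scripts.
print(eval(fetched))                    # runs os.getcwd() on OUR machine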
22:03
Can I give a simple answer to this question? Sure. You cannot make it secure or safe, whatever you want, simply because you cannot reason about it. Because in order to reason, you need a model of it. And you don't have models. I mean, these are very complex systems.
22:22
You don't even understand how they work exactly. You can make something secure or safe only if you can build a model of it, analyze it and reason about it. This is my point of view. Okay, so next time somebody asks me that question, I'll say, you can't secure it because Joe says so. How's that?
22:42
Well... Okay, we're done with that question; Joe has answered it. Well, maybe we've beaten this to death. I do have some other observations to make about it, but I know you have other things you might want to do. Sure. Okay. Well, I mean, you said something about how the browser goes out,
23:01
pulls back some piece of code, and executes it on your computer. Well, why does it execute it on your computer? It's because it was a cheap and dirty way of getting a lot of functionality.
23:22
And we've seen it, you know, in operating systems: Windows has had problems, and you see it in the internet. Because there were very strong commercial interests, especially to get a lot of functionality out there
23:41
without doing the large and perhaps impractical amount of work that it would take to make it secure, to allow somebody to write programs that you could safely execute on your own machine without worrying, you know, about what they're going to do.
24:00
And so I think a very big problem there is not so much theoretical impossibility, but the economics of really trying to do it properly. Well, okay. So now we've ignited a discussion, and it's too bad if it gets out of control.
24:21
Let me point out a couple of things. First of all, companies like mine build large-scale computing environments and we try to do so in as secure a way as possible. One good thing about what we do is that we at least have control over our computing assets, but we don't have control over the computing assets of the people who use our products and services.
24:42
So we don't have control over that. For our employees, we have some control, because we actually put software on their laptops and on their mobiles that looks to see what they're running and what the configuration of the software is. And if they download some piece of software from someplace, we don't let them run it until we've checked it.
25:02
We can't be absolutely sure, but we work very hard. The problem is for the general public, we can't do that because we don't have software on board the general public's computers to do that kind of thing. So there are practical matters about trying to make things secure across the entire collection of devices
25:21
that are going to be running the various applications that you were describing. I would love it if it turned out that we could design a hardware and software combination that would increase the safety of executing arbitrary code brought in from the outside world. And there are some projects, like CHERI, C-H-E-R-I,
25:45
which is looking at a combination of hardware and software to reinforce security in the system using a lot of memory access control techniques. So my guess is that I would agree with you that it's technically possible to build a much more secure environment,
26:03
but the effort required and the scope of trying to apply it to everything sound really hard. Can I say something? Yes, I agree. But what I would say is no: theoretically it's possible, but practically it's not possible.
26:21
Because if you try to build a model of a mixed hardware-software system, it's very hard to provide a formal model. Now I'm working with self-driving cars, and I would like to say that it's really amazing
26:41
how the technology has changed. I have worked in the past on aircraft safety, where we have very well-defined methodologies: the aircraft is certified, it's approved by certification authorities, it can fly, you estimate the reliability, 10 to the minus 9 failures per hour,
27:04
things like that. This is practically impossible for self-driving cars, for many reasons, partly because they integrate AI components that cannot be modeled. But even if you could model the AI components, the complexity is completely overwhelming
27:21
because of the non-predictability of the behavior of people, and in security problems you have the non-predictable behavior of hackers. You cannot provide a model that captures all the ways a hacker can be inventive. I think this is hopeless.
27:42
I think that what challenges systems engineering now is to forget about model-based approaches and verification; this has been my opinion for many, many years. We should have some theory to validate the systems
28:00
and guarantee their properties. This can be statistical techniques, for instance, but we need some theory. Currently we don't have a theory; we build systems in an ad hoc manner. These are very complex systems, and we have no theory about them. How to do that? We have methodologies for how to do it. The problem now is how we get some guarantees
28:22
for this new generation of systems. This is a very, very interesting problem.
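As one hedged illustration of what such statistical techniques might look like, the Python sketch below estimates a system's failure rate from randomized trials and reports a confidence bound. The simulated system and all names are stand-ins invented here; real statistical model checking is considerably more careful about the trial model and the statistics.

import math, random

def simulate_once(rng):
    """Stand-in for one randomized run of the system; True = failure."""
    return rng.random() < 0.001        # unknown to the tester, of course

def estimate_failure_rate(trials=100_000, seed=7):
    rng = random.Random(seed)
    failures = sum(simulate_once(rng) for _ in range(trials))
    p = failures / trials
    # 95% normal-approximation confidence interval for the rate.
    half_width = 1.96 * math.sqrt(p * (1 - p) / trials)
    return p, half_width

p, hw = estimate_failure_rate()
print(f"failure rate ~ {p:.4f} +/- {hw:.4f} (95% CI)")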
28:41
One thing I would observe about large-scale systems is that they almost invariably break in some way or another. And so you start thinking, well, I can't drive the breaking rate down to zero, but what I can do is at least measure the rate at which things are breaking. And if they're breaking at a rate which exceeds some threshold, that's when I need to jump in and do something. It's pretty amazing, when you look at the internet and at all the cloud-based systems,
29:03
how much work goes into keeping them running. This is not automatic at all. There are people who are responsible for tracking and responding to the signals that we get back saying this is broken or that's broken. How do we recover? Building in as much redundancy as possible is super important.
29:22
Backing up so we don't lose any data is important. One thing we learned at Google is that we had to build a giant network to connect the data centers together, to copy data back and forth in order to replicate it, so that even if we lost a data center, we didn't lose the data. So the effort involved in making things at least more reliable,
29:44
if not absolutely secure, is significant, and it's worth it. Okay, I will go on to another question, because I can see this could go on all day long, right? I'd be happy to keep going. Leslie's right. Okay, so this year ACM turned 75 years old.
30:02
We celebrated our 75th anniversary in June at an event. There were a number of topics: ACM looked at the past, at what ACM has done and contributed to computing, and we did talk about some of the things that people considered important topics in computing. I'll just give you a hint in case you weren't there.
30:21
So, balancing trust and risk: how do we foster trust in the public in using our systems? Another was building human-centered AI. Another was connecting everyone, everywhere, all the time. And then we had one, brought up yesterday, looking at the fact that
30:40
as computer scientists, the things we do affect everybody in the world all the time, and how do we make sure that we're working on some of the more challenging problems in the world? And we didn't have time to really talk about that one. So one of the things I'd like to ask all of you is, what kinds of things are we not thinking about?
31:02
These were the topics that people thought were interesting at the time of the 75th. I'll go to Leslie first, since he had this. Sorry, do you want to go second? You were the one who actually suggested this as a good question, so I thought I'd go to you first. What are we missing that we haven't thought about? The standard answer to this class of questions is,
31:21
if I knew the answer to it, I'd be working on it. So I'll pass to other people. We've had five more seconds to think about it. Well, I have one reaction to that list that you just gave.
31:40
There's been an enormous amount of attention paid to artificial intelligence and machine learning, which is its primary manifestation right now, and great concerns about ethics, for example. You mentioned human-centered AI and things like that. My big worry, frankly, is not so much about machine learning and our dependence on it.
32:02
There are issues there if we rely on the decisions that it suggests or recommendations it makes or actions that it takes. If we rely on that uncritically, that's potentially hazardous. But so is all the other software that people write.
32:22
And so I'm just as worried about non-machine-learning software, especially things like Internet of Things devices. Think about the business model. I'm going to make a webcam, okay? How cheap can you make it? Well, let's see: I can get the operating system from this open source library over here,
32:42
and I'll just toss that in, and then I'll put everything together. Oh, by the way, I won't put any access control into it. I'm trying to make this simple. Just turn it on, and it runs. Then you sell it, and then you're done. Now people use these things, and they discover they don't have any access control.
33:01
There's no way to update the software in case there's a bug that's been found, and the people that made the device aren't even around anymore because they just sold it, and then they disappeared. I'm really worried about the more general case of software and devices that rely on it, and we who rely on the devices working,
33:21
we should be attentive to that as well as to AI and machine learning. But you can answer that question. The other question, the flip side of it, was something you seemed interested in: what are we not paying attention to these days? Yes, I would like to say something about machine learning, because machine learning is now unavoidable in systems engineering.
33:41
Whatever system you build, you need machine learning techniques, even in network systems, for network management for instance, or in all the autonomous systems that we would like to be able to build. I would mention self-driving cars because it's an emblematic case. I think the problem is how we will combine traditional techniques
34:06
that are model-based with machine learning techniques that are data-based. You probably know that there are solutions for critical systems: you have self-driving platforms that are monolithic end-to-end solutions
34:22
with neural networks. NVIDIA and Waymo are selling such self-driving platforms, and of course their trustworthiness is not guaranteed. On the other hand, we cannot build model-based systems for self-driving cars. So we should find compromises. This is, I think, a very important problem for the future,
34:43
how we combine model-based and data-based techniques and integrate them in the same architecture. That's a challenge for me. And of course, the deeper problem behind that is how you link symbolic knowledge to concrete knowledge, because neural nets deal with concrete knowledge.
35:03
And if someday we want AI to come to human level, we have to be able to combine symbolic knowledge and concrete knowledge. But I'm glad that there will be a discussion on Thursday about machine learning techniques.
35:22
This is a very, very challenging problem, and if we are not successful in it, it will not be possible to combine machine learning techniques with traditional techniques, and that will be a problem.
35:40
So I actually have a question for Leslie, if I could, based on... Of course. Go ahead. You work on systems-engineering-scale kinds of problems, trying to model the interactions at some level of abstraction. And Leslie has done some beautiful work with TLA and TLA+,
36:01
expressing what a program is trying to do in a form that's more mathematical. What I wanted to ask you, Leslie, is whether there is a way to take that concept, the TLA+ concept, and project it into the more complex systems environment. Is there a way to use the vocabulary or the notions that you put into TLA+
36:26
to deal with this concurrent multiple complex systems environment that Joseph keeps working with? Well, I think the problem at this point, as I see it, is more sociological than technical.
36:44
In the sense that... Well, I'm a strong believer in intelligent design. Not of human beings, but of computer systems. And the problem that happens now is that
37:02
they're not made by intelligent design. They're produced by evolution. Because people do not spend the effort of planning enough at this high level. And they don't take the time to think through,
37:24
at the high level, to get the simplicity that you need, first at a higher level, then as far down as you can go. And at some point, you have a million lines of code, and that's not going to be simple. But if that million lines of code is based on a thousand lines
37:46
that somebody can understand and that was really well designed, you've got a chance. I think there are techniques like TLA, which has a particular application domain, and there are other things with other application domains,
38:02
for thinking at this higher level. But there is just not an understanding that such a thing is needed. And it's not even clear how many of the people who are engaged in it, you know, have the ability to do the necessary amount of thinking.
38:22
I mean, one problem I've never heard mentioned is this: my belief is that the median IQ of programmers has been falling continuously ever since I was in college, for the simple reason that programmers are becoming a larger and larger percentage of the population.
38:45
And we see that in the software. That's a very interesting point. Yes, I fully agree, Leslie. And working on industrial projects currently, I see that the systems are built in a very ad hoc manner.
39:01
So people work in harsh conditions and produce systems that are not well-architected. And I think that if correctness is not a concern from the beginning, if safety and security are not concerns from the beginning, and you build a system in an ad hoc manner,
39:21
and then you want to make it safe and secure, it's impossible, okay? Because you don't even understand what you are doing, okay? And when I discuss with engineers, they don't understand what they are doing. This is a bad situation. I don't know the reasons exactly, but this is a very bad situation, because systems should be designed,
39:42
architected in a reasoned manner, and the choices should be justified somehow. And this is not the case today. And what I see in self-driving cars is also in autonomous networks, okay? What I see in industrial projects is really horrible, okay?
40:02
People are talking about autonomy, and they don't even understand the concept of autonomy: what it means, for instance, to make my system a little bit more autonomous or less autonomous. What are the risks, okay? They don't care about that; they just want to build systems. And this is a problem today. So it feels like this problem area
40:22
has some descriptors that I think about. One of them is concurrency, because concurrency seems to be rife in the world that we live in right now. That touches on the autonomy question as well. Then there's this notion of scale. The challenge that we're faced with
40:42
is that we're operating at such huge scales. How do we cope with that? Another thing that we would want is safety in the sense that even if the system doesn't work properly, it's still safe. It fails in some safe ways.
41:01
And there must be a list of other characteristics that we would like these complex systems to have, and we need tools to help us figure out whether we've achieved any of those objectives. But you see, there is theory for that. But people don't pay the money because a line of critical code costs...
41:21
It may cost, I don't know, in some projects, thousands of dollars a line. Okay, so, I mean, safety and security have some price, and you have to pay it. Now... Of course, there are also complexity issues, I agree, but there are methodologies,
41:40
and people should also strive to define methodologies and develop systems in a proper manner, and they don't do that today, for some reason. So, could I interrupt? Because I'm very interested in the current big project that you're working on, right? Digital cooperation: digital diplomacy and cooperation.
42:04
Yes, this is the UN effort that the Secretary-General has initiated on digital cooperation on a global scale. Right, and we only have about three minutes, so you can't talk about it in detail. But we're all talking about hard technical problems here. What I find fascinating about what you're trying to do
42:20
is that you've got to get not only the technology right, but you've got to get different governments working together, people all across the world working together. And do you have anything to say about how you can try to do that human side of the project? Well, let's see. First off, when it comes to diplomatic exchange and trying to achieve agreements
42:42
among parties that are not exactly aligned, my first tactic is to find something that they agree on, and it might be that they just agree that we don't agree on anything. And if they even agree on that, I'll slice that off and say, okay, we agreed on something. And then I'll say, what else can we agree on?
43:01
And you keep slicing your way until you have a nice salami sandwich. So that's, you know, tactically, that's one thing. The second thing is that I think that it is widely appreciated that computing technology has infused our entire social and economic fabric.
43:23
We're deeply dependent on it. And we're also deeply concerned about the potential hazards that these technologies have introduced into our society. And you can get some agreement on that. The problem that you run into is that at a national level,
43:41
you get different opinions about what to do about it. In some cases, you have a very strong desire to regulate the behavior of people and to punish companies and people who do harmful things. The problem that we run into in the internet environment is that it's global in scope, and it's insensitive to the fact
44:01
that the traffic flows across international boundaries. The packets are completely oblivious to moving from France to Germany to across the ocean to the US. By design. And so in order to deal with harmful behaviors, you need cooperation and reciprocity between the states
44:24
in order to identify a party that's causing harm and to then do something about it to hold them accountable. So the mantra for my negotiations in this space is accountability and agency. We need to hold people, organizations and countries
44:42
accountable for their behavior in this environment. And we need to give agency to people, organizations and countries in order to respond to the potential hazards. And we have to achieve that on some relatively common basis on a global scale. That's not going to be easy.
45:01
It's going to be multi-stakeholder. It's going to be multilateral. But it's worth trying because these technologies, as many of you have already demonstrated, are powerful and useful and constructive. But they also have the potential for great harm.
45:22
And so our task is to hang on to the good qualities of these systems while we're protecting ourselves from their harmful potential. Yes, but also you need some regulations about how to use these technologies. And let me consider an example.
45:41
In the case of aircraft, you have standards for that: an aircraft is certified; you cannot modify a line of code. Now, for critical systems, we allow over-the-air updates. Is this reasonable or not? I don't know. But this is practice today. You know this, okay? And there are other trends: in the United States,
46:04
you have self-certification for some critical systems. You know this. It sounds strange, self-certification: the manufacturer will guarantee, for instance for medical devices or for self-driving cars, that they are safe enough.
46:22
Is that okay? If the regulations allow this, the door is open to other such practices as well. So I think that there is a responsibility also for governments and international organizations: they could perhaps establish some regulations to control this.
46:43
I thought you would be levitating out of your chair about an hour ago. I've been looking at this real rat's nest that is being discussed and thinking, I don't want to get into that. Well, you're in luck, because time is up anyway.
47:04
Anyway, I want to thank very much Joseph Sifakis, Leslie Lamport, and Vint Cerf for this great discussion. Thank you all, and it's break time now.