Heidelberg Lecture - The Technological Imperative for Ethical Evolution
Formal Metadata
Title: The Technological Imperative for Ethical Evolution
Title of Series: Lindau Nobel Laureate Meetings
Number of Parts: 340
Author: Martin E. Hellman
License: CC Attribution - NonCommercial - NoDerivatives 4.0 International: You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/45101 (DOI)
Language: English
Transcript: English (auto-generated)
00:14
Good evening and welcome to the Heidelberg lecture of the Lindau meeting of Nobel laureates and young scientists.
00:21
As many of you know, the Lindau meeting has a sister organization, the Heidelberg Laureate Forum. It's held each year in Heidelberg and this meeting brings together young scientists with laureates in the fields of mathematics and information technology. There, laureates of such recognitions
00:41
as the Turing Award, the Fields Medal and the Abel Prize interact with young researchers in much the same way that Nobel laureates interact with young people here in Lindau. In recent years, each of these meetings has hosted participation by laureates from the other meeting. It was in this context that last year
01:02
I gave the Lindau lecture in Heidelberg and in the past we have had Heidelberg lectures here in Lindau. Vint Cerf gave one of those lectures a few years ago and he is participating with us in this Lindau meeting. And it was last year in Heidelberg
01:21
that I first encountered Martin Hellman who is not only a distinguished Heidelberg Laureate but is this year's Heidelberg lecturer. Professor Hellman received his bachelor's degree in electrical engineering from New York University in 1966 and his master's and PhD degrees from Stanford in 1967 and 69.
01:43
After working at IBM Watson Research Center in Yorktown Heights and teaching at MIT, he returned to Stanford as a member of its electrical engineering faculty in 1971, and he is still there, now as an emeritus member of the faculty. It was at Stanford that he met
02:01
Whitfield Diffie who is also a Heidelberg Laureate and developed some of the key ideas in public key encryption including what became known as the Diffie-Hellman Key Exchange which they published in 1976. Among his 12 US patents, Hellman holds two key patents
02:24
awarded in 1980 for public key cryptography. His work has been foundational for the public key encryption that protects trillions of dollars in financial transactions every day
02:41
including your piddling transactions when you buy something online protecting your financial information. Today, Hellman's interests have expanded from cybersecurity to national and international security via a path that was inspired by his own personal relationships.
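For readers unfamiliar with the Diffie-Hellman key exchange mentioned above, the following is a minimal illustrative sketch in Python. The parameters here are toy values of my own choosing, not anything from the talk or the 1976 paper; real deployments use primes of 2048 bits or more, or elliptic-curve groups. The point is only the basic idea: two parties agree on a shared secret over a public channel.

    import secrets

    # Toy public parameters -- far too small to be secure, chosen only for illustration.
    p = 23   # prime modulus
    g = 5    # generator

    # Each party picks a private exponent and publishes g^x mod p.
    a = secrets.randbelow(p - 2) + 1   # Alice's secret
    b = secrets.randbelow(p - 2) + 1   # Bob's secret
    A = pow(g, a, p)                   # Alice sends A to Bob
    B = pow(g, b, p)                   # Bob sends B to Alice

    # Both sides compute the same shared secret without ever transmitting it.
    assert pow(B, a, p) == pow(A, b, p)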
03:01
In this context, he has collaborated with his wife, Dorothy Hellman, to co-author a book entitled A New Map for Relationships: Creating True Love at Home and Peace on the Planet, published in 2016. In 2015, Hellman was honored along with Diffie
03:21
with the Turing Award. The Turing Award is the highest distinction in computer science and it's generally considered to be the equivalent of a Nobel Prize for computer science. This evening, he will be speaking to us about a subject that draws upon his varied areas of interest, the technological imperative
03:40
for ethical evolution, Marty Hellman. Thank you, and with this microphone, I can move around, right? Okay, well, thank you, Bill,
04:03
for that wonderful introduction and I enjoyed your lecture last year at Heidelberg and thank you to the committee for inviting me. The title of my talk is The Technological Imperative for Ethical Evolution and that can sound a little bit dry so I should tell you I was tempted
04:21
to make the subtitle How I Screwed Up and How to Avoid It and that's what you're gonna hear tonight. So first a caveat, I'm an American and I'm going to speak primarily from that perspective but the issues that I'll address are universal and I encourage those of you from other nations
04:41
to think about what you can do in your own communities and we'll briefly touch on that toward the end of the talk. But first, some water. Secondly, a note. I've written up this talk and it will be made available on the Lindau Foundation media website
05:03
and so if you just ask, you can get it. It has links both to a PDF of the book my wife and I wrote, so you don't even have to buy it, and to a few other things that will be very relevant to tonight. Well, the Manhattan Project which made the first nuclear weapons
05:22
transformed what had been a purely moral concern, ethical decision making, into one that was of utmost practicality essential for the survival of civilization and that's because for the first time it created the possibility for humanity to destroy itself.
05:40
That's why the talk's called the technological imperative; it's necessary for ethical evolution. And while I won't be talking much about physics, it really relates to physics, because even if all you want to do is to advance our understanding of physics, which is the goal of this conference, if there's a nuclear war our understanding of physics will come to an abrupt halt
06:03
if not be erased entirely. That risk to both civilization and physics is increased by the potential for a catastrophic environmental crisis such as extreme climate change as well as advances taking place in cyber technology, genetic engineering,
06:20
artificial intelligence and other areas. All of those threats make it essential that society progress in its ethical development and by the way, life gets a lot better I can tell you from personal experience. So most of this talk is going to be eight lessons that I learned, usually the hard way, meaning I screwed up, but eventually I came to see it.
06:43
Lesson number one, it's easy to fool ourselves or to put it more personally, it was really easy to fool myself. So the story starts in March 1975 before most of the young, actually all of the young researchers were born.
07:02
The US National Bureau of Standards, as it was then called, proposed a national Data Encryption Standard, or DES. It had a 64-bit key, which was not that unusual because computers tend to run on 8, 16, 64, numbers like that, but more careful observation showed
07:21
that one of the first things the algorithm did was to throw away eight of the 64 bits, leaving only 56. So that meant there were two to the 56 keys; for the non-technical people, forget that last thing. It meant there were 100,000 million million keys, which at first sounds impossible to search
07:43
but Whit Diffie, who Bill Phillips mentioned, and I estimated that even in 1975 an LSI chip, because that was the technology of the time, could be built to search a million keys per second. Since they cost about $10 each, you could buy, if you're NSA,
08:01
the American National Security Agency, you could buy a million of them, you're now searching a million million keys per second. So it only takes 100,000 seconds or a little more than a day to search 100,000 million million keys. And the equivalent cost we estimated was about $10,000 per solution.
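As a rough check on those numbers, here is a small Python calculation reproducing the 1975-era estimate. The amortization of the hardware cost over about three years of daily searches is my own illustrative assumption; the talk gives only the rounded $10,000-per-solution figure.

    keys = 2 ** 56                        # effective DES key space after dropping 8 of the 64 bits
    chips = 1_000_000                     # one million LSI chips at roughly $10 each
    rate_per_chip = 1_000_000             # keys searched per second per chip
    total_rate = chips * rate_per_chip    # 10**12 keys per second overall

    seconds = keys / total_rate
    print(f"{seconds:,.0f} seconds, about {seconds / 86400:.1f} days per exhaustive search")

    machine_cost = chips * 10             # roughly $10 million of hardware
    searches = 3 * 365                    # assume ~3 years of one solution per day (illustrative)
    print(f"roughly ${machine_cost / searches:,.0f} per solution")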
08:22
So this was inadequate. We thought this was a bug, a mistake. We wrote to NBS expecting them to fix it. It was easy to fix, it was cheap to fix, but they never really answered us on that issue. So we wrote more letters. I did more research.
08:41
I met with people in our integrated circuits lab and refined the estimate. After six months, it became clear this was not a bug. It was a feature. How so? NSA, the American National Security Agency did not want a publicly available cryptographic system that it couldn't break.
09:02
So Whit and I decided we had to go public with this. In fact, we were given that advice that we're not gonna change it by writing letters to the government. We had to get Congress involved. We had to get the media involved. And as we're getting ready to do that, in January 1976, and I remember this meeting well, two NSA employees, very high level people,
09:23
flew out from Maryland to California and talked with us. And they basically said, if you keep talking this way, you're gonna really hurt national security. If I remember, their exact words were, you're going to cause grave harm to national security. So I went home to figure out the right thing to do,
09:43
the ethical thing to do. But as I'm trying to figure out the right thing to do, an idea pops into my head unrequested. Forget about what's right and wrong. You'll never have more of a chance to be famous. Run with it.
10:01
Well, who wants to jeopardize national security just to be famous? Well, actually a lot of us, but we wouldn't admit it. And at that time, I wouldn't have admitted it. And I liken this to having a devil on my shoulder. You know, in a movie, how the devil's whispering in the actor's ear? And I thought I brushed him off my shoulder,
10:20
but five years later, I realized that I fooled myself. So that's the first lesson. It's easy to fool ourselves. How did I come to that realization? I was watching a documentary called The Day After Trinity. Trinity was the code name for the test site in the New Mexico desert where the first atomic explosion took place.
10:41
And in this documentary, a number of the scientists, mostly physicists, who had worked on the Manhattan Project were asked, why did you work on this horrible weapon of mass destruction that killed hundreds of thousands of men, women, and children? They all had the same answer. Hitler, if he got the bomb first,
11:01
it would be a thousand years of dark ages, and fission had been discovered in Germany. We had to work on the bomb. Well, the interviewer comes back to each of those scientists and asks them later in the documentary, when Hitler was defeated and Japan's our only enemy, why did you keep working? Their faces fall, they have no idea why.
11:23
And I can't be sure, but what I believe happened is that they had fooled themselves just the way I had fooled myself. They had other reasons besides the socially acceptable one that came to consciousness. And in fact, Robert Wilson,
11:40
the first director of Fermilab said, and I transcribed this from the video, in terms of everything I believed in before, during, and after the war, I cannot understand why I did not walk away from Los Alamos, but it simply was not in the air. Our life was directed to do one thing, and we, as automatons, were doing it.
12:03
So, watching the day after Trinity, I realized that I had fooled myself, and I vowed never to do that again. But that was easier said than done, as lesson number two will show. Lesson number two, the value of outside help. So, after watching that documentary,
12:22
five years later, I was confronted with another major ethical decision, and worried I might be fooling myself again. What had happened? Some of you may have heard of the RSA algorithm, named for Rivest, Shamir, and Adleman at MIT. It did, in a better way, what Whit and I had proposed in our paper,
12:40
and they also won the Turing Award for that work. In their paper, they credit Whit and me with inventing public key cryptography. When we asked them to pay royalties on our patents, that was a different matter. They sold their company for 250 million dollars. We made almost nothing.
13:00
I was not happy with RSA. We're good friends now, but that's a whole other story. While I'm angry at them, roughly five years after I've vowed I'll never fool myself again, a man who is the CEO of a startup company in Silicon Valley comes to me, Lou Morris was his name,
13:20
and he used very colorful language, so forgive me. What he said to me was, help me get an exclusive license from Stanford to your patents, and I promise you we'll get those RSA bastards by the balls. It seemed to me that going with Lou Morris' offer made good sense from a business point of view,
13:41
in which case I should do it, but I was worried that I was so emotionally involved in this fight that I might be fooling myself again. So what did I do? I went to my wife, Dorothy, who's in the audience tonight and I explained my conundrum, and she came up with a brilliant solution that I missed because I was so emotionally involved. She said, the director of technology licensing at Stanford
14:02
has the same business interests as you do, but he's not emotionally involved in this issue. Let him make the decision. I went to him and he said, of course we go with Lou Morris' offer. It's the same decision I would have made on my own, but this way I can be sure I didn't fool myself. So lesson two, the value of getting outside help.
14:22
Lesson number three sounds like a throwaway. Friends are better than enemies. Well, who would disagree with that? And yet how many people do the hard work to try to turn an enemy into a friend? In the paper I have two stories, I only have time here for one. And this is really nice because it was Admiral Inman
14:42
who was director of NSA who deserves the credit for starting the process, not me. So you can believe it. In 1978, so two years after that meeting where I had the devil on my shoulder afterward, I get a call from the director's office at NSA.
15:01
Admiral Inman, the director, will be in California and he would like to meet with you. Are you willing to meet with him? I jumped at the opportunity. Up to that point, NSA and I were fighting it out in the press, never directly. And NSA never really said anything. In fact, there's a joke that NSA stands for no such agency and never say anything.
15:23
And so this was a chance to talk to NSA directly. So I'll never forget when a few weeks later, Admiral Inman comes to my office at Stanford and the first thing he does is look at me and say, he smiles and says, nice to see you don't have horns.
15:41
I was being depicted as the devil incarnate at NSA. The second thing he said is, I'm meeting with you against the advice of all the other senior people at the agency, which makes sense. If I'm the devil, it's not gonna do any good to talk to me. So he was willing to take the risk of talking to someone that might be the devil.
16:02
He was willing to take the risk of turning an enemy into a friend. And out of that first cautious meeting, we developed a friendship. And in fact, today we're good friends and he signed two statements of support. I'll only tell you about one from my most recent project, Rethinking National Security. It questions our approach to national security
16:22
at a very fundamental level. And having a former director of NSA sign a statement of support gets rid of the argument, you don't know what you're talking about. Now, Admiral Inman wouldn't have done that if he didn't agree with the statement, but he also wouldn't have done it if we were enemies. Friends really are better than enemies.
16:44
Lesson four, get practice by correcting even minor ethical lapses. Well, you remember how I fooled myself in 1976. By the way, the decision I made was the right decision and it's explained in the paper.
17:01
Admiral Inman has said so, he's changed his mind. But it was sheer luck that I made the right decision using unethical means. So I'd fooled myself in 1976. In 1986, 10 years later, when Lou Morris came to me with that wonderful offer that I won't repeat, I realized I might be fooling myself. What had happened in the middle?
17:23
Well, I'd come to regard even minor ethical lapses as unacceptable. Failings that I previously wouldn't even have recognized as unethical became important issues for me to work on. For example, mistreating my wife,
17:44
being angry with my wife. That doesn't seem like a big deal. Everybody, almost everybody does that. But I had vowed, when we had our wedding vows, I had vowed to love this woman through good times and bad. I was not loving her through the good times. And so what happened is, by getting practice daily,
18:05
I was confronted daily with opportunities to become more ethical in my personal relationship, my relationships at work. That's how I got the practice so that I recognized in 1986 that I might be fooling myself.
18:21
The big ethical decisions come too infrequently for us to become proficient. I mean, how often do you get a decision like you may cause grave harm to national security or you could really hurt some people that you feel have hurt you at the time? Lesson five, ethics is an evolutionary process.
18:41
We tend to think of ethical decision making as static. However we do it now is the right way. But it's easy to see if you look back over time that it's dynamic, it evolves over time. Thomas Jefferson was highly ethical by the standards of 200 or 250 years ago. Today he'd be in jail for owning human beings.
19:02
The British legal system, oh, to take something that deals with the Turing Award. It's named for Alan Turing. The British legal system hounded Turing to death over his sexual orientation, he was a homosexual. And they were enforcing the ethical standards of the 1950s in Britain and in the United States, by the way.
19:20
So ethical standards really do change. It's easy to see unethical behavior in the past. It's much harder to see it right now and that's what we need to do to accelerate our ethical evolution. So how can we do that? Apply the scientific spirit. Defined as follows by a key mentor of mine.
19:43
He defined the scientific spirit as a zealous search for the truth. I love that word, zealous: a zealous search for the truth, with a ruthless disregard for commonly held beliefs when contradicted by observations. That's how we do science. Why don't we do foreign policy and military policy and decide when to go to war
20:00
or when we made a mistake in war that way. So what might we investigate if we undertook any search for the truth, much less a zealous one, beyond the usual boundaries of science? Well, I'll propose three, you can come up with a lot more on your own.
20:21
How ethical is society's current approach to nuclear weapons? Threatening to kill millions, probably billions of human beings, sometimes over relatively minor issues like the Cuban Missile Crisis; there was nothing really at stake there worth that kind of risk. How risky is nuclear deterrence? I'll get into that in a little bit; it's actually a lot riskier than society realizes.
20:44
Which recent wars have been ethical? Which have had their intended result? Which have backfired? I've studied these and I've concluded that all of them have backfired and hurt our national security yet we keep doing the same wrong thing, killing people in the process. How ethical is society's current response
21:02
to climate change? Are our actions, especially in the United States these days, consistent with the cost, the risks and the uncertainties? There are uncertainties, but there are also risks and we need to weigh those carefully. And it's my position that if we studied them
21:21
we would be much more proactive. Well, let's look at, while there are many man-made existential threats on the horizon, the only one that could destroy us as we're sitting here, I'm speaking, you're listening, what is it?
21:42
Nuclear war. Climate change is a problem, but it's longer term. This could happen right as we're talking. So I'm gonna focus on that for demonstrating lesson six, but the others apply just as well. So how great is the risk? Society acts as if the risk were minimal, but there's a very easy way to see that the risk is probably highly unacceptable.
22:03
To an order of magnitude, which most of you understand, but there are a few non-technical people, so I'll explain, that just means to the nearest power of 10, one, 10, 100, 1,000. We'd round 20 to 10, we'd round 50 to 100, very approximately. How many years do you think we can go
22:23
on our current path with presidents like Trump and Obama and Bush and so on, and Cuban missile crises every 50 to 100 years, and Georgian wars and Ukrainian wars, and a whole bunch of other things I don't have time to tell you about that you never heard of? Even without knowing all that other stuff, almost everyone sees 10 years
22:41
as too short an expected time horizon. Well, it could happen, but it's not likely to happen in the next 10 years. I jump over 100 and almost everyone sees 1,000 years as too long. What does that leave as the only order of magnitude estimate? 100 years. That's 1% a year, 10% a decade,
23:02
worse than even odds over the lifetime, roughly 80 to 90 years, of a child born today in the developed world. And many of the young researchers have about 50-50 odds. And so, in fact, someone was saying, why are there so many older people in the audience tonight and so few young people? You actually have more at stake.
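To make the arithmetic behind that claim explicit, here is a short Python calculation of the cumulative probability under the assumed constant 1-percent-per-year risk. The 60-year horizon for a young researcher is my own illustrative figure; the talk itself gives only the 80-to-90-year lifetime and the rough 50-50 odds.

    annual_risk = 0.01   # assumed 1% chance of nuclear war per year

    def cumulative_risk(years, p=annual_risk):
        # Probability of at least one catastrophe over the given horizon.
        return 1 - (1 - p) ** years

    for horizon in (10, 60, 80, 90):
        print(f"{horizon:3d} years: {cumulative_risk(horizon):.0%}")
    # About 10% over a decade, roughly 45% over a young researcher's remaining ~60 years,
    # and 55-60% over an 80-to-90-year lifetime -- worse than even odds.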
23:21
Well, probabilities are fine for a technical audience, but I'll give you another way of explaining it that works for anybody. Oh, so let's see, I'm going to go to this slide. Imagine a man wearing a TNT vest came into the auditorium and sat down near you. And he told you, there's no need to leave,
23:41
just keep enjoying the lecture, I'm not a suicide bomber, I don't have the button for setting this off. There are two buttons in very safe hands. One's in Washington with President Trump, just relax, he's very cautious, nothing to worry about. And the other is in Moscow with President Putin. And he says, yeah, I know there are buttons in Paris
24:01
and London and Beijing and a few other cities, including Pyongyang, a small one, and the terrorists are trying to get one. But would you just sit here? We'd evacuate this auditorium as fast as we could. So why, just because we can't see the weapons controlled by those buttons, have we, as a society, sat here for 50 or 60 years
24:22
complacently assuming that because the Earth's explosive vest has not yet gone off, it never will? So in summary, lesson six: technology requires accelerating our ethical evolution. It's not just moral, it is of the utmost practicality. Lesson seven, there's hope of humanity
24:42
becoming more ethical, believe it or not. Now, I'm convinced there's hope. Why is that? Most fundamentally, I have hope because of how I have transformed my, well, my wife and I together have transformed our relationship. We were close to divorce 40 years ago,
25:01
roughly 10 years into our marriage, and we thank each other every day now. And it's the same kind of changes, the same asking questions instead of getting furious, getting curious instead of furious, that's needed internationally and at the personal level. And I'm a different person from who I was 40 years ago. If I can change as much as I have,
25:22
anybody can and nations can. The second reason for hope may seem paradoxical at first. Many people see this as a quixotic adventure, a fool's errand. But the work that leads to the best results
25:41
usually looks foolish a priori. When I wrote it up, I said, often looks foolish. And I recommended that the young researchers ask the Nobel laureates among you whether their work was encouraged at first or whether it was discouraged. I see Danny sitting in the front row, and I know he would, and actually I've talked to five or six of you,
26:00
and only one maybe wasn't discouraged. So this is the norm. So it's actually great that people think it's a fool's errand. The third reason for hope, we need to accelerate the process, not start it from scratch. Many parts of the world have abolished slavery, something that was deemed quixotic, impossible,
26:20
200, 300 years ago. They've established universal suffrage, again, unheard of before that, improved human rights, and even started to tackle environmental degradation. Even on the nuclear front, there is progress. You may not get that impression from the newspapers and the TV, but the world's nuclear arsenals
26:42
have fallen 80%, 80% since the peak of the insanity. In 1986, there were over 70,000 nuclear weapons in the world. Today, there are only 14,000. Now, that's still enough to do a hell of a lot of damage and we need to get it lower, but it's an 80% reduction.
27:03
Don't forget that. Lesson number eight, everyone can play a role. Well, many of you are already working on one or the other of these issues. In fact, Brian Schmidt introduced the Mainau Declaration on Climate Change four years ago at this very conference,
27:22
and I'm honored to be speaking at the same series of Lindau meetings that gave rise to not only the 2015 Mainau Declaration on Climate Change, but the 1955 Mainau Declaration on Nuclear Weapons and War. In my paper, the written paper, which is a PDF, there's a link where you can read both declarations.
27:42
They're short. I really encourage you to read them. And in many ways, the talk I'm giving tonight can be viewed as just seconding those two Mainau declarations. Of course, even when one is a Nobel laureate, I'm sure the question frequently arises: what can I, a single individual, do on such a huge problem?
28:02
I know that because I have a good friend who's an American congressman, even more powerful in a way for changing these things than a Nobel laureate. And he says the same thing to me. I'm one of 435. That's in the House. Then we still need to get the Senate. We need to get the president. And then we have to worry about the electorate voting us out of office
28:21
because we do things that seem so crazy to them, even though they make sense. No one person can solve this problem, not even a Nobel laureate or a US congressman. But if enough of us move things just a little bit, all together, we can move things a lot, change the state of the world, and make new possibilities possible.
28:44
The most effective thing we can do depends on the issue. For example, climate change has significant public awareness, so there are candidates like Jay Inslee, the governor of Washington state, who's running for the American presidency and has made that his prime focus. You can support him if you think that's really important,
29:01
and actually I have. Nuclear risk, on the other hand, is underappreciated, and none of the candidates for US presidency have said they're making nuclear weapons an issue, and I don't encourage them to because they're not gonna get any votes that way. But what we need to do is to raise awareness.
29:20
The risk, remember, is roughly 1% per year, if you agree with what I said before, horrendous. Raise awareness, and then we can start to do things. I talk about nuclear risk with ordinary people I meet, as well as members of my government, and at talks like this.
29:41
Whether you talk to members of your government, which is much more likely for the Nobel laureates among you, or whether you use social media and talk to the general public, like the young researchers are more likely to do, because by the way, I don't understand social media, you guys do, you still will have an impact, and you cannot know ahead of time,
30:00
as the following example illustrates. This happened just three months ago, but it's the best example of the value of talk. A friend and colleague of mine was at a dinner on a totally unrelated subject, and his table mate says, so what do you work on? My friend told him, well, I'm working on rethinking national security,
30:21
the project he and I are working on. He gave him what we call the elevator pitch in Silicon Valley, the 30-second summary, and it goes like this. In 1945, at the end of World War II, the United States was totally secure. Nobody could touch us. We've spent trillions of dollars since then, trillions upon trillions of dollars,
30:40
to improve our national security, and what has been the result? A nation that could be destroyed in under an hour. Something went really, really wrong. And in mathematics, you know we call that a reductio ad absurdum, and it means that there's at least one assumption which is off. So my report on rethinking national security,
31:01
which there's also a link to in the paper, lists a dozen such assumptions that are taken for granted but are highly questionable as soon as you think about them. Remember the zealous search for the truth with a ruthless disregard for commonly held beliefs when contradicted by the data. Now, my friend's table mate offered to help.
31:22
As we talked, the three of us, we learned that the table mate was good friends with a very influential American congressman, more so than the guy I'd mentioned before who's my friend. We're talking with that congressman, and at this point, it looks like some really good things might happen. Now, I've had enough experience to know that that could fall through.
31:41
But if it does, like the fool that I am, I'll be up here again, trying something different next year. But that shows the value of talk. In fact, a man who holds the rank of ambassador and is working with us on this, and he was the number two man at the START nuclear arms control talks,
32:00
had been questioning whether talking could really do anything, and when he heard about this, he said, I changed my mind. He sees the value of talk. Oh, and it helped that there are prominent supporters of that statement, including Admiral Inman, who I mentioned before, and four Nobel laureates. Several of you here today, in fact.
32:21
Two of you, I think. And I hope that some of the others of you may want to join and do that. I have an international version of the statement, by the way, which is not yet public, because they don't have enough people on it. Now, my colleague and I talk to many people, increasing the odds of such a payoff, like meeting this influential congressman.
32:40
I hope this talk might have some of that result, too. The laureates among you often have access to people, like that influential congressman. The young researchers understand social media far better than a dinosaur like me. But what if you're a citizen of a non-nuclear nation, as many of you are?
33:01
Well, I actually believe the non-nuclear nations can play a leading role, and in fact, already have played a leading role. ICAN, the International Campaign to Abolish Nuclear Weapons, won the 2017 Nobel Peace Prize, and the treaty that they put forth was approved by the General Assembly of the UN,
33:21
not the Security Council, with absolutely zero support from the nuclear weapons states. The non-nuclear states took the lead. If any nation treated that issue with the respect it deserves, it would be a game changer. Today, even the nations that make it a priority do not make it a top priority.
33:41
If they did, and treated it as the highly risky, existential threat that it is, it would make a huge difference. So, in conclusion, I've talked mostly about evolving our thinking about nuclear weapons, because I'm convinced that's the greatest threat. You may think differently, but that's okay. It doesn't matter what we work on. But it's also the one I studied most deeply.
34:03
But most of the ideas in today's talk carry over to climate change and the other threats that we face. All eight lessons for ethical evolution also apply to interpersonal relationships. That's where I first got started with my wife, and that's where I still see the most immediate benefits.
34:20
I can't tell you any nuclear weapon that I got rid of, but I can tell you that I stopped fighting with my wife, and my life's a hell of a lot better. I appreciate what many of you are already doing to build a better world, a more ethical world, whether it's with respect to nuclear weapons, climate change, cyber threats, genetic engineering,
34:41
or just building a more loving home or a more harmonious workplace. I'm gonna close with a lesson that I learned from that same mentor I mentioned before. He said, there are two hypotheses. The nobler hypothesis, the better hypothesis, is that human beings are capable
35:01
of the radical changes needed for survival in the nuclear age. He died in the 1980s, he was born in the 1890s, so he didn't know about climate change. He would have added that now too. So that's the nobler hypothesis, we're capable. The other hypothesis is that we're doomed. And what he said is, if we accept the less noble hypothesis,
35:22
we're doomed even if we had the capacity to change, because we won't be motivated, we won't think we can do it. If we accept the nobler hypothesis, he said, the worst that happens is we go down fighting. And the best that happens is we succeed, and we build a world that we can be proud
35:42
to pass on to future generations. So he concluded by saying, why not assume the nobler hypothesis? Thank you.