
The HLF Portraits: Butler W. Lampson


Formal Metadata

Title: The HLF Portraits: Butler W. Lampson
Number of Parts: 66
License: No Open Access License. German copyright law applies. This film may be used for your own use but it may not be distributed via the internet or passed on to external parties.

Content Metadata

Abstract: The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Butler W. Lampson (ACM A.M. Turing Award, 1992). Recipients of the ACM A.M. Turing Award and the Abel Prize in discussion with Marc Pachter, Director Emeritus of the National Portrait Gallery, Smithsonian Institution, about their lives, their research, their careers and the circumstances that led to the awards. Video interviews produced for the Heidelberg Laureate Forum Foundation by the Berlin photographer Peter Badge. The opinions expressed in this video do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the video.

Background: The Heidelberg Laureate Forum Foundation (HLFF) annually organizes the Heidelberg Laureate Forum (HLF), a networking event for mathematicians and computer scientists from all over the world. The HLFF was established and is funded by the German foundation the Klaus Tschira Stiftung (KTS), which promotes natural sciences, mathematics and computer science. The HLF is strongly supported by the award-granting institutions: the Association for Computing Machinery (ACM: ACM A.M. Turing Award, ACM Prize in Computing), the International Mathematical Union (IMU: Fields Medal, Nevanlinna Prize), and the Norwegian Academy of Science and Letters (DNVA: Abel Prize). The Scientific Partners of the HLFF are the Heidelberg Institute for Theoretical Studies (HITS) and Heidelberg University.
Transcript: English (auto-generated)
Professor, let's start at the beginning, relatively the beginning, not at birth, but
you're 10 years old. Where are you living? What is your family life like? My parents were in the diplomatic service. So when I was 10, I was living in Bonn, Germany. Is that right? Yes. This was during the occupation.
Bonn was the headquarters of the occupation. The American military had a big presence there. And they had decided that it was not healthy for American dependents to associate with Germans. So they built a small midwestern town on the banks of the Rhine in Plittersdorf. It had a number of three-story apartment
buildings and it had a shopping center with a PX and a movie theater and a club and a church and a school. So it was like being in the Midwest. Exactly. That was the intention and that was what was achieved. And does that mean that essentially it was an insular life that you were leading or were your parents enriching that?
Well, my parents definitely thought this was weird because this is not the way the foreign service operates. So I was not strictly confined to this environment, but that was the normal day-to-day experience. There was a school that had been set up.
So you went to a midwestern American school? That's right. And a lady came in twice a week for 45 minutes to teach us German. So it was fairly weird. In retrospect, of course, at the time, being 10, I thought it was pretty normal. How was your German, by the way? I used very little of it. So it's pretty rusty.
Okay, this child, this 10-year-old, is he good at any particular subject? Is he being pushed in any particular intellectual direction? No, I don't think I was being pushed, but yes, I was always very good at schoolwork.
Across the range, not only science or math? Across the range. Terrible handwriting. How long did that stay in Germany go? You're 10 when you're in this village. How much longer before you go to the next place?
We came back to the United States in 1955. I was 11 or 12. I must have been 11 then. So your schooling, middle and high school, was in the United States? That's right. And where was that? I went for one year to a private school in Washington, and then I was sent to boarding school at Lawrenceville.
Lawrenceville, again, because of the future career, I'm always going to be asking you whether you're beginning to get rich scientific or mathematical encouragement. Lawrenceville might have been mostly humanistic in its offerings, or there may have been some rich source of inspiration there?
I think it had fairly decent science and math. I was always very interested in science and math, but I was interested in a lot of other things, too. Right. And you're deciding what to do with your life. Is there a teacher who's particularly guiding you, or are you really having the internal conversation about what's next?
I don't think there was any particular teacher who had an outstanding influence on me. Well, we've got to get you graduated, and we've got to get you started on probably a scientific career. You go, I think, to Harvard? So I went to Harvard. That's right. I majored in physics.
In physics. Okay. Now we're getting somewhere because the transition from physics to a decision, ultimately, to change to computers is so much at the core of what you're becoming intellectually. Can you tell me originally why physics was your chosen field, and then why you began to shift your interest? Or perhaps it's not a shift of interest?
I'm not sure that I know exactly why I chose physics, but I understand quite well why I had always been interested in computing from the time that I'd been in high school.
One of my friends found an underutilized computer at Princeton, which is just six miles down the road, and we would go up there once or twice a week to play around with it. I think it's probably hard for a lot of young people today to understand, at some deep level, how rare, early on, knowledge of and access to computers were. Can you give me some sense of how special this opportunity may have been for you? Well, at that time there were maybe a couple of thousand computers in the world, maybe five thousand, certainly
not much more than that. This was a vacuum tube machine. It was a decimal machine. It had two thousand ten-decimal-digit words of memory. So it was fairly basic. But we had a good time with it nonetheless.
Why was the access permitted? My friend found this severely underutilized computer. At this time, this was an IBM 650, and by 1959 it was heading towards obsolescence. So it wasn't very heavily used. And I think he sweet-talked the lady who was in charge of it. Right.
I wasn't involved in that. Not the sweet-talking. You just benefited. I just benefited. That's exactly right. Well, you chose physics for reasons we can't precisely define, but you were choosing the sciences, certainly. Yes. And that always seemed more interesting to me than math, although I studied a lot of math. Maybe I chose physics because it's the most mathematical science.
Again, I'm looking for what I may not find, which is a particularly exciting professor or moment that's going to direct your future inquiry. Is your time at Harvard mostly unremarkable for inspiration, or are you having some good moments as you discover what you're interested in?
Well, I found a number of opportunities to do computing while I was at Harvard. There were occasionally opportunities to go down and run jobs on the computer center at MIT. This was in the days when Harvard didn't really have any centralized computing.
Yes, yes. And then I got hooked up with a physics professor who had a mini-computer, one of the very first mini-computers, the so-called PDP-1. And he was running experiments that involved taking photographs
of the sparks in a spark chamber, which is a way to track the trajectory of ionized particles. And he wanted to have some programs to automatically analyze these pictures. So I did quite a bit of work for him while I was an undergraduate. Ah, okay. That looks like an important turning point. Can you describe the broad interest, such as it
exists in computing as a direction and career at this time? Are you finding yourself rather alone in this interest, or is there an interesting group of young people who are sharing your interest at this point?
Well, there were not many. This was before you could actually formally study computing in any systematic way. It was possible to get a graduate degree in computing at Harvard, but it was kind of weird the way things were set up.
The way computing got started at Harvard, there was a man named Howard Aiken, who had disappeared by the time I showed up. But he, in collaboration with IBM, built a couple of big relay machines, which were a couple of the first large-scale computers. Although nowadays we wouldn't really consider them to be real computers because they didn't have stored programs.
The programs came in on gigantic rolls of paper tape. And that was how Harvard got into computing. By then they had built a building and named it after Howard Aiken, and I
can still remember when I was a sophomore or junior going into the lobby of this building. It was quite large, probably about the size of this room, 30 feet long. It was all fairly dimly lit in the lobby. Down at one end there was a lit-up desk where the receptionists sat. And all down the right-hand wall was completely glass. On the other side of
the glass was the machine room, which was about five times the size of this lobby. And in the machine room, all down one side was the Mark 1 relay calculator, cabinet after cabinet filled with relays, giant cables a foot in diameter, huge readers for the two-foot-wide paper tape, typewriters for typing the outputs and so forth.
It was all turned off, of course, because it was completely obsolete at the time. All down the left-hand side of this gigantic room was the Mark 4 relay calculator, more of the same. And way down at the end there was a lit-up section, which was the university's central computing facility, which at that
time was a UNIVAC I, which was also completely obsolete, even more obsolete than the 650 that I had access to at Princeton. What is the year now? This is 1962. And in October of 1962, they swept all of this stuff away, and
they put in a raised floor and installed an IBM 7090 and moved into the modern world. And you're a junior, senior? When that happened, I was a sophomore or junior, I'm not sure. I'm a junior. I've talked to a number of Turing Award people, and some of them were quite sensitive to early
in their careers when they began to be passionate about computing, that it was considered by a lot of their colleagues déclassé. I mean, it wasn't, if you're giving up physics for that. Well, I haven't done that. I went to graduate school in physics. Okay, then let's get you to graduate school, and then we'll worry about this dangerous decision you're making.
I spent a lot of time on computing as an undergraduate. Yeah, but it wasn't, it's still not your field; it wasn't even possible at Harvard, really. Right. I did take a course from Ken Iverson, who was the originator of a very famous programming language called APL.
He came from, he worked for IBM, but he was on sabbatical at Harvard for a year. Oh, so that was lucky. That was nice. He taught this course. At that time, there was no implementation of APL; it existed only in the form of pencil and paper, and Ken Iverson's book explaining it all.
And in fact, he was strongly opposed to the idea that there would be an implementation, because he was sure that the implementation would require a lot of terrible compromises that would wreck the purity of his language. Really? And three or four years later, he was extremely lucky to fall in with some people who figured out how to implement APL without compromising the purity at all.
And it had a lot of influence for a couple of decades after that. Very important to know. But at the time that I was taking his course, it was not like that. It was just paper. We want to get you graduated in physics, deciding for what graduate program you wanted in physics.
Well, I applied to two places, Princeton and UC Berkeley, and I was very fortunate that Princeton turned me down, because it would have been much more difficult to get into computing at Princeton than at Berkeley. Describe Berkeley, because that is where the transition happens in terms of your fascination, the
kind of project you were interested in in physics, and what sent you in another direction. Well, I know exactly what sent me in another direction. It must have been November or December of 1964, which was I had arrived in Berkeley in September of 1964.
The Fall Joint Computer Conference happened in San Francisco, and this was at that time the overwhelmingly most important gathering of practitioners of computing, both academic and industrial. Anywhere. Anywhere. Yes. And so it was easy for me to get to it since it was just across the bay, so I went to it,
and I ran across a fellow that I knew very slightly named Steve Russell, and he asked me, how is Peter Deutsch doing? And I said, who is Peter Deutsch? I had never heard of him. Steve knew him because they had both been at MIT earlier.
Peter at that time was a freshman or sophomore or undergraduate, but he had done a lot of programming while he was in high school at MIT, where his father was a professor of physics. And so Steve explained to me that if I went to the electrical engineering building, which is called Corey Hall on the Berkeley campus, and I went to the first floor, the northeast corner, there was an unmarked door in the northeast corner.
Sounds like a speakeasy. Yes, exactly. And if I went through that door and kept going, I would come to the so-called Genie Project, where they were trying to build a timesharing system out of a reasonably high-end minicomputer of the day.
So I followed Steve's instructions, and I went through this door, and there was a little sort of airlock, and then I went through another door, and then there was quite a large room with several desks in it, but nothing interesting, and another door on the far end. So I went through that door, and then I was in a huge room with enormously high ceilings.
It turns out when they built Corey Hall, they didn't have enough money to build all the floor space that they wanted, so what they did was they built almost all of it double-height with the idea that they would put in mezzanines later when they got more money. This was before the mezzanines, so there was this enormously high space, and in this gigantic room, there were a couple of
old computers that were turned off, and in the middle of it was a rather small minicomputer called the Scientific Data Systems 930. Sitting at the console of this machine was a very young guy. Peter must have been 16 at the time, I think. 16. 16 or 17, because he was a freshman or sophomore, and he was a little bit precocious, too.
And he was reading in a paper tape. So I stood there and watched, and after it was all read, he took it out and he put it back in the reader and read it again. And I said, what on earth are you doing? And he said, it's a two-pass relocatable loader. And I said, what?
That's just not a sensible thing to do. And Peter said, yes, yes, I know, I'm rewriting it. So that was how I was introduced to Peter, and indeed to the 940 project. So this is the moment of conversion? Well, I don't know about conversion. I had been drifting towards computing for a long time.
But this was a point at which I had something to engage with that could consume all my energies and allow me to pursue an academically legitimate path as well. Which is everything. Which would have been much more difficult if I hadn't been able to do that. Now, do you formally switch your...
A few months later. A few months later. And what sort of a department would you then enter into at this stage? Well, this was the so-called Electrical Engineering and Computer Science Department at Berkeley, which had been renamed from Electrical Engineering just a couple of years previously by its very foresighted chairman, Lotfi Zadeh.
So they didn't actually have much of an academic program in computing, but they had put computer science in the name of the department. There is, in popular culture, if that's the right phrase, general understanding of the relative merits of beginning your computer studies at Berkeley or Stanford,
the sense that Berkeley was a little backward compared to Stanford. Is that fair to say? In 1964? Maybe a bit, but these things... The Berkeley-Stanford competition has had its ups and downs over the last 50 years.
Stanford was in many ways, I think, a bit ahead of Berkeley, primarily because they had hired John McCarthy, who brought the whole idea of timesharing and AI from MIT to Stanford
and started the Stanford AI lab. I didn't have a lot of contact with Stanford in 1964 and 1965, so I'm not quite sure what the sequence of events was. Apparently you weren't so self-aware of the relative merits that you were bothered by what you were getting at Berkeley. No, the Genie Project was a great thing to be on. It never occurred to me that I might be better off somewhere else.
Well, but we're going to get you somewhere else. So your degree is in electrical engineering slash computing. EECS, it's called, right. What was the focus of your dissertation work? Well, my nominal academic advisor was a professor named Harry Huskey,
who had built... It was very common in the early 50s for universities to build computers themselves, because you couldn't actually buy any computers at the time to speak of. And Harry had built one, I think, that was at UCLA. And then he had been involved with the development of some industrial computers as well.
But by the time I came into contact with him, he was not doing a lot of new work anymore. He was getting on a bit in years, and he didn't pay a lot of attention to me, and I didn't pay a lot of attention to him, which was fine with me,
but it made it very easy for me to write a PhD thesis that he was perfectly happy with. But I didn't get any guidance. The guidance came from Mel Pirtle, who was actually a graduate student, although somewhat older than the normal kind of graduate students, and who was effectively running the Genie Project. What problem were you trying to solve?
The goal of the project was to build a so-called general purpose time-sharing system that could run arbitrary programs on a machine that was an order of magnitude cheaper than the machines on which this kind of work had been done previously. And the goal was to do this in such a way
that the company that built the original version of this machine, which we had to modify a fair amount, could then commercialize it. And that, in fact, was done, and the Scientific Data Systems 940 computer system, which was the commercial version of the stuff that we built at Berkeley, basically ran all of our code
and incorporated most of the hardware changes that we had found necessary to make to the original machine. That was the first widely sold commercial time-sharing system. I think they sold 50 or 60 of them. Really? Now, as you think of your next stage,
are you tempted by a scholarly university connection? Are you tempted by the commercial world, which may not yet understand the possibilities of this? How do you make those decisions? Well, it was fairly easy for me, actually. Having built this successful time-sharing system
on a modified SDS-930, we wanted, naturally, to build a second-generation system that would be much more glorious, that would be much faster and run much bigger programs and generally be more wonderful in every possible way. This was before people started to realize that it's not always a good idea to make a second system
that's much more grandiose than the first one, because very often those second systems crash and burn. But we didn't appreciate that. Is this one of your principles? Oh, it's a very... You're famous for your principles. No, it's not my principles. Okay. This is the well-known second system phenomenon. So our original idea was that we would continue in the university
and get more money from ARPA and build a second-generation system. But we discovered over the course of a year or two that the resources you would have to marshal to build this system couldn't be marshalled within the confines of the university. There were too many bureaucratic restrictions
of one kind and another. And so we decided to do a start-up. And we found a company that was in the IBM computer-leasing business that had a bunch of spare cash lying around and talked them into being our venture capitalists. There was no real venture capital industry at the time.
Very awesome. So this was one of the first, really one of the first instances of something that subsequently became a trademark of Silicon Valley. But of course there was no Silicon Valley at the time. I think it was just getting started. Would someone have described you,
would you have described yourself at this point as entrepreneurial? Well, I personally was not terribly entrepreneurial. The entrepreneurial aspects of this were primarily handled by Mel Pirtle, who had previously effectively been running the Genie Project and had finished his PhD by that time. So we started this thing called the Berkeley Computer Corporation, the goal of which was to build this much more grandiose
second-generation time-sharing system. Am I right in remembering that it wasn't a great success as an organization? No, it bombed out after a couple of years. Why? Because it was much too grandiose. I think there's an iron rule of successful startups, which is if you're going to do a technology startup,
there should not be any technical risk, because there are so many other risks in a startup that if there are technical risks as well, you pile them on top of all the other things that can go wrong, it's very unlikely that the whole thing is going to succeed. And there was a lot of technical risk in this project, because it was basically a research project. It was the research project that we had wanted to do at Berkeley,
but we couldn't figure out how to make it work. Now, I'm just going to make a wild guess and say you survived this failure and moved on. Yes, absolutely. To what? Well, at the end of 1970, Berkeley Computer was pretty clearly doomed.
We had basically run out of money, and there was no real prospect of getting any more money from anybody. Very fortunately, at that time, the computer science lab at Xerox Palo Alto Research Center was just getting started, and they had hired Bob Taylor to get it going. Bob had been, among other things,
the director of the computing office at ARPA several years previously, and the Genie Project at Berkeley was an ARPA contractor, so I had come into quite a bit of contact with him, as had several of my colleagues, and Bob was looking around for a hotshot computer scientist to hire into his new lab, and BCC was going down the tubes,
so he scooped up half a dozen of us to form the nucleus of the computer science lab at PARC. And this was all, from my point of view, this all happened really without any effort. No, understood, but you're the right person at the right time. So it was just blind luck. I didn't actually do anything. Luck is often harder. Luck is very important. Luck is very important.
Now, this is now considered almost like Bell Labs, one of those golden moments when the right people get together at a critical point and begin to produce fundamental things. How would you characterize your career there?
What was fascinating you? This is the context in which you begin to work with Alto and so forth, but I'd like to understand your development there as a computer scientist. Well, the primary motivation for the timesharing work that we've been doing at Berkeley was to be able to compute interactively so that you could do something
and the computer could respond more or less immediately and respond in a way that was as flexible as possible. And one way to think about what we did at PARC in the 1970s is that we just carried that thread forward using a substantially different hardware base
because hardware technology had moved on and you could do things that you couldn't possibly have done in the 60s just because the hardware would have been too expensive or too inflexible or whatever. But the basic idea was to do interactive computing. Another way of looking at it is Xerox had set up PARC
because they realized that the immense riches that they were taking in from their monopoly on plain paper copying would not last forever. Eventually their patents would run out and they would have competition and so forth. And if they wanted to continue to be a growing enterprise they would need new things.
And so they set up PARC, in particular this part of PARC, to invent the Office of the Future and the architecture of information. That was the official challenge. And actually the one that you all tried to meet. So that's another way of thinking about what we were doing at PARC.
In the 1970s we were inventing the Office of the Future and the architecture of information. Right. The everything of the future as it turns out. Well, almost not quite everything. We didn't do spreadsheets. And I think the reason for that is that we had a very strong culture that said that you should use what you build.
And we didn't have any use for spreadsheets because spreadsheets at that time were for accountants. Yes. And we didn't have any accountants. I mean upstairs in administration there were a couple of accountants but we didn't get along with them very well. And we certainly didn't do any of that kind of work ourselves. So we didn't see any need for spreadsheets. And since we didn't actually have any customers there was no external pressure to do spreadsheets so we didn't do them.
But we did everything else. The only other thing we didn't do that's an essential part of the modern computing world is that we didn't do the World Wide Web. Right. Because things were at much too small a scale to make that possible. We're very close to your producing a world-famous paper.
I mean, I can say it more easily than you, but I've read a lot about you. About the Alto, the early movement toward the PC and so forth. Can you describe your thinking at this point?
I guess this is 1972-73, when you published a very important paper. I published an internal memo explaining why we should build the Alto. It was called "Why Alto?" There were two energizing ideas behind the Alto.
One was that a whole computer should be dedicated to one person. This was absolutely unheard of at the time. The only context in which anything like this had ever been attempted before was when Wes Clark built a laboratory computer called the LINC, which he built to make it easier for laboratory biologists to instrument their laboratories.
The goal there was for the computer to interact with the laboratory instrumentation, not with the people. Our idea was that the computer should be in service of one person who is doing the kind of thing that he would otherwise be doing.
Which is a revolutionary aspiration. From one point of view it was a revolutionary idea. From another point of view it was just a natural continuation of the idea of time sharing. Which was that although in fact the user of a time sharing system didn't have access to the whole machine,
everything was constructed in such a way that, modulo performance, he could think that he had access to the whole machine. By the early 70s it was actually feasible to build such a machine. You couldn't have built this machine commercially. If you tried to sell it commercially it would have cost $60,000 or $70,000.
And that was 1971 dollars which were worth four or five times what today's dollars are worth. So you couldn't possibly have sold this machine commercially. But it was cheap enough that we could afford to build well initially 30 of them. Eventually about 2,000. Was the response to the memo and the ongoing idea and aspiration pretty positive immediately?
Did it take some persuading? Well it was positive enough that we got enough money to build 30 of them. And then we were able to do all kinds of interesting things with the 30. I don't think after that there was ever any tremendous difficulty in building, getting money to build more. Other, I mean it's all related, but other intellectual challenges that you were interested in taking on at PARC?
Well as I said, I mentioned that there were two motivating, main motivating ideas behind the Alto. One of them was that the user should have their own machine. And the other one was that there should be a screen that could do a decent job of simulating a piece of paper.
Which again was unthinkable previously. And it was only the improvement of hardware technology, which made memory cheap enough that you could actually afford to dedicate half a megabit of it to representing the screen, that made this possible.
Again this is something that couldn't have been done even five years previously. I remember still when Harvard acquired its 7090, in the reading room of the computing center there showed up a few pieces of paper that described the configuration and what it cost.
And I don't actually know how much Harvard paid because they probably got some great deal from IBM. But the list price for the memory of the 7090 was a dollar a bit. So at a dollar a bit, to do what the half megabit required for the screen of the Alto would have been half a million dollars.
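The arithmetic here can be checked directly. A quick sketch, using only the figures from the conversation (the $1-a-bit 7090 list price, the half-megabit Alto frame buffer, and the roughly factor-of-a-hundred price drop Lampson cites for 1972):

```python
# Back-of-the-envelope check of the memory-cost arithmetic above.
# All inputs are figures quoted in the interview, not independent data.
price_per_bit_7090 = 1.00      # dollars per bit, 7090-era list price
framebuffer_bits = 500_000     # half a megabit for the Alto's screen

cost_at_7090_prices = price_per_bit_7090 * framebuffer_bits
print(cost_at_7090_prices)     # prints 500000.0, i.e. half a million dollars

# By 1972 the price had come down by a factor of a hundred or so:
cost_in_1972 = cost_at_7090_prices / 100
print(cost_in_1972)            # prints 5000.0, i.e. a few thousand dollars
```

Which matches the point being made: a half-million-dollar screen memory at 7090 prices becomes affordable once the cost falls a hundredfold.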
So you couldn't have afforded it. Right. But by 1972 you could because the price had come down by a factor of a hundred or so. Again we're used to thinking of this amazingly fast progress.
At the time are you thinking well this is going very fast, we'll get to such and such a place soon. I mean are you aware of being in the middle of this speeding? Well yes and no. We were certainly aware of the fact that the hardware components that you could build computers out of were improving fairly rapidly.
We were also very much aware of the fact that although this machine would cost 70 or 80 thousand dollars if you commercialized it today, it would be possible to sell a machine with similar capabilities for just a couple of thousand dollars ten or fifteen years in the future.
That you were expecting. And that was an essential motivation for the whole trajectory of the work that we were doing. Right, and IBM's increasing support of it. Xerox's. Yes, it was Xerox.
IBM would never have supported such an enterprise. Why not? Because they actually knew a lot about computers and they had very strong ideas about how things ought to work. And the idea that you could have a personal computer with a paper-like screen that was just not in their space at all.
So they were the orthodoxy. They were the orthodoxy, absolutely. You were the radicals. We were the radicals. Xerox didn't know anything about computers. Right. We knew a fair amount about computers, although not from a commercial point of view. Right. But Xerox didn't care about that because it wasn't their goal to be a commercial computer company. Actually, that's not quite true because the other thing that happened at about the same time was that Xerox bought SDS.
It was kind of an accident that the company they chose to buy was the one whose machines we had been working on earlier. Right, right. But they had dreams of getting into the computer business. However, they had absolutely no clue how to get into the computer business.
And in the course of three or four years, they managed to run SDS completely into the ground and had to shut it down. Well, for you, it's been an intellectual paradise to be able to follow these questions and issues. So why did you ever leave PARC? Why did I leave PARC? Oh, because the computer science lab, as I mentioned before, was run by Bob Taylor.
Yes. And in 1983, Xerox fired Bob Taylor. Ah. So most of the people that had been working for him left. Big mistake on their part. I think it was very stupid. We explained to them very carefully how stupid it was, and they listened to us very carefully. We even got to talk to the CEO, but they went ahead and fired him anyway.
Do you have any idea why? Bob was not an easy person to manage. I see. And after 13 years of it, I think George Pake, who was running PARC and Xerox Research at the time, got tired of managing Bob. Right. But it was, in my view, an extremely irresponsible thing to do.
Right, right. Self-destructive. Yes. Okay. By now, you probably have an okay reputation in the field. You demonstrated your abilities, so probably you're able to choose a next step. Well, that never actually came up because Bob found another company to… So you followed Bob.
I followed Bob to Digital Equipment, where they started the Systems Research Center in Palo Alto. In Palo Alto. And hired a large fraction of the people that had been working at PARC in CSL. Some of them, by that time, had started to go off to do startups and become billionaires.
Yes. But the bulk of them stuck with Bob. Oh, so either you became a billionaire, or you stuck with Bob, basically? I think there were some other possible paths, but that's not a bad way to characterize it. How are you being paid at this point? Is it a good salary? Is it comparable to a professor?
I was being paid significantly better than a professor at the time, but I wasn't going to become rich on the salaries that Xerox or DEC was paying. I was very comfortable. Comfortable at that point. And by this time, by the way, I had moved to Philadelphia to become one of the world's first telecommuters. Ah. That was probably for love?
Yes, it was. It was because Lois got a job at Penn. Right. So you're still working for the Palo Alto, but you're living in Philadelphia. I'm living in Philadelphia. And is that workable? I mean, you said you're one of the world's first telecommuters. It worked out okay. I had two Altos sitting on the third floor of my house, which was built in 1749.
And I had a 9,600 baud leased line at vast expense that connected me to Palo Alto. So I was pretty much connected, at least by the standards of the day. Right. Because you've done so much, we can't really cover it all,
but what are your next challenges in this next period? Are you developing these same ideas to the next level? Are you taking on some other issues that interest you? I think in the 80s and early 90s, the things that I primarily worked on were actually somewhat different.
I worked on understanding how to build reliable parallel and distributed computer systems,
and also on how to do fault tolerance, neither one of which were things that we thought about much at PARC. How did you choose? Were you in charge of your own research direction? More or less, yeah, but also these seemed like very natural things to pursue.
We didn't have any spectacular ideas for how to, what's the right way to say this? There was a lot more to it than this, but I think the central thrust of what we were doing at PARC was captured by the user interfaces of the various applications that we built,
word processors and drawing programs and so on and so forth, and by the low-level networking capabilities that we built. By the early 80s, we didn't really have any great ideas about how to make these things better.
In my opinion, no one has had any great ideas yet, even though it's 40 years on. The fundamental user experience of computing has not improved substantially, in my view, since the early 80s.
When you spoke to Alan Kay, he felt that very strongly, one of the big disappointments of his expectations, and maybe yours too. Well, a lot of people have worked on it, but no one's had any great ideas. So far, it's certainly a big change that you can hold a full-blown computer in your lap
and you can carry something that, except for screen size, is a full-blown computer around in your hand, but the user experience is not substantially different. We've introduced swiping, which is not, in my opinion, a huge advance. We've also introduced the idea of clicking on links to traverse the web, which is a huge advance, but that's about it.
Some would say, I think Alan may have, but others, it's really been a dumbing down rather than... No, I don't agree with that. Okay. It just hasn't progressed. It just hasn't progressed very dramatically. Certainly, a lot of things have been done to make it somewhat, or even considerably, easier for people to get into the kind of computing environment
that we had already built at PARC by 1976. So, a lot of good engineering has been done and a lot of useful things have been added. Now you can typeset in Thai, and that's not important to me, but certainly there are tens or hundreds of millions of people
for whom that kind of thing is extremely important. Of course. But qualitatively, I don't think the experience has changed significantly. One of the many issues you've been involved with, problems, challenges, is the question of security. Can you talk a little bit about how that became so important in your own research?
Well, I've worked on security off and on for more than 50 years now. Security was very important in the world of timesharing, because you had these people who were perhaps not active adversaries,
but certainly not necessarily cooperating, sharing the same chunk of hardware. And it was a very serious issue to make sure that they could do that sharing in such a way that they wouldn't trip over each other's feet. Furthermore, at that time, I guess the view was not so much that the guy at the next terminal might be actively hostile,
but that you didn't really know what his program was going to do. It might be full of bugs, and you didn't want your program to be messed up, in spite of the fact that you couldn't predict anything about what his program was going to do. So we took at that time fairly stringent attitudes towards how security ought to be done,
to make sure that the guy that was sharing the physical computer hardware with me was not going to be able to mess me up. And in my view, with respect to security in computing, things have basically been downhill ever since.
Other things have been more important than security. Is that a mistake? No, I think it's good. I think the current hysteria about security is wildly overblown. Why do you think people have become so hysterical about it?
Well, it makes good headlines. When sensitive emails from Sony get published, or Hillary Clinton gets into trouble for her emails or whatever, it's very easy to get excited. You can dig up all kinds of hysterical stories about how the world might come to an end if this kind of thing spread.
Allowing for the Clinton email controversies and so forth, the notion of the hacker and hacking being one of the great weapons of our time, is that overdrawn, overblown?
Yes, I think it is. I think there are relatively straightforward things that one can do to alleviate these problems, and the fact is that for the most part, people are not doing them. I found a wonderful slogan on the internet, which is attributed to General Benjamin Chidlaw, who was the commander of NORAD, the North American Air Defense System, in 1955.
And it has nothing to do with computers. It's supposed to be written on some sort of monument at NORAD: if you want security, you must be prepared for inconvenience. And this is certainly true in the physical world, right? You have a document that you want to secure, what do you do with it?
You put it in a safe deposit box. It costs a small amount of money and an incredible amount of inconvenience to get to the document. But it's much more secure than it would be if it were sitting in your house. In the computer world, typically, we haven't been willing to put up with that kind of inconvenience. And the consequence of not being willing to put up with the inconvenience
is that we don't get security. Life is hard. You get what you pay for. And people have this weird idea that because computers are made up of binary bits and execute programs deterministically, we should be able to make them perfect.
But if you look at the real world of computer programs, that's just complete nonsense. Nobody understands how any program of any substantial size really works. And it's not cost effective to try to understand that. I see.
You make the program good enough to get the job done. This doesn't sound like the observation of a theorist. Theorists don't build programs that other people use, with very few exceptions. Don Knuth is an exception. Most of his programs nobody uses, but TeX, a lot of people use.
But Don actually understands this very well. There's a famous note that he put onto an algorithm that he was working on many years ago. He said, be careful when you use this program. I've only proved it correct. I have not tested it.
So why does that make any sense? I mean, if you've proved it correct, it must be working, right? Well, no, not quite, because what the proof says is that the program behaves the way some specification says it should behave. But the specification could easily be wrong or incomplete. Yes, I understand. And there we were talking about programs with 100 lines,
whereas on your phone you have tens of millions of lines of code running. Because many in the audience for this interview will be at an early stage in their own careers. Any advice about what direction their research might go in the future?
What do we need to get done? If you were that age now, where might you go in your research? Well, biology is good. It has more challenges than computing. Computing has plenty of challenges left, but biology has a lot more. And indeed both my wife and my oldest son are biologists.
Are biologists. So you're paid to say that. Absolutely, absolutely. But in computing, I'm a little hesitant. 15 years ago, there was a big fad in computing for grand challenges.
This was right after the success of the sequencing of the human genome, which is sort of, in my view, the prototypical, sensible and successful grand challenge. But it got a lot of publicity and everyone said, gosh, if we could have grand challenges, then we could get a lot of money and it would be cool.
So there were a lot of meetings where people tried to invent grand challenges for computing. In my opinion, for the most part, it was not successful. They came up with grand challenges like a teacher for every learner. The problem with which is that it's not clear what you should do to get there, and it's not clear how you would know that you'd succeeded.
So it was unlike the Human Genome Project, where it was pretty clear what strategies should be adopted, and it was extremely clear whether you'd succeeded or not. So I tried to come up with grand challenges at the time. I came up with two. The one that has been the most successful up to now was, I said,
we should reduce highway traffic deaths to zero. My position was, this is purely a programming problem because we already have all the necessary sensors and effectors on the cars. The cameras should be good enough. They're good enough for me. They should be good enough for the computer. You just have to figure out how to write the right program.
Well, now, of course, this is common wisdom. Everybody knows that self-driving cars are coming, right? Right. So that was an easy one. Broadly speaking, I think that I have a story about the evolution of computing, and it goes like this. In the earliest days, we used computers for simulation.
And whether you were simulating the weather or simulating a payroll, the basic idea was the same. You built some sort of model in the computer, something that was going on in the physical world, and you ran the computer program, and it told you something about what would happen in the physical world.
And this was enormously successful, and it paid for computing for the first 30 years. And by the way, it's not played out by any means. It's still going strong. But after about 30 years (computing got started around 1950, so by 1980), hardware had gotten cheap enough that it was feasible to start using computers to facilitate communication between people.
And so this brought us the Internet and the World Wide Web and email and chat and Facebook and all these other things. And that's also been enormously successful and has given rise to even more wealth and changes in the world than the simulation wave. Well, it's been another 30-odd years, and in my view, it's time for another great wave.
And I think it's crystal clear what the next great wave is going to be. It's going to be non-trivial interactions with the physical world, and whether that means networks of sensors or programs that can understand your speech and respond to it,
or self-driving cars or other things that we can't yet even imagine. Not many people know this, but for the first 20 or 30 years of its existence, overwhelmingly the most important application for the Internet was email.
If you read the founding documents for the ARPANET, which was essentially the prototype for the Internet, email isn't mentioned. Why do you suppose that is? It hadn't been thought of. So it's very hard to predict what these things are really going to turn out to be good for. And the people that started the ARPANET, of whom Bob Taylor was one of the most important...
Had no idea what... They didn't know what it was going to be good for. They just knew that it was going to be good. So the moral is, in my view, you shouldn't try to plan the value of your research too carefully, because in many cases, we're just not smart enough.
We're not able to see far enough into the future to understand what the value is going to be. I think that has to be the last word. Thank you. You bet.