
The HLF Portraits: Leslie G. Valiant


Formal Metadata

Title
The HLF Portraits: Leslie G. Valiant
Number of Parts
66
License
No Open Access License:
German copyright law applies. This film may be used for your own use but it may not be distributed via the internet or passed on to external parties.

Content Metadata

Abstract
The Heidelberg Laureate Forum Foundation presents the HLF Portraits: Leslie G. Valiant; Nevanlinna Prize, 1986 & ACM A.M. Turing Award, 2010. Recipients of the ACM A.M. Turing Award and the Abel Prize in discussion with Marc Pachter, Director Emeritus of the National Portrait Gallery, Smithsonian Institution, about their lives, their research, their careers and the circumstances that led to the awards. Video interviews produced for the Heidelberg Laureate Forum Foundation by the Berlin photographer Peter Badge. The opinions expressed in this video do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the video.

Background: The Heidelberg Laureate Forum Foundation (HLFF) annually organizes the Heidelberg Laureate Forum (HLF), which is a networking event for mathematicians and computer scientists from all over the world. The HLFF was established and is funded by the German foundation the Klaus Tschira Stiftung (KTS), which promotes natural sciences, mathematics and computer science. The HLF is strongly supported by the award-granting institutions: the Association for Computing Machinery (ACM: ACM A.M. Turing Award, ACM Prize in Computing), the International Mathematical Union (IMU: Fields Medal, Nevanlinna Prize), and the Norwegian Academy of Science and Letters (DNVA: Abel Prize). The Scientific Partners of the HLFF are the Heidelberg Institute for Theoretical Studies (HITS) and Heidelberg University.
Transcript: English(auto-generated)
So we begin at the beginning. Really the beginning, because interestingly you were
born in Budapest, Hungary, although I don't think you're Hungarian. Where are you? Well, I was born there, but I grew up in England. I went to school in England. So that was early childhood in Hungary. So your childhood was in England? Yes, yes, yes. Well, let's look at you as an eight-year-old. Where are you living? Who are your parents? What is the culture of your childhood?
Yes, so I was living in London. My father was a chemical engineer. My mother was interested in languages; she did some translation. She had a teaching qualification, which she didn't use very much. I had an older sister.
I think I was a pretty ordinary child, but I had a fairly early interest in science. So maybe by the time I was nine or ten, I think I had a strong interest in science. Are you noticing particular influences from your parents in terms of what they're thinking
about you, hoping for you, or are you on your own intellectually, so to speak? Well, they were very kind of hands-off, so I don't remember. I think my father was very happy that I liked science, and I was good at it. He had enormous respect for science, and I'm sure he probably wished that I had
a career in science at that point, but he never told me, I don't think. Right, he didn't insist. No, not at all. So I was very much left to my own devices, and I think I was more like a hobbyist. I did lots of things by myself to do with science, and he made it possible.
We went to a store once to get electronic components, transistors; I was interested in electronics and made transistor radios. How things work, something like that. Yes, yes, yes. What kind of schooling did they have in mind for you? I don't know what they had in mind, but I had fairly ordinary schooling, so I went to a local school.
So we moved to the north of England when I was eleven, and then I went to a different school. So then there was the eleven plus system in England where you take a test, and then twenty percent of people go one way and the rest the other way.
So I went one way. There also existed much more selective schools, but they were far away. So you went to, we would call it a public school, but really a state school? A state school, yes, a state school, yes, yes. And was there an emphasis in the direction of, did the test you took at eleven determine a scientific career for you?
Not at all, no. This is just generic tests, I think, in English and arithmetic, I think. It's a general IQ kind of test. So the whole population went through this test.
And the society is determining who should get further education at that point? Exactly, exactly, yes, yes, yes. So that system changed a decade or two later. Right. But it's the system that shaped you. So somebody determined, the test determined, that you were fit for academic studies. Right. You went to a state school.
Is there, at some point in your pre-university education, a moment, a mentor, a turning point? So, again, I was interested in science, but I think I did everything by myself.
I did talk, there were some other kids in school who one could interest, but my interests were rather more theoretical. So I think I'm interested in rockets, and I tried to calculate whether I could make a rocket which would launch and would go around the world and land back in our garden. It didn't work out, that one.
So I did, it was fairly slow going, but when I was seventeen we moved house again, to London. And at that point I kind of looked around, and I did go to one of these more selective schools, which had students from a large part of London, and getting into Oxford and Cambridge was the big purpose of the school.
So in that school I was only there for a short time, so there I think it was much more competitive than I had seen. And also I think I had the most influential teacher was a physics teacher, and his main, what he did for me was that
I took a gap of two semesters, I somehow did things in a different order. So I got into Cambridge and I had two semesters and he got me a very nice science job in a medical physics laboratory
in one of the medical schools in London, just before university. And so that was very good for me; jobs like that were very hard to get. Yes, yes, yes, certainly good for the money, but intellectually, what was the influence of doing this? Yeah, so the influence was that in high school I was very much interested in physics,
and I thought I wanted to become a physicist, maybe a theoretical physicist, or maybe an experimental physicist, I didn't know. There's always a question, I was interested in mathematics, but I wanted to use it in some context, so physics was what I was thinking of.
So in some sense this job was good because it exposed me to experimental physics, and it told me that it's not what I wanted to do, so it ruled something out for me in an early stage, which was very important I think. So then I turned slightly back towards more theoretical things. I think it's a generalization that may not be true, but broadly in the Oxbridge context,
one thinks of Cambridge as more science-oriented. Were you aware of that and thinking Cambridge rather than Oxford for that reason? Yeah, so obviously yes, so people, like the school I went to at the end, everyone, so I applied to do mathematics with physics, so everyone from that school was applying to Cambridge, not Oxford.
Right, so you got in, we know, and again, for an American audience, we might not quite generally understand how much of a graduate kind of education one would get at Cambridge at that earlier point.
I simply mean you could specialize very early. Yes, so even in high school the last two years were somewhat maybe even over-specialized. So certainly if you did mathematics and physics, then my perception was that the amount of material you were taught in the last two years of high school was rather small.
You had to learn to manipulate it well, but it wasn't a large amount of material, whereas once you got to university, then they did throw a lot of material at you. Compared to American liberal arts education, it was totally specialized. Each year you could change what you did, so I happened to do mathematics throughout.
Was there a tutor of some significance in your development in that precious tutorial system? Well, how it works out is that in mathematics we had two tutors at any time, one applied math, one pure math,
but many of these, some were faculty, occasionally they were well-known faculty, often they were graduate students or postdocs. So one good thing was that you were exposed to quite a few different people, so they could also change from different terms. So if you wanted to find out what people were doing in research, wanted to find out what was ahead, you met some people.
So that was good. What are the years at this point that you're at Cambridge? The dates, that's 'sixty-seven to 'seventy. Okay, in terms of your future interest in work, where is the world of mathematics, applied or otherwise, at this point?
What are people thinking about, what are you learning at this point? Well, in Cambridge mathematics was, and I think still is, a subject that a very high percentage of students study, which means that effectively many more people study it than those who just want to be specialist mathematicians. So to compare it to, say, Harvard here, where the pure math undergraduates are a very small group, there were lots of people doing mathematics, the idea being that they'd go on to do something else,
but mathematics is a good preparation. So you did fairly basic stuff, but I think it was quite a large amount of it. So for example, I found it less satisfying compared to being this teenage hobbyist doing what I was interested in; you were given a lot of material. So I remember one interaction with a professor, I think it was in fluid mechanics — I didn't have that much interaction with faculty, they were kind of at a distance — but I was complaining a bit that this course seemed like different bits and pieces and didn't all fit together.
He said, oh yes, yes, we do seven techniques in the course, but if you know these seven techniques, then you can go almost anywhere afterwards. So the purpose of the course wasn't explained to you then, but it was good for you to have done it. So that was the philosophy. So it wasn't that satisfying because you didn't quite understand why you were doing it.
So I was thinking of still doing some sort of applied mathematics theoretical physics. So if you want just a picture, so if you've seen the film, the theory of everything about Hawking, so there are some scenes of mid-60s Cambridge, so kind of, you know, that was me.
So his advisor, Dennis Sciama, is someone I took a course from. So how are you determining next steps? How are you determining your competence? In mathematics, people begin to sort themselves out in terms of ability and future.
What are you thinking about yourself as a potential mathematician? I don't know. I was looking forward to working on problems by myself, on things I could concentrate on.
So there were people better than me at this kind of undergraduate math where you had to learn a large amount and perform well without understanding it that deeply. But that didn't kind of worry me. I mean, what worried me is what to do next. So I think what worried me is whether I wanted to carry on with this idea
I had from my teens that I wanted to do physics, some sort of mathematical physics. So that sort of concerned me. So certainly when I went to Cambridge, I thought I'd do mathematics for two years and then change to physics. That was a possibility, but then I just did mathematics throughout.
So somehow I looked very hard, but I couldn't find anything I really was passionate about. So maybe as a way of taking a break, so I decided to go for one year to Imperial College London afterwards.
And this was a kind of computer science. This was a break from a trajectory you were on. Yes, well I think I understood that computer science was something new, which offered potential, but it wasn't clear what it was. It was really very new.
You were not being patronizing in your thinking about it as a break. No, I really mean because in the early stages of computer science, it takes a kind of either boldness or a sense that yes, there is something there that's not a lesser form. Are you thinking any of that at this point?
Well, I think what I felt is that it's unknown. So I think, as I described it, that even when I became a computer scientist in the 70s, the whole field was flourishing, but it was almost like climbing a wall of fear. And the fear was that there wasn't very much there to discover.
So maybe it is something, and then maybe after five years it's exhausted and there's nothing else to find. So the idea that it's quite as rich as it's turned out to be, I don't know how many people... There would have been a big leap of faith to assume that. Yes, and I think most people in computer science took that leap somehow for various reasons.
But I think I was very aware that there was a risk there. Because you could read the computer science of the time, and much of it was pretty simple. You could easily understand it. And the mathematics you'd been taught was highly compressed,
stuff done over hundreds of years compressed to be almost incomprehensible. Everything was understandable, clear, and anyone could do it. So that was the thing. So then I spent this year at Imperial College, and then I really had to decide what to do next.
So I was going to apply to do a PhD somewhere in something. And so then I did my homework, I went to libraries. I also decided I wanted to do something more mathematical, so that was a decision. I wasn't so interested in the practical aspects of computing.
And then there were a limited number of places in England where you could do this, where you could do a PhD in mathematical aspects of computer science, very few. But I read the research of these few centers. Yes, of course. And then I did have an eye-opening experience, so then I discovered my future place, Warwick.
So I read actually one paper written by Mike Paterson, who became my advisor, and his advisor, David Park. And so in some sense this paper was about computability in the tradition of Turing. And by reading that paper, I first understood computability in the sense of Turing,
and saw it as an incredible development which I hadn't been aware had happened. As a mathematical theory of intellectual life, computability theory explains why, in mathematics, each problem has to be worked away at on its own — there is no universal way of solving them all — which explains thousands of years of mathematical development. And so this paper I was reading was, again, about a very concrete, natural problem, which was not computable.
So I think there I saw that both this whole topic of computability had this great mystery, you know, infinite depth, but also it was totally different from anything I'd seen before. So you were really touching the beginning of your life at this point, I mean, as a productive scholar. You're sensing now where you want to go.
Yes, yes, yes. And you were in fact at Warwick? No, no. I did this in London, as in this one-year graduate program, and tried to research where to apply. Where to go. Where to apply. And then I did apply to Warwick, and I got in.
I want to ask, maybe not a profound question, but this is the point where these are called the red brick universities. Okay. Actually, the red brick universities were universities which were established in the 19th and early 20th century. These were called plate glass universities. Yes. So this is another round of universities founded after the Second World War. Yes. So Warwick, I think, the first group of students came through in 1964. Yes, so in a way that's, whatever the context of the hierarchy,
it's where they're trying new things. Yes. As intellectual emphases and so forth. Yeah, I mean, I think Warwick was special because it had a very strong mathematics program. It still has. Somehow, by the force of personality of one person, it recruited a lot of people, especially from Cambridge. And it obviously had good management. I think they decided to concentrate on a few subjects and do them seriously, so it became one of the very successful new universities. So it was a very strong mathematics department by then,
and there was a new computer science department. When I got there, it was almost an empty building. As a graduate student, I got a nice office because there was no one else to take the offices. There were very few graduate students. So my Ph.D. advisor, Mike Paterson, had done a Ph.D. in Cambridge earlier, and he spent three years at MIT, and he just came back at the same time as I arrived. So that was a very good experience at Warwick. So Mike was a very generous advisor. We spent a lot of time together, and also, quite interestingly, his world view was a bit different from mine. So it was a case of me learning something from someone who's a bit different. Yes, yes. How are you deciding on the direction of your dissertation? Well, he, I mean, he suggested some problems, and that's what I worked on.
What were those problems? Well, at that point, these were decidability problems related to his thesis — general questions of whether a certain question has an algorithm for getting a solution, or whether it has no algorithm. So by that time, certainly in North America, the emphasis had shifted, not just to saying whether something is computable, but to how efficiently it is computable — to algorithms and complexity. Which becomes very key to some of your insights. Yeah, so what I was doing in some sense was a bit old-fashioned.
On the other hand, it was a very simple — mathematically simple — and difficult problem, which I couldn't solve, though I solved special cases. So that's something else one can think about. Some people are very lucky with their PhD thesis, in that it's fashionable and wanted.
So in my case, I got something which was very difficult, so maybe that's got some advantages to it. You understand your limits. So that's what I found out from my PhD experience. So the nature of your research for your PhD did not shape the direction that your future work would go?
No, but in that period, I did have, so my advisor did have other interests too, and we had lots of conversations. So I think I was quite up-to-date with where the field was going. Just the problem I was trying to solve was something left over from some earlier years.
In the end, and we won't go there yet, but broadly you've had a transatlantic career; but what are the temptations at this point, particularly as you're about to get your degree, to go to what may have seemed the wider universe of computer interest in the United States? Yes, so in fact what my interest became was practiced mostly in North America and much less in Europe.
So my subsequent career — so I finished my PhD in a fairly short time, and I spent one year at Carnegie Mellon in Pittsburgh. Directly? Yes, after my PhD; somehow I finished early and an unexpected short-term job turned up, so I did that for a year.
Then I had to think harder, then I went back to Britain for about eight years to Leeds and Edinburgh. So then, yes, I wasn't where the crowd was in this field, but I didn't, you know, I obviously chose it.
I don't think I suffered, so I tended to make up my own problems, but maybe just to give one characterization. So during that eight-year period, so every year I came to the US for one of the major conferences.
Right, of course. And I was always conscious of this question that I want to see what's going on, to catch up, and maybe I'll change the course and do something which someone else was thinking was good. And I think what I found, each time I found out lots of interesting things which influenced me,
but then when I went back I carried on doing what I was going to do. So it seems that I was fairly strongly motivated internally by what I thought were good questions. It's odd to put it this way, but as you're launched in your career, what is your first major problem and insight?
I mean, it's silly to put it that way because it's a process, but essentially the next stage of inquiry and revelation. Yeah, I mean, it depends on what your threshold is for these things,
but I suppose the first result which, you know, if you measure it by outside interpretations. Alright, one later. Yeah, okay, so when I was in Pittsburgh, I got one result which attracted attention. It's the kind of result which, if you mention it, people say, oh yes — people are interested. It's not something you have to persuade them is interesting.
It's a good definition. Yeah, something which kind of sells itself. Yes. So this was on context-free recognition. For languages — natural languages, formal languages, computer languages — people try to characterize grammars, and the context-free grammar — I think the ancient Sanskrit grammarians knew about it — is the most natural way of describing a formal grammar: how a sentence is built from a subject, an object, et cetera. But a sentence can have — you can have a grammar where the same sentence has many interpretations, so it's ambiguous, and this is one reason why. So, given a formal grammar and a sentence, the question is, is the sentence legal in the grammar? And the question is, how efficiently can this be determined? So this is all within the very specific notion of context-free languages, which was defined by Chomsky as a precise thing. And so I got a more efficient algorithm for this, using kind of algebraic techniques,
and using results which were fairly recent then. So there's some recognition of this. I think if I'm right, because I've done a little bit of research in your background, and this is in the late 70s?
Yeah, this was in the mid-70s, it was published in 75. In 75. The next, I think, landmark moment for you, but you can tell me when you're credited with a new class of complexity, this is the late 70s. That's right, that's right, that's right, that's right.
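To make the recognition problem just described concrete: given a grammar and a word sequence, decide whether the sentence is derivable. Below is a minimal sketch of the classic cubic-time dynamic-programming recognizer (CYK) — the baseline that Valiant's algebraic method improved on. The toy grammar is an invented illustration, not taken from the interview.

```python
from itertools import product

# Toy grammar in Chomsky normal form (invented example):
#   S -> NP VP, VP -> V NP, NP -> 'she' | 'fish', V -> 'eats'
BINARY_RULES = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
LEXICAL_RULES = {"she": {"NP"}, "fish": {"NP"}, "eats": {"V"}}

def cyk_recognize(words, binary_rules, lexical_rules, start="S"):
    """Return True iff `words` is derivable from `start`.
    table[i][j] holds the nonterminals deriving words[i..j]."""
    n = len(words)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexical_rules.get(w, ()))
    for span in range(2, n + 1):            # substring length
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):           # split point
                for left, right in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= binary_rules.get((left, right), set())
    return start in table[0][n - 1]

print(cyk_recognize(["she", "eats", "fish"], BINARY_RULES, LEXICAL_RULES))  # True
print(cyk_recognize(["eats", "she"], BINARY_RULES, LEXICAL_RULES))          # False
```

The three nested loops over spans and split points give the O(n³) running time; Valiant's 1975 result reduced recognition to matrix multiplication, hence below cubic.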
So this is about saying — a computational problem: many questions have yes-no answers; you've got a question, you answer yes or no, and much emphasis up to that time was on those, possibly because that was what worked for Turing, and it worked for a long time. But there are many questions where the answers are quantitative: you want to count how many things there are, or — the calculus is all quantitative — you want to find slopes or volumes, things like that. So there the issue is how do you recognize a problem which has an efficient algorithm, and distinguish it from one which does not. And this is often in cases where a counting problem has an existence problem — a simpler version of it — and often the existence problem is easy, but counting is still hard.
So one point where the result, I suppose, I was pleased with it, is that, so one consequence is that, okay, so the kind of mathematics one learns in school, most of the mathematics one learns in school is also algorithmic, that you learn,
you want to be able to compute something with it; it's not just a pure abstraction, you want to be able to compute. So most of the things you learn in school are efficiently computable, like linear algebra. And — I don't know how much calculus you did — you learn that the derivative of x squared is 2x, and things like that. Okay, but it turns out, which came out from my results, that for the calculus,
which I thought, I was promised in school that everything is easily computable, in fact, in high dimensions, it's kind of provably one of these hardest problems. So the simple things you learn in calculus, and it turns out that if you've got many variables,
x1, x2, x3, then these things are as hard to compute as, say, an NP-complete problem. Right. Or harder. So you're basically defining that level of complexity in this work. Yes, yes, so basically it's, so one thing which is amazing in computer science is,
which we can exploit and benefit from, is that there are some statements of great generality. So people talk about the search problem, where a search problem is something where, if someone gives you an answer, you can easily recognize, you can easily check whether it's an answer, but it's maybe hard to find it.
Okay, so lots of problems like this. So the question is, if you're given any old search problem, it turns out that for some of them, you can count the solutions efficiently, and for most you can't. You can. And the ones you can't are all equal, provably the same.
Right. Or many of them are provably the same. So it gives a technique for recognizing whether you should give up looking for efficient algorithm or not. That's a rather serious adjustment of the earlier expectations. So this insight about the many solutions, including ones you don't even know exist.
Yeah, so here the question is, yeah, there are many solutions, but maybe you can efficiently count up how many there are or write down this big, big number. Or in calculus, you're computing a volume or an area.
Yes. And so in high dimensions, these problems, you can understand why no one's found an efficient algorithm. So it's like a systematization. So without this computational viewpoint, people thought of all these problems as being different and arbitrary.
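The counting-versus-existence gap described here has a standard concrete instance: deciding whether a bipartite graph has a perfect matching is efficiently solvable, while counting the matchings — the permanent of the 0/1 adjacency matrix — is #P-complete (Valiant, 1979). A brute-force sketch for tiny inputs; the function names are invented for illustration:

```python
from itertools import permutations

def count_perfect_matchings(adj):
    """Permanent of an n-by-n 0/1 matrix: the number of perfect matchings
    of the bipartite graph it encodes. Counting this is #P-complete;
    this brute force sums over all n! pairings."""
    n = len(adj)
    return sum(
        all(adj[i][sigma[i]] for i in range(n))
        for sigma in permutations(range(n))
    )

def has_perfect_matching(adj):
    """The existence question, by contrast, is solvable in polynomial
    time (e.g. via augmenting paths); this simple backtracking search
    stands in for such an algorithm on tiny inputs."""
    n = len(adj)
    def extend(i, used):
        if i == n:
            return True
        return any(adj[i][j] and j not in used and extend(i + 1, used | {j})
                   for j in range(n))
    return extend(0, frozenset())

k22 = [[1, 1], [1, 1]]  # complete bipartite graph on 2 + 2 vertices
print(has_perfect_matching(k22), count_perfect_matchings(k22))  # True 2
```

So the two questions look almost identical, yet one is provably easy and the other, by Valiant's classification, is among the hardest counting problems.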
And this computational viewpoint gives you an incredible way of systematizing your understanding. Again, because it's a very rich career, I'm going to leap to '84. Now, by '84, there's your great paper on learning. '84, where are you? Have we gotten you to America?
Yeah, yeah, yes, I'm kind of here. Actually, I'm just back there selling my house, actually. So I was here, and then I went back for three years to sell my house. So I was thinking of this during that period as well, so I've got that association. Where, basically, are you doing the thinking that led to your theory of the learnable?
To where, you mean? Yeah. Yeah, so some things I can remember exactly where I was, but this was over a period, I think. Over a period. So I don't have a geographical location. But, yeah, I mean, where did it come from?
Yeah, the process of thinking in that way, yes. So, yeah, I'm not quite sure how much is hindsight. Anyway, one way of saying it is that people thought of human phenomena — like learning, which was a human phenomenon — as something which is totally beyond science, and it's a waste of time for mathematicians to think about it. It's a soft subject. It's for the humanities rather than the sciences. Yeah, yeah, don't even touch it.
But, of course, I mean, the history is that, in some sense, Turing's main achievement was to touch exactly this, because in his day, the idea of what needs creativity and what can be done mechanically, that's exactly what he distinguished. So, you know, I thought things could be done.
But I suppose I was mainly interested in human cognition, not so much machines, and I suppose the simple idea is that human cognition, or, for example, children learning words, that millions of children learn words all the time, they see different examples of tables, but they get the same notion of a table.
So this is such an overwhelming phenomenon, predictable phenomenon, there's nothing wishy-washy about it, that there must be some scientific explanation. And so what we are saying is that what's represented in our brains, how does that map to the real world?
There must be some mathematical kind of way of thinking about that. Interesting. Because it's inconceivable that this whole cognitive phenomenon is just, it doesn't have a theory, it's just a trick which just happens to work. It works so well, it must have a theory.
So kind of that's where I started from. And then the one decision to make is, well, what aspect of cognition is the fundamental part? Yes. Because people in artificial intelligence at the time, I mean, people did work on learning, they also did machine learning, they worked on reasoning, they worked on search. There was natural language, a whole list of topics at a conference,
the question was, which is the starting point? And so somehow I pretty early decided… So are you basically in this paper setting the problem of seeing the relationship between human cognition and computation in a mechanical sense?
Or is this purely an inquiry into human cognition? Well, no, no, no, it's combining the two, it's saying the two things are the same. So it's writing, basically it's writing down some criteria for what you want to achieve
before you declare someone to have learned, a machine to have learned, a person to have learned, that doesn't matter. So people, once you say that, people worry about machines learning now. But it was totally inspired by cognitive phenomena. How was it received as a framing?
Well, I guess that's a great question. So certainly the community I was in was a pretty hardcore mathematical community. So initially there was some suspicion that this was some absurd thing. Why are you doing this? It can't be serious science.
But yes, there was a lot of bad reaction initially. But I was lucky that there were some other people in the field who quickly picked it up and there were some very good people. So there was a small community formed who pursued this. In some sense you've been working on that question or problem
throughout your career from this point on. I mean, you continue to look at the human and the machine equivalence issues. Yes, yes, yes. Well, yeah, the equivalence is simply that, so again this is implicit in Turing,
that Turing said that he's going to describe everything which is a mechanical process and that certainly includes the brain. So the question really is whether human cognition somehow can be explained in terms of computation. I think that was settled by Turing.
But the question is what kind of computation, and can we make it useful beyond having said that. What happens next? This framing happens in the mid-80s. We're now in 2017.
What progress has there been in this investigation? Because this is so now fundamentally recognized as a key issue. Are we much closer to understanding the relationship between human cognition and…?
Well, no, so understanding human cognition is what motivated me. So what exactly the algorithms we use, we still don't know. So the developments have almost entirely been on the computer science side, people making machine learning practically useful, which it is.
So it's almost like, again, to draw the analogy with Turing, a psychologist in his day would maybe debate philosophically what's creativity and what's mechanically computable. And so his influence is that computers have taken over the world,
but we still haven't settled the problems that a psychologist may have been interested in, exactly what humans do. So being motivated by… So the human phenomenon of cognition is so spectacular, it's okay to be inspired by that, which I was,
but I'm not telling you that I've solved it, this human aspect. But I think in the long run, I think computer science will contribute to that. Hasn't there been, I'm going to just use the word that comes to my mind,
it's probably the wrong one, almost a sentimental notion that computers are solving educational issues and learning issues, and that there's been almost too much of an embrace of what is computer generated as what really advances human thinking, or at least advances human learning.
The teachers sort of jumped in, or maybe the corporations jumped in, but it was too early, yes. Yes, I entirely agree. I mean, some people tell me since I work on learning, well, what do you know about education? And I say nothing. I think what I've learned from my technical work is that
human learning is very complicated. If we understand more about particular algorithms we use, we can do something. But otherwise, the fact that the educational world tries different techniques and tries this and that and the other, it's reasonable.
It's not that we've made much progress on that, from my point of view. Right, right. Again, I think it's an issue not necessarily recognized by society: there's almost an over-optimism about the role of computers in human learning. Yes, I agree entirely.
You also did work in the 90s on parallel computing. Can you explain this inquiry? Yes, so again, there's very basic questions in computing. So Turing had his notion of computing,
and then when people wanted to really build machines, then people thought a bit more concretely of what this would amount to. So von Neumann had a more concrete realization. And the main difference, his main contribution is that the way he phrased it,
this von Neumann architecture had an implicit kind of efficiency, physics efficiency hidden away. So what it really means is that he postulated that you've got one processor, you've got a memory, you can access it. And the big mystery is that his architecture could be realized efficiently decade after decade after decade,
even as the technology changed. Somehow his architecture had some physics insight, so that for a computer programmer, the world didn't change very much, even if the machine totally changed underneath. And this was a very powerful notion for the development of the computing industry,
that people wrote software on the one hand, a lot of people developed hardware on the other, which got faster and faster, but the two were separate. It's not that every time you had a new machine, someone would have to rewrite the program. So this is kind of incredible.
But even soon after von Neumann, of course people realized that if you have more than one processor, if you have parallel computing, which appeared immediately, then it's not quite clear what you should do anymore. Von Neumann didn't have a recommendation what you should do. So if you have many processors, what do they have access to, how fast, etc.
The problem is that if every machine you make is different, then you can't reuse programs, because the program exploits the particular structure of the machine. And in some sense this problem permeates parallel computing in practice even now.
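[Editor's note: the abstraction Valiant proposed in 1990, the bulk synchronous parallel (BSP) model, makes this machine-independence concrete: a program is a sequence of "supersteps", and the machine is summarized by just a few parameters. A minimal sketch of the standard cost accounting, with an invented toy machine for the example:]

```python
def bsp_cost(supersteps, g, l):
    """Cost of a BSP program under the bridging model (a sketch).

    Each superstep is a pair (w, h): w = maximum local computation on
    any processor, h = maximum number of messages sent or received by
    any processor. The model charges w + g*h + l per superstep, where
    g is the network's per-message cost and l the barrier-synchronization
    latency. The idea is that these few parameters are all a programmer
    needs in order to predict performance on any conforming machine.
    """
    return sum(w + g * h + l for (w, h) in supersteps)

# Two supersteps on a hypothetical machine with g = 4, l = 100:
print(bsp_cost([(1000, 50), (500, 10)], g=4, l=100))  # → 1940
```

The design point, matching the interview: software is written against the abstract cost model, hardware is built to realize the parameters, and the two sides can evolve separately, just as they did under the von Neumann architecture.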
So the question was what's the suitable abstraction, analogous to von Neumann's for single-processor machines, which works with many processors. And so this interested me a lot. So I did have this proposal in 1990, this bulk synchronous machine,
which had some influence on the community, which is some positive influence. I'm interested, and we touched on this a little bit, and if it doesn't interest you we won't pursue it,
but this whole question of resistance to insight. You choose a line of thinking, questioning as an individual researcher and it goes into the world. There's the question of response, there's the question of colleagues,
there's the question of the perversity of your own nature and pursuing something when people are telling you this isn't going to go anywhere. How does that fit into your career? Are you very often talking along lines that people are challenging and wondering why you're going there?
Sure, so certainly I think you have to be thick-skinned, because you have to be very thick-skinned. I mean you get all kinds of responses which,
if you can take them it's okay, but not if you're worried by them. So for example on this parallel computing thing, once I gave a talk, actually this was in industry, and got a rather tepid response, and then someone in the audience was sensitive to this
and came up at the end to console me and said, well, what you were telling us was too general. People aren't interested in anything quite so challenging. Okay, so if you want a different kind of response, I was going to talk about learning.
So I gave a talk somewhere, and there's a well-known philosopher in the audience, and so basically I gave this mathematical definition of what learning meant, and he said, well what you're describing was already said by John Stuart Mill, except for the deltas and epsilons,
those are mathematics, and then I didn't say anything, and then he paused and said, ah, maybe it's deltas and epsilons which are important, and I said yes. So making quantitative science out of something is the point.
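[Editor's note: the "deltas and epsilons" in this anecdote are literally the parameters of the 1984 "probably approximately correct" criterion: with probability at least 1−δ over the random examples, the learner's hypothesis must err with probability at most ε. A toy sketch of the idea, with a hypothetical function name and the simplest concept class, intervals on [0,1]:]

```python
import random

def pac_learn_interval(target, m, trials=200, seed=0):
    """Toy illustration of the PAC criterion on intervals in [0,1].

    Draw m labeled examples from the uniform distribution, output the
    tightest interval containing the positive examples, and estimate
    how often that hypothesis errs on a fresh random point. As m grows,
    the error should shrink -- 'probably approximately correct'.
    """
    random.seed(seed)
    lo, hi = target
    errors = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(m)]
        pos = [x for x in xs if lo <= x <= hi]
        # Tightest-fit hypothesis (None if no positive example was seen).
        h = (min(pos), max(pos)) if pos else None
        x = random.random()                      # fresh test point
        truth = lo <= x <= hi
        guess = h is not None and h[0] <= x <= h[1]
        errors += (truth != guess)
    return errors / trials

# Error with few examples versus many (the second should be far smaller):
print(pac_learn_interval((0.3, 0.7), m=5), pac_learn_interval((0.3, 0.7), m=500))
```

Making this quantitative, i.e. asking how many examples m suffice for given ε and δ, is exactly the "deltas and epsilons" point in the exchange above.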
But as far as your general question, certainly people have their lives and they don't want their beliefs to be uprooted every day. Sometimes they're receptive, but often they aren't. How concerned are you with the, I mean you're a theorist essentially,
with the application or the consequences of your insights. Is that something you leave to fate? Is that something you follow in a way? Is there something you're particularly proud of having led to? What about that?
Yeah, well I think in some sense all you can do is to leave it to fate. I think one doesn't have that much control, and sometimes one is pleased that some things which had very little influence were taken up decades later, so some things which I thought were pretty good, people didn't pay attention to, and some decades later people take it up.
So I think you, so I tend to leave it to fate. I mean, so I suppose I've done quite a few different things, and some have a good response, some don't. I leave it to fate. I suppose some people are more salesmen and think they have to push it, and so I've never done that,
and if I had, I'm not sure anything would be different. One of the other things that occurs to me in the spirit of this question is, and again something I think is very admirable, is one of the ways one can characterize your voice,
and tell me if I'm wrong, is that you're one of the people out there who is talking about the limitations of computers, or what we know, and it's a period, and it may have to do with not being an American or whatever, but more likely it's just in the nature of your intellectual interest,
that you are in an era gone mad with expectations for the computer for learning. Your inquiry is about what we don't yet know or can't do. Is that fair to say? Yes, or what provably can't be done. Yes, but I think computer science as a science is particularly good at that,
so I think I'm taking advantage of some aspects which are special to computer science, with Turing at the base with his non-computability,
so I'm certainly not the only one in this. No, no, I understand, of course. But of course you could say that all of physics is limitations: the laws tell you that things go in straight lines, so they tell you what can't happen. So saying that the laws of science are negative statements has some truth,
rather than just saying you can do this, you can do this, you can do this. Right, it's not perversity on your part, it's just a natural way of understanding how things work. Yes, yes, yes. My last question really has to do with the kind of thing the young always want to know from those who have achieved so much in their field,
and that is if they think about future direction of a career, what seem like the exciting directions now that one might jump into?
I know there's no one answer or easy answer, but as you look at the field right now... Yes, but I think this may not be a helpful answer, but I don't think that's a question I ever asked myself. Okay, that's helpful.
And certainly in the academic context people will say, what's the next field we should hire in? But these conversations I don't think are, I mean, no one has that kind of insight. No one could really see the future. Well, not fully, but you can see the present. You can see where some of the direction of research is going,
some of the problems that really need to be solved that one took them on. Yes, so one can identify. So I tend to look at problems which really looked not just ready to be solved,
but problems which needed solving. Problems needed solving, that's what I'm trying to say. Yes, okay, so look for problems which need solving. I think that's the look for problems which need solving, and obviously some judgment in whether there's any chance that you can solve it in the foreseeable future,
so it's within your skill set. But generally fields, going into this field or that field, I don't think that's the right approach. I'm still going to press you to tell me some problems that need solving. Broadly within the computational universe right now,
maybe directions that you yourself are pursuing, but what do we need to know that is possibly knowable in the next five, ten years? Well, something which I've worked on for decades without any final conclusion
is very simple things about the computation in the brain, how we store memories, how you can store memories on top of memories without disturbing your previous memories too much. So you have questions like whatever you had for breakfast this morning,
if you had breakfast, do you use five neurons to represent it, or five million? No one knows. But these questions, together with some more invasive methods of looking at what the brain does, I think these things we should be able to resolve, because I'm sure the brain uses some very definite, simple mechanisms for doing this,
and we should be able to find out. What kind of specialists does this need? This just seems to me to call out for interdisciplinary discourse. Are you involved in conversations across many fields, with colleagues who are trying to solve this with you?
Yes, in the sense that I certainly listen to what these experimental neuroscientists do, and I also try to persuade them to do certain experiments, but that doesn't reach any conclusion yet. But I think it's an area which people have been predicting for a long time
that something should happen, and I think inevitably it will. Inevitably it will is a great last line. Thank you very much. Thank you.