The HLF Portraits: David M. Blei
Formal Metadata
Title: The HLF Portraits: David M. Blei
Title of Series: The HLF Portraits (35 / 66)
Number of Parts: 66
License: No Open Access License: German copyright law applies. This film may be used for your own use but it may not be distributed via the internet or passed on to external parties.
Identifier: 10.5446/40201 (DOI)
Transcript: English (auto-generated)
00:17
David, what kind of a kid were you? Where are you living? Who are your parents?
00:23
Start at the beginning. Okay, what kind of a kid was I? I was, I grew up in Storrs, Connecticut, which is a small college town. Where University of Connecticut is? Yeah, my dad's a math professor there. Okay. And I grew up, as you, as you
00:44
might imagine, I liked computers. I liked... Well, I don't imagine. Some of the great computer scientists didn't. Yeah. So at what age? I mean, are you touching one as early as you can remember? More or less. When I was seven years old, I was born in 1975, and when
01:01
I was seven years old or eight years old, my parents brought home a Commodore 64. Actually, no, a TI-99/4A, Texas Instruments, and they attached it to the TV, and I started learning how to program. And... Was it tough? Do you remember just the thrill of figuring things out? I loved it. Yep, I loved it from the
01:23
beginning. And at some point, we graduated from the TI-99/4A to a Commodore 64, still attached to the TV. I kept programming it. At some point, I got a monitor. That was a big deal. Moved to my room, and the Commodore 64 was in my room, and I kept programming the Commodore 64. And... Are you the only kid in the
01:46
family? So, I have a younger sister. And so there's another part of the story about how I got into computer science, which is that I had a babysitter. And my babysitter, remember this is Eastern Connecticut, his name was Ray Sidney. And he was the son, is the son, of one of my dad's colleagues, Stu
02:04
Sidney. And he was really into computers. And he came over, and he would teach me how to program. So, I really credit my interest in computer science to Ray Sidney. My sister and I joke about this when we talk, because she was around too. And somehow, he was babysitting both of us, but all I
02:23
remember is Ray and I. So, this is the famous male bias? I don't know. Ray Sidney ended up being the, like, eighth or ninth employee at Google. And so, you know, many years later, I was on a train. This is after college, and I was on a train in California. And I see this guy who looked just like Ray
02:45
Sidney. This is years and years later. But, you know, computer scientists, often we don't change that much, or at least we don't change the kind of clothes we wear. I saw this guy. He's wearing a tie-dye shirt, and jeans, and sandals. He looked like Ray Sidney. And I went up to him. I said, Ray Sidney? And he said, Dave Blei?
03:00
You know, this is on the Caltrain from San Francisco to Menlo Park. And anyway, he said, this is 1998. He says, you know, I'm working at this little startup called Google. We should come and have lunch. And so, I, you know, went to Google to have lunch with him that, you know, the next day or a week later or something. And it was in a little office in
03:24
Palo Alto. And then afterwards, he said, so, you know, do you want to join Google? It would be fun to have you here. And I said, you know, it sounds nice, but I got into grad school. So, I had gone into grad school at Berkeley by then. And I said, so I'm going to go to grad school instead. You know, later on we may get to
03:43
the subject of commercial choices as opposed to academic choices. Because when you start making your decisions, but this is a great reference. In your family, are both your parents scientifically inclined, or is it just your father? Right. So, my dad's a mathematician. My mom is a lawyer and a
04:02
lobbyist. So she, as children, we grew up, like our family vacation was to go to Washington DC for the big NARAL march. She lobbies for a lot of left-wing organizations for gay and lesbian rights and for pro-choice organizations and women's rights. Exactly. And, and so anyway, so yeah, Ray
04:24
Sidney, I grew up liking computers. At some point, another story about my sister, she had called me up and said, Dave, you know, I read this cool book called What Color is Your Parachute? And it's about finding your path in life and finding what you want to really, really want to do. And I was
04:41
maybe 30, 35 when she called me to tell me about What Color Is Your Parachute? It's a famous book. Yes, it is. And I said, Michaela, my sister's name is Michaela, you know, I know what color my parachute is. My parachute is computer. And you, in a way, always knew that. Yeah. You know, I, well, I was lucky. I really, I, I, I loved it from, from the
05:01
beginning. And also lucky in your generation as we'll find out in terms of what's breaking in this field. Okay. So basically going to school is like a vacation from the, from the serious computer work at home. Where are you, are you getting any support and well, elementary school, but let's get to high school for this kind of interest.
05:22
Is it the right kind of school for you? Tell me about your education. Sure. Younger education. Yeah. So, you know, Storrs is a small town. There's a, you know, now I live in New York City. We think about what school do we send the kids to, all this stuff. It wasn't like that where I grew up, you went to the school, the school bus picked you up. It took you to school and came home at the
05:42
end of the day. And yeah, my elementary school and middle school were, were good. They were, they were nice places. It was a small town. Then in high school, I started doing a bit more, you know, I was into math and computer science and there was a computer team that I was on. Okay. I remember, I
06:01
remember the computer team teacher asking us to build a sorting algorithm. And he said, okay, so go home and figure out a way to input a bunch of numbers and spit out the numbers in order. And I came up with what I now know is called destruction sort, where you go through the numbers and find
06:20
the smallest number, print it, turn that number into the biggest number you can think of, which for me was, you know, 9999. Then you find the smallest number, print it, turn it into that. And so at the end of the day, you've printed the numbers in order, and you've lost all the information because all those numbers are equal to 9999.
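The "destruction sort" he describes can be sketched in a few lines of Python. This is a minimal reconstruction of the anecdote, not his original code; the 9999 sentinel is his, so it only works when every input is smaller than the sentinel:

```python
def destruction_sort(numbers):
    """Emit values in ascending order by repeatedly finding the smallest
    remaining value and overwriting it with a large sentinel, destroying
    the original data in the process."""
    SENTINEL = 9999  # "the biggest number you can think of"
    output = []
    for _ in range(len(numbers)):
        # index of the current smallest value
        smallest = min(range(len(numbers)), key=lambda j: numbers[j])
        output.append(numbers[smallest])
        numbers[smallest] = SENTINEL  # the original value is now lost
    return output

print(destruction_sort([5, 2, 9, 1]))  # prints [1, 2, 5, 9]
```

After the loop the input list is entirely 9999s, which is exactly the "you've lost all the information" punchline; it is otherwise the same idea as selection sort.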
06:41
Anyway, so in high school, I also was, I played in a band like in a goth band. And I was really into that. So you were normal? You know, I played in a goth band. As normal as that is. You can interpret that as you like. Are you, are other people keeping up with you? Are you know, you've been celebrated as somebody who really knows his
07:02
stuff. And I won't use the G word, but you're you're said to be pretty smart. Was that a problem in high school? Do you have friends who were at the same pace you were at, interested in the same intellectual problems? Or are you just darting out ahead? In high school, there was really, you know, there was a
07:23
group of us, we all took honors classes together and things like that. And, and there was no, there was a very nice high school, there wasn't a lot of competition and we enjoyed ourselves. I was the editor, one of the editors of the school paper. I really enjoyed that. So you're not bored, basically. I mean, you're,
07:41
you're getting the intellectual and social sustenance that you want. Absolutely. Yeah, yeah. You know, I hung out with the nerdy kids. Okay. Plus the goth band. That's a shocker. So, um, let's get you to college. Yeah. How do you decide where to go? Yeah, so in college is where I think my interest in AI
08:02
and machine learning began. But how did you know to go where you wound up going? Right. I went to Brown. And I'm trying to remember how I chose Brown. I loved Brown. And, but I can't
08:21
remember why I chose it. Are you, are you getting counseling? I mean, by now you've demonstrated a real ability in computer science. And this is now the 90s. Yeah, I graduated high school in 1997. Okay. So, no, sorry, 1992. I graduated college in 97. So, um, and
08:41
this is 1992. I applied to a lot of schools, mostly on the East Coast. I got into some, I didn't get into others. And I went and visited Brown and liked it. I liked the vibe of Brown, kind of had a sort of a countercultural spirit, which I liked. So you could argue the decision was, obviously it was a great university, but the decision was more
09:01
cultural than scientific. Your father's been saying, go here to develop yourself to the next stage. I mean, you're just going to a place you want to go to. More or less. Yeah. Yeah. My parents liked that it was close by. Yeah, as all parents do. And they liked that Brown has a good reputation. And we
09:21
knew some people that had gone there and liked it. And I liked those people. So there you are, you made the decision on joking, of course, for the wrong reasons. I mean, in terms of the development of an intellectual career, but Brown turns out to be an intellectually productive place for you to be. An amazing place. Yeah. And there's a professor there that really made this happen. And her name
09:42
is Leslie Kaelbling. She's now at MIT. But at Brown, really there were several professors, but Leslie stands out. She was my thesis advisor and she taught the artificial intelligence class when I took it and turned me on to this whole world of AI and machine learning that I've been a part
10:02
of ever since. Let's situate AI intellectually for this era. Okay, so you are, you're in this direction, you've got the right mentor, teacher and you're learning, what is the state of AI? What are the assumptions about it? I mean, that's a big subject, but just characterize the 90s
10:23
because your career as it takes off is going to take off during a tremendously transformative period. Right. So I'm interested in the "before," right, for you. That's a good question. I'll have to think about it. What was AI like back then? Well, let me describe it and maybe yes, yes, of course,
10:42
maybe, maybe what we're looking for will come out of that description. Well, first of all, I read this book. I'm sure you know it, Gödel, Escher, Bach. I read my father's copy; his sister had given it to him. And that book energized me. And, to be clear, that book has very little
11:04
to do with modern AI and machine learning. But it was, you know, a meditation on cognitive science and what might be possible and logic and intelligence. And for a, however old I was, 19-year-old kid, it was very, very
11:22
energizing around thinking about the possibilities of computers and computers doing things that we don't think they can do. So I read Gödel, Escher, Bach and I loved it. I had dreams about it. I was very, I was completely into this book. And then I took AI and AI was
11:41
taught by Leslie Kaelbling. And there's a great AI textbook called Artificial Intelligence: A Modern Approach. It's by Stuart Russell at Berkeley and Peter Norvig, who's at Google now. And that book has been the AI textbook for many years. They had just
12:01
written it in 1995, or whenever it was that I took the class. Your timing in life was pretty good. Yeah, they had just written that book and they were friends with Leslie and they sent her the draft copy and said, hey, can you teach out of this book and tell us what you think? So I took AI with Leslie Kaelbling. And that was the book we were using. We were using this book that
12:21
became a classic. It was it was fantastic. It was fantastic book. It's all based on this agent. I remember it's all based on this notion of an agent and intelligent agent and what algorithms can the intelligent agent use to act in the world. And, and through that class, we learned about lots of classical AI
12:40
methods, but we also learned about some more modern methods. And in particular, we learned about reinforcement learning. I don't know if reinforcement learning has passed by these interviews yet. But the idea in reinforcement learning is very natural: there is an agent, an intelligent agent, and it's acting in the world, taking actions. The agent has a
13:01
catalog of actions that it can take. And what it does is it roams around taking actions. And when good things happen to that agent, it gets reward. And the agent is, while wandering around the world, trying to learn what's called a policy. And a policy is a mapping from your state, like where you are
13:21
right now, to action, what you will do. And by the agent trying to learn a policy, what I mean is that the agent is trying to learn how to act to maximize its, loosely, its long term reward. Right. Which, you know, when you hear about this, you can't not think about just daily life. Right. And you're talking about a
13:41
human being. Yeah. Right. So, so reinforcement learning is about making that whole idea mathematical, and we started learning about reinforcement learning algorithms, how to wander around. And there are all these fascinating angles, like, as one example, the explore-exploit conundrum. Do I go
14:01
try a new restaurant that I've never tried before? Or do I go to the restaurant that I know I like, and I want to have the burger there? All right. And reinforcement learning agents have to solve this problem. Do I explore new ways of acting in the hopes that they are going to lead to even bigger rewards than I currently think I can get? Or do I do the tried and true behavior that is
14:21
going to get me the rewards? Here's a way I want to characterize this era, and please correct me because I'm certainly not an expert. AI at this point is aspirational. I'm saying this, you're then going to correct me. It's aspirational. What you're really doing at this stage is not yet understanding how machines can
14:43
do some of this. Again, you can correct me, but you're trying to figure out how human beings do it. And then you're going to get to how can machines do it now? Tell me really what's going on. Yeah. So, well, you're accurate, except you're also still accurate. So
15:02
the AI, you know, Mike Jordan just wrote this essay, AI: The Revolution Hasn't Happened Yet. The revolution hasn't happened yet. So we have not achieved AI. And we'll get to whether I think AI is possible. You're going to contribute to at least getting us where we are now. But at this point, but at this point, yes. So things are very aspirational
15:21
and we're thinking about how people do things, but also connecting it to mathematical fields like signal processing and electrical engineering and control and statistics. And these are all it's all kind of coming together a little bit at this time. It hasn't yet totally come together. And reinforcement
15:41
learning makes mathematical the intuitive, agent-based action that I just described for you, where you are wandering around deciding what to do, hoping that you're making good decisions. Now that you've brought in the term mathematics, let me just ask this question. Somebody with your capabilities and
16:03
interests might or might not have chosen mathematics as a field rather than computer science at this point. What's the difference in terms of a life choice? Because you need mathematics to do computer science. Yeah, that's right. So why didn't you become a mathematician? Is
16:23
that because you've been playing with machines your whole life? I think so. Yeah, I think, you know, I, I wish I had a more sophisticated answer than I like it. I like programming computers. I still like programming computers. So there was no torment, intellectual torment about do I go this direction or that? Yeah, no, but like I said, my
16:42
parachute was computer-colored. And I love mathematics. So I was a double major in CS and mathematics. And you were? Yeah. Okay. And there are some math classes that I particularly loved; abstract algebra comes to mind as a very computer-sciencey math class. Okay, that's going to allow us to get you to graduate school, because you're a double
17:01
major. You've had an amazing mentor. Yeah. Maybe by this time, you figured out you could make a life's work with this. Yeah, yeah. Yeah. So that's gonna take graduate school. Yep. How do you decide about the next stage? Right. So I took this course in AI with Leslie, and it was life changing. And
17:20
then Brown is an amazing place. And Leslie's an amazing mentor. And she folded me into her research. So she folded me into her research group. So I then used to go upstairs. This is before everyone had a laptop, there was an AI lab where all the graduate students sat and programmed and worked on their research. And so I had access to the AI lab at Brown. And I used to go up there and sit there
17:40
and work on my, my undergraduate thesis. And that's where I met other graduate students and learn what it was like to be a computer science graduate student. I loved it. After college, I went and worked at a research institute called SRI. Oh, so you took a break from the academic. I didn't go right to graduate school. I mean, SRI was an
18:01
academic research institute. It's, it used to stand for Stanford Research Institute. But then, after Vietnam War protests, it became just an empty acronym. It's a famous place for AI research. It's, it's where Siri first started. Really, a lot of robotics first started there. And my mentor, Leslie, she worked there. And she
18:20
helped pay for it. Did she urge you to do that before you went on to enroll in an academic program? I can't remember. I think I knew that I wanted to work a little bit. And I knew I wanted to move to California. I know I'd always had this idea that California was a great place. And turned out I was right. Yeah. And, and so
18:43
Yeah, you went, and there were no risks in taking time off. I mean, just in terms of getting into the program you wanted, or you just didn't worry about that. I didn't worry about that. Yeah. You know, it's funny, I, my wife and I often talk about how it's different now because I interact with undergraduates a lot in my job.
19:03
And, and it, it felt like a different time. And I sound old when I say that. But if, you know, I don't know if the world felt as competitive back then. When I was graduating college, I don't remember my friends who now are
19:20
all very successful, happy, upstanding citizens. I don't remember us thinking about, oh, you know, what move should I make to get to the next place where I want to eventually be? There was very much a feeling of, Hey, what do you want to do next year? And some people said, I'm going to go and I saved up some money. I'm going to travel or I'm going to get a job at a startup or I'm
19:41
going to go to California and get a job in the tech industry. But it wasn't, it wasn't as deliberate as, as well, I eventually want to be an academic. Yeah. So yes, I went to SRI. So how long are you in this California paradise for research? It was paradise.
20:00
So I went to SRI and I basically was coding in LISP. I was writing LISP software with researchers, people who had PhDs in AI. And at the same time, I was still communicating with Leslie over email because we were turning my undergraduate thesis, which was about reinforcement learning into a paper, a workshop
20:22
paper that we could publish. And basically Leslie sent me an email. I remember saying, Hey, it's probably a good time to go to grad school now. And I thought about it and I thought, you know, I, I don't love having, even though I liked my job, I didn't love having to come to the office every day. I wanted the, the,
20:41
now it's an illusion, but I wanted the freedom of academic life where I could, you know, kind of set my own agenda and think about what I wanted to think about. And so I said, yeah, that's a good idea. And I applied to graduate school. And in your mind, it was obviously going to be a good graduate school, but was
21:02
it, did it have to be in California? No, I applied all over the place. But then it boiled down to, I was deciding between Berkeley and UBC, the University of British Columbia. UBC at that time had some really great reinforcement learning researchers. And reinforcement learning was the world that I knew back then. But I also
21:23
got into Berkeley and I had also met my wife and she was in California. And again, this is going to be how it's a different time. I, so at Berkeley, my advisor was Mike Jordan, and he's a very well known and important scholar in
21:40
the world of machine learning. And, you know, before I even got into Berkeley, I emailed Mike, Leslie said, Hey, you know, Mike Jordan might be a good person to work with. He's, he just moved to Berkeley from MIT. And so I emailed Mike and I said, Hey, you know, I'm a, I graduated from Brown. I work at SRI. I'm interested in applying to grad school. Can we
22:00
chat? I'd be interested in maybe working with you. Leslie Kaelbling suggested it and Mike wrote back, sure. Why don't you come and meet with me for 15, you know, 15 minutes, 30 minutes. And before you meet with me, here's a paper. Why don't you read this paper that I just wrote? He's going to test you. It was a great thing. And now I do this when I meet with students, but I'll tell you something else, which is that nowadays, Mike probably
22:22
gets a thousand such emails a day and probably can't respond to any of them. Back then, again, it was a different time for machine learning and AI. I, I, you know, took the BART to Berkeley and had a sit down meeting with Mike Jordan. So what's the short version of what the paper was about? So
22:41
the paper was called An Introduction to Variational Inference, or something like that. I don't remember what the exact title of the paper is, even though I've cited it many times, but it was a tutorial about, about how to do something called variational inference in the context of something called graphical models. And what's important for the context of this
23:00
interview is that that ended up being my field, basically what I, what I've been working on since, since that time for 20 years, almost 20 years. So then you'd really better explain what this is about. And I would love to be able to be a fly on the wall in that meeting to hear what on earth I thought of that paper back then, because there's no way I understood even a, even part
23:22
of it. I did read it, you know, and I read it and tried to have an intelligent conversation with Mike about it. But anyway, so I, I met with Mike, I applied to Berkeley, I got into Berkeley. That's when I was offered an early job at Google, which I declined to go to Berkeley. Again, now's the time to
23:40
dwell a bit on this decision. Now we know what the decision to go to Google would have represented, at least financially. But as you're thinking about it, and you're still young and fluid in your expectations in life. What did you think the choice was when you were offered the chance at Google as opposed to graduate education? Might
24:02
you have gone to Google or was that not even realistic in your ideas of what you wanted for your life? Well, I'll tell you. So when I was graduating from college, I looked at lots of different kinds of jobs. So I looked at startup companies, I looked at SRI, and I
24:21
interviewed at a bunch of places and decided to go to the research institute because I wanted to do research. When Ray offered to really talk more with the Google folks, I don't want to misrepresent. When, you know, I was a kid, first of
24:43
all, Google looked like every other startup. That's one thing to remember. But one thing that is true is that I went back from that lunch back to my office. I remember this too. And I fired up Google. This is in 1998. And I typed in a query and I immediately noticed, and I think many people have had this experience, I immediately
25:00
noticed that there's something different about this. It was better. And I remember I told my wife, I told my friends, and we all started kind of using Google at that point. It really had that kind of immediate feedback feeling that there was something good happening here. That said, to be honest,
25:21
I didn't really take it seriously. I didn't, or I didn't, that's not the right word to use. I didn't really seriously consider it. I got into grad school. I was really excited about going to Berkeley and I was looking forward. So why make a change? Let's get us from the near
25:43
incomprehension of the paper given to you by Jordan, to going into this as a direction for your own research. So now you're in graduate school. You're pretty much free, in the way graduate students are, to map your course. So how
26:02
are you making decisions intellectually about on the way to your dissertation? Yeah. You know, it's funny, you said that we're going to talk about my life and I now realize there is a theme here that I'm learning about. So maybe this is like a therapy session for me, which is that, well, I'll tell you
26:22
and then I'll tell you what the theme might be. So in grad school, I had a great time. I love graduate school. I loved it. When we moved to New York at first, I told my wife, oh, I miss California. She said, you don't miss California. You miss graduate school. I love graduate school. I liked going, I liked
26:40
meeting people. I liked having coffee. I liked learning. I liked Berkeley. I like riding my bike around. It's a good man. It was great. Nothing wrong with that. And so I was very happy. And I'll tell you what happened. So my work, I guess the preface is that my dissertation was about large
27:01
scale, what's called unsupervised learning, on text data and images. To explain what unsupervised learning means, let's contrast it with supervised learning. In some machine learning problems, a good example is your spam filter, the goal is to say: okay, I'm a machine learning algorithm. I'm
27:21
going to ingest email and decide if it's spam or if it's a real email. If it's a real email, I'll send it to your inbox. If it's spam, I'll send it to the spam folder. Okay. That's called classification. And it's called supervised learning because you need labeled data: a bunch of emails, each marked spam or not spam, in
27:40
order to create the machine-learned algorithm that can decide whether a new email is spam or not. That's supervised learning: an algorithm that takes in labeled data and learns a rule to take your email and put it in the right bin. Right. Like a recipe or something. Exactly. Yeah. Well, it can be statistical, but the point
28:00
is that it takes in labeled data and then it does something with unlabeled data later. Now, unsupervised learning is literally unsupervised. There's no labeled data anywhere in the picture. Okay. So unsupervised learning, and this is the problem I worked on in my dissertation, is, you know, somebody hands you 100,000 or a million articles, say,
28:21
in the New York Times. The New York Times calls you up and says, hey, we have all our articles digitized. Here's a million articles. We want to build a way to navigate around them. We want to build a way to understand them and to help people explore them. Unsupervised learning says, okay, I'm going to ingest all that data, and I'm going to find patterns in the data automatically. And then I'm going to use those patterns
28:40
that I found to create something like a navigation tool. And is the I at the moment the machine? Who is the I? It's the machine. Yeah, the I is the machine: I'm going to ingest this data, learn the patterns, and then build a way to navigate through this data using those patterns. And so my thesis
29:01
was about unsupervised learning of text. And this is called topic modeling, where you ingest many, many documents, articles, whatever they are, and you learn automatically the themes that pervade these documents. And then you use those themes to build a navigator or a predictor or a recommendation engine or what have you. And the point is that
29:21
you never needed anybody to label any documents. You learn this automatically. So is anybody else working on this problem at this time? Many, many people. Many. Oh, at this time in graduate school? Oh, no, few people then. Unsupervised learning was a thing, but specifically with documents, I feel like we really carved
29:41
that out, though there were others for sure. No, I understand. Not entirely, but not so many. Yeah, that's right. It was one of these threads of machine learning research. And it came out of cognitive science really, with the work of people like Sue Dumais on latent semantic analysis and Thomas Hofmann. Allowing for modesty, I want you to tell
30:03
me whether we can say that this kind of focus is what has turned artificial intelligence into a more likely prospect. I mean, this feels like a real turning point in the path. Well, I will say that, you know,
30:24
something you've brought up a few times is this sort of right place, right time. The timing of this is very interesting, in that this was the time before the time now, when technology is permeating our entire life. Okay. And something
30:42
that's happening now is that there is a newfound importance to unsupervised learning. So in the 90s there were, loosely (I don't like these labels and boundaries, but they're helpful for understanding the landscape of a field), three kinds of machine learning. There is supervised learning, where we have labeled data and we want to
31:00
predict the labels on new data, like spam filtering. There's unsupervised learning, which I just described. And then there's reinforcement learning, which is what I described earlier. Back in the 90s, supervised learning was the main activity of machine learning, and it was also where many of the main applications of machine learning were coming from. Spam filters emerged in the 90s and 2000s, and that's a
31:21
supervised learning problem. What's happening now is that a lot of the benefit comes from unsupervised learning and reinforcement learning. But let's not talk about that yet, or maybe we won't get to it. With unsupervised learning, you know, suddenly we've built measurement devices on the whole world. And so that gives us lots and lots of unlabeled data, data where nobody's
31:40
marked it as spam or not spam. And we need to do something with that data; we want to use it to help our lives. And so how do we do that? Well, we do that by extracting patterns in that data, and then using those patterns to form downstream predictions. So for example, recommendation systems, the systems that predict movies for you, these use unsupervised learning almost
32:00
entirely. Now, back then, unsupervised learning was arguably less important than supervised learning. Here's another thing that's about to happen, as you say, right time, right place. But let me start with this: is the term big data being used yet?
32:22
No. Because it's when big data comes in that unsupervised learning becomes so important. Exactly. Yes, that's exactly right. These massive amounts of data have no labels, and there's so much of it that we can't even hope to hire enough people to label it. There you go. Yeah, exactly. And what's
32:40
interesting is what's happening now. This has transformed the world of technology, that's clear. I think what's happening next is science, where it's not just that we build Facebook and we build Google and that gives us lots of unlabeled data, but scientists suddenly can sequence millions and millions
33:01
of people, and astronomers and astrophysicists can point incredible measurement devices at the sky and collect terabytes and terabytes of data about the light and the various unimaginable observations in the sky. And just as, lurking in the unlabeled data that we get by
33:21
building things like Netflix and Google and Facebook, there are patterns that unsupervised learning can turn into algorithms that form predictions and help us do whatever it is we're doing with technology, those scientific data sets hold inside them the answers to scientific questions. Now,
33:41
you need a different type of computer science to get those answers out with that kind of unsupervised learning, and that's something I'm interested in right now. But anyway, yes, in short: unsupervised learning is very important now, will be important in the future, and back then wasn't as important. So how did I get into it? I took a class with
34:01
someone named Marti Hearst, a great professor who taught a class called text data mining. And this class was about unsupervised learning and text, or really it was about text. If we have digitized text, machine-readable text, what can we do with it? And
34:20
again, this is the theme I was referring to. I just liked it. I don't know what to say. I enjoyed writing algorithms that operated on text. I enjoyed thinking about text as data. I enjoyed finding patterns in the text. And something I liked about it, in particular, is that, you know,
34:41
as a kind of intelligent, educated person, I could take a big collection of documents and immediately see if my algorithm was doing something sensible with them, because I speak the language. In contrast, if I were trying to work on something like computational biology or physics, which are very
35:01
important, interesting problems, I could ingest a big database of genetic data, but I wouldn't know if what I got out of the algorithm made sense unless I called up an expert biologist and said, hey, does this make sense? Now I do that kind of thing. But back
35:20
then, I loved that I could have all this data on my laptop. That's the other thing: text data is small, and it could fit on a hard drive in 1999. So I could have all this data on my laptop and just start exploring, and I had a tight loop between making changes to algorithms and seeing what those changes would do. And there wasn't a lot of
35:41
overhead. That put me in a cycle that I really enjoy: developing algorithms, seeing what they were doing, seeing how they worked. And yeah, that's how I got into unsupervised learning. After Marti Hearst's class, I went back to Mike. I said, hey, I'm interested in text and unsupervised learning. Mike said, oh, me too. And we started working together.
36:02
So I'm just assuming your dissertation was well received. And now you've got some career choices to make again, in terms of where you're going to go. I assume you've, you're committed to an academic career. So how are you going to, what are you going to do with your PhD?
36:22
Good. I know you're trying to take us forward, but I want to say one more thing about graduate school. There's something else about grad school, and it's an important part of the story for me anyway, which is this: in graduate school, something I was so fortunate
36:41
about was the cohort of graduate students in Mike's group. The late nineties in Mike Jordan's lab was a very special time. The people I was in grad school with were people like Francis Bach and Andrew Ng and Barbara
37:00
Engelhardt and Eric Xing and Long Nguyen. There's a list of us, maybe nine or ten of us. And we bonded; we became close as friends, but also close as scholars and, you know, as intellectuals, and it made a huge
37:20
difference. John McAuliffe was, you know, one of my closest friends from that time and still is. And, and we wrote papers together, we read papers together. And I think it partly had to do with Mike and his good job of social engineering, getting us to be in a room together, reading the same papers, discussing ideas together. And it partly
37:40
had to do with luck, just a group of people that got along well, but it was very special and it created an atmosphere that is hard to recreate. Is there any alternative to that? In fact, I mean, you had the luck of the right group and the mentor and so forth, but can you do any work now in isolation? No, I don't think you can. So that's
38:01
changing too. Though from that time, something I still take with me is that I still strongly prefer in-person collaborations. John McAuliffe and I wrote a bunch of papers together, and what we would do is, I'd wake up in the morning, ride my bike over to his house, we'd go to the coffee shop, drink very strong coffee and
38:22
work on research together. It was fun. We laughed a lot. It was a social event, but an intellectual event too. And that's hard to recreate over Skype and video conferencing and things like that, I find personally. Well, if a computer guy says that, then it makes it true. Anyway, so
38:41
one of the people I mentioned is Andrew Ng. He's now become a rock star. He's a legend. And so my thesis, you know, was about unsupervised learning on text. And in particular, you know, one of the chapters in the thesis develops a model called latent Dirichlet allocation, which became a very popular model for analyzing texts, for finding
39:00
these themes that pervade the collection. And, and I, and, you know, I think how that model came about, which was such an important part of the thesis, tells the story of that time in machine learning, which is that, you know, so I was in Mike's group, Andrew Ng was there. He's a senior student. He was my TA in Mike's class. And
39:22
at some point I'd gone on an internship to Boston and worked on something called the aspect model, which was a precursor to the latent Dirichlet allocation model. I worked with someone there named Pedro Moreno, and I got into working on these topic models there. I came back
39:41
to Berkeley and Andrew Ng said, hey, you want to get some coffee? And, you know, at some point I realized that all we did at Berkeley was have coffee with each other. It's come up a few times and it's true; it happened all the time. Anyway, so Andrew Ng says, hey, you want to get some coffee? I said, sure. So we sat down at the coffee shop, Brewed Awakening, and Andrew said, what did you
40:02
work on over the summer? I'm curious. I said, oh, you know, I worked on the aspect model, and then I explained it. And Andrew said to me, you know, something's always bothered me about the aspect model. And I know this sounds like a myth, because stories like this happen in computer science all the time, but this is really true. He
40:21
took out a napkin and drew a picture on it, which became a figure in my thesis, not his actual drawing but the same figure, and explained to me the problem that he saw with the aspect model and how he wanted to fix it. And that idea spawned
40:42
what then later became, and it sounds immodest but it's true, this important model in the world of unsupervised learning. Part of why it's important is that we kind of did this before all the rest happened. But part of why it's important is that it's just a good idea. And
41:03
yeah, that's how it started. So Andrew, the senior student, and me and Mike started working together on this, and that grew into this field. Not because I'm determined to move you on, but because I'm really interested in this cohort idea: when you decide on academic positions, which you're
41:20
now qualified for, do you decide in terms of your cohort? Do you say, I'd like to work with you at the next stage at Princeton, and you both go for that kind of lab, or what happens? Right. Well, we're all scattered now. But we get together every year at the conference that we started
41:40
going to when we were in graduate school. So the cohort continued? Oh yeah. There's this conference called the NIPS conference, and we rent a house there and all stay together; I room with the other people. And this has been happening since 2000. And so,
42:02
yeah, we've remained close. What is it going to take for me to get you to Princeton? Good. I will get there. You can tell I love grad school; I would happily have stayed. Okay. First I went to Carnegie Mellon. After graduate school, Mike Jordan invited me to his office and
42:21
said, Dave, isn't your girlfriend tired of dating a good-for-nothing graduate student? That was his way of saying it's time to graduate. And Mike knows my wife, who was my girlfriend then. And he knows you. Yes. Right, he knew I was very happy and clearly wouldn't leave on my own. Anyway, so I said,
42:43
yeah, I think I was there for five years. So I said, okay, well, Tony, my girlfriend, she's going to law school in Michigan, so I'm going to get a postdoc. Mike always recommends his students get postdocs, with a couple of exceptions; Andrew was one of them. And Mike
43:03
said, that's no problem. Draw a five-hour radius around Ann Arbor, and that's where you can get a postdoc. It seemed like he had just thought of that then, but in retrospect I realized that radius contained two major hubs for machine learning,
43:20
right? Toronto and Pittsburgh. So now it seems like it was possibly a little more deliberate than it seemed at the time. He sounds like a very good mentor. Yeah. Oh yeah, I mean, Mike remains a good mentor. One of the students in that same cohort, Alice Zheng, said that what we get when we work
43:40
with Mike is JordanCare, like AppleCare: you can call him up. I still talk to him when I would like advice or just to talk. Anyway. So it winds up being Carnegie Mellon. Yeah, it winds up being Carnegie Mellon, where I worked with John Lafferty, who's now at Yale. And yeah, I loved my postdoc too. What
44:01
can I tell you? What about the intellectual nutrition of the kind of work they were doing? I mean, do you, by this point in the development of the field, have many options? Are serious computer labs happening everywhere? Right. So now things are starting to heat up a little
44:20
bit for machine learning. This is 2003 that I went there, or 2004; I can't remember, but it's 2004-ish. And Carnegie Mellon's, of course, a central place for computer science and for machine learning. They have a department of machine learning, and that's where I
44:40
was. It wasn't called that then; it was called the Center for Automated Learning and Discovery, but it has since turned into the Department of Machine Learning. Carnegie Mellon was an amazing experience. First of all, I loved Pittsburgh. I think it's a really great city. And when you've been renting apartments in Berkeley and then you get to go to Pittsburgh, it's amazing. I
45:02
lived in a palace. People came over and were like, why do you have such a huge place? And I just said, because I can, it's the same rent. Anyway, in retrospect I should have had a smaller place. But it was a central place for machine learning. And I worked with
45:20
John Lafferty, and that's where we started working on text analysis of scientific articles. These were OCR'd articles from the journal Science. We did two pieces of work there that I'm very proud of. One is called correlated topic models and one is called dynamic topic models, and they're about how to identify structural changes in these topics,
45:41
especially dynamic topic models, which is about how to automatically identify topics that pervade a collection and drift over time. For example, scientific apparatus is one topic we found when we analyzed that dataset; we could see how it drifted from words like tube and wire and battery all the way to words like silicon and
46:01
technology. I use this example all the time when I talk about topic modeling. So that was work I did as a postdoc with John Lafferty. There I worked more just with him or on my own; it wasn't as big a group, but I liked that change. And then the other thing that happened at Carnegie Mellon is that I got deeper
46:21
into statistics. I had already gotten into statistics at Berkeley, because Mike is in both the stat department and the CS department, and I took a lot of statistics classes. But then at Carnegie Mellon I started working with people like Steve Fienberg and getting to know people like Larry Wasserman and Chris Genovese. These are statisticians in the Carnegie Mellon statistics
46:40
department, which is a great department, and I just started reading more and getting deeper into statistical thinking. So can I get you to Princeton now? Yes, now I can go to Princeton. I applied for jobs, got a job at Princeton, visited Princeton, loved it. Is it Princeton you loved or the kind of computer
47:01
capabilities it had? Again, I'm very interested in the combination of a life choice and an intellectual choice. What did Princeton offer you, circumstantially? Right. So, yeah, again, it's this theme. Okay, I applied all over the place. So now we have to talk a little bit about, we don't have to, but there's a
47:21
personal angle to this story too. I mentioned my mom is a left-wing lobbyist and lawyer, and my wife is a left-wing government lawyer as well. She was in law school in Michigan, and she wanted to go into public service afterwards and do government law. And what that meant was that there were a few places where,
47:42
I think, opportunity? Yeah, where we could both be happy. Thank you. Namely, big cities. And we also like big cities, as you might tell from our apartment here in New York. So I didn't restrict my search that much, but I only looked at
48:01
places in big cities, and I had a bunch of interviews. I applied to Princeton because it's a great school. It's not in a big city, but they had an opportunity for machine learning, and a top department seemed too good to pass up. Then when I went to visit Princeton, I learned that people can live in New York and commute to Princeton. And
48:21
I thought, oh, that's cool, this is actually a realistic option for us. So that's one aspect of Princeton. And this goes back to the theme that I just did what I like to do: I just had a good feeling. I don't know what to say. I trust my gut. I went there. I liked
48:40
everybody I met. I enjoyed being in the town. My talk went well; I got a good response. They only had one other person who did machine learning at that time, Rob Schapire. Only one other? Yep, that's right. And is that a disadvantage, or an advantage? Well, it can be either. He does a completely different flavor of
49:01
machine learning than I do, so we complement each other. And that can be important when you're going to start a faculty job. So what was I excited about? I liked the people I met. I liked the university. They made me a nice offer. I liked the chance to help
49:20
define machine learning at a place like Princeton, which is such a great university. That excited me. Because we don't have a lot more time, I want to again take the measure of the field. So now you're a professor, I guess an assistant professor. Assistant professor, 2006.
49:40
Yes. So where is AI? Are we at deep learning yet? No, we're not yet at deep learning, but again, I think this slow boil is continuing: there's more data. I started working on things like scaling up the algorithm from my dissertation to larger
50:01
data sets, which was a big part of my time at Princeton. Also at Princeton, well, two things happened: I started reaching out to scientists to understand what kinds of data problems they had, but also scientists started reaching out to me. So that was a change. And
50:22
Because of your papers, I mean, you're beginning to circulate your ideas. Yeah, but also the need for machine learning, and this is to your question of where we are in the broader field, is starting to happen. So I'm sitting there in my office and I'm starting to get emails. Hi, I'm Ken Norman, I'm a neuroscientist, I have
50:41
all this data. Hi, I'm a biologist, I have all this data. And of course I would go out to coffee, this again, and learn about all these fascinating problems, and learn that we didn't have the machine learning tools. So Princeton is when I started what has become my MO: finding
51:03
interesting scholars who have interesting data sets and problems they want to answer with those data sets, problems that the existing machine learning tools can't help them with, and then trying to build out of that a research agenda that both pushes machine learning and helps solve a real problem in science. Well, you're pushed by the problems.
51:21
Yes. And people bring you interesting problems. Do you go out and seek interesting problems? Both happen. But at this point, often people come to me and I have coffee with them. Nowadays, what I do is bring them to my lab meeting, and we have a section of my lab meeting called interesting people and their problems. We learn about their problems, and then the idea is that hopefully
51:41
something sparks, where a student or a postdoc and I think, oh, we can do some interesting machine learning research and solve this problem. Now, it can go south: sometimes the problem is interesting, but the solution doesn't push machine learning forward enough to make it worth our while. It's unfortunate, but that can happen. But
52:02
often there's this nice synergistic relationship between us and the scholars. That is what I started doing at Princeton and really have been doing ever since. I still work a lot on text analysis. I still work a lot on core machine learning algorithms and methodology. And I like to
52:20
work on developing new machine learning methods to solve specific problems in the sciences. That's a great last word, although we haven't actually gotten you to Columbia, but it's along this line of inquiry. The last question I wanted to ask you is the inevitable ethical one. Now
52:40
that you're dealing with machines that are capable of incredible things. And you've thought about this a lot, so I know this isn't coming from left field. What frightens you about the stage we are at in artificial intelligence? I mean, a lot exhilarates you, of course. What frightens you?
53:01
Okay. That's a good question. It's true, I expected that question. Well, first of all, you've now talked to many computer scientists; you know that we are very optimistic. As an example, when I was first doing work with Leslie Kaelbling on reinforcement learning, you'd want to build a robot to do something. And
53:20
the example was always, we want to build a robot to deliver a bagel to Leslie. Of course, all the computer scientists are sitting around saying we're building bagel-delivering robots, when the NSA or the CIA is thinking, well, it's not going to be a bagel, or it'll be a poison bagel. So computer scientists are optimistic in
53:40
general as people, and sometimes maybe a little naive. So with that in mind, where are the risks and what are the ethical issues? I think there are some real ones, but let me begin by saying what they are not. I don't think that some kind of superintelligence is going to happen
54:00
anytime soon, period. And I wouldn't sit around worrying about it, except that at night I like to read science fiction novels; I'm happy to contemplate it when I'm reading science fiction. But in general, I don't think this is something to spend a lot of thought on. However, in those years that we've been discussing, which were very happy years in graduate school especially,
54:22
the world changed, in that we connected all the computers. We all hold computers. We started sharing data. We started relying on technology and algorithms to help make decisions for us. And that has real consequences. They are societal consequences; they can be economic consequences. And computer scientists need to
54:42
think with policy makers, I believe, about the emergent properties of deploying algorithms at this scale, and deploying learning algorithms at this scale. It's one thing to analyze an algorithm sitting in a black box: before I deploy it, I can think about what it's
55:00
going to do; I can run the simulation on my computer. But if that algorithm is actually ingesting data from the world, and I cannot predict what that data is going to look like, because the whole point is to use the data to make the thing that does the good stuff, then how do I think about it? How do I understand it? The example I like is that
55:21
if we take all the previous loan officers' decisions and use them to build a machine learning algorithm that can grant or deny you a loan, and all the previous loan officers were racist, then we're going to build a racist machine learning algorithm. And so how do we, and computer science is now starting to address this,
55:41
thinking about fairness and transparency in machine learning algorithms, or in algorithms in general. But I think this is a very important ethical question to answer: how do we set up, and how do we quantify, the norms that we desire as a society? That's already fascinating and new.
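The loan-officer point can be made concrete with a toy numerical sketch. All the data here is synthetic and invented, and the group penalty, threshold, and logistic model are illustrative assumptions: a classifier fit to biased historical decisions simply reproduces the disparity, which is exactly why quantifiable fairness criteria matter.

```python
# Toy illustration: a classifier trained on biased historical
# loan decisions reproduces the bias. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1)
income = rng.normal(50, 10, n)       # credit signal, same distribution for both groups

# Biased historical labels: past officers effectively penalized
# group 1 by 8 points even at the same income level.
approved = (income + np.where(group == 1, -8.0, 0.0)
            + rng.normal(0, 3, n)) > 50

# Fit a model to the historical decisions.
X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Measure the demographic-parity gap in the model's own predictions.
pred = model.predict(X)
rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"approval rate, group 0: {rate0:.2f}; group 1: {rate1:.2f}")
print(f"demographic-parity gap: {rate0 - rate1:.2f}")
```

The learned model inherits the disparity from its training labels, even though the underlying credit signal is identically distributed across groups; quantifying gaps like this is one starting point for the fairness criteria discussed here.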
56:01
You know, lawyers, and like I said, my wife's a lawyer, like to set up rules and criteria, but now we need those criteria to be quantifiable and measurable with data. So that's already a challenge. But then, what are they? How do we build
56:20
algorithms that respect them? These are interesting, open ethical problems that surround the fact that we have deployed algorithms across the whole world. I would ask you to end with another one of those dilemmas, all related, which is of course the private and the public, and what we need to
56:42
know about the private in order to make advances in the public, and what that danger is. Right. So this is a problem I'm starting to work on, which I think is very interesting. Machine learning algorithms ingest data, form predictions, and we all benefit. We get to watch the movies we want to watch on Netflix. We get
57:01
to buy the food we want to buy on Amazon. We get to see the search results we want to see on Google. But then, lately, we've all woken up: oh wow, I'm telling all these faceless organizations all kinds of information about myself. Do I really want to do this? Right. And the machine
57:21
learning challenge then is: can we build tools that give us benefits like recommendation systems and search and so on, without violating that private information? And, you know, Netflix is fine and good, but what about some even
57:40
more impactful problems, like: can I build better medicine? Can I make better treatment decisions based on the data, based on knowing the most intimate things about individual human beings? And the answer has to be yes. I believe that in the treasure trove of electronic health records,
58:01
there are many secrets about the effects of drugs and of treatments, both for personalized medicine and for global medicine. But, one, we don't know how to unlock them. And two, the privacy issues are real there, right? They're real everywhere, but there are protections in place that
58:21
prevent us from doing the kinds of things we might want to do. And the answer is not to remove those protections; they are there for a good reason. The answer is to think about how to build machine learning algorithms that still preserve privacy. I mean, these are very important issues. Thank you very much. Thank you.