If Ethics is not None
Formal Metadata
Title: If Ethics is not None
Number of Parts: 160
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/33732 (DOI)
Transcript: English (auto-generated)
00:00
All right, so now I want to welcome our next keynote speaker. It's Katharine Jarmul. The first time I saw Katharine was actually online, in videos recorded at PyCon US, where she was talking about web scraping and data wrangling, and I really loved her talks. Finally, we got to know each other actually
00:20
in Bilbao in 2015, and as you know, she's an author. She wrote a book on data wrangling for O'Reilly. She does video courses. She's one of the organizers of PyData Berlin. She's a very active member of our community, and now she's going to talk about ethics and data. Oh, no, sorry.
00:40
You're basically also, like, European basically. She was born in LA, but you've been in Berlin for three years? Yeah, about three and a half. Three and a half. So, today she's going to talk about ethics and data. So this is not a technical keynote. It's more an ethics keynote.
01:00
So I want to welcome, give you a very warm welcome to Katharine. Thank you for being here. I know there was quite a fun social event last night, so I appreciate you getting up early and joining us, and I want to thank all of the organizing committee
01:21
for organizing such a great conference and also inviting me to be here. Today I'm going to talk about a topic that is quite controversial in some circles. Now, ethics might be something that you learned about in school. It might be something that you even debate on the world stage or when you talk about politics
01:40
with your friends. But it's not often that in the work that we do every day we get into ethical conversations. But I wonder if it should be. I myself work as a data scientist, and I myself face these types of ethical conversations often when I'm building models or when I'm thinking about how we use data.
02:02
And because of this, I think that perhaps we, even as a larger computing field, even if you don't work with data, perhaps this is something that we should be talking about. So what we'll do today is I'm going to take you through some of the history of computing with an ethical focus. So we're going to take a look at important moments
02:21
in computing history and some of the ethical reasoning behind them. To begin with, we will start with an IBM advertisement from 1960.
02:50
Oops, sorry. Don't know how to make this.
03:38
All right, we'll actually move forward.
03:41
Because in the interest of time and in the interest of all of your time, what this is an advertisement for and what it goes on to explain is the SAGE system, the Semi-Automatic Ground Environment. This system is an air defense system, and what it is supposedly doing is tracking all of the aircraft over the United States. And its goal is to have these people,
04:03
these intelligence workers, to be able to click on a screen at these unknown aircrafts and then fire missiles at them and destroy them mid-air, therefore protecting the United States from quote unquote Soviet attack. This was a system that was designed
04:20
by a lot of intelligent computer scientists, a lot of people that were building IBM's best computers at the time in the late 50s and early 60s. And when I saw this advertisement, I mean it's quite over the top. I will have to post it later so you can look at it yourself. But I wondered what the computer scientists who were working on this machine thought.
04:42
I wonder if perhaps they were at all concerned ethically about their creation. And in testing, actually, this SAGE system was wholly ineffective. In one test called Operation Sky Shield, it would have only neutralized 25% of targets, leaving 75% of missiles incoming
05:02
and to whatever devastation they were able to make. And so I wonder if when they saw this advertisement and it's made to seem like, ah, you're completely safe and secure because we have a computer and the computer will take care of it all. I wonder what the computer scientists actually thought,
05:20
the computer engineers who were building this system. And I wonder if they were concerned with how it was marketed and if they were concerned about failure. So this is a constant problem in computing. Computing has been touched by the military, has been touched by state intelligence systems
05:41
in a lot of ways throughout history. And the first computer scientist and mathematician we'll look at is Norbert Wiener. Wiener was a famous mathematician in his time, and he worked with neurologists and other mathematicians to help discover some of the ways that our brain sends electricity.
06:01
So this was when we were first discovering how electrical signals travel through the brain and when we were first discovering neural networks. And he also did some statistics for the military. In fact, his work went on to contribute to the minimum mean square error estimator, which essentially allows us to estimate a signal from noisy observations; he used it to estimate flight paths for missile defense.
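As a rough illustration of the idea, here is a hypothetical sketch of the simple linear-Gaussian shrinkage version of that estimator, with made-up numbers rather than Wiener's full filter:

```python
import numpy as np

# Toy illustration of minimum mean square error estimation: recover a
# "flight path" from noisy observations y = x + n. This is the simple
# linear-Gaussian shrinkage version, not Wiener's actual filter.
rng = np.random.default_rng(0)

t = np.linspace(0, 1, 50)
true_path = 3.0 * t + 0.5                           # trajectory to recover
observed = true_path + rng.normal(0, 0.4, t.size)   # noisy measurements

# With known signal/noise statistics, the MMSE gain is var_x / (var_x + var_n),
# and the estimate shrinks each observation toward the prior mean:
# x_hat = mean_x + gain * (y - mean_x).
var_x, var_n = true_path.var(), 0.4 ** 2
gain = var_x / (var_x + var_n)
estimate = true_path.mean() + gain * (observed - true_path.mean())

print(f"raw MSE:  {np.mean((observed - true_path) ** 2):.3f}")
print(f"MMSE MSE: {np.mean((estimate - true_path) ** 2):.3f}")  # smaller
```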
06:25
So he himself worked with the military, he himself probably had his own ethical qualms. But his seminal work is a book called Cybernetics and he was actually one of the first to coin this term. This is this idea of using computers as an aid to help us make decisions.
06:44
And in Cybernetics, he has a quote that I think is important to share today. I'll read a slightly longer version than we have here. I have said that this new development, computers, has unbounded possibilities for good and for evil.
07:00
It gives the human race a new and most effective collection of mechanical slaves to perform its labor. Such mechanical labor has most of the economic properties of slave labor, although unlike slave labor, it does not involve the direct demoralizing effects of human cruelty. However, any labor that accepts the conditions
07:22
of competition with slave labor accepts the conditions of slave labor and is essentially slave labor. The key word of this statement is competition. It may very well be a good thing for humanity to have the machine remove from it the need of menial and disagreeable tasks, or it may not.
07:42
I do not know. When I read this quote and I realized that it was written in 1948, I was a bit surprised that I don't feel like we've moved much further along in the conversation. I feel like this quote could be shared in an article today, in a blog post today,
08:01
in any type of news today as we debate automation and jobs. The people in our field, we work to automate tasks away. Sometimes we work on systems that will entirely replace an industry and therefore are we then responsible for the degrading of lives that happens when that occurs.
08:24
Perhaps it's better that nobody has to be a factory worker or that nobody has to drive a taxi if they don't want to or be a truck driver and so forth. But what does it mean? Is it the tyranny of the few? Is it perhaps those of us that can automate away things
08:40
and the societies that can afford to automate away things versus those that perhaps can't? And if those other societies or if other nations can't keep up, does this mean that they're essentially competing against these robots, against this, as he so-calls, slave labor?
09:03
The next mathematician that we'll look at and great computer scientist is Joseph Weizenbaum. Now, Weizenbaum was essentially a contemporary. He was a German Jew and he was born in Berlin. His family escaped Nazi persecution by emigrating to the United States
09:21
and he grew up and was educated primarily there. He became a professor at MIT in the 1960s, and you might know of his work because he built the ELIZA bot. So for those of you familiar with natural language processing, or perhaps you've just heard about it, ELIZA was probably the first chatbot, and it used natural language patterns
09:41
to essentially mimic human speech and act as a therapist. Now, Weizenbaum was also famous in his own right for some other inventions. In fact, this is a photo of him from Die Zeit in 1965, where he's demonstrating remote login to his MIT machine.
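To give a sense of how ELIZA worked, here is a minimal sketch in its spirit; the rules below are hypothetical stand-ins, not Weizenbaum's original script:

```python
import re

# A minimal ELIZA-style sketch: match a pattern, reflect first-person
# words into second person, and answer as a Rogerian therapist would.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),  # catch-all fallback
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza("I feel ignored by my computer"))
# -> Why do you feel ignored by your computer?
```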
10:03
Weizenbaum became quite politically active in his time, and he actively worked to challenge those around him on the ethical and political concerns that he had. I think he felt it especially important, given his family's history, to talk about ethics
10:23
in computing and what he saw around him. And he began to question whether AI and computer science were a force of good in the world. This is a quote from an interview in an MIT publication in 1985
10:41
where I feel like he was particularly candid about his feelings about computing. I'll again read a longer quote. I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were,
11:00
which otherwise might have had to be changed. For example, banking. Superficially, it looks as if banking has been revolutionized by the computer, but only very superficially. Consider that say 20, 25 years ago, the banks were faced with the fact that the population was growing at a very rapid rate.
11:20
Many more checks would be written than before. Their response was to bring in the computer. By the way, I helped design the first computer banking system in the United States for the Bank of America 25 years ago. Now, if it had not been for the computer, if the computer had not been invented, what would the banks have had to do?
11:42
They might have had to decentralize, or they might have had to regionalize in some way. In other words, it might have been necessary to introduce a social invention, not just a technical invention. And this quote gave me pause, as somebody that aims to make unmanageable tasks
12:03
manageable. I aim to make data clean and accessible, to use large and disparate data sets to make inferences or determine some sort of meaning or signal. Am I helping consolidate power? By doing the job with maybe 30 lines
12:23
and a scikit-learn import, by doing the job that would perhaps be very untenable, take quite a long time, or maybe even be impossible, am I therefore helping consolidate this power? Am I using technology to thwart social progress?
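To make the "30 lines and a scikit-learn import" point concrete, here is a hypothetical sketch of roughly that shape; the dataset and model are illustrative choices, not anything from the talk:

```python
# A decision process that affects real people, automated in a few lines.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a standard medical dataset and split off a held-out test set.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a model: the kind of job that once occupied a whole back office.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```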
12:45
I don't know the answer to this, but these are the types of questions I'd love to hear your feedback on. Moving on throughout history, we have Ole-Johan Dahl and Kristen Nygaard. They're essentially the fathers
13:00
of object-oriented programming, so we have them to thank. Simula 1 and Simula 67, the languages that they developed, were computer simulation languages. They were used to simulate physics in a military laboratory. And Kristen Nygaard, here on the right,
13:20
was a staunch leftist and activist throughout his entire life. In fact, he eventually identified as a socialist. In his belief system, he of course believed in supporting workers' rights, and so when he was approached by the Norwegian Iron and Metal Workers Union to help them build a system
13:40
so that their jobs would not be automated away and instead they would learn computing skills, he jumped at the opportunity. And he was able to work with them and build what many people think is the first example of participatory design, one in which the active participants, or users, or workers, are able to help shape the design process
14:01
and able to help determine what the system looks like. And he essentially helped them build an operations research type of scheduler and backlog organizer for the union. He was also quite famous for walking away from military work. In 1960, he quit his job at the Norwegian Defence Research Establishment
14:22
and took several of his teammates with him. He often joked that because of that, he had the most funding request rejections out of any person in Norway. He gave a speech at the IRIS Conference, which is a group of Scandinavian researchers that meet.
14:42
And I'll read a slightly longer quote from it. "You need a self-defense against yourself and the temptations to choose a comfortable but wrong way out in critical situations. But compromises may be necessary. The greatest danger then is not the acceptance of a dubious compromise,
15:01
but in not being cynical and honest about it. Your mental processes will try to justify your actions to yourself, making the compromise the desired solution. And you will change yourself if you're not honest and astute." Bo Dahlbom asked me to talk about the Iron and Metal Project.
15:21
Why? Many people don't know about it properly, he said. And some have forgotten those aspects that ought to disturb them as their environment pushes them slowly to the right. Perhaps I should ask some questions to those in the audience who believe that they have been influenced by the project and its successors.
15:42
Has anyone resented the content of your work recently? If not, what is your excuse? Have you had any real conflict in your research activities lately? Or does such conflict only belong to your now romanticized glorious radical past? Will your recent research to any extent
16:01
increase the power to influence their own fate for people with whom you feel solidarity? He went on to quip that he's joking a bit. But I think the message remains clear. If we're building systems, tools, algorithms, and so forth, if we're building these and they actively work
16:22
against our politics, against our ethics, against our morality, however it is you choose to make valuable decisions in your life, if we do that, why are we doing that? And if we choose instead to build tools and systems
16:42
that we believe support our politics, that we believe support our ethics and we distribute them, can we be then a form of justice? Can we therefore spread instead of spreading let's say something we don't believe in, can we therefore become a way to spread something that we do believe in?
17:01
Or as he suggests, are we just constantly perhaps making compromises? And do we need to be very honest with ourselves about the compromises that we are making? Andrei Ershov is a prominent figure in Soviet-era computer science and cybernetics.
17:21
He worked on numerous inventions during the same period, working in parallel with, and often collaborating with, several computer scientists in the United States and the UK. He's actually famous for likely writing the first optimizing compiler for a language that's more complex than FORTRAN.
17:41
He wrote this for the ALPHA language that he worked on. And he was a really big proponent of education. In fact, he's probably the first computer scientist to talk about computer literacy. He built several schools during his time and appropriated funding for them.
18:00
And this, for example, is a photo at the USSR Academy of Sciences during a summer school for young programmers. His belief in teaching and in learning and that being an important part of computer science is really espoused in his speech titled Aesthetics and the Human Factor in Programming, which he gave at a 1972 computer conference
18:22
in the United States. I'll read the full quote of which only a part is shown here. In past ages, the ability to read and write was considered a rare, God-given talent, gift, the destiny of a limited group of the specially chosen.
18:42
In the present age of general literacy, we perceive reading to be a universally attainable accomplishment, but we are tempted to single out a new elite group who become arbiters between the lay generality of mankind and the arcane informational model of the world hidden in the machine.
19:01
Is it not, however, the highest aesthetic idea of our profession to make the art of programming public property and thereby to submerge our elite exclusiveness within a mature mankind? Indeed, I feel like Ershov approaches these ethical problems slightly differently
19:22
than some of his Western contemporaries. His idea is not, do we hold some power that we should then further hone for good or for evil? His point is, should we even hold this power at all? Should everyone learn to code?
19:41
Should everyone learn how a computer works literally on the inside, not just how to turn it on and off? And if not, does our own understanding of computers, does our own ability to code, does, for example, the data scientists in the room, our understanding of statistics and machine learning models,
20:00
does this give us some special power or privilege? And if it does give us a power or privilege, are we an elite? And what does being part of an elite mean? Does it mean we have a responsibility? Does it mean that we have to do things differently?
20:24
Moving on to the networking era, which is always fun, right? We all love the internet. Here you can see the employees at BBN, which is Bolt, Beranek, and Newman, a Cambridge research firm. And BBN won a contract
20:41
with the Advanced Research Projects Agency, otherwise known as ARPA, and their project was to build the first computer network in the United States. They're seen here in 1969, along with their Interface Message Processor machine, the IMP, which is essentially the first router.
21:02
And in case you don't know, this is the same group that started to create the protocols and the standards that we use in today's internet. There's a bunch of amazing engineers and architects that I could talk about in this photo, but I'm gonna focus on two of them. Here in blue, I have Severo Ornstein,
21:20
and in yellow, I have Bob Kahn. Why just talk about the two of them? Well, first and foremost, because they went on to have longer careers and do quite a lot of work after ARPANET, but secondly, because they were very outspoken in their political and ethical beliefs, and they went in quite different directions. So I think it's perhaps a good case study
21:42
to explore the life of a computer scientist. We'll focus on Bob Kahn first. Bob Kahn worked on networks and essentially networking protocols. He, along with Vincent Cerf, was able to create the TCP and IP protocols and standards,
22:01
which are mainly unchanged today, which is a pretty amazing feat. He's essentially a networking genius. He went on to build very large networks for the US military at DARPA, and then after that, he went to IPTO, which is the Information Processing Techniques Office.
22:22
Then he heard about this new project brewing in Japan called the Fifth Generation Computer Project, and this idea was how can we create true AI thinking machines? How can we create these decision support systems that we might be able to use? At that time, he then pitched the idea
22:42
of the Strategic Computing Initiative, of which this is one of the plans. So for the Strategic Computing Initiative, you have, as you can see at the bottom, all of the infrastructure. These are the networks that Bob Kahn loved. This is something that he loves to work on. He was able to get quite a lot of funding
23:01
to just build networks and come up with new networking concepts. On top of that, they would then try to build some chips, some different hardware designs, eventually working up to natural language processing and speech and vision recognition and so forth. On top of those technologies, they would build autonomous systems,
23:23
pilots' associates, and battle management. And this was all to, and I quote, "develop a broad base of machine intelligence technology to increase our national security and economic strength." So what does this mean? I mean, when I first saw this diagram,
23:41
I said, well, perhaps Bob Kahn didn't even know about how his networks would be used. Maybe ethically, for him, he just really wanted to work on networks. I can understand and appreciate that. And then I found a quote from around this era of Bob Kahn. And it states, "The nation that dominates this information
24:01
"'processing field will possess the keys "'to world leadership in the 21st century.'" As I began to read more of his writing during the time and writings of his peers at SCI, for example, Lynn Conway, who is a prominent computer scientist in her own right, it was clear that they were very well aware
24:20
of how their programming would be used. In fact, it's rumored that the chart that we saw in the last slide, that Bob Kahn himself designed that pyramid. So it's clear that he knew he was building weapons. What I wonder is, what does it mean when we have to take military funding,
24:42
or, let's say, corporate funding from corporations we maybe don't agree with, to do our work? Does this make us ethically culpable in what the greater process is? And if it does make us ethically culpable, if something goes wrong, let's say civilians are bombed and so forth,
25:03
does that make you, as the computer scientist or as the data scientist or the machine learning expert, does it also mean that you are culpable? And if you aren't culpable, then who is? Going back to Bob Kahn's peer, Severo Ornstein,
25:23
he was a coworker, again, and a peer at ARPANET, and here we see him with Laura Gould, who was a computer scientist and activist as well. Severo's work at ARPANET was as lead hardware expert, so he built most of the hardware for the first router
25:41
and also did some of the software programming. He later went on to join Xerox PARC, where he worked on the Dorado, the fastest workstation of its time, and its operating system. So around the same time that Bob Kahn was petitioning the government for millions of dollars to build weapons,
26:01
Severo Ornstein and Laura Gould were starting another initiative, and this was for a group called Computer Professionals for Social Responsibility. This is one of the patches that the Computer Professionals for Social Responsibility gave out at a joint AI conference,
26:21
most likely the one in Los Angeles in 1985. For some older folks in the room, or perhaps some that know a little bit of Americana history, this is a play on a famous Cold War era advertisement, if you will. It was a public service announcement that came on
26:41
and it would say, it's 11 p.m., do you know where your children are? And its goal was to enforce curfew and to make sure that all the young people were inside at night in case of danger. Now this play on it is, it's 11 p.m., do you know what your expert system just inferred?
27:03
And we see the telltale mushroom cloud of an atomic bomb. And these are the same expert systems that Bob Kahn himself was trying to build. I mean, they never really came to much, right? It was shortly before what we all now know as the AI winter.
27:22
And these expert systems were never much more than large statistical models. But the fear at the time was that they would be treated as experts, was that if the system itself said, okay, we're under attack, it's time to launch the nukes, that that is something that would actually happen and that a human would go about doing it
27:40
based on this expert system. That was a massive concern for CPSR, the Computer Professionals for Social Responsibility. Now, CPSR was around actually until 2013, and they gave out a fairly regular award, actually named after Norbert Wiener, highlighting people in the field
28:01
who were working on social responsibility issues as computer professionals. It's a pretty amazing list, we've actually covered some of the people on it, but it goes on to include whistleblowers and other people who really worked actively as ethical agents. But I think what we can really take away from CPSR,
28:20
or what I took away from CPSR, is that it was starting this type of activist conversation amongst computer professionals. And it was asking: what can we do as people who are concerned about the world, as people who are concerned about social responsibility? How can we help? How can we be a part of change
28:41
or a part of something that we believe brings justice to the world? And I think that it's quite sad when you read about the history of CPSR that around the time of the personal computing revolution in the mid to late 80s and early 90s, they saw their numbers dwindle.
29:01
And I think one of the problems with this is that computing went from being a social experience in the computer lab, working with your peers, chatting probably a lot about these types of issues and theories, to being a personal and solitary experience. This was also the time that is often traced
29:20
to the gender disparity within our field. It's often traced to the advertisements that all of a sudden said: be a gamer, it's really cool for boys to have computers. And so I feel like, perhaps not intentionally but unfortunately, one of the fatalities of that era was this conversation about computer professionals
29:43
for social responsibility. But CPSR was not the only one fighting for more professionalism in the field, or fighting for us to be having these conversations. You might know of Karen Spärck Jones, she's quite famous.
30:01
She's a statistician as well as a natural language processing expert. And she worked at the Cambridge Language Research Unit on a variety of NLP research. Her first forays, I guess, were into inferring meaning from thesaurus definitions in text.
30:22
You might have also heard of TF-IDF, term frequency inverse document frequency. It's a popular way to extract important words or phrases from a text when you have a larger corpus. She wrote the paper that defined the IDF portion of the equation.
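Her insight is small enough to sketch from scratch. Assuming the standard formulation idf(t) = log(N / df(t)), this toy corpus shows why ubiquitous words get zero weight:

```python
import math

# From-scratch sketch of the insight behind IDF: a term that appears in
# few documents carries more weight than one that appears in all of them.
# tf-idf multiplies a term's in-document count by its idf.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "ethics in computing is the topic",
]

def idf(term: str) -> float:
    df = sum(term in doc.split() for doc in corpus)  # document frequency
    return math.log(len(corpus) / df)

def tf_idf(term: str, doc: str) -> float:
    return doc.split().count(term) * idf(term)

print(f"idf('the')    = {idf('the'):.2f}")     # in every doc -> 0.00
print(f"idf('ethics') = {idf('ethics'):.2f}")  # in one doc   -> 1.10
print(f"tf-idf('ethics', doc 3) = {tf_idf('ethics', corpus[2]):.2f}")
```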
30:41
So she's pretty important when it comes to natural language processing. But that's not her only contribution. Spärck Jones was an outspoken critic of the gender disparities within our field. She was also quite a critic of our not having some sort of professional code.
31:01
She thought that the growing importance of computers in people's lives meant that we had more and more influence and she was concerned that we didn't have any type of licensing, board, review, that we didn't necessarily have any standards for our own code, our own actions.
31:20
I'll again read a slightly longer quote. I certainly think that professionalism is very important. To be a proper professional, you need to think about the context and motivation and justifications of what you're doing. You don't need a fundamental philosophical discussion every time you put your finger on the keyboard.
31:40
But as computing is spreading so far into people's lives, you need to think about these things. I've always felt that once you see how important computing is for life, you can't just leave it as a black box and assume that somebody reasonably competent and relatively benign will do something right with it.
32:00
Her quote again evokes this idea, who is responsible? If I build a program or write a script or build a model that does something unethical or that can be used unethically, am I responsible for when somebody does that? If I'm not responsible, who is?
32:21
Especially as machine learning and the numerous other fields that we work in touch more and more lives, as they're used even by legal experts or by the police or by state intelligence or by doctors, are we not also beholden to the same licenses, the same ethics that those fields have?
32:45
Now I'm not necessarily saying we need to have a licensing or we need to have a particular code, but if we have no standards, if we have no way to review one another's work and to think about whether we're following ethical principles or whether we're helping or hurting lives, then we don't even have
33:03
a starting place for these conversations on professionalism. And if we're not willing to take on the burden of thinking about these things, then does the burden actually go away or are we just ignoring it?
33:20
Finally, we reach some of the outspoken leaders of AI in today's era. This is Joanna Bryson. She is a professor of computer science who speaks on AI and natural intelligence at the University of Bath. And she's been working in the field
33:40
of AI ethics since the 90s. She is a prolific writer and she publishes all the time. She has a paper from this past year on the ethics of word vectors and the gender disparities within them. But she also has some older works.
34:01
For example, her paper, Just Another Artifact: Ethics and the Empirical Experience of AI, articulates this problem of humans over-identifying with artificial intelligence. She essentially says that instead of treating it like a book or a reference, we treat AI as a human actor, as another intelligent human being
34:22
with reason and common sense and so forth. And she states that this is quite dangerous. If we start treating AI as more expert than ourselves, if we start treating it as the sole singular expert in the room, then what happens when it makes a mistake?
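For a sense of what the word-vector work measures, here is a toy sketch with made-up 3-dimensional vectors; the actual analysis uses pretrained embeddings such as GloVe:

```python
import numpy as np

# Toy reproduction of the word-embedding bias measurement, using
# hypothetical vectors rather than real trained embeddings.
vectors = {
    "he":       np.array([ 1.0, 0.2, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.1]),
    "engineer": np.array([ 0.6, 0.8, 0.3]),
    "nurse":    np.array([-0.5, 0.7, 0.4]),
}

# Build a gender direction from the he/she pair, then project occupations.
axis = vectors["he"] - vectors["she"]
axis /= np.linalg.norm(axis)

for word in ("engineer", "nurse"):
    v = vectors[word] / np.linalg.norm(vectors[word])
    print(f"{word:>9}: projection onto he-she axis = {v @ axis:+.2f}")
# In embeddings trained on web text, 'engineer' really does skew toward
# 'he' and 'nurse' toward 'she' -- the disparity the paper quantifies.
```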
34:42
So she's been quite outspoken on these issues. In one of her longer posts, she articulates some of her well-thought-out opinions on how we use data. I'll read again from a longer quote. As we in the computational social sciences
35:01
learn more and more, our models of human behavior get better and better. As our models improve, we need less and less data about any particular individual to predict what they are going to do. So just practicing good data hygiene is not enough, even if that was a skill that we could teach everyone.
35:22
My professional opinion is that there's no going back on this. But that isn't to say that society is doomed. Think of it this way, we all know that the police, the military, even most of our neighbors could get into our house if they wanted to. But we don't expect them to do that. And generally speaking, if anyone does get into our house,
35:43
we are able to prosecute them legally and claim any damages back from insurance. I think our personal data should be like our houses. First of all, we shouldn't be seen as selling our own data, just leasing it for a particular purpose. This is the model software companies
36:00
already use for their products. We should just apply the same legal reasoning to we humans. Then, if we have any reason to suspect our data has been used in a way we didn't approve, we should be able to prosecute. That is, the applications of our data should be subject to regulations that protect ordinary citizens from the intrusions of governments,
36:22
corporations, and even friends. What Joanna Bryson is asking of us here is to actually be better than the current laws. Now, we could get into some debates about the new EU privacy regulations and the fact that yes, indeed, it has some ways that I can control my own personal data
36:40
and I can request information about how my data is used. But those are clearly not global, and how they'll be enforced has yet to be determined in the courts. What she's asking us to do is to treat people's data like their property. Would you buy and sell another person's property
37:02
without them knowing it? And if not, why do we do this with customer data? Would you collect pieces of another person's property without properly informing them in plain English so that they can understand what they're agreeing to? And if not, why do we do it in our terms and conditions?
37:24
And a particular concern of mine regarding security, would you take your friend's laptop or computer or books or whatever it is that they have and would you leave it on a public table outside for everybody to walk by and perhaps look at? And if not, then why are you leaving your databases
37:41
with protected information, or your models with protected information, on the public internet? Debating ethics at work is nothing new. My great-grandmother was a pretty amazing woman. Here she is; her name was Adele Matilda Rich.
38:04
On the left is her growing up in New York City, and she used to tell me really fun stories of growing up in New York City in the 20s and 30s. And on the right is her with a very young version of me. Now, Madel, as I used to call her, was amazing.
38:21
I have a lot of good memories of her in my mind. But the reason why I bring her up is the fact that she worked as a secretary on the Manhattan Project. The Manhattan Project was, in case you don't know the code name, the project for the nuclear bomb in the United States.
38:41
And she worked as a secretary. She used to tell me fun stories that she did something important for the war and that she had to burn her notebooks and so forth. And I just thought it was a fun story until I got older. She died when I was a child, so I was never able to have these types of conversations with her. But I did talk with my mother,
39:02
who I know had several conversations. And she said that Madel, at first, didn't know exactly what she was working on, but slowly put the pieces together over time. And then, of course, when the bombs were dropped, she was quite aware. And I asked my mom, did she ever regret it? Did she ever stay up at night
39:21
wondering if what she did was right or wrong? My mom said that she thought that Madel felt, like many of her generation, that she was helping the good guys, and that any deaths caused by the atomic bombs that were dropped were just small casualties in a larger war.
39:44
I wonder if I could go back and ask her if we could have a longer conversation about this. But it also makes me wonder if my own privilege now, something that she worked so hard to do, she worked to help make sure our family could survive through the war and the Great Depression,
40:01
and she made it so that my mom could be the first one to go to college. And my mom going to college meant that I was able to get a computer and that I was able to go pursue computer science. So maybe it is wrong of me to question something like that. Perhaps she was not in a position to make a decision between ethics and feeding her family.
40:23
And perhaps it is rather ignorant of me to even ask that question. But I also wonder if it is not a right for all of us to have rewarding work that's both challenging mentally, financially rewarding, and also ethically something that we can support.
40:43
So I challenge us as a community. As data scientists, web developers, whether you work on MicroPython, CPython, Jython, whatever it is that you do, Internet of Things, I challenge us to start having these possibly more difficult conversations.
41:02
I challenge us to start thinking about who is responsible and how we can hold one another accountable. Now, you might not work on anything that touches ethics, but perhaps you do, or perhaps a coworker does, or a friend, or a colleague, or somebody you meet at this conference.
41:21
Perhaps a future you, or a past you. I know in my career, I've been definitely asked to do unethical things, and I've been asked to lie, to make charts look different, to make data look different, for the greater good, right? For extra funding, for whatever it is
41:41
that the goal of the company or the product is. So I've had to make some of my own ethical decisions. I think that hopefully what we've heard today and what we've shared today can start to be ground for a communal conversation.
42:01
I know that in a community like this one, we have values. For example, I feel like as a Python community, we value diversity. I feel that we value free software and open source, and I'm very proud of that. I also feel welcome and supported today, just the same as my first Python conference in 2010.
42:25
So we have these shared beliefs and these shared values already. But perhaps we don't have a larger understanding of how they relate to ethics and computing as a whole, or how they could inform a communal ethic, or communal principles, codes of conduct,
42:42
ideas that we share with one another, and conversations that we have openly. Now, it may not be that we come to some communal agreement but just the idea of sharing our stories openly, sharing stories like these from around the world openly that tell us a little bit about the history, the ideas,
43:01
and these deep questions and dilemmas that we have, perhaps that is a way to start being a better force in the world. We, data scientists, programmers, Pythonistas, computer and software engineers, we do have this cultural and ethical history.
43:21
I hope that this has shown it; these are only a few of the stories I was able to find in my research. I want us to share these stories, and I want to hear your stories, and have you share your stories as well, of these ethical dilemmas that we face and the decisions that we make around them. And I truly, truly believe that in doing so, we are more,
43:44
that when we have these conversations together, we're actually contributing to a greater good. Perhaps by having all of these conversations and working together, we can make sure that AI, machine learning, natural language processing, and Python have a bright future in justice,
44:04
and that that's something that we can be really proud of. And I know that I want to be a part of that, and I hope that you do, too. Thank you so much for being here, and I hope to continue this conversation throughout the conference.
44:43
It's really quiet because there's a lot of thinking going on in the room, but we have time for questions. So who has questions or comments? And if you're able to...
45:21
Basically, because you have automated ways right now, like PayPal, and most recently, cryptocurrencies, which basically allow us to operate without banks. So what do you think about this? Yeah, so I definitely think, I shared some folks today throughout history, and there were a few that touched a little bit on this,
45:43
but I feel like this is definitely something more of our generation, our era. The idea of using kind of some of this technology to subvert the norm, right? To actually actively work against corporate influence on our life, or government influence on our lives. So I think that there's a lot of ways that technology can be used to kind of actively work
46:02
against some of these ideas. I completely agree, and I think that that's a really important conversation that unfortunately didn't quite fit in in this talk, but I think that's really important to share and think about. And this is kind of this idea of us creating technologies that actively support what we believe in, and then as we spread them in the world,
46:21
that that hopefully makes more things possible. Like I look at teams like the Signal team and what they're working on, and I think it's really great. There's some amazing work out there, and perhaps you work on those things as well, and these are really interesting conversations to me and things that we should also be highlighting, right?
46:41
Thank you. Hi, thank you for the amazing talk. I would like to ask you a question about automation, especially whether you see a difference between what will probably happen with this machine learning revolution, in some sense, and something like the industrial revolution or what happened in China, because the advantage that I see,
47:02
of those other things that happened in the past, is that in fact they lifted a lot of people from poverty. Even though it was not a great process in some sense, that was the main advantage, where one should always say, when you talk about automation and machines, that it also has a human side, which was helping people.
47:22
Do you think there are some differences between what will happen and what has already happened, and why should this raise some different questions regarding ethics? So, sorry, the question exactly is about how the computer revolution is different
47:40
than revolutions before it, or? And, sorry. The question is, how do you think this is ethically different, in some sense? What should we think about? Is this just automation on a larger scale?
48:01
Or is something inherently different? Yeah, I wonder about that, because, yeah, clearly the industrial revolution was, and other previous revolutions, for example, the printing press and so forth created their own ethics and their own issues. What I wonder about with computers is, as we've seen with Moore's Law,
48:20
and then also with the new post-AI winter, we're seeing quite a lot of progress in a short amount of time. I don't know if that's the same amount of progress. I obviously wasn't alive then, and I'm no expert on previous revolutions. I'm not even an expert on this revolution. So, I don't exactly know, but I wonder if the pace,
48:41
and perhaps the expanse is slightly more impactful now, that maybe because of the internet and because of the advances that we've made in computing, perhaps we have a more quick impact on people's lives. I don't know if this is true, but sometimes I think that perhaps this might be the case. But I would be curious to hear more of your thoughts
49:00
on that topic as well. Great talk, really great. I was thinking: thinking about the ethical side of what we do on a daily basis is really important, but my concern is more that the people who started investigating atoms and their subparticles many, many years ago,
49:20
they had no idea that this would lead to the Manhattan Project. So, do you have any advice on how we can try to predict the future implications of our work, and what the ethical results will be 50 or 100 years from now? This is something where, if I had an answer for you, I would be telling it to the world from the rooftops.
49:42
And I don't know. I think perhaps we need to, I echo that concern. I don't know what things that I work on today and things that people much brighter than me work on today. I have no idea what large impact these will have on the world a decade from now, even five or 10 decades from now.
50:03
So, but I do think we should be thinking about it. And I think that I was heartened to, when I was reading and doing research for this, that some of these people were trying to project and think about what this could do in the future. Now, a lot of the times, they were pretty far off. But I think that it doesn't hurt to start the conversation now and to work together
50:23
to perhaps see what this might mean in 10, 50 years. And for example, how we can protect against things that we're fearful of and also support ideas that we think would be really great things that we can do with technology. Thank you for an awesome keynote.
50:41
I think, at least from my point of view, it's pretty obvious that automation of work is a trend that will continue and that will probably worsen any social disparities that we have now. Do you have any ideas where to start a conversation about how technologists can help in this situation
51:05
instead of worsening the state that we are already in? Yeah, so I think that, as I was thinking about this, I found the Computer Professionals for Social Responsibility. Now, there are some groups that are working on these types of things. There's the Fairness, Accountability, and Transparency
51:22
in Machine Learning group and conference. There are some other ones that are trying to start these conversations about how we can consciously, how we can be socially conscious actors when we do automation. I think this would be a great thing to have a group around, or a conference around, or ideas around.
51:40
I am no expert. I am just a person like you who is concerned. And I think that the more that we are able to share ideas, I think that honestly, collectively, we are much smarter than any of us individually. And perhaps by having these types of debates openly, we can figure out maybe thoughtful ways
52:01
that we can automate so that perhaps the impact is lessened. I also agree, automation is definitely not gonna go anywhere and nobody's going to stop automating because of a keynote or because somebody has concerns. So I think that, yeah, having these conversations and collectively coming up with some good solutions
52:21
or some good ideas is most likely the most powerful thing that we can do. We have three more questions and if you promise to keep them short, I'm going to take them. But I think probably, since it's such an interesting subject, would you be willing to do an open space for that?
52:40
Yeah, sure. Sure, so basically, if you don't know open spaces yet, we have a large room there. You can just join there and make up your own. It's your EuroPython. Yeah, that's what it's for. So there's a whiteboard downstairs. You can use it. So we're going to put an open space for this on. I'm going to tweet about it as well
53:02
because I think many are interested, and there are like three more short questions, yeah. Yeah, cryptocurrencies have just been cited as one of the possible good signs of self-organization. I see them as the clearest example of what you said, when you have technological advance without social advance, because basically we have to burn oil
53:22
because there was no other way than proof of work to distribute bitcoins. Do you think this is related to some more intrinsic characteristic of computer scientists, this disbelief not just in current democratic institutions but in democratic processes in general, which maybe makes it difficult to have
53:40
even just discussions because if they are pointless, we will not have them. Yeah, so you're asking like the anonymity that a computer allows me or that the internet allows me and does it break down these types of conversations? Yeah, a more anarchic background compared to the idea that democracy can help us have social advances.
54:01
Yeah, I mean, this is a massive debate and something that I thought about including, but I just didn't have time. Like, for example, these people that believe that we should have parts of the internet where you have to be yourself, right? Allowing us to, let's say, use the internet as a system of political or social trust, right? And then you also have governments using that
54:22
to actively persecute activists and using those same like, okay, I can publicly identify you or I can easily de-anonymize you amongst big data so I can actively persecute you. And this is this like thing, the more that we create systems that we can use for perhaps creating these types of like
54:42
public conversations where I can represent myself and on a global stage and be an identifiable actor is also a problem and this is nothing new, right? This is something that since way before the internet but it means that perhaps, yeah, in some cases
55:01
anonymization and these types of things are necessary to protect people. But I think it's a really interesting debate and there was a few people that I wanted to include that are having this debate openly about e-democracy and so forth and I think that it's fascinating and it's something that, yeah, hopefully we one day can trust,
55:22
let's say, international institutions to help support people if they come forward publicly and they identify themselves and they speak of atrocities within their country or so forth, Snowden, that they perhaps then are supported globally and have protections globally and I just don't know if we're there yet, unfortunately.
55:42
Okay, last question. Okay, so my question was a bit based on what you said right now. How exactly do you come out after you find something that's just seriously unethical, exactly like Snowden? He discovered that NSA was really not doing what it was supposed to do and he got to a point where, hey, you're just getting everyone's data
56:00
and you're doing something that's incredibly immoral. How exactly do you come out? I mean, you start to disclose information that's not really for the public, but if you keep on doing what you're doing, you're actually compromising more. It's very morally shaky ground. What would be your recommendation or your view? Yeah, so I think the protections for whistleblowers within computing, within the internet and these types of things,
56:23
that we have to really evolve the conversation that being a whistleblower for something like a massive multinational corporation or a massive multinational state intelligence agency, like these are things, where do you then go? Like who's supposed to protect you, right?
56:40
And I think that this is particularly dangerous within our field. Like if I worked for a massive multinational corporation and I knew that we were doing something wrong, what am I supposed to do? If I come forward, I will most likely be fired. There will probably be a target of everything against my name, make me out to be an idiot, an unethical human
57:03
and so forth. And if we as a community had more ways of treating whistleblowers and more ways of protecting whistleblowers, maybe people would feel more comfortable, right? Because I also agree: what are you supposed to do? In some of these stories, I'll be publishing a little bit more about this,
57:20
and I'll be putting some stories on my blog of some of the folks and what they did. But for example, there is a man called David Parnas and he was a computer scientist and worked in Canada for the US Department of Defense along with a few other multinational defense agencies. And he wrote a big public resignation letter
57:42
and he sent it all around, everywhere he could think of. These were the days of the mailing list, before the truly public internet. And he publicly said: this is unethical, everything that's happening within this organization is not something I believe in, I refuse to work here, and I don't know why my peers continue to do so.
58:01
And I think that we can look at some of these references in history to perhaps think about how we can make some of the same statements, but that also we have to figure out how to protect one another when one another is willing to step forward and do that. Okay, thanks again for this amazing keynote.