
Being Good: An Introduction to Robo- and Machine Ethics


Formal Metadata

Title: Being Good: An Introduction to Robo- and Machine Ethics
Number of Parts: 66
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Machines are all around us: recommending our TV shows, planning our car trips, and running our day-to-day lives via AI assistants like Siri and Alexa. We know humans should act ethically, but what does that mean when building computer systems? Are machines themselves—autonomous vehicles, neural networks that diagnose cancer, algorithms that identify trends in police data—capable of ethical behavior? In this talk, we'll examine the implications of artificial moral agents and their relationships with the humans who build them through the lens of multiple case studies.
Transcript: English (auto-generated)
So, we'll get started. First, thank you, thank all of you. Thanks so much for coming to my
talk. I really appreciate it. I know there are multiple tracks. There's lots of ways to spend your time. So I really do appreciate every single one of you for, for choosing to spend your time here. Thanks also to RubyConf, to all of our fantastic sponsors, and to the venue, where, you know, I've been in LA now for about three years. I don't make it downtown a lot.
LA's kind of like a weird archipelago, and when you're on your own little island, you tend to stay there. But this is a fantastic venue, so, so thank you so much. Let's get started. I tend to speak very quickly. I'm working very hard right now to not talk a mile a minute. Because it gets worse when
I'm excited, and talking about software engineering and Ruby and writing ethically and ethical programs I think is, is really, really exciting. So if at any point I start going way too fast, feel free to make some kind of, yeah, thank you, Michael, exactly.
Some kind of very visible gesture and I will, I'll slow it down. I'll also check in a couple of times to make sure that this pace is good. So I'm gonna talk for about thirty-five minutes, so we'll have a little bit of time at the end for questions. If for any reason I go a little bit long, you know, feel free to come find me after the show. I'm happy to talk about this stuff forever.
So my name is Eric Weinstein. I'm a software engineer here in LA. I'm currently CTO of a little startup that I co-founded called AUX, A-U-X. We are building auction infrastructure for blockchain technology, so a lot of price discovery, a lot of game theory. People in other programming communities will sometimes check in with me to see if I've sort of secretly gone crazy because I'm doing all of this blockchain stuff, but it really is fascinating. That's another thing I'm happy to talk people's ears off about after the show. My undergraduate work was in philosophy. Mostly philosophy of
physics, but also some applied ethics. My graduate work is in creative writing and also computer science separately. So ethics in software engineering has been on my mind for a while now. And the rest of my information is in this human hash I felt compelled to make, so I will be
up at the end, again, if you'd like to get in touch. I'm Eric Q. Weinstein. Q as in queue, Q-U-E-U-E. I guess Quebec is probably a better Q. So please feel free to reach out on Twitter. Feel free to, you know, find me, and I'm happy to have conversations. A few years ago I wrote a book called Ruby Wizardry that teaches Ruby to
eight, nine, ten, eleven year olds. So if you want to talk about that, I'm happy to chat about that too. Oftentimes the good folks at No Starch will put together a discount code, so if you would like the book but can't afford it, please let me know. We'll make something work. This is not a long talk, but I still think we benefit from some kind of
agenda. So we're gonna start with what it means to be good, because it is a fuzzy idea, right, to say we should be good when we write software. Our software should be good. We're gonna look at three mini case studies in roboethics and then three mini case studies in machine ethics. Roboethics concerns itself
with humans being ethical when building robots. And I'm gonna be a little bit loose with robot and include program. So any piece of software or hardware that a human designs, roboethics is about behaving ethically and having ethical considerations top of mind when building these machines. Machine ethics is
more of a science fiction-y field, and we'll see why in a little bit. But this is more about how we design artificial moral agents that are themselves ethical. So if a machine makes a decision, how do we know if the machine is being good? How do we assess the ethical state of the machine's decisions, and how does
that compare to how humans make ethical decisions? And then, as I said, we'll hopefully have a little bit of time at the end for questions. So please note, this talk contains stories about real people. No one who was injured or killed is specifically named. There are no images or descriptions
of death or gore. However, there is going to be a lot of discussion of how software and hardware can fail, what those failure modes entail. And we're going to touch on some topics that are potentially upsetting for members of the audience. There are a couple
that are medically oriented, so it will not hurt my feelings in the slightest if you need to leave. Do not worry about it, but I want it to be up front so that everyone is aware of the content and the stakes involved. So, the stakes involved are actually quite personal to me. This is my son, who was born in
March. He was born twelve weeks prematurely. So, we decided to skip the third trimester. I do not recommend this, but luckily there are, I think, kind geniuses, is how we describe the folks in the NICU, where he spent seven weeks. Unbelievably hardworking, kind, passionate, genius people. His doctors, nurses,
therapists, everyone who took care of him is phenomenal. He was twelve, sorry, two pounds, thirteen ounces. When he was born, it's a little hard to see, but if you see that black and that red wire, right below that is my wife's hand. He was about two hands full, so. I
also like this image, and I will show this at every possible family gathering forever, because he appears to not appreciate me taking a picture of him. If you're curious, he's wearing a bilirubin mask, so he, as part of his treatment, required very bright lights to sort of break up the excess bilirubin in his blood, which can harm his eyes, so he's wearing a mask to protect himself. I like
to think of it as like a little spa treatment, but. Now for those in the audience who are concerned, do not worry. He is a happy, healthy, seventeen pound, eight month old baby. Thank you. He is finally bigger than the dog, which is great.
She is a chihuahua mix, so impressive. Like I said, we are unbelievably lucky to have had the team that we did at Mattel here in Los Angeles, and I cannot overstate the unbelievable gift that was every single member of the NICU staff. But
we think a lot about the people in the room, and we don't always think about the people not in the room. Everything from his heart rate to his O2 saturation, every single metric. Somebody wrote software or built hardware to measure it. Every treatment he required, everything from the way his food was administered, required a machine to do the right thing. And
it's easy to forget, in the sort of whirlwind of something like this, where you have a medical emergency, you have something very scary happen to you, that there's a lot of hardware and software running in the background that is critical for maintaining people's well-being and their lives. And so this talk is
in part about that. So, like I said, the stakes are high, in more ways than one. And part of the fuzziness that I wanted to dispel early on in this talk is what it means to be good. There are a number of
ethical theories. We're gonna be thinking mostly about, talking mostly about applied ethics in this talk, which is what to do in concrete situations. How to live ethical lives and make ethical decisions rather than talk about the abstract. You know, what if this were the case, how would we behave? Imagine, you know, various thought experiments. So there are a number of
schools of thought. I'd like to focus on three. The first is utilitarianism. The idea that we're trying to produce the most good for the most people. Whatever that means. And you can imagine, if there were some way to measure goodness, and there were some way to figure out how much good each person has received, then we could do
some kind of calculus and say, great. This, this optimizes, you know, this is the most good for the most people, and this is what we should do. Clearly, there are scenarios where that is not going to work. There are also deontological ethical theories, and these are rule-based. You can think of the code of Hammurabi, the ten commandments, Kant's categorical imperative, which basically
say, we're going to have general rules slash laws, and they're going to describe what we should do in broad circumstances. Things like, it is never acceptable to kill anyone. Or, it is acceptable to kill someone to preserve your life, but in no other case. Things like that.
The middle ground that we're going to pursue in this talk, insofar as we talk about how to be good, is what's called casuistry, which is funny, actually, because I think, I looked this up earlier. Those of you with broader vocabularies than I have might know that casuistry also means sophistry. Like, things that appear to be informative, but are not. In this context, what we're talking about
is sort of case-based ethical reasoning. Extracting rules in sort of a deontological way, but from specific cases, like what do I do in case A, B, C, all the way to Z, and sort of trying to abstract a rule system from there. So, as you can see, I'm gonna take something
of a casuist approach, and I'm going to break one of my own rules briefly and just read this to you. For the purposes of this talk, being good means safeguarding the well-being of moral agents that interact with our software by deriving best practices from specific instances. And that's why we're going to look at these mini case studies
to figure out what we should be doing. So we'll start with roboethics. And roboethics, again, is sort of how humans are ethical when they design robots or design programs. And we're gonna look at three cases. The Therac-25, the Volkswagen emission scandal, which Caleb mentioned in his
talk, and the Ethereum DAO hack, which seems different from the first two if you're familiar with them. Sort of, you know, sort of a financial thing rather than, you know, people's lives or health being endangered, but it is, I think, illustrative of the broad spectrum of ethical concerns that we should be thinking about.
So the Therac-25. Raise your hand, actually, if you're familiar with this. OK, cool. So I will, please indulge me if you know this story, because I think this one is important. The Therac-25 is a machine that's used to perform radiation therapy for cancer patients. It was designed in 1982, and it
had two modes. One was to provide megavolt slash x-ray, which was often called photon therapy, and another, which was direct electron beam therapy. So two different modes. The earlier models had hardware locks, and were
thoroughly tested in terms of the hardware that was used. The Therac-25 gave up some of these hardware constraints in favor of software constraints. And so, what happened was, between 1985 and 1987, the machine caused massive overdoses of radiation for a number of
cancer patients, and I believe six people died as a result of their injuries. So what happened here, right? We have a machine that has two modes of delivering radiation to ostensibly make people better, but
it malfunctions, and it actually causes severe injuries and in some cases death. The review at the time blamed concurrent programming errors. So we know that concurrency is hard. It is also potentially, you know, lethally so when we get things wrong. What happened is, it was possible that, as a technician, if you selected
one mode, realized you'd made a mistake, and switched to the other mode inside of an eight second window, the software locks would fail to engage, and the patient would receive a hundred times greater dose of radiation than was intended. And what allowed this to happen? As I mentioned, software interlocks replaced the hardware ones, so we now have a less robust way of preventing people from being harmed. The code was not independently reviewed. It apparently was reviewed, sort of, by the engineers on the team, but there was no independent third party review of the code involved. It was also written entirely in assembly, so we did not have any kind of
high level language to help. The failure modes were not thoroughly understood. Not just incorrectly programmed, not understood. And the hardware and software combination was actually not tested at all until the machine was assembled on site. So, all of these played into the injuries and deaths that
resulted in the mid 1980s, and the underlying bugs were everything from arithmetic overflows to race conditions to truly cargo-culted code, code that was just copied over from earlier versions of the machine, some of which did not work or do anything in the current machine. How is this speed? Is this good? Thank you. So what do we learn from this,
right? We have a machine that is meant to heal people and does the opposite. Well, I think what we gather from this, we see a parallel between the medical profession and our own, and here there's a, in medicine there's something called a standard of care. That is the, the sequence of treatments that you are, you know, supposed to offer, they're
sort of the bounds within which you operate as a fully trained, you know, up-to-date medical professional, and deviations from the standard of care do happen, but generally speaking, when you hear about a malpractice lawsuit, it is because someone has deviated from the standard of care and caused injury or death. And in this case, engineers deviated from their standard of
care, and we endangered people who depend on us. And in this case, deviating from the standard of care was not following best practices, was not testing our code, was not being a hundred percent sure of, you know, the hardware, and offloading things to software because software is a solved problem. Which, as we know, is not true.
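To make that kind of failure concrete, here is a minimal Ruby sketch of the race the Therac-25 review described. It is purely hypothetical (the real machine was hand-written assembly), and the class, method, and field names are mine, not the original code's: two pieces of shared state have to agree before the beam fires, and the unchecked version simply trusts that setup finished before the operator's next edit.

    class BeamController
      def initialize
        @beam   = :low_energy   # :low_energy (electron) or :high_energy (x-ray)
        @target = :out          # the spreading target must be :in for high energy
      end

      # Mode edits arrive asynchronously from the operator console, and the slow
      # turntable means the two fields can briefly disagree.
      def select_xray_mode
        @beam = :high_energy
        sleep(1)                # simulate the turntable moving into place
        @target = :in
      end

      # No interlock: trusts that setup finished, which is exactly the assumption
      # the eight-second edit window violated.
      def fire_unchecked
        fire
      end

      # Software interlock: refuse to fire while the configuration is unsafe.
      def fire_with_interlock
        if @beam == :high_energy && @target != :in
          raise "interlock engaged: high-energy beam with target out of position"
        end
        fire
      end

      private

      def fire
        puts "firing #{@beam} beam, target #{@target}"
      end
    end

The point of the sketch is only that a software interlock has to actually check the physical configuration, rather than assume the sequence of events it was tested against.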
Our second mini test case is Volkswagen. So as Caleb mentioned, in 2008, Volkswagen vehicles were caught using what is called a defeat device. So the idea is that the vehicle will operate differently under test mode
than in real, in real life. So you will have fewer emissions, different emissions, different behavior when the vehicle's being tested by regulators than when the vehicle's out on the road performing in, in sort of quote unquote production, right. This, it's harder to quantify this one. As I mentioned, there were a number of, you know, a concrete number of
deaths with the Therac-25. It was clear exactly who was harmed and who was injured. Less so here. Certainly there was a lot of money that was spent, a lot of fines that were paid. Estimates are that this resulted in 59 premature deaths due to things like emphysema, COPD, general respiratory illness as the
result of exposure to the increased fumes from the vehicles. Again, difficult to quantify, but certainly financial and human costs are involved. Now this was allowed to happen because everything from the speed to the steering wheel position to the ambient air pressure could be used to determine whether or not the machine was in
test mode. And as software engineers, that sounds like a thing we should have, right. We have test, we have staging, we have production. We have different environments and the code is tested differently in them. And it may on the surface not sound unreasonable for someone to say, oh, let's have a test mode. Or let's detect when the machine, when our vehicle is being tested. That in and of itself might not
be bad. Maybe there's metrics we want to report back to Volkswagen. Maybe there's other things that we can learn from having, you know, the ability to detect when the vehicle is under test. But we have a moral hazard here of this quote unquote victimless crime that was not so victimless. An idea here is, well, we don't want to have to follow a
bunch of really stringent regulations. We're going to lose money for the company. We're going to have to do this and that. No one's going to be hurt if we just have lower emissions in tests and we pass all of these regulations with flying colors, everyone's happy, and we make a bunch of money and quote unquote nobody gets hurt.
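In code, the shape of a defeat device is depressingly small. The following is a hypothetical Ruby sketch, not Volkswagen's actual engine-control firmware; the sensor names and thresholds are invented for illustration. The controller infers "we are probably on a test dynamometer" from ordinary readings and quietly swaps calibrations.

    Sensors = Struct.new(:speed, :steering_angle, :ambient_pressure_kpa)

    class EmissionsController
      def initialize(sensors)
        @sensors = sensors
      end

      def calibration
        likely_dyno_test? ? :low_nox_test_calibration : :higher_output_road_calibration
      end

      private

      # Wheels turning, steering wheel never moving, lab-like air pressure: the
      # same signals a legitimate "are we under test?" feature might use, which
      # is what makes this such an easy moral hazard to talk yourself into.
      def likely_dyno_test?
        @sensors.speed > 0 &&
          @sensors.steering_angle.abs < 1.0 &&
          (@sensors.ambient_pressure_kpa - 101.3).abs < 0.5
      end
    end

    EmissionsController.new(Sensors.new(50, 0.0, 101.3)).calibration  # => :low_nox_test_calibration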
So what I think we get from this one is that we must always ask ourselves what our code is going to be used for. And I think it's not only our right but our obligation to refuse to write programs that will harm people or other moral agents who interact with them. And we'll talk about this more later and what
this means and how hard this can be. But I think the takeaway from this is that we have to be able, as people who write software, who build things that people depend on, we have to be able to say no. And as Caleb mentioned, you know, engineers have gone to prison as a result of this. For those of you who are familiar with the Goldman Sachs high frequency trading debacle, stealing code
can send you to prison. Back in the early days of cryptography, there were questions about freedom of speech because cryptography was generally understood to be a weapon, a sort of part of the top secret sauce that makes the U.S. military work. And so there are always going to be powerful forces that tell us what we can and can't
use software for. And I think we need to be empowered to disagree. This third sort of micro test case, or micro case study rather, in roboethics concerns the DAO. And the DAO existed in 2016. It was the first attempt at sort of a large scale, decentralized, autonomous sort of venture capital vehicle. The idea is people put cryptocurrency, in this case ether, into the DAO. Members can vote on how those funds are deployed, and you sort of have this decentralized VC fund that can be used to fund various projects or initiatives.
However, the problem was that there was a smart contract, and smart contracts in Ethereum are code that are deployed to the blockchain, to the Ethereum blockchain, and are sort of elaborate state machines. They can receive input, they can change state, they can provide output.
You can think of a vending machine as the classic example. I tend to think of things like escrow, like I want to sell you something. My item goes in, your money goes in. Once both parties have fulfilled their obligation, they're exchanged. Disintermediation, decentralization, these are the utopian themes involved. However, because of a bug in this contract,
four million ether, which was about 70 million dollars at the time, and as of this morning, about 840 million dollars, were siphoned out of the DAO, out of the smart contracts, out of the organization. This reminds me of a panel I spoke on a little while ago to attorneys who were trying to
figure out how blockchain technologies work and what the liabilities involved are. One attorney raised his hand and said, right, but who do I sue? Do I sue the software engineers? Do I sue the hacker? Do I sue the people who are, do I sue the miners who are confirming these transactions and quote unquote shouldn't? This is a theme that's gonna come up
more and more as we get further into the talk. But liability is a huge concern. Now, this could happen. I actually just realized I have the remote, which is why I've been moving around and making the videographer, but thank you so much for dealing with me moving around and wandering. The Ethereum smart contracts are Turing complete, so they can do anything a Turing machine
can do, and that means that there are lots of smart contracts that you can implement that do nothing, that throw errors, that just spin endlessly and waste all of your funds for computation, which in Ethereum is called gas. This means that there are state machines with invalid states that are now possible
and the particular bug in this smart contract was a re-entrancy bug, effectively the order of two lines should have been transposed. Rather than perform a transaction and then update state, we probably should have updated state and then performed the transaction, but because the transaction was performed first, it was possible to repeatedly
siphon funds out of the contract. And this is something that could have been caught again with more robust third party testing, with more comprehensive, you know, maybe theorem provers or, you know, more powerful programming languages or tools that would allow us to say, oh, this is actually, this contract can get into a state that we don't want. Now, the result was not as bad,
quote unquote, as, you know, certainly not as bad as the Therac-25 deaths, not as bad as people losing money and potentially getting sick and dying from the Volkswagen scandal, but at the same time, this one bothers me in some ways more than the others, because I don't think we learned anything from this one.
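To make the re-entrancy pattern concrete, here is a minimal Ruby model of it. The real contract was Solidity, so everything below is a simplified stand-in with invented names; the only faithful part is the ordering bug, where the payout happens before the state update.

    class VulnerableVault
      def initialize(balances)
        @balances = balances              # { participant => amount }
      end

      def withdraw(participant)
        amount = @balances[participant]
        return if amount.nil? || amount.zero?
        participant.receive(self, amount) # external call first...
        @balances[participant] = 0        # ...state update second: the bug
      end
    end

    class SafeVault < VulnerableVault
      # Checks-effects-interactions: update state, then make the external call.
      def withdraw(participant)
        amount = @balances[participant]
        return if amount.nil? || amount.zero?
        @balances[participant] = 0
        participant.receive(self, amount)
      end
    end

    class Attacker
      attr_reader :stolen

      def initialize
        @stolen = 0
        @depth  = 0
      end

      # The "callback" a payout triggers; it immediately withdraws again.
      def receive(vault, amount)
        @stolen += amount
        @depth  += 1
        vault.withdraw(self) if @depth < 5  # a real attack loops until funds or gas run out
      end
    end

    attacker = Attacker.new
    VulnerableVault.new(attacker => 100).withdraw(attacker)
    puts attacker.stolen  # 500: five payouts drained from a single 100-unit balance

With the SafeVault variant, the re-entrant call sees a zero balance and returns immediately, which is exactly the transposition of those two lines described above.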
The result was a hard fork on the Ethereum blockchain. The people who said, well, that's the cost of doing business, that's how things work, people steal your money sometimes, continued on Ethereum Classic, ETC, ETH is the ether you probably, if you are familiar with Ethereum, know now, and that was the result of making things right. So now there are two parallel universes, there are two ethers, and people who held both got
more money. Now, certainly people who invested money in the DAO lost their money, but I think there was no serious introspection that resulted from this. And I think, sort of, as Uncle Ben said, and RIP Stan Lee, I definitely did not finish these slides this morning, or yesterday, with great power comes
great responsibility, right? And I think we are obligated to make programs powerful enough to do what they need to do, and not more powerful. We don't need to write programs that can do things just because, or maybe we'll need that functionality, or maybe we need Turing completeness. I think we are obligated to make programs that do what they do, and do not have the ability to do more.
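A small, everyday Ruby example of that principle of least power, invented for illustration rather than taken from the talk: reading a user-supplied configuration value.

    require "json"

    user_input = '{"retries": 3}'

    # Turing-complete by accident: eval will happily run arbitrary Ruby, so this
    # "config reader" can also delete files or exfiltrate data if the input is hostile.
    config_powerful = eval(user_input)       # please don't
    puts config_powerful.inspect             # a Ruby hash this time; arbitrary code the next

    # Exactly as powerful as the job requires: JSON.parse can only ever build data.
    config_minimal = JSON.parse(user_input)
    puts config_minimal.inspect              # {"retries"=>3}, and never anything but data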
I think that one is probably a little bit more contentious. So, those are our three case studies in Roboethics, how we should think about writing software, and what should kind of cross our minds, who's going to lose money, who's going to jail, who's going to be harmed or potentially killed,
if I make a mistake. Now we're moving more into the realm of machine ethics, and this got kind of heavy, so I put in this cartoon pirate robot to lighten things up a little bit. It's an illustration from the Ruby Wizardry book. So the three sort of case studies we'll look at here are a little bit more hypothetical,
although they rely on technology that exists now. And these are facial recognition, the use of police data in predictive policing, and autonomous vehicles. So, as I said, there's going to be sort of a theme here. It turns out Minority Report, or Philip K. Dick more generally, just kind of predicted all of these things to some extent, so there's going to be a lot of Minority Report references in the last half of the talk.
For those of you who are not familiar with machine learning, the idea is simply producing programs that can do work without being explicitly programmed. It's effectively pattern recognition. You say, you know, this is a picture of a car, this is a picture of a car, this is a picture of a car, and then you say, okay, what is this a picture of? Right, you're teaching the machine in some sort of way
to replicate human learning, to derive a generality from a series of instances. There are a number of different ways of doing machine learning. We're going to talk mostly about supervised learning, in particular decision trees and neural networks.
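As a toy illustration of what "learning from labeled instances" means, here is a tiny nearest-neighbour classifier in Ruby. It is a hypothetical sketch with made-up feature vectors, not how any of the systems discussed below actually work.

    # Each training example is a small feature vector plus a label.
    TRAINING = [
      { features: [4.5, 2.0], label: :car },
      { features: [4.2, 1.8], label: :car },
      { features: [1.8, 0.5], label: :bicycle },
      { features: [1.6, 0.6], label: :bicycle }
    ]

    def distance(a, b)
      Math.sqrt(a.zip(b).sum { |x, y| (x - y)**2 })
    end

    # "Training" here is just remembering examples; prediction picks the label
    # of the closest remembered example.
    def predict(features)
      TRAINING.min_by { |example| distance(example[:features], features) }[:label]
    end

    puts predict([4.0, 1.9])  # car
    puts predict([1.7, 0.4])  # bicycle

Everything the classifier "knows" comes from the data it was handed, which is the point the rest of this section keeps returning to.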
So here's our first one. This is Apple's Face ID. And what I think is interesting about this is not simply being provided access to something based on our biometric data, but what does it mean for the machine to recognize us? Right, what does it mean for us to provide
our biometric data to Apple for Face ID, to have our faces and information about us sort of derived from photographs if you're using Google Photos, tools like that. You know, it was fascinating to me that Google knows what my baby son looks like because it has sort of gone through and tagged all these photos.
It's like, I don't know who this is, but this is all the same guy, which is fascinating and horrifying to me. And again, this is the sort of obligatory Minority Report reference where, you know, there's that scene where John Anderton goes into the mall. He's recognized by his retinal scan and receives these ads that are tailored to him, right? So who owns these biometric data?
Do they belong to us? Do they belong to Apple? What happens if somebody uses my biometric data or it's sold to a third party? What happens if there's a colossal privacy invasion? I almost said piracy invasion because of the robot earlier, which is much more entertaining. You know, above and beyond this,
what does it mean for the machine to make decisions using my data, my face, my fingerprint? It's really interesting, and I think some people will, or I, at first, kind of glossed over this, thinking, well, how is this different from the machine accepting my password, right? Hashes the password, looks it up in the database, the hashes match, great, this is you.
I think the difference here, you know, above and beyond ownership, above and beyond the potential for identity theft, is that the machine is making a decision in a very rudimentary way, and we don't know why it makes the decision that it makes. We know that if our password is rejected, something's wrong.
Maybe there's something going on in the server, maybe we typed in the wrong password, but there's a deterministic, you know, modulo network hiccups and other things, reason for this. There is not necessarily a deterministic reason why the machine would not recognize us. Is it because I turned my head to the side? Is it the lighting? Is it because the machine has been trained on some data and the machine does not recognize faces
that look like mine for some reason? You know, men versus women, adults versus children, people of various backgrounds, ethnicities, races. And when we entrust machines with the power to do human things, to decide, to recognize, to permit and to deny, we're implicitly giving their actions moral weight. And I think this is the beginning of machines
having moral dimensions to the choices that they make. Now, you know, I wouldn't say that any machine that we've created so far is sentient, right? It's not able to make decisions in a classical sense. It's not able to, I would not ascribe an inner life to any machine that we've created so far. But we're getting there, you know, at least in a sort of superficial way.
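One way to see why this is different from a password check is to put the two side by side. This is a hypothetical Ruby sketch, not Apple's implementation: the password path is a deterministic equality on a digest, while the biometric path is a similarity score compared against a threshold somebody chose.

    require "digest"

    # Deterministic: exactly right or exactly wrong (real systems also salt and
    # use a deliberately slow hash, but the shape is the same).
    def password_match?(stored_digest, attempt)
      Digest::SHA256.hexdigest(attempt) == stored_digest
    end

    FACE_MATCH_THRESHOLD = 0.92  # chosen by whoever trained and tuned the model

    def cosine_similarity(a, b)
      dot   = a.zip(b).sum { |x, y| x * y }
      norms = Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x })
      dot / norms
    end

    # Probabilistic: lighting, pose, and whatever faces were in the training data
    # all move this number, and there is no crisp answer to "why only 0.91 today?"
    def face_match?(stored_embedding, captured_embedding)
      cosine_similarity(stored_embedding, captured_embedding) >= FACE_MATCH_THRESHOLD
    end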
And it's something we don't, I think, treat carefully enough. This is how you know you've made it, by the way, when you get to reference one of your other talks from a talk that you're giving. So this one, again, is referencing the Department of Pre-Crime from Minority Report.
And I gave a talk at EuroClojure, I think a little over two years ago, that focused on the use of policing data in Los Angeles and in LA County. And Los Angeles actually has a very robust, very complete set of open data for all kinds of things.
And one of those data sets has to do with police investigations. And the one I looked at effectively was, who got pulled over, and when someone was pulled over, were they arrested, right? And I used a couple different machine learning approaches here. And this one is a decision tree.
And decision trees are nice because the machine can tell you effectively why it's chosen to do what it's done. There's some relatively straightforward ways for it to kind of divide the tree at a certain point, and say, hey, the most information gained is here, I can actually make two relatively equal-sized blobs, and then we'll keep doing it, we'll keep doing it, we'll keep doing it.
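Here is roughly what "the most information gained is here" means, as a small Ruby sketch over a made-up toy data set (not the LA open data, and with deliberately bland feature names): entropy measures how mixed a group of labels is, and the tree greedily picks the split that reduces it most.

    # Ruby 2.7+ for Array#tally.
    def entropy(labels)
      labels.tally.values.sum do |count|
        p = count.to_f / labels.size
        -p * Math.log2(p)
      end
    end

    # Information gain of splitting `rows` on a boolean feature.
    def information_gain(rows, feature)
      with_feature, without_feature = rows.partition { |row| row[feature] }
      before = entropy(rows.map { |row| row[:arrested] })
      after  = [with_feature, without_feature].sum do |group|
        next 0.0 if group.empty?
        (group.size.to_f / rows.size) * entropy(group.map { |row| row[:arrested] })
      end
      before - after
    end

    rows = [
      { night_stop: true,  prior_record: true,  arrested: true  },
      { night_stop: true,  prior_record: false, arrested: false },
      { night_stop: false, prior_record: true,  arrested: true  },
      { night_stop: false, prior_record: false, arrested: false }
    ]

    puts information_gain(rows, :prior_record)  # 1.0, a perfect split on this toy data
    puts information_gain(rows, :night_stop)    # 0.0, tells us nothing

The tree simply keeps choosing whichever feature yields the biggest gain; if a biased feature predicts the labels best, that is the feature it will split on.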
So unsurprisingly, the machine looked at the data, built a decision tree, and 77 or 78% of the time, it was right. And it learned to be right by saying, is this person African-American? Is this person a man? This person should go to jail. It's very obvious.
Now, I think that this is something that is surprising to people when they hear this. You know, you think about the dispassionate machine making recommendations. The DOJ and the NIJ have launched initiatives to explore what they call predictive policing, which is much closer to this scary idea of pre-crime.
And the idea here is to figure out who is likely to commit a crime, who is likely to commit crimes again when it comes to sentencing. And what people don't realize is if you train a machine on racist or biased data, the machine is going to be racist or biased. And it's particularly dangerous when we say, oh, it doesn't, you know, I didn't do this,
a human didn't do this, this is a dispassionate machine. The machine can't be racist, so this is the right answer and this person goes to jail and that's the end. Now, in the decision tree example, the machine can sort of explain its decisions. But you can imagine something like a support vector machine, a neural network, a much more elaborate black box machine learning algorithm that cannot,
you know, there's no explanatory power here. Again, biased data, we have biased machines, and the worst, most dangerous part is people saying, it doesn't, you know, humans didn't have a role in this so this must be the right answer. Absolutely not. The machine has learned from human decisions. And just as Conway's law tells us that organizations are constrained to produce software
that mirrors their communication structure, I think we are, when we're using machine learning models, predictive models, we are constrained to mirror the biases present in the data. Now, I don't know if this is a law already. I'm gonna call this Weinstein's Conjecture because apparently I can cite my own talks and name my own conjectures and laws. If Weinstein's Conjecture is a thing,
we'll call it something else. Actually, we can take a vote later because I think there are cooler names than Weinstein's Conjecture. Anyway, so the critical thing, again, it's bad enough to have bias in a question like, is this a tumor or not. We would like to be right when the machine is making this decision. But if it's who gets a loan, who goes to jail, who gets arrested, these are critical problems
that we have to confront before we can allow machines to act as moral agents. The last one here, self-driving cars, autonomous vehicles, probably needs relatively little introduction. Many of you are probably aware of the high-profile deaths associated with Uber and Tesla
when these machines fail. And the question, again, calling back to the attorney in the blockchain discussion is who's liable, right? Is it the machine? Is it the car that's liable? Is it Tesla or Uber or Google? Is it the people who drove around in the car for hours and hours and hours and hours and taught the car how to drive?
It's very unclear who's at fault. And again, the machine is not able to explain to us why it chose to do what it did. And this is really dangerous when we, again, start trusting machines to make these decisions on our behalf. And again, if you remember, in Minority Report, there's that scene where he kinda like slides into the car and the car's kinda like self-driving down a highway.
So again, I don't know how Philip K. Dick predicted my entire talk, but here we are. How many of you are familiar with the trolley problem? Cool, so I won't spend a lot of time on it. The idea is you see a trolley heading to hit, I don't know, five people. You can pull the rail switch and it will hit three people. Do you do it? From a distance, you can't tell anything about these people.
Even if you could, you wouldn't know much about their lives, their behaviors, who is good and who is not. It's a hard enough problem for humans, but now the trolley problem is something that we're asking machines to solve. Do we swerve to avoid an obstruction, preserve everybody in the car, but we kill somebody on the sidewalk? Do we try to figure out the number of people who will be harmed or injured or the gravity,
the severity of their injuries and make a calculation based on that? None of these sound like good answers. When I say how do we teach our robot children well, this notion, kind of calling back to moral hazards in the Volkswagen scandal, how do we perform mechanism design? How do we design these reward functions so the machine does what we want and does not do what we don't?
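To see why "design the reward function" is itself a moral act, here is a deliberately uncomfortable Ruby sketch, entirely hypothetical: a cost table plus an expected-cost minimizer. Whoever writes these weights is answering the trolley problem on everyone else's behalf.

    COSTS = {
      occupant_injury:   100,
      pedestrian_injury: 100,  # is this really equal? lower? higher? who decides?
      property_damage:     1
    }

    Maneuver = Struct.new(:name, :expected_outcomes)  # e.g. { pedestrian_injury: 0.8 }

    def least_bad(maneuvers)
      maneuvers.min_by do |m|
        m.expected_outcomes.sum { |outcome, probability| COSTS[outcome] * probability }
      end
    end

    options = [
      Maneuver.new(:brake_straight, { occupant_injury: 0.2, pedestrian_injury: 0.6 }),
      Maneuver.new(:swerve_right,   { occupant_injury: 0.5, property_damage: 1.0 })
    ]

    puts least_bad(options).name  # the winner changes when the weights change, and that's the point

There is no number you can put in that table that is not an ethical claim.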
This is not an easy problem. And again, who is ultimately liable? I think the takeaway here is that machines' actions are not only imbued with moral dimension when we get here, but that the need for explanation is critical, and the capacity to accept blame. We have to know why the decision was made
and whose fault it was. Because in order for things to function legally, in order for us to learn and move forward, we have to be able to say, this is the reason this happened and here's what we're gonna do to fix it. I think that that explanatory power, the capacity to accept blame is critical in the sort of humanization of our robots.
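One low-tech place to start, sketched here as hypothetical Ruby rather than anything from the talk: never let an automated decision out of the system without the evidence, the model or rule version that produced it, and an accountable human owner attached.

    require "time"

    Decision = Struct.new(:outcome, :reasons, :model_version, :accountable_owner, :decided_at) do
      # A single line suitable for an append-only audit log.
      def to_audit_line
        "#{decided_at.iso8601} #{outcome} by #{model_version} " \
          "(owner: #{accountable_owner}) because #{reasons.join('; ')}"
      end
    end

    decision = Decision.new(
      :deny_unlock,
      ["similarity 0.87 below threshold 0.92"],
      "face-match-2024-03",
      "identity-team@example.com",
      Time.now
    )

    puts decision.to_audit_line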
Actually, just a quick thing. I meant to ask this earlier and I totally forgot. How many of you have some kind of formal computer science education? And here I mean a boot camp or a CS degree or any classroom experience. Great, keep your hand up if there was an ethical component to your coursework, somebody made you take an ethics class.
Okay, yeah, it looks like a little bit less than half. I think we have to, have to, have to start teaching ethics in comp sci courses in universities, in boot camps. And we're getting down to the TLDPA, which is what I call the too long, didn't pay attention. So if you were not paying attention for the first part,
that's great, that's totally fine. We can get it all in this one slide. We have to have a standard of care and that means best practices. We have to have a structure, a framework for writing software. When we are writing programs, we have to know what's acceptable and what's not. We need the right to refuse. And here I'm thinking of a Hippocratic oath,
something like that, where you make a promise, and this is true for attorneys, for doctors, for engineers, for all these professions that we pretend to be. But then we're like, oh, we don't need to be licensed. We don't need to take an oath. We don't need ethics. We can sit down and we can do git push heroku master and our software is out in the world. Not to pick on Heroku, git push anybody master.
And I think we have to imbue these artificial agents as we build them and make them more complex with the sound moral bases that we lay out for ourselves. Because it's one thing for us to figure out how to be ethical when we write programs. It's another, once they're out in the world doing their own thing, whether they're smart contracts, neural networks, autonomous vehicles, robots, what have you,
they have to have some way to say, here's why I chose to do this. Here's the reasoning. Here's who made that decision and here's how we learned from it. Now, I had not really intended for this to turn into a call to action, but the recent election cycle has got me all amped up.
I'm very excited. I think because people can be injured, people can be killed when we screw up. And people say that. I can't tell you how many places I've worked where the answer was, hey, don't worry about it. We had an outage, nobody's dying, nobody's going to jail. Sometimes people do die. They do go to jail. People are harmed when we make mistakes.
What I think we need is we need an organization as software developers to fight for these things. Now, as you can see, I'm a white man with a beard and a wedding ring. So if I walk into a conference room as CTO, senior engineer, whatever, and I say, we're not doing this, it's not ethical, and we'll have no part in it,
there's a non-zero, a reasonable chance the business will back off and say, okay, let's at least talk about it and figure out why. But if I have just come out of a boot camp, if I am not a straight white man with a wedding ring and a beard, if I am someone who has substantially less social capital, less privilege, less power in the organization,
and I say, I don't want to do this because I think it's unethical, and the answer is do it or you're fired, and you do have an elderly parent, a new baby, someone who's ill. This is the first time you've actually, you know, you've done a career change, and for the first time, financial stability is on the horizon for you. And the difference between you having a semblance of a life and not is taking the stand.
The pressure is unbelievable to write that invasive code, to share that third-party data, to gloss over some error in the training data because it will probably shake out with enough training. And what I think we need in the absence of ethical organizations inside
of our computer science programs and inside of our organizations, I think we do need those, but we need somebody like, I don't know if it's a union, I don't know if it's the EFF, somebody we can go to and say, I need you to have my back because I am not gonna write this code, and it is not acceptable for me to be fired for taking the stand. Now, I'm happy to talk about what that means,
what that looks like, who does it, but it reminds me of these, of a quote by Elie Wiesel, who was a Holocaust survivor, and he said, we must take sides. He said, neutrality helps the oppressor, never the victim, and silence encourages the tormentors and never the tormented.
And that has a lot of, I think, parallels in the current political climate, and I'm happy to talk about that later too, but most importantly, what that means is, if we don't say something, we agree that we don't need these guidelines, and I think we have to, have to have them. So, that's all I've got.
Thank you so much for coming to my talk, I really do appreciate it. And, sure, the question is, how do we operationalize all this in our code reviews, in our day-to-day work? I do think that having an explicit code of conduct for the organization is a great place to start, where someone says, we have RuboCop for linting,
we have these ways of building our process, here's our continuous delivery pipeline, but we can also say, this is how we address these questions. I think having a wiki, case studies, being able to anonymize data and say, somebody came to us and said, you know what, they were not comfortable writing this ad. You know, when I worked at Conde Nast, there was an ad called The Guillotine,
it did what it sounds like, it was extremely hard to deal with. I started running an ad blocker so I didn't have to look at it, even though our team was tasked with building it. These, even these trivial questions are things that you can build into a wiki, build into an internal document, and someone can say, I don't feel comfortable, what do I do? Now, again, it's tough when you have it within your organization and you don't have
a third party that has your back, but I think that's a great place to start. That was an excellent question, and I have settled, for now, on the Legion of Benevolent Software Developers. So sorry, yes, the question is, what is the name, what name are we thinking of for this group that will defend our rights to not write unethical software? And I've, for now, settled on the Legion
of Benevolent Software Developers, but I'm always open to input, and if we build this thing together, I think we can arrive at something great. All right, I think that's all the time I have. Thank you so much. Please come find me if you have questions. Thank you.