Keynote: Killer Robots Considered Harmful
Formal Metadata
Title: Keynote: Killer Robots Considered Harmful
Number of Parts: 112
License: CC Attribution - NonCommercial - ShareAlike 4.0 International
Identifiers: 10.5446/60814 (DOI)
Transcript: English (auto-generated)
00:06
Anyway, good morning. I am so happy to be here. I am from Dublin, so it's a great privilege and honour to get to speak to EuroPython in my own home city. So thank you very much for having me. So just to sort of get things straight at the start, what is this talk about?
00:23
It is not about my worries that the robots will become clever and smart and, you know, overthrow humanity with their super intelligence. That's the first thing that people think about when we start throwing about terms like autonomous weapons and killer robots. But actually what I worry about is the opposite thing, that we will introduce autonomous weapons
00:45
and they will be too stupid. So this photo here is a bunch of autonomous cars from a company called Cruise. This was taken on June 30th this year, so only a couple of weeks back. And this company is allowed to operate 30 autonomous cars in San Francisco
01:04
and it has a license to pick up passengers at night and drive them around with no driver in the car. And of the 30 cars that they're allowed to run, about 20 of these cars basically decided to pull in and sort of clog up this junction in San Francisco, bringing
01:20
traffic to a grinding halt. Now, no lives were lost here, which is great, but then cars are not supposed to kill; cars are supposed to avoid killing people. What will happen if we introduce a fleet of autonomous weapons and all those weapons start doing stupid things all at once? Or even one weapon does a stupid thing once?
01:40
You know, people will die and that is what I worry about. So just a brief recap about who the heck I am. I've been a software engineer or an SRE for my entire career. I joined the Campaign to Stop Killer Robots about four years ago and I will tell you that story shortly. It's super weird to wake up in the morning and find the Guardian with an op-ed that says you're a modern day hero.
02:03
I'm not. I'm just doing the thing that I think is the right thing to do and that I hope that all of you would do as well after you've listened to this talk. So here is my back story. In 2017, I was working at Google and Google at that point
02:20
had taken this secret contract with the US Department of Defense to work on a thing called Project Maven. This is the logo for Project Maven. There's a little motto here. It says, Officium nostrum est adiuvare, and that means our job is to help. Now, I don't know who made up this logo or what they were thinking, but either, you know, option one,
02:44
they thought this was a cool logo and they'd never heard of "I'm from the government and I'm here to help", or else they did know about that and it's extremely subversive, but either way, it's pretty funny. So 2017, they kicked this project off and basically the idea is normal military government
03:02
procurement processes are slow and cumbersome and what if they could get some of that, you know, private software industry kind of special sauce and use it to supercharge their military systems so that they could have, you know, warfare at the speed of thought and all this kind of stuff. So basically their problem was they had all this video surveillance stuff. This is called
03:24
wide area motion imagery and it's the video that you get when you fly a drone over somewhere and you record that video. So it's from high above and, you know, got relatively little detail but there is enough detail there to pick out people moving around, vehicles moving, that sort
03:42
of thing. And, you know, they fly these drones over certain areas pretty much continuously and so they end up with this huge, huge amount of video and they literally cannot hire enough people to actually do all the analysis on this. So basically their idea here is automate that. Okay, so that's fine. You want to do some machine learning. What's
04:04
the problem? Well, the problem is kind of this, right? At the start of the process is machine learning, yeah, sure. At the end of the process is people getting blown up. So this is a very, very blunt quote and apologies for any offense caused,
04:22
but I think that Jamie Zawinski of Mozilla here is correct. There's a thing called the military kill chain and it starts off, identify your target, dispatch forces to your target, attack, and then destroy. That's the kill chain and that is the end. That's, you know,
04:43
Maven was not a weapon, but it was very much feeding information into the weapon systems. So I was not the only person at Google that was concerned about this project. I knew about this a few weeks before it went public. When it did go public, every art department for
05:04
all of the tech kind of publications went on a bit of a field day and produced all these kinds of images. What I had been asked to do in particular, I wasn't asked to work on the core Maven project, but what I was asked to do was to help Google set up new air gapped data centers.
05:23
Phase one of Maven was 18 months long and what they were going to do was basically hand software to the DOD, so handing over trained models. And they also had some pretty fancy plans for nice UXs that would give you like a timeline of people's activities and like a little connected social graph of, you know, who in which houses visits who and which other
05:45
buildings, all this kind of stuff. But what they were asking me to do was help them to run that software in-house in phase two in these air gapped data centers that would have been sort of supervised in operation by the US military. So I ended up leaving Google and I started
06:05
speaking out publicly about this issue and I joined the Campaign to Stop Killer Robots. I still volunteer with them regularly doing advocacy type work. So, ethics 101. I am not going to talk about ethics 101 because this is a tech conference and Vicky said, don't talk about
06:22
ethics, Laura, talk about the tech. So for me, my ethical stance against killer robots is actually very grounded in what I see as fundamental and probably insoluble problems with the technology. So that's mainly what we're going to talk about. If you do want ethics, actually The Good Place is pretty good. I did a whole master's degree in ethics and
06:43
I learned maybe nearly as much from The Good Place anyway, so it is what it is. So, tech stuff. Here is sort of a, I guess, very high level schematic of your autonomous weapon. So first off, an autonomous weapon is not just a weapon with any degree of autonomy.
07:08
You can have an autonomous drone that flies autonomously, that takes certain decisions around routing, all this kind of stuff, but when we talk about an autonomous weapon, we talk about autonomy in the critical functions and that's targeting and the decision to attack a particular
07:25
target. So target identification and target selection. An autonomous drone that can autonomously fly somewhere, but a human is making the decision to attack, we don't consider that an autonomous weapon, even though it has some autonomous capabilities. We're all about the
07:42
lethal stuff here, those critical functions of choosing the target. So how targeting works is kind of key to this whole argument. So the autonomous weapons that we're seeing emerge or that have existed for some time in sort of proto-autonomous form, they tend to use sensors
08:02
to sense the environment and then they make some decisions internally. So the sensors that they use tend to be radar and lidar, infrared, mobile phone signals. A huge amount of the targeting that's done on individuals is actually done against mobile phones as opposed to
08:22
the person: you're not looking at the identity of the person, you're looking at what phone they're carrying, and we're assuming that that tracks. Then radiation, so enemy radar; sound, so for example detecting the origin of someone shooting at you; and other signals, so there's a thing called
08:41
IFF beacons, that's identification friend or foe; it basically tells you, is this aircraft or ship or whatever friendly or enemy. You could get data from other devices, so you could have like peer-to-peer networks of these things sharing information, and cameras for vision.
09:00
That's probably not an exhaustive list but I think that covers most of the bases. So the first example of autonomous weapons that people tend to talk about are guided missiles and sensor-fuzed weapons. What these do is they use radar, infrared or some other means
09:22
to track a target but the target is first selected by a human being. So I would say okay that ship over there, we're going to attack that and your guided missile will lock onto it. So the decision is made by a human being and it's pretty close in time to when the actual strike happens. So typically you know when you target it and you fire and the attack starts
09:45
right away. And the targets are typically military targets, so tanks, ships, that sort of thing. These are not typically things that you would use to attack what we call dual use targets, so
10:01
that's people, buildings that are not exclusively military bases, that sort of thing. So that's not typically what you're doing with your guided missiles, they tend to be pretty military focused. And they use the sensor data only to control the very end stage of the attack usually. So they pretty much fly towards the thing you pointed them at and then they use their radar, their infrared to basically stay locked on
10:24
just in the final phase so they don't sort of veer off. So a lot of people say, well autonomous weapon systems, they will make warfare more precise and they will help commanders to carry out their intentions more precisely. And every time I've looked at it,
10:42
the Americans are the ones that say this a lot in the international debates. Every time I look at it, what they're talking about tends to be stuff like this, which as I say are not really autonomous weapons in the fully autonomous sense of the word.
11:00
So here's the second thing that some people talk about when they say autonomous weapons are great. You've got missile defense shields or counter rocket and mortar systems, C-RAMs. And these are interesting as well because they're typically guarding a human occupied position, so you use them on a base or on a ship or somewhere where people are.
11:24
They don't move around by themselves. They can be mounted on a vehicle which can be moved. And what they do is they typically have more than one mode. They may have a manually operated mode where they're watching for potential attacks and they require a human being to push a button
11:44
and say, yeah, that's an attack, fire back. Or if you're expecting a big swarm of incoming missiles or these days drones or whatever, you can put them into a fully autonomous mode and they will attack whatever they see incoming. But that's not typically how they're operated most of the time. And they work based on radar mostly and thermal imaging,
12:05
and the targets are generally military in nature. These are designed purely to attack missiles and mortars and things like that coming in. However, even these systems, which are co-located with humans, don't move around and are pretty much targeted at military things
12:22
like missiles, have had accidents, particularly around aircraft. There have been a couple of cases of missile defense installations attacking aircraft, both military and civilian. So even this is not foolproof.
12:41
Then another thing that's coming out is smart tanks and armored personnel carriers or APCs. So those, again, are often manned vehicles, but they can be completely autonomous and completely remotely operated. And these increasingly have the ability to detect threats which can be incoming fire. But here's the thing, I went to an arms fair last September and I went and I talked
13:04
to a bunch of these tank manufacturers, and what they do now is they let you install your own software plugin that defines what a threat is. So you write a little plugin that could be anything, like you could decide that all human beings are threats or dogs are threats or anytime
13:20
you detect the word hospital on a building, it's a threat. You can do whatever you want, crazy things. So that's a really interesting development here, and I think that may be the way that manufacturers are going to go with this. They're going to build these very flexible systems and let each military sort of define what it is they're going to do with them, right?
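As a rough illustration of what that kind of operator-defined threat plugin might look like, here is a minimal sketch in Python. The interface, class names, and example rule are all invented for illustration; no real vendor API is being described here.

```python
# Hypothetical sketch of an operator-defined "threat" plugin of the kind
# described above. The interface, class names and example rule are invented
# for illustration; no real vendor API is being shown here.
from dataclasses import dataclass, field


@dataclass
class Detection:
    """One object the vehicle's sensors have picked out."""
    kind: str                                  # e.g. "person", "vehicle", "building"
    labels: set = field(default_factory=set)   # e.g. OCR'd text or classifier tags
    is_firing: bool = False                    # did sensors register incoming fire?


class ThreatPlugin:
    """Each operator drops in their own definition of 'threat'."""

    def is_threat(self, detection: Detection) -> bool:
        raise NotImplementedError


class ExampleDoctrine(ThreatPlugin):
    # The worrying part: nothing in the interface constrains this logic. A
    # plugin could just as easily flag every person, or any building whose
    # labels include "hospital", as in the talk's example.
    def is_threat(self, detection: Detection) -> bool:
        return detection.is_firing


def scan(detections: list, plugin: ThreatPlugin) -> list:
    """Return whatever the installed plugin says is a threat."""
    return [d for d in detections if plugin.is_threat(d)]


# Example: with this doctrine, only things registering incoming fire are flagged.
observed = [Detection("person"), Detection("vehicle", is_firing=True)]
print(scan(observed, ExampleDoctrine()))
```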
13:42
But here is the thing that really, really, really tends to have people worried. These are loitering munitions. A loitering munition is sort of a cross between a missile and a drone. What they're designed to do is basically fly around in an area for a lengthy period of time, multiple hours. And what they do is they look for
14:06
potential targets and then they can attack. So you can define, obviously, it's software, so you can have any sorts of behavior you want. You can build loitering munitions that will always ask for human validation before an attack. And your targets and your targeting criteria
14:27
can vary really widely. And this is, of course, one of the challenges in the debate about autonomous weapons because an autonomous weapon can be such a broad class of things, right? And the shape and the scope of autonomy can vary a lot. But the key thing, I think,
14:43
about the loitering munition is there's much less human awareness and much less human control here because this thing is moving around. You deploy this to patrol an area which is going to be multiple square miles potentially. You don't know where and when that's going to attack.
15:04
It's extremely difficult as a military commander to say, I'm going to deploy this weapon in this area and it's going to attack this sort of class of object. You can't predict what it's going to do exactly. And unless you have a really, really good awareness of what's in that area,
15:22
there's a huge potential for things to go wrong here. And I'll explain why in a bit. So here's a concrete example of a loitering munition. This is a thing called the Harpy, or the Harop, and it's made by an Israeli arms company called IAI. What it's designed to do
15:40
is it flies around, it looks for military radar signals in a particular area and it can attack them autonomously or semi-autonomously with human supervision. The idea here is basically find your enemy's anti-aircraft installations and attack them and take them out. This is exclusively attacking military targets, not dual use. There's scope for accident here
16:03
and there is some risk, but the sensor processing is really straightforward. There's a little bit less that can go wrong here, so it's not the riskiest of autonomous weapons. This next thing came out about two or three years ago. This is the STM Kargu,
16:22
and this is made by the Turkish weapons company STM. It's the state weapons company. So they boast about the awesome machine vision software that they have here, including facial recognition. The implied use case here is to be a hunter-killer robot that hunts and kills human beings. In fact, two years ago in Libya, it is claimed that these
16:47
weapons were actually used just for that. They were basically pointed at a group of people who were presumed to be fighters and essentially killed them all. There's a huge ethical debate
17:02
around targeted killing and around whether or not that's an okay way of waging war and an okay way of carrying out counter-terrorism activities in particular, and we could easily talk for more than 45 minutes just about that. But if we bypass all that entirely, there's a big technical
17:20
concern here because when you're talking about machine vision in this context, you are talking about uncooperative facial recognition in video, and that is not really reliable. The US National Institute of Standards and Technology (NIST) has a long-running series
17:43
of evaluations they do on many different kinds of facial recognition tools. In their last assessment of uncooperative facial recognition in video, their conclusion is essentially, it is not good enough for anything important without human supervision.
18:05
Building this sort of thing into a killer drone is maybe a bit concerning. This guy here is a thing called the Orlan-10/Leer system. The Orlan-10 is the drone
18:23
and the Leer is a base station that does control stuff and feeds them information. What the Orlan-10 does is it flies around and it senses mobile phone signals. So there is a system called SkyNet that I don't know if some of you have heard of,
18:41
and it's not that SkyNet. No, we're not back to the terminators. SkyNet is a real computer system that was used by the US to attempt to distinguish terrorists in Pakistan. This was in the late noughties and early 2010s, I believe. They basically sucked up
19:06
all of the call metadata and SMS metadata from the Pakistani phone networks, and they used it to try and build a machine learning model that would determine who is a terrorist and who is not. Here's the thing. They trained that model on five people.
19:23
Five. They had five examples. That's crazy. This all came out, by the way, in the Snowden leaks. There was quite a lot written about it and it seems to be pretty reliable. So this is the sort of the statistical basis that people are being targeted over.
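To see why a handful of examples is such a weak statistical basis, here is some back-of-the-envelope arithmetic in Python. Every number in it is an assumption chosen for illustration, not a figure from the talk or from the leaked documents.

```python
# Illustrative base-rate arithmetic only: every number here is an assumption
# chosen for the sake of the example, not a real figure from the programme.
# The point is that a tiny positive class drowns in false positives, and with
# only five labelled positives you cannot even measure the error rates.
subscribers = 50_000_000      # assumed size of the monitored phone population
true_targets = 100            # assumed number of actual positives
false_positive_rate = 0.001   # assume a very optimistic 0.1% false positive rate
true_positive_rate = 0.5      # assume the model catches half of the real positives

false_positives = (subscribers - true_targets) * false_positive_rate
true_positives = true_targets * true_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"people flagged:   {false_positives + true_positives:,.0f}")
print(f"actual positives: {true_positives:,.0f}")
print(f"precision:        {precision:.2%}")  # roughly 0.1%: ~999 of 1000 flags are wrong

# And with only 5 labelled examples to validate against, each single example
# shifts the measured true positive rate by 20 percentage points, so even the
# optimistic numbers assumed above could not be verified.
```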
19:43
You know, suck in all the phone network data, build a jury-rigged machine learning model based on five examples, fly this sort of thing around and kill people. Okay, so I want to talk very briefly about a thing called international humanitarian law, or IHL.
20:08
So this is also known as sort of the law of war. Basically, the idea here is that there's two sort of legal regimes that can apply at any one time. Now, sitting here in Ireland,
20:22
we are not in a state of war. Normal national laws and normal sort of human rights law applies to us, and we can't be summarily killed unless maybe we're menacing somebody in some very physical way, very immediate way. We have policing. In a state of war, a different set
20:42
of laws apply, and what they do is they provide certain specific protections to civilians and also to combatants. So there's rules around things like how you take care of prisoners of war, that kind of stuff. It's based on just war theory, which kind of has this two-way split. Jus ad bellum is basically how you make the decision
21:04
about whether or not it's okay to go to war. Is this a just war? Am I defending myself, or do I have another good reason to go to war? And then jus in bello is all about the conduct of warfare. Now, the two big ideas in jus in bello, the conduct of war, are that when you're making an attack, you have to apply
21:25
a principle called distinction. And what that says is, I have to attack military targets. The aim is to weaken military force. That's the only valid aim. And that doesn't mean that you can't make attacks where you might do damage to civilians or civilian stuff. You can,
21:46
but you can't aim to do that. So military targets are basically military objects, your tanks, your warships, your bases, and combatants who are taking direct part in the conflict. They don't have to be in a military uniform. But there are complications. These are not
22:05
straightforward criteria. So for example, there are some military forces in military uniform that are not valid targets. That's people who are wounded in combat. You also can't target medics or chaplains. And you can't target people who are surrendering. So there's actually
22:24
a bunch of exceptions here. And when somebody isn't wearing a military uniform, it gets even more complicated. So this is something that was maybe relatively easy back in Napoleonic times: a dude is standing there with his bayonet and his big red uniform on, on a battlefield. Okay. These days, not so much. So proportionality is the second part of this.
22:47
So you say, okay, well, I really, really need to attack this thing. There are some civilians nearby. Is that okay? So yes, if you don't intend to kill the civilians, but you foresee that you
23:02
will, you're actually allowed to do that. That's okay. But you have to balance what you intend to gain from this militarily with the amount of civilian harm that you foresee. So if you foresee that you would kill a thousand civilians to take out a very minor military
23:20
objective, that's probably not proportional. Now there's even more complication around this. So again, since Napoleonic times, you have to think about this not only in terms of one specific attack or engagement. We also think about this in terms of weapon development: is it possible for a particular weapon to be used in compliance
23:47
with these rules? So a lot of the objections to, say, landmines come about because landmines can't do distinction, right? A landmine just goes off no matter who steps on it. Could be a kid, could be a soldier, could be anyone, right? Then rules of engagement are another thing as well.
24:06
Militaries have these things called rules of engagement that are basically sort of playbooks for how they do war stuff. And depending on what their current playbook says, soldiers and commanders will react in different ways to different situations.
24:23
So think about this. You are a military force and you are holding a city and you have checkpoints around the place to monitor what people are doing and where they're going and make sure that the bad guys aren't moving around your city. Somebody drives up to your checkpoint and doesn't stop. What you do then depends on what rules of engagement apply.
24:44
So distinction and proportionality, they sound kind of simple, but they're actually, there's a huge amount of context and nuance here. And that's really relevant when we think about whether or not an autonomous weapon is going to be able to be used in compliance with
25:01
these laws. And I don't say whether or not the weapon will comply with the laws, because a weapon is a machine and laws don't apply to machines; laws apply to people. So, are they going to be able to be used in compliance? Whether or not they can be used in compliance, to me, really comes down to
25:24
can they be predicted? I think the answer is not in all cases. And you know, this is risk. Risk is always a continuum. So to me, and a lot of people agree with this, this is not just me, the more you're targeting dual use objects,
25:43
in particular people, and buildings and vehicles where there may be civilians present, and the further away you are from that human decision to dispatch a weapon or to
26:01
make an attack, the more risk you have of something happening that the commander did not intend. And I spent a lot of time in my career building systems that do little autonomous things: I build software that runs software and runs distributed systems and runs
26:23
hardware. And this is something that I have seen to be true. You build a piece of software and you game it out and you try and figure out what it should do in all cases. And it works for a while. And then some quirk of the environment, or of the things it interacts with, that you didn't anticipate comes along and bam, something happens,
26:44
you know, a software system going down is one thing, but an attack happening is another. Downtime is bad. Death is infinitely worse. So this is really what it comes down to, for me: the way that autonomous weapons are developing
27:03
away from the likes of the C-RAMs and the guided missiles, where there is still that very direct human control, albeit with some smart software stuff, moving towards weapons that move around, but that are more likely to attack humans and other
27:22
dual use things. Those weapons are far more risky. And they're risky in a way that I don't think militaries necessarily appreciate, because they haven't used these kinds of software before, and where they have, they often have had accidents. But there's this notion,
27:40
I mean, we live in an age of artificial intelligence and machine learning hype, and it's justifiable in some ways, because yes, it's gotten amazing and there are so many low stakes, low risk tasks where machine learning is a great answer. But there are areas where it's not. So when we think about it:
28:06
all of these definitions are fuzzy, but broadly speaking, AI is around decision-making and reasoning, optimization, playing games, finding routes. These are great applications because you can fully game out a game. You can simulate it repeatedly and train your
28:22
system. So you can do these large searches of potential solution spaces and you end up with things that seem magical. But these things don't apply to the real world, because I cannot simulate the real world repeatedly in perfect detail. So I think that the myth of the AI
28:43
super strategist weapon is just wrong, right? But then we have machine learning, which is the automated analysis of data based on statistical analysis of data sets. So machine vision, categorization, and identification. This is, you know, the Kargu and the SkyNet that we talked about earlier. And here's how they would fit
29:03
into weapons, right? So an autonomous weapon roughly has this kind of logical structure. You've got some sensor data coming in. You've got some configuration: what area should I patrol? What sort of targets am I looking for? That kind of thing. We process the sensor data.
29:21
That's very possibly some sort of machine learning thing that is going to attempt to take that sensor data and turn it into something that the decision-making part of your software can work with. So here you've got your autonomous weapons system logic. And yes, I know that the acronym collision with Amazon Web Services is unfortunate, but it is what it is.
29:44
So your AWS logic is going to see: okay, have I got a valid target? What are my goals? Does targeting this target meet those goals? What are my constraints? Have I met my constraints? So all this kind of decision-making stuff, and based on that, you're going to have your next action, which is going to be
30:03
attack or, you know, continue your patrol looking for more targets, right? So if you've got the Harpy, this is the anti-radiation missile that we looked at, it's going to look something like this. There isn't really a lot of
30:22
AI or machine learning special sauce in here. This is fairly predictable stuff, so you don't have a lot of non-explainable black boxes here. That doesn't mean there isn't potential to go wrong, because there is. We could misidentify signals and decide that some super powerful wifi router is a military radar and take out a school,
30:41
that kind of thing. But there aren't machine learning black boxes. And the critical thing here is we've sort of solved the proportionality and distinction targeting problems by saying, okay, all military radar is a valid target and it's in scope. So this is the benefit of not building these systems to attack these very,
31:02
very gray area kind of dual use targets. But an autonomous weapon that is designed to, say, target people or dual use objects likely has a lot more of this kind of machine learning special sauce in it, right? So again, your sensor data could be phone signals and camera,
31:24
if it's the Kargu. You're going to process the sensor data. Have I matched a target? Now you've got to start computing the probability of, you know, is it who you think it is? Compute the risk of, is it an imminent threat? A lot of the time you can only target people
31:42
if they're considered to be an imminent threat, at least according to a lot of the countries who do target people as individuals. And you check proportionality: how many people do I think will be affected if I make this attack? All of these things are very gray areas, with a lot of risk of getting it wrong.
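As a very rough sketch of the contrast being drawn here, the two styles of targeting loop might look something like this. All of the names, thresholds, and estimates are invented for illustration; the point is that every quantity the dual-use version has to estimate is exactly the kind of gray-area judgment just described.

```python
# Very rough sketch of the two styles of targeting loop described above.
# Every name, threshold and stub here is invented for illustration; the
# "estimates" stand in for the machine-learning special sauce.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Candidate:
    identity_confidence: float     # ML guess: is it who we think it is?
    imminent_threat_score: float   # ML/heuristic guess: is it an imminent threat?
    expected_civilian_harm: int    # proportionality guess: civilians likely affected


def harpy_style_step(emission_matches_radar: bool, inside_patrol_area: bool) -> str:
    # Narrow, sensor-defined target class: "is this a military radar in my area?"
    # Distinction is largely settled up front by restricting the target class.
    if emission_matches_radar and inside_patrol_area:
        return "attack"
    return "keep_patrolling"


def dual_use_style_step(candidate: Optional[Candidate],
                        identity_threshold: float = 0.9,
                        threat_threshold: float = 0.8,
                        harm_budget: int = 0) -> str:
    # Dual-use targeting: every quantity below comes out of some model or
    # heuristic, and every one is exactly the gray-area judgment described above.
    if candidate is None:
        return "keep_patrolling"
    if (candidate.identity_confidence > identity_threshold
            and candidate.imminent_threat_score > threat_threshold
            and candidate.expected_civilian_harm <= harm_budget):
        return "attack"
    return "keep_patrolling"


# Small changes in the estimated quantities flip the outcome, which is the
# predictability problem in miniature.
print(harpy_style_step(emission_matches_radar=True, inside_patrol_area=True))  # attack
print(dual_use_style_step(Candidate(0.91, 0.85, 0)))  # attack
print(dual_use_style_step(Candidate(0.89, 0.85, 0)))  # keep_patrolling
```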
32:00
So, just to talk a little bit about the philosophy of AI and machine learning: a lot of people say, okay, well, there's going to be progress here. AI and machine learning are going to get smart enough that these systems will be really well able to carry out the intent of the commander.
32:24
And the problem here is that means they have to be basically as smart as the commander. That means this is essentially equivalent to saying that there has to be artificial general intelligence here. A lot of people think that that's not going to happen. So here's,
32:42
at least not with the ways that we're currently approaching it. Herbert wrote a very good book on this. He basically says, this stuff is great when you have, you know, fairly well structured problems, but it is not good on less structured problems.
33:00
If you read any of the books on military targeting, particularly around dual use subjects, they will tell you that it's extremely difficult and an extremely gray area. They do not have a fully defined process for doing this. They will tell you that they have criteria and a whole process, but there's a lot of judgment involved. Inevitably, when we get into these complicated
33:24
systems, there's going to be machine learning perception involved. So whether it's machine vision or other types of things. Now we're contending with all sorts of things. If you have a combat battlefield, you have weather variations, you have smoke, you have a lot of stuff going on.
33:45
And you have potential for adversarial attacks. You know, people have, you know, long figured out how to change road signs so that autonomous vehicles will be fooled. Even very basic things like tracking an object that goes behind something else
34:02
is still quite a challenge in machine learning. And it's well known that trying to use an ML-based system in conditions other than what it was trained in yields unpredictable results. Now, the problem with a weapon is that, you know, your battlefield could be anywhere in the
34:21
world, any time of the year, any weather conditions, any sorts of human behavior. The environment just varies incredibly wildly. Even if I went out and trained my machine learning weapon in a particular place now, that doesn't mean that the same conditions are
34:41
going to apply next December if I deploy that weapon there. And I think that's a huge problem. I think it's a bigger problem than militaries think it is. And I often look at the progress in autonomous driving as a sort of a guideline to this. I mean, the amount of money and
35:00
engineering time that's been poured into that, and there are still quite a lot of problems with it. And there will never be nearly so much money and engineering time put into autonomous weapons. So I think there are problems there. Erik J. Larson wrote a book called The Myth of Artificial Intelligence,
35:25
Why Computers Can't Think the Way We Do. Basically, he says that machine learning is a form of induction. Bertrand Russell talked about the inductivist turkey, who thinks that every morning the farmer comes out and feeds him, and the farmer does, up until Christmas Eve morning, when he kills the turkey.
35:42
The turkey is surprised because the turkey doesn't know, it doesn't understand the world, it doesn't try to sort of model it the way a human being would. And that doesn't mean that we can't be surprised, but we do have more capacity to actually understand the way the world works and to predict it. A great example of that was this gentleman here. This is Stanislav Petrov,
36:05
and for anyone here who was alive in 1983, he may have saved your life, and in fact all of our lives, because he was on duty in a Soviet nuclear missile bunker. And he had an automated alert come in that said the US has launched five missiles. And he said,
36:23
no. Well, I mean, his job at this point was basically to say, okay, yep, deploy the counter-attack. That was his job. And he didn't do it. He instead said, there is no way that if the US is attacking us, they have sent just five missiles. This is probably wrong. And so he declined to fire those missiles and pretty much saved the
36:42
world. Now, would a computer have been able to reason through that sort of problem? There's a proposal that we could build autonomous weapons with an ethical governor. And this is sort of a metaphor for the governor on a steam engine, back in the days of Newton and
37:01
Watt. It basically stops it exploding. The problem with this metaphor is that when you're talking about a weapon system, the operator wants the weapon to make attacks. The operator of a steam engine does not want their steam engine to explode. The incentives are not aligned. There's also a bunch of other stuff. So, there's the thing called the
37:24
frame problem, which basically is the problem of deciding what is relevant to any given decision. It turns out that we are pretty good at this. We're pretty much built to solve the frame problem. Computers, we have not figured it out yet. And any real ethical governor,
37:41
and this has never been built, they've only been proposed with toy solutions, any real ethical governor would have to solve a huge amount of very complex stuff. If we did it based on rules, there's a phenomenon called rules explosion, where once you get past a certain number of rules in a rules-engine-based system, it becomes unmaintainable, because the rules interact in unpredictable, complex ways.
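The rules-explosion point can be illustrated with a couple of lines of combinatorics; this is just arithmetic about how interactions scale, not an implementation of any proposed governor.

```python
# Illustrative combinatorics only, not an implementation of any proposed
# governor: the number of potential rule interactions you would need to audit
# grows much faster than the number of rules.
from math import comb

for n_rules in (10, 50, 200, 1000):
    pairs = comb(n_rules, 2)    # rule-vs-rule conflicts to consider
    triples = comb(n_rules, 3)  # three-way interactions
    print(f"{n_rules:>5} rules -> {pairs:>7,} pairs, {triples:>12,} triples")
# By a few hundred rules you are already auditing millions of possible
# interactions, which is why large rules engines become unmaintainable.
```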
38:08
Putting a human in the loop can't solve the problem either. First off, militaries want to use these systems in places where they don't have communications, so human control is unfeasible.
38:20
And then there's a problem called automation bias. This basically means that anywhere where we try to automate part of a task and sort of have humans supervise the robots, we've always to date failed because human beings are bad at this. If the automated system is doing a pretty decent job, we tend to just sort of zone out and let
38:42
it do the thing. And this is exactly why people keep driving their Teslas into the back of trucks. You know, we risk just people becoming button pushers, and that's not effective supervision. Stock market flash crashes caused by trading bots. This is a phenomenon based on sort of
39:01
emergent behavior in complex systems where you have multiple things interacting. If we have autonomous weapons interacting with the world, with each other, with people, that is a complex system. We risk flash wars, because in a given context, if one weapon decides to attack incorrectly, then probably all the weapons that you have in that area
39:23
will decide to do that. In a stock market, we can put in circuit breakers to suspend trading when we see a flash crash. In a communication jammed battlefield, there is no way to do that. There is no circuit breaker for the real world. There's no feedback loop or no effective feedback
39:44
loop for autonomous weapons. If you're AWS or Google, you don't deploy these things with no feedback. But militaries are very bad at getting feedback on how their weapons are performing. So I'm out of time, so I'm just going to skip over this. I'm going to say, robot wars, autonomous weapons don't mean robot wars with no human suffering.
40:04
It's not this. It's robots attacking people and critical infrastructure like water plants and electricity plants that have a dramatic impact on human lives. If I've convinced you, here is a place where you can go to take action. Thank you very much.
40:26
Thanks, Laura. I hope you enjoyed that. So what I'm going to do is we're going to have Q&A now. Is that okay? It's only a few minutes. Do we have any remote? No? Okay. So if anyone
40:45
has any questions, please come up to the mic. You can have the floor and ask Laura a question. Now, if we refrain from developing this technology, will not dictators
41:07
do it and have an advantage? And how would we solve that problem? That's an excellent question. So if we refrain from developing this technology and dictators develop it, well, first off, I think it is better for the world if we do not have
41:25
the big arms manufacturers developing off-the-shelf highly capable autonomous weapons. It's going to be very, very difficult to stop somebody developing small jury-rigged autonomous weapons, absolutely. But that doesn't mean that we can't stop the big arms manufacturers
41:42
building them and selling them. Then secondly, I think bringing in a legal prohibition also acts as a sort of a moral brake on it, if we can build a moral consensus against this. Most dictators, not all, do not use things like chemical weapons.
42:00
There are several regimes in the world that have nuclear weapons and they haven't been used for the last 80 years. And that is largely because there is a strong moral consensus that these things are bad. Nothing is perfect. We don't refrain from making laws just because we think they'll sometimes be broken; all laws are broken sometimes. It's not perfect. But I think bringing in
42:26
international law that says there's actually an international consensus against this, it will at least deter dictators. I think we should have time for one more. Let's try one more and see how you get on with the answer.
42:42
Thank you for an amazing talk. That was really great and very stimulating. You mentioned that you were asked to talk more about technology than philosophy. I have a background in philosophy as well and my heart kind of sank when you said that because I sort of think, you know, we should all be thinking about these sorts of things. And philosophy,
43:01
I know, I know, I know, Vicky. But I'd be interested to know, what would you suggest to a group of programmers who might not be familiar with the field of philosophy and ethics? Where should they start? You mentioned The Good Place, which is awesome. What else should they watch or read? What else? That's a great question.
43:24
So in terms of war ethics, there's a really good book on just war ethics; I'd suggest reading that, I guess. Although just war ethics is not perfect, it's a place to start.
43:42
There are a lot of sort of intro ethics-y texts that'll tell you the difference between utilitarianism and, you know, other different, like virtue ethics, all this kind of stuff. I have never found one that was kind of super engaging. So I think my recommendation
44:00
remains The Good Place, sadly enough. Thanks, Laura. So thanks, Nicholas. So I think that's the end of this session. We have a coffee break right now. It's 10 o'clock, according to my phone. Yes, it is 10 o'clock. And so thank you again. And thank you, Laura.