Robot Ethics

Formal Metadata

Title: Robot Ethics
Number of Parts: 132
License: CC Attribution - ShareAlike 3.0 Germany. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.

Transcript: English (auto-generated)
Hi, welcome back. My name is Friedemann, and I have another confession to make. I love my electric shaver. And the best thing about it: it loves me back.
We seriously have a special thing going on. It works so perfectly on my facial hair that I think we belong together. Not right now, as you can obviously see, but you know, that's love: ups and downs, hard times. Still, there's an emotional bond between me and my shaver.
I think it's even more important to me than my cat, especially since I don't have a cat, because I'm highly allergic to their hair. But still, if I had a cat, it would be on about the same level as my shaver. If you have a cat, I think you like it, you love it. And cats are
protected by law. Legally, they are treated in Germany as things, but we do have a criminal offense called Tierquälerei, animal cruelty. You're not allowed to treat a cat badly or in a way inappropriate to its species. You must not torture it, you must not hurt your cat, or anyone's cat, and you must not kill it. So cats do have basic rights, but my shaver obviously doesn't. And I hate to see my precious little companion being treated like a profane
thing. I mean, it is a thing, but still, it's my shaver. So should my shaver, your toaster, your iPhone, or any gadget you like or even love, be treated like a cat? Who decides these kinds of things, and how? Are projected emotions a sufficient requirement for, I have to quote this
one, an extension of limited legal rights to robotic companions, analogous to animal abuse laws? Especially when social robots, for example, as Johannes Glaeske pointed out this morning, take over more and more of our tasks and become part of our daily routine, our daily life,
and maybe attached to our hearts by that. Should we treat these robots like shavers, like cats, or even like human beings? These difficult questions are the topic of our next speaker. Please welcome Kate Darling, who is an IP research specialist at the MIT Media Lab
and a PhD candidate in intellectual property and law and economics. Welcome, Kate. Well, thank you, Friedemann, for giving my whole talk. I'm really excited to be here.
I know everyone says that, but I'm really, really excited to be here.
I do intellectual property research too, which is great and fun, but this is what I talk to people about at parties or in bars or whenever I've had too much to drink, anyone who will listen to me. So the fact that I get to be here today
with all of you is fantastic, so thank you. Robot ethics. I just started dating someone, and he's gradually been introducing me to all
of his friends and his work colleagues, and inevitably they ask me what I do and what I'm interested in. And I give him so much credit for not visibly cringing when this happens, because when people are first confronted with the idea of, or even just the term,
robot ethics, they normally have one of two reactions. A lot of people are like, ethics? Robot ethics? Robots are machines. They're machines that we build. We make them do what we want them to do. What does that have to do with ethics? Are you arguing that my toaster
should have the right to live or the right to marry my vacuum cleaner? This doesn't make sense. And then I have maybe a minute to convince them that no, that's not what I'm talking about. And then there's other people who will be like, oh my god, I know exactly what you're
talking about. Have you read I, Robot? Oh, of course you've read I, Robot. And oh, Battlestar Galactica, man. And Blade Runner is my favorite movie. And the frakking robots, man, they're going to be just like us, and they're going to become conscious, and they're going to want their rights. And humans are going to discriminate against them. And I'm totally on
your side. That's awesome. And while I'm immensely more sympathetic to this group of people, because I'm one of them, I'm a sci-fi nerd myself, in this case, I also then have like maybe a minute to convince them that all of this stuff is not what I'm talking about either.
I'm sorry, but the scenario in which robots are on par with humans and becoming conscious is something that belongs to the far future. I work with a lot of roboticists, and I'm close to a lot of robotic technology. And unfortunately, at the current stage of robotics,
robots really aren't even on par with insects yet. And that's not going to change in a fundamental, meaningful way anytime soon. So this is not what robot ethics as a current
discipline is concerned with, because once we are facing this type of issue, well, we don't know when that's going to happen. We don't know in what context that's going to happen. We don't know what technologies we're going to be dealing with. We don't know what social norms or legal norms we're going to be dealing with at that time. I mean, it's entirely plausible that the discussion will first be shaped by other things, like the last talk that
we had in this room on cyborgs and augmented biology; that will probably come long before we have bottom-up humanoid intelligence. So what robot ethics as a current discipline is concerned with is what is happening right now
and what is going to happen over maybe, say, the next decade or so. And we tend to put the issues into three broad categories. So the first category is safety, responsibility, liability. What, who's responsible when something goes wrong?
And the reason this is relevant is because with increasingly autonomous technology, the chain of causality for accidents becomes longer and we might need to rethink where to assign responsibility in order to set the right incentives. The second category is privacy.
Privacy is, of course, a huge deal, you know, generally. It's not restricted to robotics, but robotic technology does, you know, introduce new ways of collecting data, storing data. And you guys saw the drone flying around outside yesterday. So this raises issues of data
security, it raises issues of surveillance, etc. And the third category, which is the thing that I'm currently most interested in, is the ethics of our social interactions with robots. And this is what I'd like to talk a little bit more about with you today. And I would like to highlight one aspect of this in particular, which is our tendency to project
lifelike qualities onto robotic objects. And the reason I think this is relevant or worth thinking about right now is because we're seeing now and, you know, within the next 10 years, a massive increase of robots entering into our lives, our homes, our schools, our hospitals,
and a lot of these robots are specifically designed to interact with us on a social level. And interestingly, studies are beginning to show that people tend to perceive and treat these robotic objects very differently than they do other objects like toasters. So people will
project onto these things, they will give them personality, they will develop emotional bonds with them, and the robots will make them feel feelings like guilt, etc. And so people have been looking at this. For instance, MIT's Sherry Turkle is a psychologist,
and she's done a lot of work on human-robot interaction and has discovered that people will bond to robots, and they will bond to them surprisingly strongly. Now, you can say, so what? This is nothing new. This is not restricted to robots. We know that people
fall in love with objects all the time. People fall in love with all sorts of things. People will even, you know, develop emotional relationships to virtual objects. This is a companion cube from the video game Portal. If you've never played Portal and you intend on playing it, which at this point you're probably
fooling yourself because it's been out for so long, but if you intend to play it, just you might want to close your ears for 15 seconds because I'm going to reveal a spoiler. The makers of this game were astonished to see that at the end of the game, at the last level, you're required to incinerate this companion cube that's been with you throughout the entire
game, and a lot of people couldn't do it. A lot of players would actually sacrifice themselves and lose the game rather than incinerate their cube. So yes, people develop emotional attachments to objects, even virtual things, all the time, but there's a spectrum. And for robots,
this effect tends to be stronger. Why? Well, there are three factors that we think play into this anthropomorphic tendency, and the first two factors are physical embodiment and a certain degree of autonomous behavior. And actually, even just a little bit of
autonomous behavior is enough. So this is a Roomba vacuum cleaner. The Roomba will vacuum your house by itself. It follows very simple algorithms. It doesn't distinguish between you and the couch. It will bump into everything. And yet people,
just because this thing is moving around on its own, tend to treat it more like a creature than they would an object. They'll name it. They'll interact with it, or try to. It's kind of ridiculous, really, but it happens.
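To make concrete how little intelligence is involved, here is a minimal sketch of that kind of bump-and-turn behavior. It illustrates the class of algorithm described, not iRobot's actual firmware, and the `robot` driver object and its methods are hypothetical:

```python
import random
import time

def bump_and_turn_vacuum(robot, run_seconds=1800):
    """Naive reactive coverage: no map, no memory, no object recognition.

    `robot` is a hypothetical driver exposing drive(speed),
    bumper_pressed(), reverse(distance_cm) and turn(degrees).
    """
    end = time.time() + run_seconds
    while time.time() < end:
        robot.drive(speed=0.3)            # head forward blindly
        if robot.bumper_pressed():        # couch, wall, or your foot: all the same
            robot.reverse(distance_cm=5)
            robot.turn(degrees=random.uniform(90, 270))  # random new heading
```

Nothing in that loop distinguishes a person from a couch, which is exactly the point: the creature-like impression is supplied entirely by the human watching it.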
And once you increase the autonomous behavior of the object, things get worse. So it's well known by now that military teams will often bond very strongly to the robots that are interacting with their teams on the battlefield or in exercises, and the soldiers will get very attached and upset when the robots get hurt or "die". Actually, there's a story. When the military was testing a new robot that defused landmines and was shaped like a stick insect, it had six legs, and it would walk around a minefield, and every time it stepped on a mine, one of the
legs would blow up, and it would just continue on the remaining legs. And when they were testing this, the commander in charge of the exercise ended up calling it off. He said, we can't do this, we can't use this robot, because this is inhumane; he couldn't stand the sight of this crippled object dragging itself along on its remaining legs.
Now, these are objects that aren't, you know, specifically designed to evoke that reaction in people, which brings us to the third factor. If you have robots that are specifically trying to push your emotional buttons, this effect becomes stronger. Social robots
aren't just cute and adorable. They also mimic certain cues and certain behaviors and sounds and expressions that we automatically, and sometimes even subconsciously, associate with certain emotions. This is a Pleo dinosaur. It's a robotic toy that
hit the market in about 2006, which means it's already outdated technology, but still, it's pretty sophisticated and it's affordable. It'll react when you touch it, and it'll
behave very unpredictably, which really lends itself to projection. And, oh yeah, it will get upset if you hold it up by the tail, because it can recognize where it is in space.
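Pleo's firmware is proprietary, but a tail-hold reaction like this only requires an orientation reading. A hypothetical sketch of the mechanism, with a made-up sensor API and sound file:

```python
import time

def distress_when_inverted(accel, speaker, hold_ms=500):
    """Play a distress cue when the toy hangs upside down.

    `accel.read_z()` is assumed to return gravity along the toy's
    vertical axis in g (about +1.0 upright, about -1.0 held by the tail);
    `speaker.play()` is a hypothetical sound call.
    """
    inverted_since = None
    while True:
        if accel.read_z() < -0.5:                    # toy is upside down
            inverted_since = inverted_since or time.monotonic()
            if (time.monotonic() - inverted_since) * 1000 > hold_ms:
                speaker.play("whimper.wav")          # the emotional cue itself
        else:
            inverted_since = None                    # upright again: calm down
        time.sleep(0.05)                             # poll at ~20 Hz
```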
So, I was at a conference in Geneva called LIFT a few months ago, and the co-organizer of the conference, Hannes Gassert, and I decided, because I was there to speak about something else, we decided spontaneously to do this workshop. We got four Pleos, and we had participants play with them and interact with them for about an hour. And then, at the end of the workshop, we asked them to torture and kill
them. And we were kind of astonished, because I had been expecting some pushback, or at least some discussion, but we were surprised at how hard it turned out to be for all of the participants to even strike the thing. We ended up having to
play mind games with them. We had to be like, okay, we're going to kill all of the robots if someone doesn't step up to the plate and kill one of them. And finally, at the end, this poor guy, after a lot of hesitation, takes this hatchet that we have and axes the thing. And you can see here on the right, the room was silent for
a few seconds. Afterwards, we had this great discussion about, what's the difference? Why is it so hard to axe this thing, as opposed to just a simple toaster? And so, that was really great. And over the next year, I would really like to
do more of these workshops, but do them as controlled experiments, where we're seeing what's really going on and trying to measure effects. And I'm not the only person interested in this. Just a few weeks ago, three German researchers made public that they'd done a study where they showed participants videos of Pleos being tortured and Pleos being
treated nicely. They contrasted that with videos of humans, and they measured people's emotional reactions, and found that people have a lot of empathy towards Pleos. And so a lot of people sent me the press on this study, and some of them just kind of assumed that I was associated
with it in some way. And I'm not. I've never heard of these people before in my life, and I really, really hope that they've never heard of me either. Because what this proves is that people are observing this independently and deeming it a worthy subject of study
and trying to figure out what's going on here. And it proves that I'm not crazy. We respond to social cues from these lifelike machines, and we respond to them even if we know that it's just a robot, even if we know they aren't real. Now, why are we even talking about this? Who cares? Why does this matter?
Well, a lot of people who've studied this and looked at human-machine bonding argue that this is a bad thing. They say this should be prevented, because after all, robots are not alive. We shouldn't be treating them as if they're alive.
That's unhealthy. And I don't know how I feel about that. I mean, first of all, I think, how are you going to prevent this? Even, you know, even if not all of it is automatic, like people obviously really like to anthropomorphize these things. And, you know,
good luck trying to get toy companies to stop making socially coercive toys. That's going to take a hell of a lot of regulation. But much more importantly, you know, where I work at the Media Lab and a lot of other places, what I'm seeing is a lot of useful and really, really
great applications of this socially coercive technology. I mean, just in health or in education, we're seeing all of these emerging uses that rely specifically on this bonding effect. We now have next-generation robots that have just started working with autistic children. We have a robotic therapeutic seal from Japan that they use in nursing homes for dementia patients.
And all of this is, you know, really great. And, you know, even if we could prevent this, do we really want to give up all of this potential that's here? I didn't really like that idea. So about a year ago, I came up with kind of a playful alternative solution to this problem,
which is to say that, you know, if the line between alive and lifelike is becoming blurry inside of us, maybe we should embrace that. Maybe we should say, okay, social robots are objects, but they're special objects. We perceive them differently. Maybe we should treat them differently. And I actually proposed that, you know, we could extend some sort of legal protection
or could want to do so to social robots to, you know, protect them from abuse or torture, the same way we extend legal protections to animals. And the more I thought about this, you know, the more I thought it actually made sense. I mean, if you think about animal abuse
laws for a second, why do we really protect animals? I mean, why do we really protect animals? Is it because they feel pain, really? Because if that's the case, why are we so eager to protect
certain animals and not others? I mean, you see it in our laws. We have a completely discriminatory differential treatment of animals and also across cultures and in our societies. And I think, like, the main reason that we have animal abuse protection is because we feel very strongly about protecting things that we relate to, that we project
ourselves onto, that are responding in a way that we associate with ourselves and our own feelings. And so, I mean, you might say, okay, yes, maybe that's the reason we protect animals. But animals do actually feel pain. And robots actually don't feel pain. And we know this.
So isn't it kind of ridiculous to extend some sort of legal protection? Yeah, maybe. I can still think of two reasons why it could be a good idea. The first reason is that we have increasing parts of our society who have trouble distinguishing
and trouble knowing that a robot doesn't actually feel pain. So, I mean, you and I know this, that it's just a robot. Does your four-year-old know this? Does your seven-year-old know this? Does your 75-year-old grandmother know this? And it's getting hard to, you know,
educate these people as to the difference. But more importantly, I think there's a second reason. And the second reason that we might want to think about extending protection to social robots is to discourage behavior that could be harmful in other contexts. So the Kantian argument for animal rights was never about the animals themselves. It was always
about our own humanity. Kant says that we can judge the heart of people by how they treat animals. And he says that people who are cruel to animals, you know, become hard also when they're dealing with other people. And, you know, if it bothers us so much to torture a pleo,
maybe we just shouldn't be doing it. Maybe doing it involves turning off this part of ourselves that feels that discomfort. And maybe we don't want to turn off that part of ourselves. I mean, studies have shown incredible linkage between, you know, household cases of
animal abuse, child abuse, domestic abuse. So behaviors tend to translate. And this might also translate to these objects. So this isn't really about protecting the objects. This is about protecting our societal values and protecting ourselves. I said that I wouldn't
be talking about science fiction in the far future. But I will just leave you with this one thought. It could be, and probably will be, I think, that the issues in these
science fiction movies and stories, the issues of rights for robots and the right to live, come about not at all because of the technologies that we develop. It might have far, far less to do with the sophistication of the AI that we create, and much, much more to do with our relationships with these objects and the
role that robots play in our society. Because, you know, even though sometimes it's hard for us to admit, I do think that at the end, it's all about us. Thank you.
Hello, I'm Katharina. I'm the new Friedemann, and we'll do a little question and answer now. I'm really happy to welcome back on stage Moon Ribas and Neil Harbisson for that, because
I'm part of the program team, and we were really excited that you three agreed to join re:publica, and we think that maybe there's a link between the two sessions. First I will take some questions from the audience. Hi, welcome back on stage.
And so I'll leave, and you three stay on stage, and then I'll ask the questions from below.
I know someone who wants... Hi, thank you for the most inspiring presentations I've seen so far. I have a question for the three of you, but I think it's mostly directed to Neil.
Here's my question. I don't like sports, but I know why people like it, right? I mean, there are practices where we test the limits of our species, and taking steroids in sports is not legal, and the only reason it's not legal is because we
tend to put an emphasis on what humans themselves do. And although I'm really fascinated by the idea of augmenting our bodies, I'm asking you a genuine question, because I really don't know the answer: wouldn't going down the path of changing ourselves
create two groups? Group A, like you before you had your implant, when you wanted to be like the others, and Moon, who wanted to be like you once you had an augmented sense. Would it create citizens of class A and class B, with a scale and an economy where only a certain
group will be allowed to expand their senses? So my question is: going down the path of putting the media in the body, don't we break, or don't we question, the fundamental dimension of what it means to be human? I think the question is for the three of you.
Well, I think there's always been, in history, a differentiation between people that have access to technology, or to tools, and people that don't. This is something that has happened throughout history. The difference is that using technology
as part of the body is actually more accessible to the whole world, because when you use technology as part of the body, you don't need an external element, you don't need an external tool. You are the tool, and you just need very simple technology to apply it to your body. Many people think that becoming a cyborg is something that only the
first world can do, only people with a lot of money or resources, but actually becoming a cyborg is cheaper than buying an iPhone, because you can really use a very small sensor, an infrared sensor that vibrates when there's movement, and attach it at the back of your head, and this will actually extend your senses. This sensor costs maybe five euros, and it can
actually be attached to your body for much, much less money than much of the technology that we now have access to.
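For a sense of how simple the build Neil describes can be, here is a MicroPython-style sketch of a motion sensor driving a vibration motor. The pin numbers and wiring are assumptions for illustration, not Harbisson's actual design:

```python
# MicroPython sketch: PIR motion sensor -> vibration motor
from machine import Pin
import time

pir = Pin(14, Pin.IN)        # passive-infrared motion sensor output (hypothetical pin)
motor = Pin(27, Pin.OUT)     # small vibration motor, driven via a transistor

while True:
    motor.value(pir.value()) # vibrate exactly while motion is detected
    time.sleep_ms(20)        # polling interval / light debounce
```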
So I think that technology will be much cheaper if we don't need external tools, and it will allow us all to experience things in a different way. I don't think it will make a difference. There's always been this difference between who has access to tools and who doesn't, who has access to education and who doesn't. It's just a different way of using technology.
Yeah, and I think the same. I think it's more your own way to it, whether you want to grow and you want to experience, that makes people different, not the classes, because as Neil said, there's always been this type of differentiation.
I mean, I think that answers the question. As technology gets cheaper, it will reduce that discrepancy, I guess. I don't know, they're the experts on that. Okay, we've got more questions in the front row.
Hi, thank you for the talk. I've got a couple of practical questions, like,
do you switch it off often? Like, when you sleep, you switch it off, right?
No, there's no on-off switch. If there's no color, it won't sound, so when I go to bed, there's usually no color, there's no light. If there's total darkness, there's no sound. Or I can actually cover it, and then there's no sound. But there's no on-off. The good thing about
sleeping with the eye is that if I leave the window open, in the morning it can work as an alarm clock, because if there's color on the wall, then in the morning, when the sun comes out, I can hear the wall.
Okay, and what part do you actually... can I... what part do you actually see? Because she wants the microphone, I'll ask the other ones too. Have you had people who use your device who actually have the color sense?
We've tried it just for a while, but I look forward to doing it for a longer period of time, where people that see color hear color, because I'm sure this would
be like a kind of psychedelic effect. It would be a good substitute for drugs, maybe, because if you see color and you hear color for a long period of time, I'm sure that then you'll start seeing color when you hear a noise, and then it can create a new experience. No, we still
haven't done this, and it's one of the things that I really look forward to doing with a large group of people.
I volunteer. Yeah, me too.
Because then the interesting thing would be to have people in different parts of the world keep a diary during the same six weeks and see what happens.
A question to Kate, please. Don't you think that these two cases, like Neil and Moon, and you, are on different extremes of the spectrum? Because Neil and Moon are about exploring, extending knowledge in a tradition of empowerment, but to accept your cognitive limits, so that you go back to animism, what we had 10,000 years ago, when we made victims, we made sacrifices, just because we thought the gods were angry: isn't it a fall back into irrationalism, and even reactionary rather than progressive, to take for granted that robots may be accepted just because there are trials, and people with minor intellectual capacity who think that they feel pain? Isn't this reactionary?
That's a very good and provocative question. You can see it two ways, though. You could also see it as enhancing our natural capacity for empathy, and trying to create technologies that play with that more, and that give us kind of an experience as well. I don't know. So you're arguing that it's conservative to say, well, we should embrace this empathy that we have for robots, because it's something primitive. Is that your argument?
We had this before the Enlightenment, and enlightenment means to increase knowledge. And I accept belief systems, which I do for religions, but not for every religion, like animism: I like people who apologize before they cut down a tree, but I don't believe that the tree is a living being I have to apologize to. So we have to see: isn't this a slippery road downwards, to bring animism back, to project some kind of soul into a dead being like a robot, just because our brain reacts empathically, the way our brain reacts to optical illusions and so on?
So I would say the difference here is that we actually have control over this technology, so we have control over this bias, and like I was mentioning, there are a lot of really great things that we can use it for. I don't know if you're familiar with the field of behavioral economics, for instance, which also takes people's behavioral biases and tries to set incentives to improve their well-being and improve society and make people more compliant with certain rules. I don't feel like it's necessarily a bad thing to play off people's natural tendencies, as long as you're controlling it and you're using it for something that's actually socially desirable. I feel like undermining these natural tendencies too much is kind of killing off a part of ourselves, and I don't know if we really know what that would do to us in other contexts. So I'd think about it that way, but that is a very interesting thought. There are more questions.
In case I would not hesitate to torture a robot: does this make me a bad person? I mean, I don't know why I should torture a robot, but if I had been asked in your experiment, I don't think I would have hesitated as much as you describe people did.
Well, I was just like you. When I bought my first Pleo, I was fascinated by the technology, by what it did, and I would play around with it
and test its limits and stuff. And then I started showing it off to my friends, and I would say, hold it up by the tail and see what it does, and my friends would hold it up by the tail, and gradually this started to upset me. And I was flummoxed by this, because I have no maternal instinct whatsoever; I can't even take care of plants.
So the fact that this was making me uncomfortable, I thought, was really, really interesting. So it could be that state-of-the-art technology doesn't push your buttons, or that you're on the outside of the general norm of people, or whatever, but I would say, give it a few years until this technology is
developed a little more, and maybe try it and interact with robots. I'm pretty sure that the majority of people do have some sort of reaction to it.
I can completely imagine what you just said, but how much of it is maybe because that thing is probably expensive, and you don't want to just destroy it? Like, you wouldn't throw your iPhone off a 10-story building just for the fun of it.
Yeah, so the Pleo, back when I bought it, was like 500 bucks; I think now they're around 300 bucks. And so, I taught
a class on robot ethics at Harvard with Lawrence Lessig, and I convinced him to get a Pleo for his children, and he came back to me a few months later and was like, so, Kate, I think you're onto something, because when my children were playing with this robot,
my three-year-old would go and try to kick it, and I would intervene really energetically, like automatically. And the reason I was intervening, I realized, wasn't just because this is an expensive thing; it was because I needed to stop my child from
behaving this way, because otherwise the child is going to kick the cat, or another kid, or whatever. So there does seem to be a slight difference, and not just in children's behavior, also in my own reaction, or the reactions of the participants in my workshop. The Pleos there were Pleos that we had bought with the purpose of
destroying them, so it wasn't about the price of the technology, and people knew that. I think it's a little more than that.
I understand everything you say about empathy, and it has lots to do with, for example, the love that we sometimes feel for babies, like the big-eyes
scheme and all that. But you just mentioned the ethics: what is actually the idea behind a kind of robot rights charter, where we define how they are going to be treated
or not, what is allowed, how many of them you can have? Like, you're not supposed to have 10 cats in one apartment, for example. What is your idea about ethics, in a concrete example?
Yeah, so I think that a lot of your question can be answered through analogies to animal abuse laws. A lot of animal abuse laws could apply in this context. Of course, we have to define what a social robot is, and that's somewhat hard. We could say it has to be an embodied object, it has to display a certain degree of autonomous behavior,
according to a definition set by robotics, and it has to be specifically designed to have these emotional cues. It's not a perfect definition, obviously, but the law deals with that all the time; we have to draw an arbitrary line. And then the abuse would just be analogous to what makes us feel uncomfortable.
You're not allowed to set cats on fire; you probably wouldn't be allowed to set Pleos on fire. I feel like we can draw a lot of parallels there. Right now it's obviously not going to happen, but at some point society could push for it, and when society does push for it, it'll be on the part of
legislators to figure out the details of the law. I have some ideas; I don't think it'll be that hard.
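Just to make the shape of that three-prong test concrete, it could be sketched like this; the autonomy threshold is a made-up stand-in for the arbitrary line Kate says the law would have to draw:

```python
from dataclasses import dataclass

@dataclass
class Device:
    is_embodied: bool          # a physical object, not an on-screen character
    autonomy_level: float      # 0.0 fully scripted .. 1.0 fully autonomous
    has_emotional_cues: bool   # designed to mimic social signals

def is_protected_social_robot(d: Device, autonomy_floor: float = 0.3) -> bool:
    """Three-prong test sketched from the talk; the floor is an arbitrary line."""
    return d.is_embodied and d.autonomy_level >= autonomy_floor and d.has_emotional_cues

# A Pleo-like toy passes all three prongs; a toaster fails every one.
assert is_protected_social_robot(Device(True, 0.6, True))
assert not is_protected_social_robot(Device(True, 0.0, False))
```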
Is there another question for Neil and Moon? Okay. Neil, you said that you encountered some slightly hostile reactions at times, because people are not used to seeing cyborgs. Given what we heard from Kate, that basically the more human-like something looks, the more empathy people react to it with, do you think it makes sense to make cyborg-like
parts of your body look slightly more natural? Or does it make sense, for example with Google Glass, to actually have something display the technology that is inherent in it?
I mean, I thought about having something more like an eye, that would look more like an eye, and I even have an eyelid that I could blink with my eyeborg. But I thought that I will just develop it to be as comfortable as it is; the more comfortable, the better. I just think that we will gradually get used to seeing new body parts, which don't need to be human-like,
because if humans don't have these body parts, then we just have to get used to them. Having an antenna is normal for many animals, but for humans, no. I feel I have an antenna, I feel that the antenna is a part of my body, and I'm sure that in the next decades there'll
be more people using antennas for different reasons. Maybe there'll be people having tails to detect things that are behind them, and maybe we will gradually get used to accepting new body parts. This is something that just happens: the more people that do it, the more normal it will seem. I don't know if it needs to look like an animal or like a human;
I think it just depends on each person who wants to do it.
Also, a really fun fact there: the reason that these robots aren't made in the likeness of cats or dogs, but rather of baby dinosaurs or baby seals, is because if you make them too close to something that people will associate with a biological thing that they know very well, people get kind of freaked out, because they can see the direct comparison. So there's also that to think about.
I guess it really depends, because it's difficult not to generalize what people feel. I'm sure it's just so unusual that there might not be a general feeling yet about this subject. We'll take one last question over there.
Taking the idea of robots one step further, to virtual creatures: probably millions of virtual people and whatever are killed in video games every day. Is there a big difference between something physical and something virtual, with virtual creatures becoming more and more
human every day, or virtual artificial intelligences and stuff like that? If I take your ethics for robots, you can extend it to that. So what does that mean for all the people who kill in video games? Is that the same idea, or is there a big difference?
Oh, that's a great question. My intuition, and the intuition of a lot of people in robotics, is that the physical embodiment plays a huge role. But of course, we're lacking some good, hard evidence for this, and that's one of the reasons that
people are starting to do these studies, including myself. That's definitely something that we want to test. Are there any more questions? Okay.
Sorry, okay, so this one is for Neil. You basically have added a sense to your body, and your brain is accepting it as a natural sense, like hearing or any other sense.
And we have seen that people who are lacking senses, for instance blind people, have an increase in sensitivity in the other senses. Does it work in the opposite direction? So if you have one
sense more, maybe you have a compass belt, and, I don't know, you can feel currents with your magnetic finger, will your senses get dulled? If you have 10 senses, can you really use them all? Is there a danger?
I think the opposite happens. If you add senses, you actually awaken the other senses, and maybe create new connections between the senses. At least in my case, having an extra sense has not weakened my hearing, and has not weakened what I see; it has actually awakened my hearing and the way I see things, and it has also awakened my smell, because if I hear a sound, then I can
also remember the smell of something. So I feel that my other senses have actually been activated, and I feel it will happen with anyone else who adds a new sense, because all our senses seem to be just at a basic level, but when you concentrate on your
senses, your body doesn't only try to accept this new sense, it also becomes conscious of the other senses, so you kind of awaken them all.
Any additional... oh, okay,
then first... So, I have a question for you, Kate. I understand where you're coming from, that there's a social function for this empathy that we develop towards objects, but I want
to bring up this point: maybe making it into a law would hinder some educational purpose, the educational gains that we can get from stepping over those boundaries. Like in medicine, people take apart dead bodies just to discover how they work,
and the same thing applies to animals: we do some experiments with them just to get knowledge about certain medicine or stuff like that. And it's also true of robotics: we can take them apart, we can see how they work, and we may step over some internal
ethical impulses, but we know that we are really not harming any human being there.
Yeah, that's a really good point. So I just want to make clear, I don't think that it should be completely analogous to animals, right? I don't think that social robots
need the right to live, or to not be dissected, or something. It would have to be more tailored to which actions specifically make us really uncomfortable, and taking robots apart is something that would obviously need to be possible. But yeah, it's a good point that sometimes we need to step over our
discomfort, or our ethics, for educational purposes. I can't think right now of a really good application in this context where
some sort of abuse protection for robots would stand in the way of some really strong educational purpose, but there could be one, and it's definitely worth thinking about.
Okay, a question from the other side, maybe a question for Kate.
Did you do the experiment as well where the participants had to assemble the Pleos on their own, or did they receive the Pleo as if they had bought it in a shop?
And you ask because you think that if they had assembled it, and they knew fully how it worked, they wouldn't be as attached? Yeah, so that's a really interesting question. We didn't have them assemble them; they got them alive. They did take the dead one apart afterwards, and we're very interested in that.
But one thing that I've noticed where I work, and there are a lot of stories about this: even people who build robots will get attached, even to the robots that they themselves built, which is insane, but it happens. So Cynthia Breazeal is an MIT professor at the Media Lab, and for her doctoral work she developed this
robot called Kismet that is expressive and has emotions, and she said that leaving it behind when she left MIT after her doctoral work was completed left a hole in her heart, basically; it was like her baby. And the students who worked in the robotics
lab would have to turn it off when they were in the lab late at night, because it would kind of freak them out to have this alive thing there. But they all know exactly how it works. So it might reduce it, and that's definitely worth testing, and definitely something that I do want to look at, but I feel like it's still there anyway.
Okay, I've got another question here. Yes: don't you think that it's more important to develop a cyborg ethics? Because I think it's not far away that cyborgs will dominate the non-cyborgs, because of the many advantages.
So you think... I mean, yes, definitely. For me, those are two separate things, but if you ask me to weigh them, I think that this is going to become an issue very soon that society needs to deal with. That's definitely, totally important. Or is that your question?
No, my question is for the other guys. How do you feel about this kind of advantage which you have over other people? Do you feel maybe more power, or the urge to dominate them? And don't you think that it is also a danger if some other people, who are not artists,
may misuse that power?
We don't see it as a power. We don't compare ourselves with other humans; I would say we compare ourselves with other animals, and we compare our senses and perceptions with other animals. We are completely disabled, we have a really very, very low
perception of reality if you compare it with other animals. So we like to compare ourselves with animals, not with other humans, because we all have different perceptions, and we all have our own way of perceiving reality, and to us it's just more interesting to see ourselves
as part of the animal kingdom, and to see how we can extend our senses to the level of other animals, not to compete. Maybe it's true that some people will use it in a bad way, but you could also say this: if you have a knife, you could kill with a knife, but that doesn't mean that you can't have a knife to cut bread. So this cybernetics is something
that we all have, and we use it in the way that we all think we have to. There's always a way you can use it badly, but it will always be like this, so I don't think that there's much difference in this cybernetic world. I mean, bad people will
always find a way to do bad things.
Okay, another question from the front row. I have a question for Kate: how long does it take until people grow
emotionally attached to these robots? For example, you said that the people were allowed to play with them before. Have you done tests with people that have never seen them before, who go into the room and just do something to it, so that people don't have the time to form
an emotional attachment? I mean, with the game you mentioned, Portal, it's just a cube, a cube with hearts on it, and normally... there's nothing less like an animal than a cube, for example, and yet you grow attached to it over time. Have you looked at that in experiments, at what role it plays?
So, I haven't tested that. That is something I definitely, definitely want to test, because during this workshop that we did, obviously an hour was enough time for this specific robot. But when we were at first trying to get them to strike it and they wouldn't do it, a journalist who had
just walked into the room, we held out a robot to him, we were like, hey, hit this thing, and he was like, bam. So apparently there is a difference, if we're taking him as the control group. And yeah, I really want to test how long it takes and what's going on there, but that's an excellent point.
Okay, any more questions? Okay. Thank you. I was
wondering what the next research will be like, with other senses, like tactile senses, or even
paranormal abilities, for the colorful people on the right.
Paranormal? Where... I can't see you. Paranormal, like perceiving ghosts? Okay.
Well, I think it's already great that you're doing the visual and auditory combination of senses, but we have more senses, right? Like tactile senses, and also paranormal senses that some people have. Are you looking into
any more research?
Well, there's the possibility of having a sensor for the aura, or the energy, and then detecting the color of the energy with these cameras that can detect the color of the aura, and then I could actually hear the aura of people. This is an area, yeah, we could go this way, but I think we are more concentrated
not on paranormal senses, but on senses that we already know exist and work, and that animals are using in some way. And there are so, so many. Sometimes some
of these senses might seem paranormal, or impossible superpowers, but they actually exist, and that's what we really find exciting. But yeah, maybe I might be hearing auras, but then it's not paranormal, it's just auras. Or maybe there might be a kind of mistake
and I'll start perceiving something strange; we don't know.
Okay, three more questions and then we stop. Who was it? Okay.
Hi, my name is Steven. This is really interesting,
this idea of using senses, or exploring how to sense in new ways. I'm curious: does the camera average out a single point in space? Do you feel in any way that you are getting an average of something at any given time, as opposed to getting the whole?
Well, what I'm using gives me not the average but the dominant color in front of me. But you could actually add an eye tracker, for example, and then you could hear the colors that your eyes are looking at. So you could add an eye tracker, or you could divide the sensor in half so you could have stereo vision, so you could hear the dominant color
on the left and on the right. Or you could have it any way you would like. I like having just the dominant color in front of me, because I don't really want to know the color of things; I just want to have a perception of color, to have a sense of color.
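The pipeline Neil describes, dominant color in, tone out, is small enough to sketch. A rough illustration with NumPy; the frame format and the hue-to-hertz mapping are assumptions, and the real eyeborg uses Harbisson's own sonochromatic scale rather than this one:

```python
import colorsys
import numpy as np

def dominant_hue(frame_rgb: np.ndarray) -> float:
    """Most common hue in an (H, W, 3) uint8 RGB frame, in [0, 1).

    A histogram vote rather than a mean: a red mug on a grey desk
    reads as red instead of averaging out to a muddy brown.
    """
    pixels = frame_rgb.reshape(-1, 3)[::50] / 255.0   # subsample for speed
    hues = np.array([colorsys.rgb_to_hsv(*p)[0] for p in pixels])
    counts, edges = np.histogram(hues, bins=36, range=(0.0, 1.0))
    return float(edges[np.argmax(counts)])

def hue_to_frequency(hue: float, f_min: float = 120.0, f_max: float = 1400.0) -> float:
    """Map hue onto a log-spaced audio band (endpoints are made up)."""
    return f_min * (f_max / f_min) ** hue
```

Stereo, as Neil mentions, would just run the same function on the left and right halves of the frame; an eye tracker would instead crop around the gaze point.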
Okay, I have another question here. My question is for Kate, and my question is: to what extent, and I think you had a bit of this in your talk, in your third reason why you think robot ethics is important, to what extent are the results of your experiments showing something about the robot versus something about
human nature? And if it's speaking more about human nature, isn't that proving something about the human being rather than the robot? So is there a point in having robot ethics as a new discipline, or a new area of investigation, versus strengthening the body of ethics that relates to the human being, because this
thing is something which is coming from inside the person?
Absolutely. I think that at the end of the day, this is all about us. And actually, a lot of the research that's happening now, a lot of this development of personal robots that are social, is teaching us more about ourselves and how we respond to social cues, and indeed the ethics
questions are all about ourselves. The way it ties into robotics is that it raises questions of design. Robot ethics isn't just restricted to what I was talking about today; as I mentioned, there are all these issues of privacy, issues of liability, all these issues that, depending on how we feel about
them, may directly impact the design of robots, or the research that goes into developing robots. So that's kind of the connection, and my connection to it is also that we're increasingly designing these things that raise this ethical question.
But yes, you're absolutely right; I think it's all about humans in the end.
Okay, last question for Neil and Moon? Anyone? Okay, last questions for Kate? Okay.
You did one; we can have a beer later, I promise. Last question for Kate again? Okay. So I'll go back on stage. I had planned to ask some questions of my own, but I can
ask you later. It was very interesting. Thank you for being here, all of you, and have a nice evening. That was it.