
Why Nobody cares, and only You can save the World


Formal Metadata

Title
Why Nobody cares, and only You can save the World
Subtitle
Technology, Intuitions & Moral Expertise
Series Title
Number of Parts
102
Author
License
CC Attribution 4.0 International:
You may use and modify the work or its content for any legal purpose, and reproduce, distribute and make it publicly available in unchanged or changed form, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
This talk aims to provide a possible explanation for why most people seem to care very little about the unethicality of much of today’s technologies. It outlines what science and philosophy tell us about the biological and cultural evolutionary origins of (human) morality and ethics, introduces recent research in moral cognition and the importance of moral intuitions in human decision making, and discusses how these things relate to contemporary issues such as A(G)I, self-driving cars, sex robots, “surveillance capitalism”, the Snowden revelations and many more. Suggesting an “intuition void effect” that leads standard users to remain largely oblivious to the moral dimensions of many technologies, it identifies technologists as “learned moral experts” and emphasizes their responsibility to assume an active role in safeguarding the ethicality of today’s and future technologies.
Why is it that in a technological present full of unethical practices – from the “attention economy” to “surveillance capitalism”, “planned obsolescence”, DRM, and so on and so forth – so many appear to care so little? To attempt to answer this question, the presentation begins its argument with an introduction to our contemporary understanding of the origins of (human) morality and ethics: from computational approaches à la Axelrod’s Tit for Tat, Frans de Waal’s cucumber-throwing monkeys and Steven Pinker’s “The Better Angels of Our Nature”, to contemporary moral psychology and moral cognition and these fields’ work on moral intuitions. As research in the last couple of decades in these fields suggests, much, if not most, of (human) moral and ethical decision making appears to be based on moral intuitions rather than careful, rational reasoning. Joshua Greene likens this to the difference between the “point-and-shoot” mode and the manual mode of a digital camera. Jonathan Haidt uses a metaphorical elephant (moral intuition) and its rider (conscious deliberation) to emphasize the difference in weight. These intuitions are the result of both biological and cultural evolution – the former carrying most of the weight.
The problem with this basis for our moral decision making is, as this presentation will argue, that we have not (yet) had the time to evolve, both culturally and biologically, “appropriate” moral intuitions towards the technologies that surround us every day, resulting in a “moral intuition void” effect. And without initial moral intuitions in the face of a technological artifact, neither sentiment nor reason may be activated to pass judgment on its ethicality.
This perspective allows for some interesting conclusions. Firstly, technologists (i.e. hackers, engineers, programmers, etc.) who exhibit strong moral intuitions toward certain artifacts have to be understood as “learned moral experts”, whose ability to intuitively grasp the ethical dimensions of a certain technology is not shared by the majority of users. Secondly, users cannot be expected to possess an innate sense of “right and wrong” with regard to technologies. Thirdly, entities (such as for-profit corporations) need to be called out for making deliberate use of the “moral intuition void” effect.
All in all, this presentation aims to provide a tool for thinking that may be put to use in various cases and discussions. It formulates the ethical imperative for technologists to act upon their expertise-enabled moral intuitions, and calls for an active “memetic engineering process” to “intelligently design” appropriate, culturally learned societal intuitions and responses for our technological present and future.
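The abstract mentions computational approaches à la Axelrod's Tit for Tat as one origin story for cooperation and morality. As a rough illustration of what that refers to, here is a minimal sketch of an iterated Prisoner's Dilemma in Python; the strategy names, round count and printed comparison are illustrative assumptions, while the payoff values are the standard Axelrod tournament numbers.

COOPERATE, DEFECT = "C", "D"

# Standard Axelrod tournament payoffs: (player A points, player B points) per round.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then mirror the opponent's previous move.
    return COOPERATE if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # retaliation limits exploitation: (9, 14)
print(play(tit_for_tat, tit_for_tat))    # mutual cooperation pays best: (30, 30)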
Transcript: English (automatically generated)
As the title says, in this talk, we're going to look at two primary questions. The first one is why nobody cares, and the second one will be addressing why only you, or we, can save
the world. The why nobody cares part, primarily, so, in the agenda, what we're going to do is we're addressing first this first question, and the second question, and we will follow up with, like, a suggestion for maybe solving a third, follow-up
question. So, the first question, why nobody cares, is really a question about in the face of all of this ethically quite questionable technology, everything in surveillance capitalism, in the attention economy, the excessive DRM
we're seeing, the sweatshop labour at Foxconn, etc, all of these very significant moral dimensions to technologies, why is it that nobody seems to care much about these things, at least in the general public? I know, here, in this camp, I'm a bit in a different situation, and preaching a bit
to the choir, but this is very much a question about the general population, the rest of the world, basically. When I ask this question, it's really a question about right and wrong. It's a moral question. It's a question about ethics, and, when we ask this question about right and wrong, about good and bad
technology, etc, the question is where does this judgement come from initially? What is the origin of this ethical normative suggestion that we are searching for, but we can't see in the public? Traditionally, throughout
the last couple of centuries, people have been looking primarily to the gods, to religion, etc, or to philosophy, but mostly the kind of armchair philosophy where you sit down and really try to think your best, and figure out where
ethics comes from, where good and bad, etc, come from. However, for this talk, I want to introduce a different angle, which is following the kind of four revolutions of self-perception of humanity, where we had to accept that we're not at the centre of the universe, we're not God's
creation, we're not the only amazing animal (or non-animal), we are not the special one, we are actually just another animal, and, when Freud came along, we learned we're not the perfectly rational animal. So, what we should look to is the
evolutionary origins of ethics and morality as well. And, within this kind of perspective, the first thing we need to do is to realise ourselves as part of the animal kingdom. We are basically just mostly hairless, walking,
talking great apes who have evolved a bit further than others, but we're still very much an animal like others. And, while for the last couple of centuries, really, and only in the last couple of decades this has changed, we have assumed that humans are the only entity capable of asking these deep
questions about right and wrong, about good and bad, good and evil, and the like, basically. In the last couple of decades, there have been some amazing studies done by people like Frans de Waal, who I'm going to introduce you to in a second, showing that we're actually really not the only animal,
and there's plenty of other animals who are quite capable of exhibiting at least an initial stage of moral, not quite reasoning, but at least moral dimensions. So, I will leave you with this short clip for a second.
So, a final experiment that I want to mention to you is our fairness study. And so, this became a very famous study, and there's now many more, because after we did this about ten years ago, it became very well known. And we did that originally with capuchin monkeys, and I'm going to show you the first experiment that we did. It has now been done with dogs,
and with birds, and with chimpanzees, but, with Sarah Brosnan, we started out with capuchin monkeys. So, what we did is we put two capuchin monkeys side by side. Again, these animals, they live in a group. They know each other. We take them out of the group, put them in a test chamber, and there's a very simple task that they
need to do. And if you give both of them cucumber for the task, the two monkeys side by side, they're perfectly willing to do this 25 times in a row. So, cucumber, even though it's really only water, in my opinion, but cucumber is perfectly fine for them. Now, if you give the partner grapes, the food preferences of my
capuchin monkeys correspond exactly with the prices in the supermarket. And so, if you give them grapes, it's a far better food, then you create inequity between them. So, that's the experiment we did. Recently, we videotaped it with new monkeys who had never done the task, thinking that maybe they would have a stronger reaction, and that turned out to be right. The one on the left is the monkey who gets cucumber.
The one on the right is the one who gets grapes. The one who gets cucumber, note that the first piece of cucumber is perfectly fine. The first piece he eats. Then, she sees the other one getting grape, and you will see what happens. So, she gives a rock to us. That's the task. And we give her a piece of cucumber, and she eats it.
The other one needs to give a rock to us. And that's what she does. And she gets a grape. And the other one sees that. She gives a rock to us now, gets again cucumber.
She tests the rock now against the wall. She needs to give it to us. And she gets cucumber again.
Rattling the cage. So, as you can see, clearly, and this has been done with many more animals as well, there's already some sort of sense of equality, fairness, et cetera.
And while philosophers and basically just the general public have been assuming humans to be the prime reasoners, fully conscious, fully in control, fully rational, it was only really when Freud came along
that we had to concede that there's actually a lot more going on beneath the surface of the iceberg. And this sort of iceberg aspect is what we share a lot with animals like the capuchin monkeys. So, since Freud, there has been quite a lot of development in psychology,
and especially a new discipline called moral psychology where we can do a lot of very interesting studies to look under the iceberg of our human moral reasoning to see what is going on there. So, this sort of being an ethics talk, it necessarily has to include some trolley experiments. So, what you can do here is you can give people
across cultures, et cetera, controlling for differences there, certain moral scenarios, and then see how they judge. And while this is classically used to discuss which form of ethics is the better one, we're actually interested in a different question.
So, this study, for example, has been run across countries, et cetera, and people were asked whether when a trolley is heading down the rails, and it's about to crash into five people, there's a bystander here who has the possibility to flick a switch to divert the trolley
and kill one person rather than those five. Virtually everyone here said, yes, flick the switch. It is morally permissible to do so. However, modifying the entire scenario where the bystander no longer has to flick a switch but has to push someone off an overbridge
in order to stop the trolley from crashing into the five people, you get a very different kind of response. And I know some of you are now thinking, well, okay, but this is much more unrealistic. This has been controlled for, there have been quite clever experimental set-ups with scenarios where there really is no other
possibility than physically pushing a person, for example, holding welding gear or something credible that might stop a trolley. And, in this case, the majority of people actually said, no, it is not permissible.
And I will let you, I will do this little experiment with you, which is another study where human moral reasoning was tested. So, in this study, Jonathan Haidt gave this thought experiment to people where Julie and Mark, a brother and sister,
they are travelling together in France on summer vacation from college. One night, they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least, it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too just to be safe. They both enjoy making love, but they decide never to do it again.
They keep that night as a special secret which makes them feel even closer to each other. What do you guys think? Was it okay for them to make love? Hands up if you think it was fine. All right.
That's probably not a very representative sample of general population here, but a significant amount of people here were not fine with this idea. In this study as well as in this, what followed up after the initial judgement was researchers engaging with the people who took part in the study,
asking them to justify: why do you think this was wrong? Why do you think they should not have had sex? At first, they will come up with a lot of reasons, like, yes, they might produce, you know, children, and there might be a difficulty there. Well, no: double birth control, virtually impossible.
But someone might find out, and they might suffer from repercussions. No, we kind of excluded that in the thought experiment, and they kept producing reasons, and the same here kept producing reasons, but ultimately, when pressed for long enough, they conceded with the words, well, it just feels wrong.
I can't really give you super-precise reasons. It's just wrong. Come on, you get it. Of course it's wrong, right? And this strong expression is a very, very important reality to human moral reasoning, which will be the angle of attack
or the point of entry for this talk, Why Nobody Cares, which is you need to have this intuition here. And when you encounter a situation like the ones just presented, what moral psychology tells us is that a strong moral intuition appears,
which Jonathan Haidt likens to the presence of an elephant in your brain, pulling in a certain direction, pulling you towards a certain normative judgment, a certain ethical judgment, and then there is, however, a possibility for a rider to step in and try to rein in the elephant and push it towards a different direction
and to come to a different conclusion. The disparity between those two is illustrated by the comparative size. The elephant is ginormous, while the rider is much smaller. So the intuition really comes first, and if it's very strong,
it's almost impossible to rein in the rider. So this is what we're good at. This is when you realise yourself as an evolved entity which has evolved to deal with the dangers of the Serengeti or whatever we spread to. These are the things that you have to be good at.
And we're, of course, social animals, right? So, at a glance, it just takes you a millisecond to see the danger in this tiger and that this is an aggressive face, right? You don't need much time. Equally here, in the first picture, really, just a millisecond, you see it, you see something is going wrong.
There's something bad. There's a bad moral dimension to this, and there's a good one to the one below. Or equally here, you have this immediate intuition about something being good or bad, and these are very innate intuitions. You don't stop and think, I need to analyse what is happening here, therefore I can judge whether this is good or bad.
You just see it. You have this intuition. There might be a second step coming in where the rider takes over and actually says, well, you know, maybe this guy is in a difficult situation and is only stealing something in order to provide for the family, etc. But that's a second step, right? And if it was too serious of a situation,
the elephant would be so big that it would be almost impossible to rein it in. So these are innate intuitions, but it can also be sort of culturally evolved. It doesn't have to be a biological evolution. It can also be a sort of cultural evolution where, in the case of, especially here in Europe, Germany,
you probably have a rather positive intuition towards a hybrid and a rather negative intuition towards the Hummer. However, that might be exactly the opposite if we go down south in the US where you will be laughed at if you drive around with a hybrid, and, yes, it might spur a positive intuition with a big truck.
The point is, however, that there is an intuition here, and with that intuition, you can then debate whether you should really drive the hybrid or not. What about this? This is, again, not a representative crowd, so for you, there's probably some intuitions coming up.
But I would argue that, for the average user, for the normal consumer of modern technology, this kind of thing sparks nothing. There's no elephant popping up, no moral intuition, no recognition of the moral dimensions of this technology,
and, when there's no elephant popping up, there's also no possibility for a rider to further deliberate about the situation. And that is basically the answer, or my attempted answer, to the question why nobody cares. It's because they literally don't really see it.
So, as the average user, while they're really good at this, they're really bad at recognizing the moral dimension of something like this, let alone any differences here, right? They see this, and it's just a social network. That also looks very much like a social network. It doesn't spark anything. Same here.
This is some operating system, and this looks like some operating system. No real rider appearing, no possibility to further reason about it. This is a bit more meta. Again, it just looks like code, right?
Nothing appears, no elephant, no spark, no possibility for further reasoning. Again here, looks like hardware. That looks like hardware. Nothing appears, it's just a thing. It's just some neutral entity. It doesn't enter into this realm of moral reasoning. And they might look at this, and, yeah, again,
no moral dimension to it. Maybe just like, yay, there's this cool startup, and I can now unlock my door, and my safe, and everything, it's going to be amazing. So this is, I believe, the situation, the status quo we have in the general population. Which brings us to the part where we talk about
why only you, or we, here can save the world. Because we're all these amazing hacker persons, right? So, while this is the reality for the general population,
if you have, in addition, technical expertise, I believe it enables the possibility for not the same sort of elephant to pop up as it does with these very human interactions, but at least it allows for some elephant to pop up
and maybe spur further moral and ethical deliberations. So, from the perspective of technologists, you probably look at this and you immediately see Facebook, and, well, you don't really need to get into detail why this might spark an elephant in a certain direction.
And you see the other one here, Mastodon without autocratic ruler, et cetera, maybe producing an elephant pulling in a different direction. And here, you might recognise iOS on the one side, and on the other side, actually, Replicant,
because there's no Play Store, et cetera. But, of course, it might just be Android. And you might have pretty strong feelings about either of these, and not agree with someone else's judgement about rather good or rather bad, which is why I have a bunch of elephants here. But the point is that you have some intuition here
in the first place, right? And we're not even getting into this territory, because the elephants will just be all over the place. And in this one, too, if you've looked at too much code in your life, you might be able to immediately recognise the Adobe confidential header,
and the GPL header on this side. Again, sparking certain intuitions on elephants. And here, too, you might recognise the upper hardware as an Apple hardware, which is just produced with everything hard-soldered,
deliberately built in a way that it becomes difficult to fix, impossible to upgrade, et cetera, often built to fail eventually, and on the bottom part, you see a main board that is rather modular. You can fix, you can upgrade, et cetera, which again might spark a certain intuition.
And this one probably just sparks a very strong kill-it-with-fire intuition. So, arguments that I believe one comes across rather often from corporations and technologists are things like, if it's so bad, why are so many people using it?
And don't they consent to using this technology? The first one, with the ideas that I tried to explain earlier, we have the answer for that, right? Why, if it's so bad, why are so many people using it? Well, because they don't see it.
They don't, they're literally incapable of grasping intuitively the moral dimensions of the technologies they're using. Therefore, you can't really hold them accountable for that, or you can't really speak of consent to something which they don't really perceive, right?
And: we're not receiving any complaints; if people don't find it unethical, why change? Again, if they can't see it, these questions don't really make sense, right? And they pretty much know, these corporations. They're clever, they have the psychologists on payroll.
But they will still turn out these kinds of arguments. And on the part of the technologists, you might get this very usual argument that: I just make it, how someone else uses it is just up to them, it doesn't concern me. And: I might have strong feelings
about this being right or wrong, but clearly nobody else seems to care, so I guess I don't have to care either. Not quite the case: it's not that people deliberately don't care, like they don't want to not care, they just can't see it, they can't perceive it, right? And: there's nothing I can do.
So let's respond especially to those last questions about technologists. I believe that, in a quite literal sense, we're all super-humans in a way. We're all superheroes with special abilities
which we gain from our technological expertise. And this kind of power really does come with great responsibility. There's a popular thought experiment that was introduced by Peter Singer where there's a child drowning in a pond and someone is walking by,
and people are asked, should this person jump into the water and save the child, even if it may, for example, ruin his expensive shoes and suit, et cetera. And no matter what variation of this thought experiment, everyone agrees that there's a child drowning,
someone needs to jump in there and save the child. The situation that we have with most technologies today I believe is that most people just can't see it. There's like an invisible wall which doesn't allow you to see through and see the child drowning.
However, with the added moral expertise gained from technical expertise, with the goggles, you can see through this wall and you can see the child drowning. Therefore, it is your responsibility to help save it. The question then is, how do we go about saving the world?
And it's not an easy task, but I believe with this information that we have from moral psychology, we are given some interesting tools to work with. And this being a hacker camp, my suggestion is to hack human morality.
So if this is the situation and there's nothing coming up, and if we understand, as I do, ethics as the task of memetic engineering, of engineering memes, not in the pop culture sense but in the actual scientific sense
which likens ideas to genes: ideas which will propagate successfully in a population. If we understand ethics as this task, and if this is the reality that there's no elephant popping up, no intuition, what we can do is we can engineer our own elephants. We can memetically engineer ideas,
tools for thinking, possibilities to introduce in order to allow people to intuit technologies better. And if they can intuit it, maybe it's easier to further deliberate about it as well.
So, one way to do this is what I call the HumTech test, which might sound a bit silly or trivial at first, but it actually has some important underlying reasons, which are those realities about moral reasoning, et cetera,
which is basically just to take a given technology and pretend that it were human. So, in the case of this popular social network, for example, imagine that rather than giving you some abstract apps and online portal, et cetera,
this start-up had given everyone a personal assistant which doesn't seem too unrealistic given the minimum wage in the US, but imagine Facebook as a person rather than this abstract tool. Imagine there's someone, like you have a friend,
and this friend comes to you and says, you know, I have this awesome new assistant. It was given to me for free, and he does all of this cool stuff. And you might tell this person well, but you are aware that he's bragging about being able to manipulate you very well. He takes pictures of everything.
He owns the rights of everything. He passes on all of that stuff that he witnesses in your life, not just to the highest bidder, but also global agencies, different governments, et cetera, has been repeatedly caught lying. It just really seems like an overall
absolutely awful person, and you should not hang out with that person or accept this person in your life. And you can even run sort of things like a PCL-R standard test for psychopathy on this type of person, for which, if you go through it,
superficial charm, I'll give that a yes. Grandiose sense of self-worth, maybe not quite applicable. But need for stimulation, proneness to boredom: absolutely. What are you up to, right, on the Facebook wall? Constantly pushing you to buy stuff,
showing you what all of your other friends are buying, et cetera, so it's just a really good manipulator. So pathological lying, clearly. Manipulative, clearly. Lack of remorse or guilt, definitely. Parasitic lifestyle, absolutely.
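As a purely illustrative sketch of the tallying in this HumTech test, one could write the checklist down as yes/no items and count the yeses. The item subset and the example answers below are assumptions for illustration, not the speaker's actual scoring sheet, and the full PCL-R checklist has 20 items.

def humtech_score(answers):
    # Count how many checklist items got a "yes" for the technology-as-a-person.
    return sum(1 for verdict in answers.values() if verdict)

# Illustrative subset of checklist items mentioned in the talk, with example answers.
facebook_as_a_person = {
    "superficial charm": True,
    "grandiose sense of self-worth": False,
    "need for stimulation / proneness to boredom": True,
    "pathological lying": True,
    "conning / manipulative": True,
    "lack of remorse or guilt": True,
    "parasitic lifestyle": True,
}

print(f"{humtech_score(facebook_as_a_person)} out of {len(facebook_as_a_person)} items")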
So we end up with something like 13 out of 20 in this instance of the HumTech test. And, of course, you can expand that also to some other companies, imagining, for example, Apple as this person who's hyper-aware
about personal looks and super patronising, telling you what you can do and what you cannot do, et cetera, or here, Google Ads, which doesn't really show itself as much as someone like Facebook or Apple does.
It's more like this super creepy stalker who's constantly following you, recording everything you do, and then every now and then just leaving some ad or something in your way as you just go about your day where you don't even know where it comes from. So this, I believe, is a pretty powerful tool,
even though it might seem silly, but especially when talking to people who are, as of yet, unconvinced. Let's say you talk to your parents, the standard example. I think this is a powerful tool to allow them to better intuit the moral dimensions
of these technologies that we are encountering and using on a daily basis. And I would be very happy if I could convince some of you here of the power of this, and you can use that in your own life, in your own fight for a better world.
And I would be very happy if anyone has an idea how to really move this forward, maybe some video series how to, displaying or showcasing various technologies from this perspective,
allowing viewers to better intuit the moral dimensions of these things. And with that, I would like to leave quite some space for Q&A because I'm quite sure there will be some questions coming up. Thank you.
Sorry, I caught you off guard there. No problem. Okay, so do we have questions for Wilhelm?
I think I see a light. No questions? Oh, there's questions. Okay. Hi, I really like your theory,
but I have two questions. So I think there are a lot of technically intelligent and knowledgeable people in tech who basically know the risks and the moral problems,
but they don't engage in like CCC actions or don't behave like that, even if they do know the stuff. And my second question is pretty similar. What about fields where it's actually not that hard to know those problems by now?
For example, environmental change. Everybody knows about that by now, and still, of course, many more people act on it now, but still a lot don't. And so this is a point which I don't see your theory explaining really well,
but it's great anyway. Thanks. Yes, let's address the first question. Obviously, you can't convince everyone to join a movement or try to change something. And I don't want to pretend that this is an approach
that might convince everyone. I think what is the difference in this approach is that it's less pointing fingers and telling you that you have to do this. It's rather a positive approach that tells you that you have this amazing ability.
It renders you as a superhero, basically, who has this special capacity for change. And maybe that might motivate someone more than just this top-down, you need to do this. Otherwise, you're a bad person. The second one about environmental issues, et cetera,
that's a very difficult one as well. In general, I believe there too, even though it's quite visible in theory, the problem is that it's very complex and abstract problems, nevertheless. So these are things that are happening on a great scale. They involve huge numbers.
They involve stuff that is happening thousands of miles away, right? And that's not what we have evolved for. That's not what we're good at. We're only really good at the stuff that is right in front of us. So even though I don't have any good model ready here, there might be some way how we can pull this abstract thing
into a more intuitively palatable form as well. Be happy to discuss any possibilities there. Okay. We have more questions. Oh, we have a lot of questions.
Please go ahead. Hello. I was wondering, how much of this do you think is a generational issue? So you talked about evolutionarily and culturally acquired kinds of traits towards stuff that has been around for longer, but Siri is only five years old. So do you think that the coming generation
will maybe have a better initial understanding of how to classify different things morally, ethically? Maybe, maybe if tech had continued in the way, like at the level of need for expertise
as it did in the '90s or early 2000s, but currently technology is very much catered for, or designed, so that it doesn't really require much technical expertise in order to use it. And I believe in order for this special elephant to be able to pop up in the case of technologists,
you need a certain level of expertise. And that's not what current technologies seem to be designed for. Hopefully, on a much longer scale, I mean evolution, both biological and cultural is taking place on the ginormous time scales.
So it might get better eventually, but there are very, very, very powerful actors who don't want this to happen, really, right? And they purposely build things to be easy and to even sort of use a similar approach here
to personalise it in a way and make you feel warm and cosy, et cetera, with the fake personas on Twitter, et cetera. So yeah, I see what you mean, but I'm not convinced that it will be a natural trend. Okay, and we even have a question from the internet.
Can I please have the signal angel question, please? No, sorry, I got it wrong. I think it was just somebody replying to what I was saying about the talk. But I will still say it because it's interesting. The person was saying, if I can, sorry, one second. They were saying, I find the exercise of equating technologies and social media to people to be interesting,
but what about the benefits that the technology brings in people's life? You need to try to figure out if it's more bad than good. Well, yeah, sure. I mean, the thing is that the good part, there's not that much of a need to bring it into the light, to pull it into the realm of easy moral intuitions.
But you could use exactly the same approach and also say, look what cool stuff this one is doing. So, I don't see any issue here. I was not highlighting this because the main problem is to get people to see the negative moral dimensions,
but you could equally display the positive ones. Yeah, awareness is a problem. More questions, please. Yes, thank you for the talk. I have a question about your optimism that technologists actually can grow an elephant themselves.
I have kind of run a number of unscientific experiments where, when peers are told that they need to upgrade something and it will break their workflow, you can see how their face sinks, but when they're the ones doing it, they have no problem with that. Their elephant goes to sleep and they just release something
that robs all their users of days of their time without any moral qualms or anything like this. And even if you explain it to them, like, hey, you just robbed your 10,000 users or whatever of one hour each, and that's a lot of hours, they do not seem to wake this elephant
or do any analytics on top of this and so on and so forth. And I see this very, very often within our circles of hackers. Do you have any comment on that? Why is this so? Can you repack that question into like a tangible one? Yeah, please, make a short question of it, thank you.
Sorry, can you repeat? A short question, please. Yes, so why are we, as technologists, so good at putting our elephant to sleep when it suits us? Well, that's a claim you're making now. That's a hypothesis that needs proving.
I'm not sure if it really applies. I think you might be using the elephant metaphor slightly differently because in this case, the use of it is really the activation of basically a part of your brain which is a very innate response, a very innate moral response to a given situation
which creates this elephant. Your question just seemed more general about why people seem to be selfish and only appeal to ethical reasons when it's in the advantage. That's a whole other complex of problems.
But we can talk about it later. We can't answer all the questions, and we try to change the world, but it takes some time, right? We have more questions, please go ahead. Thank you so much for your talk. I have a question about the current slide. Here we see that this abstract device from Amazon
suddenly acquires a moral dimension by adding expertise, and on a previous slide, you said that people can distinguish between an SUV and a hybrid, and personally, I don't really see the great distinction between this Alexa and the SUV
because both are magic boxes which, when you turn it on, do things for you, drive you around, answer a question. So my question is, how is this Alexa different to the SUV? Absolutely, and the answer is that
pretty much everything a car does happens within a realm that we're pretty good at, which is physical stuff, physical movement, physical danger, you can see the smoke coming out of the exhaust of the SUV. It's pretty clear, it's something immediate in your face, and it's physical. It's the same reason why there's a huge media attention
and explosion of discussions around autonomous driving as well as robot ethics because all of that is happening within this realm of moral intuition which we're much better at, while Alexa, and hey, you recognise it,
which probably not everyone would, most of this happens way more removed, it's way more abstract, it's way more indirect, and if we go back to this first example here, you have this sort of effect where a technology
between you and the ethical subject dampens this very strong intuition, and there have been experiments run to see what level of indirectness is necessary in order to change the general average moral judgement from this to this, or this one to this rather,
and it seems that it has to be very direct, so you can have like a 10 metre long pole and push someone off and it still feels incredibly wrong and people will say it's not permissible. You add a couple of gears in between, levers,
the chain of causality is complex enough for your brain to switch into a higher cognitive mode where it does calculations and maths, et cetera, and here you arrive at the more just, calculated result, and I think that's exactly the thing. Cars are just still way closer to what we're good at than something like Alexa.
Okay, thank you. So we have one last question. Please make it a short question, yeah, thanks. Yeah, hello, thanks for your talk. I guess I would like to maybe ask or point something that maybe... Only questions please. It was not addressed.
How about making people experts because probably a lot of them are very eager to become more experts and they have the morals but they lack the skills, and also I guess in symmetry, maybe I understood wrong, but you seem to imply that with the tech knowledge
and expertise comes the strong moral intuition but I believe that often we see one without the other, so could there be some sort of other knowledge and cross-pollinating of knowledge?
So this idea is of course kind of relying on a kind of ambient recognition of what is right and wrong and then as soon as you add expertise, your ambient level of right and wrong, like broad-scale manipulation, like obviously it's wrong and most people would have this immediate intuition
that well, okay, I will have to do something about it but clearly that doesn't have to be the case for everyone and I also fully agree that educating is a great way of doing this, making everyone into super-humans but again, there's a lot of people who have interest in that not happening and to just have oblivious users.
And that's why we're here actually. Exactly. Because that's why Chaos is doing events, that's why our speakers are here to educate you, to discuss with you, to review their own opinions and ideas. So come to camp or make more camps.
We need more camps, we need more events, we need to communicate. And if you have any idea how to help me move this forward or just found it interesting, share, use this method with other people. Okay. Thanks again, a big round of applause, and come here and ask.