Why Nobody cares, and only You can save the World
Formal metadata
Number of parts: 102
License: CC Attribution 4.0 International: You may use, modify, reproduce, distribute, and make publicly available the work or its content, in unchanged or modified form, for any legal purpose, provided the author/rights holder is credited in the manner specified by them.
Identifiers: 10.5446/43245 (DOI)
Chaos Communication Camp 2019, 42 / 102
Transcript: English (automatically generated)
00:14
As the title says, in this talk, we're going to look at two primary questions. The first one is why nobody cares, and the second one will be addressing why only you, or we, can save
00:26
the world. So, on the agenda, what we're going to do is address the first question, primarily, then the second question, and we will follow up with, like, a suggestion for maybe solving a third follow-up
00:42
question. So, the first question, why nobody cares, is really a question about in the face of all of this ethically quite questionable technology, everything in surveillance capitalism, in the attention economy, the excessive DRM
01:01
we're seeing, the sweatshop labour at Foxconn, etc, all of these very significant moral dimensions to technologies, why is it that nobody seems to care much about these things, at least in the general public? I know, here, in this camp, I'm a bit in a different situation, and preaching a bit
01:22
to the choir, but this is very much a question about the general population, the rest of the world, basically. When I ask this question, it's really a question about right and wrong. It's a moral question. It's a question about ethics, and, when we ask this question about right and wrong, about good and bad
01:42
technology, etc, the question is where does this judgement come from initially? What is the origin of this ethical normative suggestion that we are searching for, but we can't see in the public? Traditionally, throughout
02:04
the last couple of centuries, people have been looking primarily to the gods, to religion, etc, or to philosophy, but mostly the kind of armchair philosophy where you sit down and really try to think your best, and figure out where
02:21
ethics comes from, where good and bad, etc, come from. However, for this talk, I want to introduce a different angle, which is following the kind of four revolutions of self-perception of humanity, where we had to accept that we're not at the centre of the universe, we're not God's
02:41
creation, we're not the only amazing animal, or rather, we are not the special one, we are actually just another animal, and, when Freud came along, we learned we're not the perfectly rational animal. So, what we should look to is the
03:03
evolutionary origins of ethics and morality as well. And, within this kind of perspective, the first thing we need to do is to realise ourselves as part of the animal kingdom. We are basically just mostly hairless, walking,
03:22
talking great apes who have evolved a bit further than others, but we're still very much an animal like others. And, for the last couple of centuries really, and it's only in the last couple of decades that this has changed, we have assumed that humans are the only entity capable of asking these deep
03:44
questions about right and wrong, about good and bad, good and evil, and the like, basically. In the last couple of decades, there have been some amazing studies done by people like Frans de Waal, who I'm going to introduce you to in a second, showing that we're actually really not the only animal,
04:04
and there's plenty of other animals who are quite capable of exhibiting at least an initial stage of moral, not quite reasoning, but at least moral dimensions. So, I will leave you with this short clip for a second.
04:23
So, a final experiment that I want to mention to you is our fairness study. And so, this became a very famous study, and there's now many more, because after we did this about ten years ago, it became very well known. And we did that originally with capuchin monkeys, and I'm going to show you the first experiment that we did. It has now been done with dogs,
04:42
and with birds, and with chimpanzees, but with Sarah Brosnan, and we started out with capuchin monkeys. So, what we did is we put two capuchin monkeys side by side. Again, these animals, they live in a group. They know each other. We take them out of the group, put them in a test chamber, and there's a very simple task that they
05:02
need to do. And if you give both of them cucumber for the task, the two monkeys side by side, they're perfectly willing to do this 25 times in a row. So, cucumber, even though it's really only water, in my opinion, but cucumber is perfectly fine for them. Now, if you give the partner grapes, the food preferences of my
05:21
capuchin monkeys correspond exactly with the prices in the supermarket. And so, if you give them grapes, it's a far better food, then you create inequity between them. So, that's the experiment we did. Recently, we videotaped it with new monkeys who had never done the task, thinking that maybe they would have a stronger reaction, and that turned out to be right. The one on the left is the monkey who gets cucumber.
05:43
The one on the right is the one who gets grapes. The one who gets cucumber, note that the first piece of cucumber is perfectly fine. The first piece he eats. Then, she sees the other one getting grape, and you will see what happens. So, she gives a rock to us. That's the task. And we give her a piece of cucumber, and she eats it.
06:02
The other one needs to give a rock to us. And that's what she does. And she gets a grape. And the other one sees that. She gives a rock to us now, gets again cucumber.
06:34
She tests the rock now against the wall. She needs to give it to us. And she gets cucumber again.
06:50
Rattling the cage. So, as you can see, clearly, and this has been done with many more animals as well, there's already some sort of sense of equality, fairness, et cetera.
07:03
And while philosophers and basically just the general public have been assuming humans to be the prime reasoner, fully conscious, fully in control, fully rational, it was only really when Freud came along
07:21
that we had to concede that there's actually a lot more going on beneath the iceberg. And this sort of iceberg aspect is what we share a lot with animals like the capuchin monkeys. So, since Freud, there has been quite a lot of development in psychology,
07:41
and especially a new discipline called moral psychology where we can do a lot of very interesting studies to look under the iceberg of our human moral reasoning to see what is going on there. So, this sort of being an ethics talk, it necessarily has to include some trolley experiments. So, what you can do here is you can give people
08:06
across cultures, et cetera, controlling for differences there, certain moral scenarios, and then see how they judge. And while this is classically used to discuss which form of ethics is the better one, we're actually interested in a different question.
08:23
So, this study, for example, has been run across countries, et cetera, and people were asked whether when a trolley is heading down the rails, and it's about to crash into five people, there's a bystander here who has the possibility to flick a switch to divert the trolley
08:41
and kill one person rather than those five. Virtually everyone here said, yes, flick the switch. It is morally permissible to do so. However, modifying the entire scenario so that the bystander no longer has to flick a switch but has to push someone off an overbridge
09:03
in order to stop the trolley from crashing into the five people, you get a very different kind of response. And I know some of you are now thinking, well, okay, but this is much more unrealistic. This has been controlled for, there have been quite clever experimental set-ups with scenarios where there really is no other
09:23
possibility than physically pushing a person, for example, holding welding gear or something credible that might stop a trolley. And, in this case, the majority of people actually said, no, it is not permissible.
09:42
And I will do this little experiment with you, which is another study where human moral reasoning was tested. So, in this study, Jonathan Haidt gave this thought experiment to people, where Julie and Mark, a brother and sister,
10:02
they are travelling together in France on summer vacation from college. One night, they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least, it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too just to be safe. They both enjoy making love, but they decide never to do it again.
10:24
They keep that night as a special secret which makes them feel even closer to each other. What do you guys think? Was it okay for them to make love? Hands up if you think it was fine. All right.
10:41
That's probably not a very representative sample of the general population here, but a significant number of people here were not fine with this idea. In this study, as well as here, what followed after the initial judgement was researchers engaging with the people who took part in the study,
11:01
asking them to justify why do you think this was wrong? Why do you think they should not have had sex? At first, they will come up with a lot of reasons, like, yes, they might produce, you know, children, and there might be a difficulty here. Well, no double birth control, virtually impossible.
11:21
But someone might find out, and they might suffer from repercussions. No, we kind of excluded that in the thought experiment. And so they kept producing reasons, and kept producing reasons, but ultimately, when pressed for long enough, they conceded with the words: well, it just feels wrong.
11:42
I can't really give you super-precise reasons. It's just wrong. Come on, you get it. Of course it's wrong, right? And this strong expression is a very, very important reality to human moral reasoning, which will be the angle of attack
12:01
or the point of entry for this talk, Why Nobody Cares, which is you need to have this intuition here. And when you encounter a situation like the ones just presented, what moral psychology tells us is that a strong moral intuition appears,
12:22
which Jonathan Haidt likens to the presence of an elephant in your brain, pulling in a certain direction, pulling you towards a certain normative judgment, a certain ethical judgment, and then there is, however, a possibility for a rider to step in and try to rein in the elephant and push it towards a different direction
12:42
and to come to a different conclusion. The disparity between those two is illustrated by the comparative size. The elephant is ginormous, while the rider is much smaller. So the intuition really comes first, and if it's very strong,
13:00
it's almost impossible for the rider to rein it in. So this is what we're good at. This is when you realise yourself as an evolved entity which has evolved to deal with the dangers of the Serengeti or wherever we spread to. These are the things that you have to be good at.
13:22
And we're, of course, social animals, right? So, at a glance, it just takes you a millisecond to see the danger in this tiger and that this is an aggressive face, right? You don't need much time. Equally here, in the first picture, really, just a millisecond, you see it, you see something is going wrong.
13:41
There's something bad. There's a bad moral dimension to this, and there's a good one to the one below. Or equally here, you have this immediate intuition about something being good or bad, and these are very innate intuitions. You don't stop and think, I need to analyse what is happening here, therefore I can judge whether this is good or bad.
14:02
You just see it. You have this intuition. There might be a second step coming in where the rider takes over and actually says, well, you know, maybe this guy is in a difficult situation and is only stealing something in order to provide for the family, etc. But that's a second step, right? And if it was too serious of a situation,
14:23
the elephant would be so big that it would be almost impossible to rein it in. So these are innate intuitions, but it can also be sort of culturally evolved. It doesn't have to be a biological evolution. It can also be a sort of cultural evolution where, in the case of, especially here in Europe, Germany,
14:43
you probably have a rather positive intuition towards a hybrid and a rather negative intuition towards the Hummer. However, that might be exactly the opposite if we go down south in the US where you will be laughed at if you drive around with a hybrid, and, yes, it might spur a positive intuition with a big truck.
15:04
The point is, however, that there is an intuition here, and with that intuition, you can then debate whether you should really drive the hybrid or not. What about this? This is, again, not a representative crowd, so for you, there's probably some intuitions coming up.
15:23
But I would argue that, for the average user, for the normal consumer of modern technology, this kind of thing sparks nothing. There's no elephant popping up, no moral intuition, no recognition of the moral dimensions of this technology,
15:43
and, when there's no elephant popping up, there's also no possibility for a rider to further deliberate about the situation. And that is basically the answer, or my attempted answer, to the question why nobody cares. It's because they literally don't really see it.
16:01
So, as the average user, while they're really good at this, they're really bad at recognising the moral dimension of something like this, let alone any differences here, right? They see this, and it's just a social network. That also looks very much like a social network. It doesn't spark anything. Same here.
16:22
This is some operating system, and this looks like some operating system. No real elephant appearing, no possibility to further reason about it. This is a bit more meta. Again, it just looks like code, right?
16:41
Nothing appears, no elephant, no spark, no possibility for further reasoning. Again here, looks like hardware. That looks like hardware. Nothing appears, it's just a thing. It's just some neutral entity. It doesn't enter into this realm of moral reasoning. And they might look at this, and, yeah, again,
17:02
no moral dimension to it. Maybe just like, yay, there's this cool startup, and I can now unlock my door, and my safe, and everything, it's going to be amazing. So this is, I believe, the situation, the status quo we have in the general population. Which brings us to the part where we talk about
17:21
why only you, or we, here can save the world. Because we're all these amazing hacker persons, right? So, while this is the reality for the general population,
17:41
if you have, in addition, technical expertise, I believe it enables the possibility for not the same sort of elephant to pop up as it does with these very human interactions, but at least it allows for some elephant to pop up
18:00
and maybe spur further moral and ethical deliberations. So, from the perspective of technologists, you probably look at this and you immediately see Facebook, and, well, you don't really need to get into detail why this might spark an elephant in a certain direction.
18:25
And you see the other one here, Mastodon without autocratic ruler, et cetera, maybe producing an elephant pulling in a different direction. And here, you might recognise iOS on the one side, and on the other side, actually, Replicant,
18:42
because there's no Play Store, et cetera. But, of course, it might just be Android. And you might have pretty strong feelings about either of these, and not agree with someone else's judgement about rather good or rather bad, which is why I have a bunch of elephants here. But the point is that you have some intuition here
19:01
in the first place, right? And we're not even getting into this territory, because the elephants will just be all over the place. And in this one, too, if you've looked at too much code in your life, you might be able to immediately recognise the Adobe confidential header,
19:23
and the GPL header on this side. Again, sparking certain intuitions on elephants. And here, too, you might recognise the upper hardware as an Apple hardware, which is just produced with everything hard-soldered,
19:42
deliberately built in a way that it becomes difficult to fix, impossible to upgrade, et cetera, often built to fail eventually, and on the bottom part, you see a main board that is rather modular. You can fix, you can upgrade, et cetera, which again might spark a certain intuition.
20:00
And this one probably just sparks a very strong kill-it-with-fire intuition. So, arguments that I believe one comes across rather often from corporations and technologists are things like, if it's so bad, why are so many people using it?
20:23
And don't they consent to using this technology? The first one, with the ideas that I tried to explain earlier, we have the answer for that, right? Why, if it's so bad, why are so many people using it? Well, because they don't see it.
20:41
They don't, they're literally incapable of grasping intuitively the moral dimensions of the technologies they're using. Therefore, you can't really hold them accountable for that, or you can't really speak of consent to something which they don't really perceive, right?
21:01
And, we're not receiving any complaints; if people found it unethical, they could just change. Again, if they can't see it, these questions don't really make sense, right? And these corporations pretty much know this. They're clever, they have the psychologists on payroll.
21:21
But they will still turn out these kinds of arguments. And on the part of the technologists, you might get this very usual argument of: I just make it, how someone else uses it is just up to them, it doesn't concern me; and: I might have strong feelings
21:41
about this being right or wrong, but clearly nobody else seems to care, so I guess I don't have to care either. Not quite the case. It's not that people deliberately don't care, like they want to not care; they just can't see it, they can't perceive it, right? And: there's nothing I can do.
22:02
So let's answer especially to those last questions about technologies. I believe that in a quite literal sense, we're all super-humans in a way. We're all superheroes with special abilities
22:21
which we gain from our technological expertise. And this kind of power really does come with great responsibility. There's a popular thought experiment that was introduced by Peter Singer where there's a child drowning in a pond and someone is walking by,
22:43
and people are asked, should this person jump into the water and save the child, even if it may, for example, ruin his expensive shoes and suit, et cetera. And no matter what variation of this thought experiment, everyone agrees that there's a child drowning,
23:01
someone needs to jump in there and save the child. The situation that we have with most technologies today I believe is that most people just can't see it. There's like an invisible wall which doesn't allow you to see through and see the child drowning.
23:22
However, with the added moral expertise gained from technical expertise, with the goggles, you can see through this wall and you can see the child drowning. Therefore, it is your responsibility to help save it. The question then is, how do we go about saving the world?
23:44
And it's not an easy task, but I believe with this information that we have from moral psychology, we are given some interesting tools to work with. And this being a hacker camp, my suggestion is to hack human morality.
24:03
So if this is the situation and there's nothing coming up, and if we understand, as I do, ethics as the task of memetic engineering, of engineering memes, not in the pop culture sense but in the actual scientific sense
24:22
which likens ideas to genes: ideas which will propagate successfully in a population. If we understand ethics as this task, and if this is the reality, that there's no elephant popping up, no intuition, what we can do is we can engineer our own elephants. We can memetically engineer ideas,
24:44
tools for thinking, possibilities to introduce in order to allow people to intuit technologies better. And if they can intuit it, maybe it's easier to further deliberate about it as well.
25:00
So, one way to do this is what I call the HumTech test, which might sound a bit silly or trivial at first, but it actually has some important underlying reasons, which are those realities about moral reasoning, et cetera,
25:22
which is basically just to take a given technology and pretend that it were human. So, in the case of this popular social network, for example, imagine that rather than giving you some abstract apps and online portal, et cetera,
25:43
this start-up had given everyone a personal assistant which doesn't seem too unrealistic given the minimum wage in the US, but imagine Facebook as a person rather than this abstract tool. Imagine there's someone, like you have a friend,
26:01
and this friend comes to you and says, you know, I have this awesome new assistant. It was given to me for free, and he does all of this cool stuff. And you might tell this person: well, but are you aware that he's bragging about being able to manipulate you very well? He takes pictures of everything.
26:21
He owns the rights to everything. He passes on all of that stuff that he witnesses in your life, not just to the highest bidder, but also to global agencies, different governments, et cetera, and has been repeatedly caught lying. It just really seems like an overall
26:40
absolutely awful person, and you should not hang out with that person or accept this person in your life. And you can even run things like a PCL-R standard test for psychopathy on this type of person, for which, if you go through it,
27:03
superficial charm, I'll give that a yes. Grandiose sense of self-worth, maybe not quite applicable. But need for stimulation, proneness to boredom: absolutely, what are you up to, right, on the wall of Facebook? Constantly pushing you to buy stuff,
27:24
showing you what all of your other friends are buying, et cetera, so it's just a really good manipulator. So pathological lying, clearly. Manipulative, clearly. Lack of remorse or guilt, definitely. Parasitic lifestyle, absolutely.
27:40
So we end up with something like 13 out of 20 in this instance of the HumTech test. And, of course, you can expand that also to some other companies, imagining, for example, Apple as this person who's hyper-aware
28:01
about personal looks and super patronising, telling you what you can do and what you cannot do, et cetera, or here, Google Ads, which doesn't really show itself as much as someone like Facebook or Apple does.
28:20
It's more like this super creepy stalker who's constantly following you, recording everything you do, and then every now and then just leaving some ad or something in your way as you just go about your day where you don't even know where it comes from. So this, I believe, is a pretty powerful tool,
28:40
even though it might seem silly, especially when talking to people who are, as of yet, unconvinced. Let's say you talk to your parents, the standard example. I think this is a powerful tool to allow them to better intuit the moral dimensions
29:03
of these technologies that we are encountering and using on a daily basis. And I would be very happy if I could convince some of you here of the power of this, and you can use that in your own life, in your own fight for a better world.
29:25
And I would be very happy if anyone has an idea how to really move this forward, maybe some video series how to, displaying or showcasing various technologies from this perspective,
29:41
allowing viewers to better intuit the moral dimensions of these things. And with that, I would like to leave quite some space for Q&A because I'm quite sure there will be some questions coming up. Thank you.
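A minimal sketch of the HumTech-test scoring described above, assuming it amounts to tallying how many checklist-style traits apply once a technology is imagined as a person; the trait names and verdicts below are illustrative placeholders taken loosely from the talk, not an actual PCL-R instrument:

# Illustrative sketch of the "HumTech test": pretend a technology is a person
# and count how many checklist-style traits apply to it. The traits and the
# example verdicts are hypothetical, not an official psychopathy assessment.

TRAITS = [
    "superficial charm",
    "grandiose sense of self-worth",
    "need for stimulation / proneness to boredom",
    "pathological lying",
    "manipulative",
    "lack of remorse or guilt",
    "parasitic lifestyle",
]

def humtech_score(verdicts: dict[str, bool]) -> tuple[int, int]:
    """Count how many checklist traits apply to the personified technology."""
    hits = sum(1 for trait in TRAITS if verdicts.get(trait, False))
    return hits, len(TRAITS)

# Example: scoring a hypothetical social network "as a person".
verdicts = {
    "superficial charm": True,
    "grandiose sense of self-worth": False,
    "need for stimulation / proneness to boredom": True,
    "pathological lying": True,
    "manipulative": True,
    "lack of remorse or guilt": True,
    "parasitic lifestyle": True,
}

hits, total = humtech_score(verdicts)
print(f"HumTech score: {hits} / {total} traits apply")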
30:16
Sorry, I caught you off guard there. No problem. Okay, so do we have questions for Wilhelm?
30:25
I think I see a light. No questions? Oh, there's questions. Okay. Hi, I really like your theory,
30:41
but I have two questions. So I think there are a lot of technically intelligent and knowledgeable people in tech who basically know the risks and the moral problems,
31:02
but they don't engage in like CCC actions or don't behave like that, even if they do know the stuff. And my second question is pretty similar. What about fields where it's actually not that hard to know those problems by now?
31:21
For example, environmental change. Everybody knows about that by now, and still, of course, many more people act on it now, but still a lot don't. And so this is some point which I don't see your theory explaining really well,
31:40
but it's great anyway. Thanks. Yes, let's address the first question. Obviously, you can't convince everyone to join a movement or try to change something. And I don't want to pretend that this is an approach
32:02
that might convince everyone. I think what is the difference in this approach is that it's less pointing fingers and telling you that you have to do this. It's rather a positive approach that tells you that you have this amazing ability.
32:22
It renders you as a superhero, basically, who has this special capacity for change. And maybe that might motivate someone more than just this top-down, you need to do this. Otherwise, you're a bad person. The second one about environmental issues, et cetera,
32:41
that's a very difficult one as well. In general, I believe there too, even though it's quite visible in theory, the problem is that these are very complex and abstract problems, nevertheless. So these are things that are happening on a great scale. They involve huge numbers.
33:01
They involve stuff that is happening thousands of miles away, right? And that's not what we have evolved for. That's not what we're good at. We're only really good at the stuff that is right in front of us. So even though I don't have any good model ready here, there might be some way how we can pull this abstract thing
33:24
into a more intuitively palatable form as well. Be happy to discuss any possibilities there. Okay. We have more questions. Oh, we have a lot of questions.
33:41
Please go ahead. Hello. I was wondering, how much of this do you think is a generational issue? So you talked about evolutionary and cultural kinds of traits that we acquired with stuff that has been around for longer, but Siri is only five years old. So do you think that the coming generation
34:02
will maybe have a better initial understanding of how to classify different things morally, ethically? Maybe, maybe if tech had continued in the way, like at the level of need for expertise
34:24
as it did in the 90s or early 2000s. But currently, technology is very much catered for, or designed, so that it doesn't really need much technical expertise in order to use it. And I believe, in order for this special elephant to be able to pop up in the case of technologists,
34:45
you need a certain level of expertise. And that's not what current technologies seem to be designed for. Hopefully, on a much longer scale, I mean evolution, both biological and cultural is taking place on the ginormous time scales.
35:01
So it might get better eventually, but there are very, very, very powerful actors who don't want this to happen, really, right? And they purposely build things to be easy and to even sort of use a similar approach here
35:20
to personalise it in a way and make you feel warm and cosy, et cetera, with the fake personas on Twitter, et cetera. So yeah, I see what you mean, but I'm not convinced that it will be a natural trend. Okay, and we even have a question from the internet.
35:41
Can I please have the signal angel question, please? No, sorry, I got it wrong. I think it was just somebody replying to what I was saying about the talk. But I will still say it because it's interesting. The person was saying, if I can, sorry, one second. They were saying, I find the exercise of equating technologies and social media to people to be interesting,
36:02
but what about the benefits that the technology brings into people's lives? You need to try to figure out if it's more bad than good. Well, yeah, sure. I mean, the thing is that, for the good part, there's not that much of a need to bring it into the light, to pull it into the realm of easy moral intuitions.
36:24
But you could use exactly the same approach and also say, look what cool stuff this one is doing. So, I don't see any issue here. I was not highlighting this because the main problem is to get people to see the negative moral dimensions,
36:42
but you could equally display the positive ones. Yeah, awareness is a problem. More questions, please. Yes, thank you for the talk. I have a question about your optimism that technologists actually can grow an elephant themselves.
37:04
I have kind of run a number of unscientific experiments where, when peers are told that they need to upgrade something and it will break their workflow, you can see how their faces sink, but when they're the ones doing it, they have no problem with that. Their elephant goes to sleep and they just release something
37:21
that robs all their users of days of their time without any moral qualms or anything like this. And even if you explain it to them, like, hey, you just robbed your 10,000 users or whatever of one hour each, and that's a lot of hours, they do not seem to wake this elephant
37:40
or do any analytics on top of this and so on and so forth. And I see this very, very often within our circles of hackers. Do you have any comment on that? Why is this so? Can you repack that question into like a tangible one? Yeah, please, make a short question of it, thank you.
38:01
Sorry, can you repeat? A short question, please. Yes, so why are we, as technologists, so good at putting our elephant to sleep when it suits us? Well, that's a claim you're making now. That's a hypothesis that needs proving.
38:20
I'm not sure if it really applies. I think you might be using the elephant metaphor slightly differently because in this case, the use of it is really the activation of basically a part of your brain which is a very innate response, a very innate moral response to a given situation
38:43
which creates this elephant. Your question just seemed more general about why people seem to be selfish and only appeal to ethical reasons when it's in the advantage. That's a whole other complex of problems.
39:02
But we can talk about it later. We can't answer all the questions, and we try to change the world, but it takes some time, right? We have more questions, please go ahead. Thank you so much for your talk. I have a question about the current slide. Here we see that this abstract device from Amazon
39:23
suddenly gains a moral dimension by adding expertise, and on a previous slide, you said that people can distinguish between an SUV and a hybrid, and personally, I don't really see the great distinction between this Alexa and the SUV
39:42
because both are magic boxes which, when you turn them on, do things for you, drive you around, answer a question. So my question is, how is this Alexa different from the SUV? Absolutely, and the answer is that
40:01
pretty much everything a car does happens within a realm that we're pretty good at, which is physical stuff, physical movement, physical danger, you can see the smoke coming out of the exhaust of the SUV. It's pretty clear, it's something immediate in your face, and it's physical. It's the same reason why there's a huge media attention
40:25
and explosion of discussions around autonomous driving as well as robot ethics because all of that is happening within this realm of moral intuition which we're much better at, while Alexa, and hey, you recognise it,
40:42
which probably not everyone would, most of this happens way more removed, it's way more abstract, it's way more indirect, and if we go back to this first example here, you have this sort of effect where a technology
41:02
between you and the ethical subject dampens this very strong intuition, and there have been experiments run to see what level of indirectness is necessary in order to change the general average moral judgement from this to this, or this one to this rather,
41:24
and it seems that it has to be very direct, so you can have like a 10 metre long pole and push someone off and it still feels incredibly wrong and people will say it's not permissible. You add a couple of gears in between, levers,
41:40
the chain of causality is complex enough for your brain to switch into a higher cognitive mode where it does calculations and maths, et cetera, and there you arrive at the more calculated result, and I think that's exactly the thing. Cars are just still way closer to what we're good at than something like Alexa.
42:03
Okay, thank you. So we have one last question. Please make it a short question, yeah, thanks. Yeah, hello, thanks for your talk. I guess I would like to maybe ask or point something that maybe... Only questions please. It was not addressed.
42:21
How about making people experts because probably a lot of them are very eager to become more experts and they have the morals but they lack the skills, and also I guess in symmetry, maybe I understood wrong, but you seem to imply that with the tech knowledge
42:43
and expertise comes the strong moral intuition but I believe that often we see one without the other, so could there be some sort of other knowledge and cross-pollinating of knowledge?
43:00
So this idea is of course kind of relying on a kind of ambient recognition of what is right and wrong and then as soon as you add expertise, your ambient level of right and wrong, like broad-scale manipulation, like obviously it's wrong and most people would have this immediate intuition
43:21
that, well, okay, I will have to do something about it. But clearly that doesn't have to be the case for everyone, and I also fully agree that educating is a great way of doing this, making everyone into super-humans, but again, there's a lot of people who have an interest in that not happening and in just having oblivious users.
43:44
And that's why we're here actually. Exactly. Cause that's why Chaos is doing events, that's why our speakers are here to educate you, to discuss with you, to review their own opinions and ideas. So come to camp or make more camps.
44:01
We need more camps, we need more events, we need to communicate. And if you have any idea how to help me move this forward or just found it interesting, share, use this method with other people. Okay. Thanks again, a big applause and come here and ask.