AI and Human Agency
Formal metadata
Number of parts | 11
License | CC Attribution 4.0 International: You may use, modify, and reproduce the work or its content in original or modified form for any legal purpose, distribute it, and make it publicly accessible, provided you credit the author/rights holder in the manner they specify.
Identifier | 10.5446/66873 (DOI)
Production year | 2023
Production place | Hamburg
Transcript: English (automatically generated)
00:14
I think what we proposed most importantly is to adopt a certain view on AI,
00:21
namely that we are increasingly delegating decisions to AI, and we wanted to investigate how this delegation affects humans in return: to what extent it expands or diminishes our options, our agency, and also our responsibility. This was the lens through which we looked, and then we looked very closely into detailed case
00:43
studies in medicine, in education, in public administration, but also in social media contexts, and tried to deduce very specific implications. So it was less that we identified the very generic challenges that are so often listed. We have those as well, as cross-cutting themes, and of
01:02
course these are the ones you very often come across, such as bias and discrimination, issues around privacy, but also data utilization on the other hand. Some of the topics we discussed were related to what it does to us as humans that we are delegating more and more decisions to probabilistic machines, because it really changes our way
01:24
of approaching the world: we are increasingly relying on statistics and probability theory in many domains, which is something we have not done to the same degree previously. We also very much
01:41
said that, depending on the context, the goal is of course not only to increase the efficiency of decision-making but very often its quality. One of the cases we looked into was in the social welfare context: a very high-risk, high-impact decision would be one about the welfare of children, and that is in that sense a very good case because
02:03
it indicates how important it is to make good decisions and how important it is to improve decision quality. So if you think about the decision whether a child is suffering abuse or not, and you want to make a prognosis of the likelihood that the child is suffering abuse in a certain context, there are two mistakes you can make. You can either predict
02:24
that there is abuse, possibly leading to the action of taking the child out of the family; but if nothing has actually happened, harm is generated for the child and for the parents, because the child is taken from the family unnecessarily. In the other case you do not detect abuse, and the child is harmed. So this indicates that both errors you can make
02:45
are highly severe, and you should try to make better decisions that avoid both types of errors. The problem is, of course, that very often you can show that these systems are not as good as they are supposed to be, which would be an issue of accuracy, but also that they are heavily biased, so that they very often discriminate against those persons or groups of persons who
03:04
are already very vulnerable or marginalized in society. So in principle, yes, we should always use systems in such a way that the joint decision of humans employing machines is better than it would be without them, but also try to avoid, let's say, being overly optimistic about the
03:22
outcomes of these systems and failing to understand the potential downsides. We chose them for different reasons. On the one hand, as the Ethics Council very often has a specific focus on medical and biomedical domains, it was clear that medicine would be one domain that we would
03:42
be focusing on. For the others there was a bit of discussion about what we would focus on, but in the end I think we made this decision because the level of AI usage in these different domains is very different. So if medicine is maybe in between, then in the school context we may actually use very few AI technologies as of now, whereas the social
04:02
media context, which has a high impact on public communication and the formation of opinions and was one section, is very heavily mediated through AI, be it through recommender systems, search engines, or now chatbots and the like. So the different sectors that we looked at also
04:20
have very different degrees of being permeated by AI. I think one of the sentences we have in there is that the devil is always in the detail, and it is very true: if you want to assess the impact that delegating a certain decision has, it is very important to look at the
04:43
very concrete technology but also the institutional and organizational environment. Just to give you an example: we had one case, which I think we briefly discussed, where software was used to help emergency medicine detect whether or
05:00
not there was bleeding in the brain. The output of the system could either tell you yes or no regarding a bleed, or it could tell you there is a bleed in a certain region of the brain with a 75 percent probability. The underlying model would probably be the same; it is just the output that is very different, and depending on the output this has very different implications for
05:22
the agency of the doctor, because if we assume there is a 51 percent chance that there is a lesion, this would be reported as a yes just as much as a 99 percent result would be. So I think it is very important to think about how exactly to communicate results, and maybe also margins of error and the limitations of the technology, so that the people interacting with
05:43
the technology can use it in a responsible way. And it is not only about the details of the technological interface; it is also about the environment, the organizational context. For instance, does somebody using software to make decisions about probation, jail, or child welfare have to justify agreeing with the recommendation, or do they have
06:03
to justify disagreeing? These factors have a huge impact on how people interact with these technologies, and for that reason it is very important to look into these details.