
AI and Human Agency


Formal Metadata

Title
AI and Human Agency
Series title
Number of parts
11
Author
License
CC Attribution 4.0 International:
You may use, modify, and reproduce the work or its content for any legal purpose, in unchanged or modified form, and distribute it and make it publicly available, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Year of publication
Language
Producer
Production year: 2023
Production place: Hamburg

Content Metadata

Subject area
Genre
Abstract
Conversations on AI Ethics, Episode 9: In this episode we talk about AI risk with Prof. Dr. Judith Simon. She explains the main risks AI poses to human agency and outlines the German Ethics Council's strategies to mitigate these negative effects.
Keywords
Transcript: English (automatically generated)
I think what we proposed most importantly is to adopt a certain view on AI, namely that we are increasingly delegating decisions to AI, and we wanted to investigate how this delegation affects humans in return: to what extent it expands or diminishes our options, our agency, and also our responsibility. This was the lens through which we looked. We then looked in detail at case studies in medicine, in education, in public administration, but also in the social media context, and tried to derive very specific implications. So it was less that we identified the very generic challenges that are so often listed. We have those as well, as cross-cutting themes, and of course these are the ones you very often come across: bias and discrimination being one, issues around privacy, but also data utilization on the other hand. Some of the topics we discussed were also related to what it does to us as humans that we are delegating more and more decisions to probabilistic machines, because it really changes our way of approaching the world that we are increasingly relying on statistics and probability theory in many domains, which is something we had not done to the same degree previously.

We also very much said that, depending on the context, the goal is of course not only to increase the efficiency of decision-making but very often its quality. One of the cases we looked into was in the social welfare context. If you take a very high-risk, high-impact decision, it would be something about the welfare of children, and in that sense it is a very good case because it indicates how important it is to make good decisions and to improve them. So think about the decision whether a child is suffering abuse or not: you may want to make a prognosis of whether this is happening, the likelihood that the child is suffering abuse in a certain context, and there are two mistakes you can make. You can predict that there is abuse, possibly leading to the action that a child is taken out of the family; but if nothing has actually happened, harm is generated for the child and for the parents, because a child is unnecessarily taken from the family. The other case is that you do not detect abuse and the child is being harmed. This indicates that both errors you can make are highly severe, and you should try to make better decisions that avoid both types of error. The problem, of course, is that very often you can show that these systems are not as good as they are supposed to be, which would be an issue of accuracy, but also that they are heavily biased, so that they very often discriminate against those persons or groups of persons who are already very vulnerable or marginalized in society. So in principle, yes, we should always use systems in such a way that the joint decision between humans and machines is better than it would be without them, but we should also try to avoid being overly optimistic about the outcomes of these systems and failing to understand their potential downsides.

We chose them for different reasons. On the one hand, as the Ethics Council very often has a specific focus on medical and biomedical domains, it was clear that medicine would be one domain we would focus on. For the others there was a bit of discussion about what to focus on, but in the end I think we made this decision because the level of AI usage in these different domains is very different. If medicine is maybe in between, then in the school context we may actually use very little AI technology as of now, whereas the social media context, which has a high impact on public communication and the formation of opinions and was one section, is very heavily mediated through AI, be it through recommender systems, search engines, or now chatbots and the like. So these different sectors that we looked at also have very different degrees of being permeated by AI.

I think one of the sentences we have in there is that the devil is always in the detail, and I think it is very true: if you want to assess the impact that delegating a certain decision has, it is very important to look at the very concrete technology, but also at the institutional and organizational environment. Just to give you an example: one case which we briefly discussed is where software was used to help emergency medicine detect whether or not there was bleeding in the brain. The output of the system could either tell you yes or no regarding a bleeding, or it could tell you there is a bleeding in that region of the brain with a 75 percent probability. The underlying model would probably be the same; it is just the output that is very different, and depending on the output this has very different implications for the agency of the doctor. Because let's assume there is a 51 percent chance that there is a lesion: this would be a "yes" to the same extent that a 99 percent probability would be. So I think it is very important to think about how exactly to communicate results, and maybe also margins of error and the limitations of the technology, so that the people interacting with the technology can use it in a responsible way. And it is not only about the details of the technological interface; it is also about the environment, the organizational context. Does somebody using software to make decisions about probation, jail, or child welfare have to justify agreeing with the recommendation, or do they have to justify disagreeing? These factors have a huge impact on how people interact with these technologies, and for that reason it is very important to look into these details.
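Two of the technical points in the interview can be made concrete with a minimal sketch: a prediction system can make two kinds of errors (flagging abuse where there is none, or missing abuse that is happening), and a binary yes/no interface hides how confident the underlying model actually is, so a 51 percent case looks identical to a 99 percent case. All numbers and names below are hypothetical illustrations, not taken from any system discussed in the interview.

```python
# Hypothetical model probabilities that a case involves abuse, paired with
# the ground truth (which in practice is unknown at decision time).
cases = [
    {"p_abuse": 0.99, "abuse": True},   # confident and correct
    {"p_abuse": 0.51, "abuse": False},  # barely over 50%, and wrong
    {"p_abuse": 0.45, "abuse": True},   # just under 50%: a missed case
    {"p_abuse": 0.10, "abuse": False},  # confident and correct
]

def binary_output(p, threshold=0.5):
    """Binary interface: reports only 'yes'/'no', discarding confidence."""
    return "yes" if p >= threshold else "no"

false_positives = 0  # predicted abuse where none occurred: child removed unnecessarily
false_negatives = 0  # missed abuse: child left in harm

for case in cases:
    decision = binary_output(case["p_abuse"])
    if decision == "yes" and not case["abuse"]:
        false_positives += 1
    if decision == "no" and case["abuse"]:
        false_negatives += 1

# A 0.99 and a 0.51 both collapse to a plain "yes": under the binary
# interface the doctor or caseworker cannot tell them apart.
print(binary_output(0.99), binary_output(0.51))  # yes yes
print(false_positives, false_negatives)          # 1 1
```

The sketch shows why both the choice of threshold and the choice of output format matter: the same underlying probabilities produce different error counts under a different threshold, and reporting the probability itself (rather than only the thresholded decision) preserves information the human decision-maker may need.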