
The Myth of Neutrality: How AI is widening social divides

Formal Metadata

Title
The Myth of Neutrality: How AI is widening social divides
Series title
Number of parts
115
Author
Contributors
License
CC Attribution - NonCommercial - ShareAlike 4.0 International:
You may use, modify, and reproduce, distribute, and make publicly available the work or its content, in unchanged or modified form, for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and that you pass on the work or its content, including in modified form, only under the terms of this license.
Identifiers
Publisher
Publication year
Language

Content Metadata

Subject area
Genre
Abstract
Imagine you're a Black woman whose face is not recognized by a government photo booth, no matter how you position yourself in front of the camera. Imagine you're a woman whose loan application is rejected, while your partner - who has a similar income and credit history - gets his approved. Imagine you're an African American man arrested by the police because your face was mistakenly matched to a man involved in an armed robbery. In these real-world examples, the people affected might not know that they are being treated unfairly by Artificial Intelligence (AI). And even if they did, they would not be able to do anything about it. While they may be used to handling discrimination by humans, algorithmic discrimination is a different story: you cannot argue with the algorithms and, due to their inherent scalability, you might be confronted with them wherever you go. We often expect AI technology to be neutral, but it's far from it. The reason is that - especially when we are not aware of it - we transfer existing stereotypes into these systems through our current data collection practices, our development processes, and the ways we apply these technologies within our societies. My talk will shed light on how algorithms become discriminatory, how difficult it is to build "fair and responsible" AI, and what we should do to prevent the systems we build from cementing existing injustices.