
Despicable machines: how computers can be assholes

Formal Metadata

Title
Despicable machines: how computers can be assholes
Title of Series
EuroPython 2017
Number of Parts
160
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
2017
Language
English

Content Metadata

Subject Area
Genre
Talk
Abstract
[EuroPython 2017 - Talk - 2017-07-13 - Arengo] [Rimini, Italy]

When working on a new ML solution to a given problem, do you think that you are simply using objective reality to infer a set of unbiased rules that will allow you to predict the future? Do you think that worrying about the morality of your work is something other people should do? If so, this talk is for you. In this brief time, I will try to convince you that you hold great power over what the future world will look like, and that you should incorporate thinking about morality into the set of ML tools you use every day.

We will take a short journey through several problems that have surfaced over the last few years as ML, and AI generally, became more widely used. We will look at bias present in training data, at some real-world consequences of not accounting for it (including one or two hair-raising stories), and at cutting-edge research on how to counteract it.

The outline of the talk:
- Intro to the problem: ML algorithms can be biased!
- Two concrete examples.
- What's been done so far (i.e. techniques from recently published papers).
- What to do next: unanswered questions.
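As a minimal sketch of the kind of bias check the abstract alludes to (not the talk's own method), the snippet below trains a classifier on synthetic, deliberately skewed data and measures its demographic parity gap, i.e. how much the positive-prediction rate differs between two groups. All feature names and numbers are invented for illustration.

```python
# Hypothetical illustration: measuring a demographic parity gap.
# The data is synthetic; a real audit would use real features and groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

group = rng.integers(0, 2, size=n)            # protected attribute (0 or 1)
score = rng.normal(loc=group * 0.5, size=n)   # feature correlated with group
X = score.reshape(-1, 1)
# Historical labels that already favour group 1: biased training data.
y = (score + rng.normal(scale=0.5, size=n) > 0.25).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Demographic parity gap: difference in positive-prediction rates per group.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_1 - rate_0):.2f}")
```

Because the labels were generated from a feature correlated with the group attribute, the model reproduces the skew even though it never sees the group directly, which is exactly the failure mode the talk discusses.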