
Despicable machines: how computers can be assholes


Transcript
...but hopefully that title is already justified, and the rest of the talk will just reinforce it. Just as a side note: I know it's late on the fourth day, so I have some gratuitous references to British pop culture in the presentation, and if you spot any, shout and you get a sweet.

OK, so before we dive in, I just want to credit some people who have really inspired me to think about this. The first is a researcher on technology and society who thinks very deeply about these things; she has written a whole list of articles, not only on this topic but on many related things, as well as a book, and there are talks of hers you can watch. Cathy O'Neil wrote a book on the topic called Weapons of Math Destruction, which you have probably heard about already; it is more or less the bible of this area. And finally Katharine, who gave the amazing keynote yesterday morning, is a member of our own community, and she also gave a talk touching on the topic I'm going to cover today at another conference a couple of months ago. So I have tried to approach things from a slightly different perspective than the areas they covered; I'll do my best, but I can't hope to cover everything as well as they do. One more thing: it is also kind of interesting that these are all women with different backgrounds, which I think shows how important diversity is. I know our community talks about that a lot, but there is still work to do.

OK, so: does anyone know
who this is? This is Deep Thought, the computer from The Hitchhiker's Guide to the Galaxy. It was built by super-intelligent, multidimensional beings who at some point got fed up with existential angst and decided to build a computer to give them the answer to life, the universe and everything. So they built this computer and asked it for the answer, and the computer told them to come back in seven and a half million years. Well, what is a bit of waiting for such an important answer? So they waited seven and a half million years, they came back, and the computer told them that the answer is 42, but that it doesn't know what the question is.

Now, this is just a funny bit of a science fiction novel, so what does it have to do with reality? I think there is a key connection. One of the reasons The Hitchhiker's Guide to the Galaxy is so funny is that underneath the jokes there is pointed criticism of our society and the way we think, and it raises some really good points. First of all: if your question is rubbish, you will not get a useful answer. That might seem pretty obvious, but keep it in mind, because we will come back to it a little later; it is not always obvious to everyone. Secondly, let's think for a second about why the super-intelligent, multidimensional beings decided to build a computer to answer this question. Why didn't they instead decide to invest in the humanities and set up a well-funded philosophy department? I think one of the reasons is that we think computers are objective. A philosopher has their own personal bias, their own point of view; basically they cannot be trusted, because they are biased in some human way. Computers, on the other hand, are governed by ones and zeros, algorithms and logic gates; they have no morals, so whatever answers they give us are unbiased and objective, and therefore true. Right? Does everyone agree that the answers computers give us are always true?

OK, so I don't have a lot of work to do to convince you, though admittedly I was posing the question in a very suggestive way. But let's say I was exaggerating a bit: fine, computers aren't always giving us the right answers, but when they give us a wrong answer it is pretty easy to tell. You might have a convolutional neural network classifying images, and it might misclassify a picture of a cat as a penguin; you look at that and it is an obvious mistake, very easy to spot. So do you at least agree that when a computer is wrong, you can tell? At least one of you agrees... OK, so almost no one agrees.

And even if you can't always spot the mistakes, I guess I already know the answer to this one too, but do we agree that things like racism and sexism are not things computers do? That they are human prejudices, and computers don't have them? Right, nobody falls into that trap either. So when a computer gives us an answer, the question is whether that answer can contain biases like these. So, do you know
who these people are? There was an article published last year by ProPublica, an investigative news organisation, in which they explained how predictive policing algorithms work and how they get things wrong. First of all, "predictive policing": when I heard that phrase a few years ago it sounded like a joke, a Minority Report-style thing, and it stayed a joke phrase for me for a long time, but then I read up on it and it is not a joke, it is an actual thing.

Both of these people were arrested in Florida in 2014. The woman on the left was arrested for stealing a kid's bike: she rode it for a few meters, dropped it and ran, but she was caught. The man on the right was arrested for shoplifting goods of a similar value to the bike. So they were both arrested for small things at around the same time, independently, and when they were arrested their risk of recidivism, the risk of reoffending, was assessed by a system called COMPAS, which is used in Florida. You put data about the person into the system, and the system says this person is at high risk of reoffending, this person is at low risk. In this case she was given a high-risk score and he was given a low-risk score.

Now, the article was published in 2016, two years after all this happened, and by that point we already knew that the system had got it wrong: after two years she had not reoffended, while he had broken into a warehouse, stolen much more valuable things, and was in prison. On its own that isn't damning; shit happens, we know that computers aren't perfect, so sometimes they get things wrong. And let's put aside for now the question of whether any kind of algorithmic inaccuracy is acceptable at all for such an important system, because there are even bigger problems. ProPublica found that there is a racial bias in the system. People of different races tend to be misclassified at more or less the same rates; however, white people, when they are misclassified, tend to be given too low a risk score, while black people, when they are misclassified, tend to be given too high a risk score. So there is racial bias in the system, and that seems pretty important. There is much more to it: the article itself is really interesting, there is a separate blog post detailing the analysis they did, and there are also some rebuttals; I really encourage you to look into it, it is a very interesting topic.
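For readers of the transcript who want to see what this kind of check looks like in code, here is a minimal sketch — not ProPublica's code or data; the table, column names and numbers are made up — comparing false positive and false negative rates between two groups:

```python
import pandas as pd

# Hypothetical data: 1 = flagged/reoffended, 0 = not. Purely illustrative.
df = pd.DataFrame({
    "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
    "predicted":  [  1,   0,   1,   0,   1,   0,   0,   0],
    "reoffended": [  0,   0,   1,   1,   0,   0,   1,   0],
})

for group, rows in df.groupby("group"):
    # False positive rate: flagged as high risk among those who did not reoffend.
    fpr = rows[rows.reoffended == 0].predicted.mean()
    # False negative rate: flagged as low risk among those who did reoffend.
    fnr = 1 - rows[rows.reoffended == 1].predicted.mean()
    print(f"group {group}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```

Large, systematic differences in these per-group error rates are exactly the kind of signal the ProPublica analysis reported.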
So let's see how it might happen that a system like this ends up biased. Do you know what these are? These are word vectors. Let me quickly explain for those of you who don't know. In 2013 there was a paper published by researchers at Google on natural language processing, and it introduced a system called word2vec. The idea is that you take some corpus of text, you put it into this model, and it spits out an embedding. An embedding is a representation where each word is described by a set of coordinates — in this case something like 300 coordinates, 300 dimensions — but we can think of it as a three-dimensional space in which each word has a position. If you imagine the word "man" and the word "king", there is a vector that takes you from one to the other. The cool thing about word2vec is that these vectors, the relationships between words, are meaningful, in the sense that you can take this vector and, instead of applying it to the word "man", apply it to the word "woman", and instead of getting to "king" you get to "queen". It allows you to do this kind of vector arithmetic.

Ah, a question — right, the axes are arbitrary; these are just vectors in a high-dimensional space, and this is only a conceptual representation in two dimensions to get the idea of the relationships between words across. Word2vec takes words and puts them into some kind of space whose axes don't necessarily have an interpretable meaning; it just models relationships, so, for example, things close to each other might be related and things far from each other might not be. And I don't want to make this talk about word2vec — we can talk about it afterwards, it's not the topic of the talk — the point is that this is very widely used. It is extremely useful and it is used in many, many papers, including at NIPS, the premier conference for AI; basically any paper with "neural" in the title builds on this kind of thing.

And last year there was a paper that described how word2vec is also biased. The model is trained on a huge corpus of data, and most people writing news articles are not, I assume, writing them with intentional bias, but just because of the way our society works, there are some biases in there. One of the things the authors discovered is that if you take the word "man" again and apply a certain vector, you get to "computer programmer" — you can think of that vector, simplifying a bit, as taking you from a person to a profession. If you take the same vector and apply it to "woman", it takes you to "homemaker". So there is a clear bias about which kinds of professions people are expected to have, even though nobody put it there on purpose.

Absolutely — it is a problem of the training data. That is exactly the point: even though your technique might be completely fine, if you use biased training data you will get a biased result. That is not surprising, but it still took three years after the word2vec paper was published for this paper detecting the bias to appear. The paper also shows ways of addressing it, which are basically about transforming the embedding — if you have heard of metric or distance learning you have some idea — warping the space so that it satisfies some constraints: some relationships should be
kept, and some other relationships should be thrown away.
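As an aside for readers of the transcript: the vector arithmetic described above can be reproduced with the gensim library and a pretrained embedding. This is a minimal sketch, not code from the talk, and whether the "homemaker" analogy shows up depends on which embedding you load:

```python
import gensim.downloader as api

# Downloads a pretrained embedding on first use (this one is large, ~1.6 GB).
vectors = api.load("word2vec-google-news-300")

# man -> king, applied to woman, lands near queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The same trick exposes the bias described in the debiasing paper:
# man -> computer_programmer, applied to woman.
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=3))
```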
There are also other things you can do. For example, say you have some training data that you gathered yourself, and you know there are some features in it that tend to be biased, perhaps because of the way you gathered it. You might explicitly decide, after thinking it through, that you do not want to use those features for classification, so you just drop those columns, and you might be fine. But that is not always the case: it is possible that other features in your training set encode the feature you dropped. Think of an example like data about people — name, where they live, gender, race and so on — and say you drop gender and race. If you look at a row and you are given someone's name and their postcode, you have a good chance of figuring those attributes out anyway; and our models are capable of doing the same. We spend lots of time making sure our models can build these hierarchical representations and run with them, so they might still use that information in their classification, even though we might not want them to.
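A simple way to check for this proxy problem is to try to recover the dropped attribute from the features you kept: if a basic model can do it, the information is still in your data. The following is a minimal sketch with scikit-learn on synthetic data; the column names and numbers are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a real table: we "drop" the sensitive column,
# but postcode is correlated with it, so the information is still there.
n = 500
sensitive = rng.integers(0, 2, size=n)                  # attribute you meant to exclude
postcode = sensitive * 2 + rng.integers(0, 2, size=n)   # correlated proxy feature
experience = rng.normal(5, 2, size=n)                   # unrelated feature

features = pd.DataFrame({"postcode": postcode, "experience": experience})

# If a simple model recovers the dropped attribute far better than chance,
# proxies for it remain in the data you kept.
scores = cross_val_score(LogisticRegression(), features, sensitive, cv=5)
print(f"recovered 'sensitive' from remaining features: {scores.mean():.2f} accuracy")
```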
To show what is at stake, let me share a story that one of the people I mentioned earlier also told. She went to a conference for HR professionals at some point and found that everyone was really excited about systems for taking job descriptions and CVs and matching the best people to the job. You can see why: you don't have to trawl through the CVs manually, it just does it for you, and if you have ever done any hiring you know it is a huge amount of work to read all that stuff and figure out who is a good fit for what. So a system that does it automatically sounds great. Hold that thought for a second.

At the same time, I am aware of these two papers showing that it is possible to take data about someone from social media and predict, fairly accurately, the risk of that person getting depression at some point in the future. They looked at posts from Instagram in one case and Twitter in the other, and they were able to detect who was likely to be diagnosed with depression some time later, before the initial diagnosis. Which is kind of cool — it is impressive that it is possible, and I am sure it can be used for good — but it is also a little bit scary.

Now put two and two together. Imagine that the system you build for matching people to jobs is trained on data about the people who already work for you: you know who the high performers and low performers are, you can extract features and train a system on that, and there is no reason you couldn't also pull in publicly available social media. The scary thing is that the system might then discriminate on things you really do not want to discriminate on — likelihood of depression, likelihood of being pregnant, things you are explicitly trying not to discriminate against. They might still be encoded in the data, and you might not even know about it.

This is a tricky problem to defend against. The best tool I know of — I think Katharine mentioned this in her talk as well — is a legal term, in the US at least, called disparate impact. There is a precise definition, a formula, and it treats your model as a black box: you can figure out whether your black box produces biased results just by looking at what goes in and what comes out. The thing is that it takes a bunch of effort: you need to be conscious that this is possible and actively try to investigate it. But I hope I can convince you that this is effort well spent.
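The talk does not spell the formula out, but disparate impact is usually operationalised as the "four-fifths rule": compare the rate of favourable outcomes between groups, treating the model as a black box. A minimal sketch with made-up predictions:

```python
import numpy as np

def disparate_impact(y_pred, group, protected_value):
    """Ratio of favourable-outcome rates: protected group vs. everyone else.
    Values below ~0.8 are commonly treated as evidence of disparate impact."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    protected_rate = y_pred[group == protected_value].mean()
    other_rate = y_pred[group != protected_value].mean()
    return protected_rate / other_rate

# Hypothetical model outputs: 1 = favourable decision (e.g. invited to interview).
y_pred = [1, 0, 0, 0, 1, 1, 1, 0, 1, 1]
group  = ["x", "x", "x", "x", "x", "y", "y", "y", "y", "y"]

print(f"disparate impact ratio: {disparate_impact(y_pred, group, 'x'):.2f}")
```

With these made-up numbers the ratio comes out at 0.50, well below the 0.8 threshold that is commonly used as a red flag.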
Another thing to keep in mind is that some errors can be very unintuitive. I mentioned a cat being misclassified as a penguin, but that kind of mistake we can at least understand. At NIPS this year there is, for the first time, a competition about adversarial examples: you have a classifier, and the aim is to construct a data sample that looks fine to a human but gets deliberately misclassified by the model. So some teams will construct specially crafted inputs to attack classifiers, and at the same time other teams will try to build models that are robust to this kind of attack. It will be a very interesting thing to watch.

But there is another, perhaps even scarier problem: there are things that no one really intends to happen, but they happen anyway, because some models are not interpretable, and so they make mistakes in ways that we cannot really comprehend as humans. There was a talk earlier today — it was really awesome — about interpretable models, and there is lots of research going into that area, so things will probably get better from that perspective, but it is still not perfect. I just want to point out something to keep in mind: the errors that AI tends to make are often very different kinds of errors from the ones humans make.
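To make the adversarial-example idea concrete: take an input, nudge it in the direction that increases the model's loss, and you can get a misclassification from a change a human would barely notice. Below is a minimal FGSM-style sketch in PyTorch with an untrained toy model and a random "image"; it only demonstrates the mechanics, not a real attack on a trained classifier, so the prediction may or may not actually flip here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier and a random "image"; a real attack would target a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32, requires_grad=True)
label = torch.tensor([3])

# Fast Gradient Sign Method: one step along the sign of the input gradient.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()
epsilon = 0.03                                            # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```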
Remember the point about rubbish questions? I promised to come back to that. Did you ever see this paper? To be fair, this paper was not peer reviewed, it was just published on arXiv, so don't read too much into it, but it made a lot of waves and was covered in mainstream media, so I think it is important to talk about it. In this paper they had a dataset of facial images, and for each one they had a label saying this person is a criminal, this person is not, and they tried to take a face and predict whether someone is a criminal or not. The technique is not really interesting — take AlexNet, or something similar if you know about these architectures, retrain it a little — and they report very good accuracy. After it was published a lot of people were outraged, rightly so, and there are many problems with it. But let's just think about the data for a moment: the photos labelled as criminals are pictures of known criminals. Do criminals never smile, unlike other people? Are people in white collared shirts never criminals? I think that is a little questionable. So again, it is a silly example in some ways, but it illustrates something important: make sure that the question you are asking even makes sense.
Finally — yes, this is the Little Britain one, well spotted. This is Carol, the receptionist, and she often "helps" people who are trying to get admitted to hospital or to get a bank loan, and her trick is that the computer says no to everything. It is a comedy sketch, but it illustrates the point: we get excited about developing these helpful systems, not only to make our jobs easier but to make us more efficient at making decisions and so on, and then it turns out that a lot of the time we just defer the decision to the computer — and when the computer says no, well, "I'm not allowed to do that", or "I'm not able to do that". This is again something to think about and keep in mind: whenever you are developing something, some model that is supposed to help, consider the implications of what happens when the system gets it wrong.

I also recall — again from Katharine's keynote yesterday — the story about the guy who helped develop a system for a bank to process cheques. He wondered whether it was an ethical thing to do, because if they hadn't done it the bank would have had to innovate and reorganise itself; instead they got this new technical thing and could just keep the status quo. I think that is happening right now with AI as well: there are millions of decisions that need to be taken every second, we are incapable of taking them as humans, so we just hand them to the machines. But maybe that is not always the right thing to do.

So basically what I would like you to do is, first of all, read up on this topic — you already seem well aware that this is a problem — and keep thinking about it and talking about it with your co-workers and friends, at meetups and conferences. I encourage you to really take it seriously: it can be really awesome if we do this right, and it can be kind of catastrophic if we don't. So the answer
to why computers can be assholes is: because we make them so. So please don't. Thank you.

Question: I think you framed the talk as: the data we put into our models is biased, and we need to somehow protect our models from this biased data. Maybe you could turn the question around: we found out that there is bias, we are actually able to quantify it, so we could publish that, tell people whether this kind of bias is in the data, and make people aware of these issues — make it part of the data flow and data quality work, rather than something we only discover at the very end.

Answer: Yeah, I think that is a more optimistic, maybe more constructive way of posing the problem. As was mentioned just a couple of talks ago, there is a Python package called eli5 — "explain like I'm five" — which takes a model and tells you how the model makes its decisions. It doesn't necessarily answer the question of what the bias is, but it might help you interpret the model and then gather the evidence. I think there is much work to be done in this area, but yes, doing this kind of analysis and publishing it seems like the right thing to do.
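For reference, the package mentioned here is eli5. The sketch below shows a typical usage pattern on a tiny scikit-learn text classifier; the dataset and model are placeholders, not anything from the talk, and the calls shown (explain_weights / explain_prediction / format_as_text) are the documented eli5 entry points.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
import eli5

# Tiny placeholder dataset, just to have something to fit.
texts = ["great fit for the role", "strong python experience",
         "no relevant experience", "poor communication skills"]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Global view: which features push the decision which way.
print(eli5.format_as_text(eli5.explain_weights(clf, vec=vec)))

# Local view: why one particular input got the score it did.
print(eli5.format_as_text(eli5.explain_prediction(clf, "solid python experience", vec=vec)))
```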
Question: What is your concrete recommendation with respect to the most problematic cases — predictive policing, using predicted recidivism to direct the police, and, something that has happened recently at least in the US, using it in sentencing judgements, making people's prison sentences longer because the system considers them a greater threat? Is this something that simply shouldn't happen, or what is your recommendation?

Answer: Well, just to point out one thing first: the criminal justice system of the state of Florida is one of the worst examples in the world, so any model working inside that system has got itself a problem anyway. But no, it is still something that I believe should be pointed out. As soon as you get into ethical questions, moral questions, there are no silver bullets, so I can't give a recommendation saying "do this", because even if it is right most of the time, there will be cases where it is wrong. So I will hesitate to give you a straightforward recommendation; the best thing I can come up with is: think about it. If the people building COMPAS had explicitly thought about racism and tested for it, this could have been avoided. Of course, COMPAS is embedded in the larger context of the whole justice system of the state, and of the US, so that alone would not prevent everything, but we have the power to change things a little bit at a time, and I think we should try.

Question: You said we can find some of the biases and try to remove them by transforming the features, but I don't really like that, because you are essentially injecting your own personal view into the data. Is the modified data any more real?

Answer: Sure, it is a good point, and I struggled with it for a while too, and I'm not sure if my answer is right, but the way I see it, it is not about taking pristine data and messing with it — it is about taking already messed-up data and messing with it in a different, deliberate way. You are absolutely right that you are modifying it manually, but you have the context, you understand what it represents, and you have opinions about it: "I think this kind of bias is likely to be in here, so let me try to make sure it doesn't end up in the model." There are very few cases where you can just take data and say it is a perfect reflection of the world. So to me that is kind of the answer: just try to make it a little better, knowing that neither version is the true state of the world; the data you have after gathering it is not true in an absolute sense either.

Question: [partly inaudible — about these systems essentially learning correlations across a population rather than anything causal, and what that means when a classifier gets it wrong.]

Answer: OK, I'm not quite sure I got the whole question, but thinking about it from that kind of compression perspective is true sometimes; there are other settings that are much more difficult to put into this framing, like reinforcement learning. Without getting too deep into it: with intelligent agents, the relationship between what they do and the reward they get can be very non-straightforward, and the reward might end up being maximised in a way that is not what you intended at all. I think that is part of the same problem; there are many different areas where this kind of issue shows up when you look at it from that perspective.

Question: I like the idea mentioned over there, of taking a model, running it over a corpus and finding out afterwards that there is bias in it. Could we then turn it back around and say: OK, here is what we need to change in the world — the news stories we write, the way things are reported — so that we can actually fix the world, not just the data?

Answer: Yeah, absolutely. Again, no easy answers, but the reason I think this is important is that when you build a solution that is biased, it doesn't only reflect the world, it actively reinforces the bias, because of this perceived objectiveness: people tend to trust these systems. So you are actually making things worse if you deploy really biased models. Not doing that is one way to avoid adding injustice to the world, but obviously we shouldn't stay inside our bubble, and there are lots of other things you can do: get involved with your local newspaper and help them out, join a political party, or whatever you think is the right thing to do — it doesn't necessarily have to involve programming. We probably have time for a couple more questions.

Question: People publish articles saying, for example, that the news on Google News is biased in this or that way. Would you suggest that companies take the data going into their machine learning applications and analyse it internally, so they can detect these problems in-house and fix them in a timely way — the same way they maintain the standards of,
say, their chocolate bars?

Answer: Yeah, I think about it a little bit like any other kind of QA: you can try to find the bugs yourself, or you can wait for your customers to tell you about them. It is the same thing here: you can have some process for trying to discover these issues before you release, or you can wait for ProPublica to come to you and tell you that your model is biased. I don't think this is very common right now, as far as I am aware, but whenever you use a model you already have to test whether it is still performing well, and we should definitely think about incorporating this kind of bias test into those processes.

Question: You mentioned the study that used social media data to predict future depression. Firstly, how did they define who would become depressed in the future? And secondly: in medical studies involving people you have to go through an ethics committee to make sure the study is ethical; should something like that be mandatory for models which will affect people's lives in this way?

Answer: To answer the first question properly I would have to re-read the paper — the references are up here, so please do look it up, it is a good question. The second question was whether there should be an ethics committee to approve our models before we use them in production — whether the software industry should move towards the model the medical industry uses, with extensive ethical review. There are definitely different requirements: some software is mission-critical, like anything governed by medical regulation, and that already has these processes. If you are, say, Instagram, it might seem like nothing much can go wrong with a photo-sharing feature, so you don't think of it that way — but then things like this happen and all of a sudden you are involved. I don't know whether some kind of mandatory ethics-officer requirement would be a good idea, because it has costs as well — you iterate much more slowly, and so on — but I think where we are now is at the other end of the spectrum: we don't think about it enough. I think we should pull towards that side, even if we don't want to go all the way there.

Question: What can we actually do to make the general public more aware that computers can actually be assholes?

Answer: Great question. If you write, write stories; if you are a filmmaker, make films or animations. I think if we are aware of it as a community — and it seems like we increasingly are — these things will seep through; there is already a lot of knowledge about AI leaking into the general public, everyone has heard about things like the Google Photos incident. So these things slowly seep through, and my most boring but straightforward answer is: just be aware of it, and everything else will follow. If you want to specifically focus on public outreach, there is certainly space for that, but I am not an expert in that area.

Comment from the audience: On the earlier question about the depression study — they found out whether a person became depressed by running the study on a lot of people and using posts where someone later reported a diagnosis, for example writing "hey, I just got diagnosed with depression", as the ground truth.

Thank you.

Metadata

Formal metadata

Title Despicable machines: how computers can be assholes
Series title EuroPython 2017
Author Gryka, Maciej
License CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, adapt and copy, distribute and make the work or content publicly available in unchanged or adapted form for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner specified by them and pass on the work or this content, including in adapted form, only under the terms of this license.
DOI 10.5446/33745
Publisher EuroPython
Publication year 2017
Language English

Content metadata

Subject area Computer Science
Abstract Despicable machines: how computers can be assholes [EuroPython 2017 - Talk - 2017-07-13 - Arengo] [Rimini, Italy] When working on a new ML solution to solve a given problem, do you think that you are simply using objective reality to infer a set of unbiased rules that will allow you to predict the future? Do you think that worrying about the morality of your work is something other people should do? If so, this talk is for you. In this brief time, I will try to convince you that you hold great power over what the future world will look like and that you should incorporate thinking about morality into the set of ML tools you use every day. We will take a short journey through several problems which surfaced over the last few years as ML, and AI generally, became more widely used. We will look at bias present in training data, at some real-world consequences of not considering it (including one or two hair-raising stories), and at cutting-edge research on how to counteract this. The outline of the talk is: - Intro to the problem: ML algos can be biased! - Two concrete examples. - What's been done so far (i.e. techniques from recently-published papers). - What to do next: unanswered questions
