
Explaining behavior of Machine Learning models with eli5 library

Formal Metadata

Title
Explaining behavior of Machine Learning models with eli5 library
Series Title
Number of Parts
160
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content, also in adapted form, is shared only under the conditions of this license
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
Explaining behavior of Machine Learning models with eli5 library [EuroPython 2017 - Talk - 2017-07-13 - Anfiteatro 2] [Rimini, Italy]

ML estimators don't have to be black boxes. Interpretability has many benefits: interpretable models are easier to debug, and humans trust their decisions more. In this talk I give an overview of ML model interpretation and debugging techniques, covering linear models, decision trees, tree ensembles, and arbitrary classifiers via the LIME algorithm. The focus is on the explanation algorithms themselves, because you need to be aware of an explanation method's pitfalls and limitations to interpret its output correctly. I also show how to use these techniques in practice to debug and explain the behavior of estimators from Python ML libraries such as scikit-learn and xgboost, using the open-source eli5 library: https://github.com/TeamHG-Memex/eli5 . Attendees will gain both a practical and a theoretical understanding of these explanation methods. The target audience is ML practitioners who want to 1) get better quality from their ML pipelines - understanding why a wrong decision happens is often the first step toward improving an ML solution; and 2) explain ML model behavior to clients or stakeholders - inspectable ML pipelines are easier to "sell" to a client; humans trust such models more because they can check whether an explanation is consistent with their domain knowledge or gut feeling, better understand the shortcomings of the solution, and make more informed decisions as a result.
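
As a minimal sketch of what this looks like in practice (not taken from the talk; the toy dataset, label names, and the predict_proba helper below are invented for illustration), the snippet uses eli5's public API to explain a linear text classifier globally (explain_weights), locally for one document (explain_prediction), and in a model-agnostic way with the LIME-based TextExplainer:

    # A minimal sketch, assuming eli5 and scikit-learn are installed.
    # Toy data and label names are invented for illustration.
    import eli5
    from eli5.lime import TextExplainer
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["good great film", "fine acting and a great plot",
            "bad boring movie", "awful dull plot"]
    labels = [1, 1, 0, 0]  # 1 = positive review, 0 = negative review

    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

    # Global explanation: the weights the linear model assigns to each token.
    print(eli5.format_as_text(
        eli5.explain_weights(clf, vec=vec, target_names=["neg", "pos"])))

    # Local explanation: which tokens drove the prediction for one document.
    print(eli5.format_as_text(
        eli5.explain_prediction(clf, "great film but boring plot", vec=vec,
                                target_names=["neg", "pos"])))

    # Model-agnostic explanation via LIME: TextExplainer perturbs the
    # document and fits a local white-box model, using only predict_proba.
    def predict_proba(texts):
        return clf.predict_proba(vec.transform(texts))

    te = TextExplainer(n_samples=500, random_state=42)
    te.fit("great film but boring plot", predict_proba)
    print(eli5.format_as_text(
        te.explain_prediction(target_names=["neg", "pos"])))

In a Jupyter notebook, eli5.show_weights and eli5.show_prediction render the same explanations as HTML, with per-token highlighting for text inputs.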