
Production ML Monitoring: Outliers, Drift, Explainers & Statistical Performance

Formal Metadata

Title
Production ML Monitoring: Outliers, Drift, Explainers & Statistical Performance
Title of Series
Number of Parts
115
Author
Contributors
License
CC Attribution - NonCommercial - ShareAlike 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
The lifecycle of a machine learning model only begins once it's in production. In this talk we provide a practical deep dive on best practices, principles, patterns and techniques around production monitoring of machine learning models. We will cover standard microservice monitoring techniques applied into deployed machine learning models, as well as more advanced paradigms to monitor machine learning models with Python leveraging advanced monitoring concepts such as concept drift, outlier detector and explainability. We'll dive into a hands on example, where we will train an image classification machine learning model from scratch using Tensorflow, deploy it, and introduce advanced monitoring components as architectural patterns with hands on examples. These monitoring techniques will include AI Explainers, Outlier Detectors, Concept Drift detectors and Adversarial Detectors. We will also be understanding high level architectural patterns that abstract these complex and advanced monitoring techniques into infrastructural components that will enable for scale, introducing the standardised interfaces required for us to enable monitoring across hundreds or thousands of heterogeneous machine learning models. Benefits to ecosystem This talk will benefit the ecosystem by providing cross-functional knowledge, bringing together best practices from data scientists, software engineers and DevOps to tackle the challenge of machine learning monitoring at scale. During this talk we will shed light into best practices in the python ecosystem that can be adopted towards production machine learning, and we will provide a conceptual and practical hands on deep dive which will allow the community to both, tackle this issues and help further the discussion.