
Applied MLOps to Maintain Model Freshness on Kubernetes

Formal Metadata

Title
Applied MLOps to Maintain Model Freshness on Kubernetes
Title of Series
Number of Parts
69
Author
Contributors
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
As machine learning becomes more pervasive across industries, the need to automate the deployment of the required infrastructure grows with it. With data velocity increasing every day, keeping models fresh is increasingly important. Combined with the ever-growing popularity of Kubernetes, this calls for a full-cycle, containerized method for maintaining model freshness. In this talk we will present a containerized architecture that handles the lifecycle of an ML model. We will describe the technologies and tools we used, along with the lessons we learned along the way. We will show how fresh training data can be ingested and how models can be trained, evaluated, and served in an automated and extensible fashion. Attendees of this talk will come away with a working knowledge of how a machine learning pipeline can be constructed and managed inside Kubernetes. All code presented will be available on GitHub.
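
To make the ingest/train/evaluate/serve loop described in the abstract concrete, below is a minimal, hypothetical Python sketch of the kind of retraining job that could be packaged in a container and run on a schedule (for example by a Kubernetes CronJob). The file paths, accuracy threshold, and helper names are illustrative assumptions, not the talk's actual code, which is said to be on GitHub.

"""
Hypothetical sketch of a containerized retraining job for keeping a model fresh.
Paths, thresholds, and helper names are assumptions for illustration only.
"""
import json
import pathlib
import pickle

from sklearn.datasets import make_classification   # stand-in for real data ingestion
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MODEL_DIR = pathlib.Path("/models")   # assumed shared volume mounted into the pod
MIN_ACCURACY = 0.90                   # assumed threshold for promoting a candidate


def ingest():
    """Stand-in for pulling the latest labelled data from storage or a stream."""
    X, y = make_classification(n_samples=5_000, n_features=20)
    return train_test_split(X, y, test_size=0.2)


def train(X_train, y_train):
    """Train a fresh candidate model on the newly ingested data."""
    model = RandomForestClassifier(n_estimators=100)
    model.fit(X_train, y_train)
    return model


def evaluate(model, X_test, y_test):
    """Score the candidate on held-out data."""
    return accuracy_score(y_test, model.predict(X_test))


def serve(model, accuracy):
    """Persist the candidate so a serving deployment can pick it up; a real
    pipeline would more likely push to object storage or a model registry."""
    MODEL_DIR.mkdir(parents=True, exist_ok=True)
    with open(MODEL_DIR / "model.pkl", "wb") as fh:
        pickle.dump(model, fh)
    (MODEL_DIR / "metadata.json").write_text(json.dumps({"accuracy": accuracy}))


if __name__ == "__main__":
    X_train, X_test, y_train, y_test = ingest()
    candidate = train(X_train, y_train)
    acc = evaluate(candidate, X_test, y_test)
    if acc >= MIN_ACCURACY:
        serve(candidate, acc)   # promote only candidates that clear the threshold
    else:
        raise SystemExit(f"candidate rejected: accuracy {acc:.3f} < {MIN_ACCURACY}")

Scheduling this entrypoint as a recurring Kubernetes job would give an automated loop in which each run ingests fresh data, retrains, evaluates, and only promotes models that meet the quality bar, which is one common way to keep a served model fresh.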