
Poisoned pickles make you ill

Formal Metadata

Title
Poisoned pickles make you ill
Title of Series
Number of Parts
141
Author
Contributors
License
CC Attribution - NonCommercial - ShareAlike 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Don’t you love pickles? In the data science space, the pickle module has become one of the most popular ways to serialise and distribute machine learning models. Yet pickles introduce a wide range of problems. For starters, it is incredibly easy to poison a pickle, and once poisoned, a pickle can be used by an attacker to inject arbitrary code into your ML pipelines. What’s even worse: it’s incredibly hard to detect whether a pickle has been poisoned! The good news? Help is on the way! You now have access to a growing number of tools that help you generate higher-quality pickles, and when those are not enough, you can always draw inspiration from the DevOps movement and its trust-or-discard processes. This talk will show you how widespread pickles are and how easy it is to poison models serialised with pickle, but also how easy it is to start protecting them from attacks.
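
To make the poisoning claim concrete, here is a minimal, self-contained sketch (an illustration, not material from the talk): __reduce__ lets any class nominate a callable plus its arguments, and pickle.loads() invokes that callable during deserialisation. The echoed message stands in for what could be any shell command.

import os
import pickle

class Poison:
    # __reduce__ tells pickle how to reconstruct this object; here it
    # asks pickle to call os.system with an attacker-chosen argument.
    def __reduce__(self):
        return (os.system, ("echo 'code ran during unpickling'",))

payload = pickle.dumps(Poison())

# The victim only has to load the bytes; no attribute access or call needed:
pickle.loads(payload)  # runs the os.system payload immediately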
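On the detection side, the standard library's pickletools module can disassemble a pickle without executing it, which is one safe way to look for suspicious imports. In this sketch the file name is a placeholder; GLOBAL/STACK_GLOBAL opcodes show which callables the pickle imports, and REDUCE marks where they are invoked.

import pickletools

# Disassemble the opcode stream without running it; os.system showing
# up as an import in a "model" file is a red flag.
with open("model.pkl", "rb") as f:  # placeholder path
    pickletools.dis(f.read())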
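And as one example of a trust-or-discard process, a loader can refuse any artefact whose digest does not match a value published through a trusted channel. This is a sketch under assumptions: load_trusted and its expected digest are hypothetical, not part of the talk.

import hashlib
import pickle

def load_trusted(path, expected_sha256):
    # Trust-or-discard: unpickle only artefacts whose SHA-256 digest
    # matches a known-good value distributed out of band.
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("digest mismatch; discarding untrusted pickle")
    return pickle.loads(data)  # reached only for trusted artefacts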