While papers on adversarial machine learning pile up on arXiv and companies commit to AI safety, what would a system that assesses the safety of an ML system look like in practice? Compare an ML system to a bridge under construction: engineers, along with regulatory authorities, routinely and comprehensively assess the safety of the structure to attest to the bridge’s reliability and its ability to function under duress before opening it to the public. Can we, as security data scientists, provide similar guarantees for ML systems? This talk lays out the challenges and open questions in creating a framework to quantitatively assess the safety of ML systems. The opportunities, once such a framework is put into effect, are plentiful – for a start, we can earn the trust of the population at large that ML systems aren’t brittle; that they simply come in varying, quantifiable degrees of safety.
Ram Shankar is a Data Cowboy on the Azure Security Data Science team at Microsoft, where his primary focus is modeling massive amounts of security logs to surface malicious activity. His work has appeared at industry conferences like BlueHat, DerbyCon, MIRCon, Strata+Hadoop World, and Practice of Machine Learning, as well as academic conferences like NIPS, IEEE, USENIX, and ACM CCS. Ram graduated from Carnegie Mellon University with a Master’s in Electrical and Computer Engineering. If you work at the intersection of Machine Learning and Security, he wants to learn about your work!