Driven by its accessibility and ubiquity, deep learning has expanded rapidly into a variety of fields in recent years, including many safety-critical areas. With the rising demands for computational power and speed in machine learning, there is a growing need for hardware architectures optimized for deep learning and other machine learning models, especially in tightly constrained edge-based systems. Unfortunately, the modern fabless business model of hardware manufacturing, while economical, introduces security deficiencies throughout the supply chain. In addition, the embedded, distributed, unsupervised, and physically exposed nature of edge devices makes hardware and physical attacks on them critical threats. In this talk, I will first introduce the landscape of adversarial machine learning on the edge. I will discuss several new attacks on neural networks from the hardware or physical perspective. I will then present our method for inserting a backdoor into neural networks. Our method is distinct from prior attacks in that it alters neither the weights nor the inputs of a neural network; rather, it inserts a backdoor by altering the functionality of the operations the network performs on those parameters during the production of the neural network.

Joseph Clements works with Dr. Yingjie Lao's Secure and Innovative Computing Research Group, conducting research on adversarial AI in edge-based deep learning technologies. In the fall semester of 2017, Joseph joined Clemson University's Holcombe Department of Electrical and Computer Engineering in pursuit of his PhD. He graduated with a bachelor's degree in computer engineering from the University of South Alabama in May 2016. There, he engaged in research with Dr. Mark Yampolskiy on the security of additive manufacturing and cyber-physical systems. His research interests include machine learning and artificial intelligence, security, and VLSI design.