
CAAD VILLAGE - GeekPwn - The Uprising Geekpwn AI/Robotics Cybersecurity Contest U.S. 2018 - Hardware Trojan Attacks on Neural Networks

Formal Metadata

Title
CAAD VILLAGE - GeekPwn - The Uprising Geekpwn AI/Robotics Cybersecurity Contest U.S. 2018 - Hardware Trojan Attacks on Neural Networks
Title of Series
Number of Parts
322
Author
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Driven by its accessibility and ubiquity, deep learning has in recent years seen rapid growth into a variety of fields, including many safety-critical areas. With the rising demands for computational power and speed in machine learning, there is a growing need for hardware architectures optimized for deep learning and other machine learning models, specifically in tightly constrained edge-based systems. Unfortunately, the modern fabless business model of hardware manufacturing, while economical, introduces security weaknesses throughout the supply chain. In addition, the embedded, distributed, unsupervised, and physically exposed nature of edge devices makes hardware and physical attacks on them critical threats. In this talk, I will first introduce the landscape of adversarial machine learning on the edge. I will discuss several new attacks on neural networks from the hardware or physical perspective. I will then present our method for inserting a backdoor into neural networks. Our method is distinct from prior attacks in that it alters neither the weights nor the inputs of a neural network; instead, it inserts a backdoor by altering the functionality of the operations the network performs on those parameters during production.

Joseph Clements works with Dr. Yingjie Lao's Secure and Innovative Computing Research Group, conducting research on adversarial AI in edge-based deep learning technologies. In the fall semester of 2017, Joseph joined Clemson University's Holcombe Department of Electrical and Computer Engineering in pursuit of his PhD. He graduated with a bachelor's degree in computer engineering from the University of South Alabama in May 2016. There, he engaged in research with Dr. Mark Yampolskiy on the security of additive manufacturing and cyber-physical systems. His research interests include machine learning and artificial intelligence, security, and VLSI design.
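To make the idea of a backdoor that touches neither weights nor inputs concrete, the sketch below models a hardware trojan as a modified operation in the inference datapath. This is a hypothetical illustration, not the method from the talk: the trigger pattern, the payload, and all names are assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical sketch of a hardware-trojaned operation: the network's weights
# and inputs are left untouched; only the behavior of one operation changes,
# and only when a rare trigger pattern appears in its operands.

TRIGGER = np.array([0.25, -0.5, 0.75, 1.0])  # assumed trigger pattern (illustrative)
TARGET_SHIFT = 10.0                          # assumed payload biasing a target class

def clean_relu(x):
    """The operation as specified by the network designer."""
    return np.maximum(x, 0.0)

def trojaned_relu(x):
    """Behaves identically to clean_relu unless the trigger is present."""
    out = np.maximum(x, 0.0)
    if x.size >= TRIGGER.size and np.allclose(x[:TRIGGER.size], TRIGGER, atol=1e-3):
        out = out + TARGET_SHIFT  # malicious payload fires only on the trigger
    return out

# On ordinary inputs the two operations agree, so the trojan stays dormant
# and is unlikely to be exposed by functional testing of the deployed model.
x = np.random.randn(8)
assert np.allclose(clean_relu(x), trojaned_relu(x))
```

Because the modification lives in the implementation of the operation rather than in the model's parameters or data, inspecting the trained weights or the input pipeline would not reveal it; this is the distinction the abstract draws from prior weight- or input-based attacks.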