
AI VILLAGE - Beyond Adversarial Learning – Security Risks in AI Implementations

Formal Metadata

Title
AI VILLAGE - Beyond Adversarial Learning – Security Risks in AI Implementations
Series Title
Number of Parts
322
Author
License
CC Attribution 3.0 Unported:
You may use, adapt, and copy, distribute and transmit the work or content in changed or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
A year after we discovered and reported a number of CVEs related to deep learning frameworks, many security and AI researchers have started to pay more attention to the software security of AI systems. Unfortunately, many deep learning developers are still unaware of the risks buried in AI software implementations. For example, by inspecting a set of newly developed AI applications, such as image classification and voice recognition, we found that they make strong assumptions about the input format used for training and classification. Attackers can easily manipulate the classification and recognition without putting any effort into adversarial learning. In fact, the potential danger introduced by software bugs and a lack of input validation is much more severe than a weakness in a deep learning model. This talk will show threat examples that produce various attack effects, from evading classification, to data leakage, and even to whole-system compromise. We hope that by demonstrating such threats and risks, we can draw developers' attention to software implementations and call for a collaborative community effort to improve the software security of deep learning frameworks and AI applications.
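
As a minimal illustration of the input-format assumption the abstract describes (a hypothetical preprocessing sketch, not code from the talk), the routine below silently resizes any input to a fixed model size with OpenCV. Because downscaling discards most source pixels, a crafted high-resolution image can become a completely different image after resizing, steering the classifier without any adversarial perturbation.

    import cv2

    # Hypothetical fixed input size assumed by the model.
    MODEL_INPUT_SIZE = (224, 224)

    def preprocess(image_path):
        # Accepts an image of any attacker-controlled resolution.
        img = cv2.imread(image_path)
        if img is None:
            raise ValueError("could not decode image: " + image_path)
        # Silent downscale to the model's expected size: with the default
        # bilinear interpolation, only a sparse subset of source pixels
        # influences the output, so a large image can be crafted to look
        # like one thing to a human viewer and another to the model.
        return cv2.resize(img, MODEL_INPUT_SIZE)

Validating that the input's native resolution matches (or is close to) the model's expected size, or using a scaling-robust preprocessing step, would close this particular gap.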