
#bbuzz: Adversarial Attacks on Deep Learning Models in NLP

Formal Metadata

Title
#bbuzz: Adversarial Attacks on Deep Learning Models in NLP
Number of Parts
48
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
This talk is about how adversarial attacks can manipulate deep learning models and drastically alter how they interpret data. It focuses on large textual datasets and on how Natural Language Processing models can end up being trained on corrupted information. Because these attacks compromise deep learning models and change the meaning of the data, protecting models against them is critical to protecting the data itself. The talk describes measures that can be implemented to defend Natural Language Processing models against such attacks.
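
To make the kind of attack concrete, here is a minimal, self-contained Python sketch. It is not taken from the talk: the classifier and its keyword lists are hypothetical stand-ins for a trained model. It shows how a character-level perturbation that a human reader barely notices can flip a text classifier's output.

# Minimal sketch of a character-level adversarial attack on text.
# `toy_classifier` is a hypothetical stand-in for a trained NLP model:
# it scores a sentence by counting known sentiment words.

POSITIVE = {"great", "good", "excellent"}
NEGATIVE = {"bad", "terrible", "awful"}

def toy_classifier(text: str) -> str:
    """Return 'positive' or 'negative' from simple keyword counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative"

def perturb(text: str) -> str:
    """Adversarial perturbation: swap letters for visual lookalikes so a
    human still reads the same words, but the model's vocabulary no
    longer matches them (e.g. 'great' -> 'gr34t')."""
    return text.replace("a", "4").replace("e", "3")

original = "The keynote was great and the demos were excellent"
print(toy_classifier(original))           # -> positive
print(toy_classifier(perturb(original)))  # -> negative: the attack flips the label

Real attacks follow the same principle against trained models, searching for small character- or token-level edits that push an input across the model's decision boundary while preserving its meaning for a human reader.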