
Understanding and Implementing Recurrent Neural Networks using Python

Formal Metadata

Title
Understanding and Implementing Recurrent Neural Networks using Python
Title of Series
Number of Parts
132
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Recurrent Neural Networks (RNNs) have become popular because of their ability to retain internal memory. These networks are widely used to recognize patterns in sequential data, such as numerical time series, images, handwritten text, spoken words, and genome sequences. Because these nets possess memory, a certain analogy can be drawn to the human brain in order to learn how RNNs work. RNNs can be thought of as networks of neurons with feedback connections, in contrast to the feedforward connections found in other types of Artificial Neural Networks. The flow of the talk will be as follows:

- Self Introduction
- Introduction to Deep Learning
- Artificial Neural Networks (ANNs)
- Diving DEEP into Recurrent Neural Networks (RNNs)
- Comparing Feedforward Networks with Feedback Networks
- Quick walkthrough: Implementing RNNs using Python (Keras) (a minimal sketch follows below)
- Understanding Backpropagation Through Time (BPTT) and the Vanishing Gradient Problem
- Towards more sophisticated RNNs: Gated Recurrent Units (GRUs) / Long Short-Term Memory (LSTMs)
- End of talk
- Questions and Answers Session
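
As a rough illustration of the kind of walkthrough listed above, here is a minimal sketch of a simple RNN in Keras. It assumes the TensorFlow backend; the layer width, sequence shape, and toy data are illustrative placeholders, not values from the talk itself.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense

# Toy data: 100 sequences, each 10 time steps long with 8 features,
# and one scalar target per sequence (shapes chosen only for illustration).
X = np.random.rand(100, 10, 8)
y = np.random.rand(100, 1)

# The SimpleRNN layer feeds its hidden state back into itself at every
# time step; this feedback connection is what gives the network memory,
# unlike the purely feedforward layers of a standard ANN.
model = Sequential([
    SimpleRNN(32, input_shape=(10, 8)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

Training such a network unrolls it through time (BPTT), which is also where the vanishing gradient problem arises; swapping SimpleRNN for a GRU or LSTM layer is the usual remedy discussed later in the talk.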