
Theory of Deep Convolutional Neural Networks and Distributed Learning

Formal Metadata

Title
Theory of Deep Convolutional Neural Networks and Distributed Learning
Series Title
Number of Parts
10
Author
License
CC Attribution - NonCommercial - NoDerivatives 4.0 International:
You may use, copy, distribute, and make the work or its content publicly accessible in unchanged form for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner they have specified.
Identifiers
Publisher
Year of Publication
Language

Content Metadata

Subject Area
Genre
Abstract
Deep learning has been widely applied and has brought breakthroughs in speech recognition, computer vision, and many other domains. The deep neural network architectures involved and the associated computational issues have been well studied in machine learning. However, a theoretical foundation is still lacking for understanding the approximation or generalization ability of deep learning methods based on architectures with convolutional structures, such as deep convolutional neural networks. This talk describes a mathematical theory of deep convolutional neural networks (CNNs). In particular, we show the universality of a deep CNN, meaning that it can approximate any continuous function to arbitrary accuracy when the depth of the network is large enough. Our quantitative estimate, given tightly in terms of the number of free parameters to be computed, verifies the efficiency of deep CNNs in dealing with high-dimensional data. Some related distributed learning algorithms will also be discussed.
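As a rough illustration of the universality statement in the abstract, the following informal LaTeX sketch records one common way such a result is phrased; the precise hypotheses (domain, activation, filter lengths) and the quantitative parameter count are those of the talk and are not reproduced here, and the symbols $f$, $\Omega$, $J$, and $f_J$ are illustrative notation only.

% Hedged, informal sketch of a deep-CNN universality statement.
% Exact assumptions follow the talk; notation here is illustrative.
\begin{theorem}[Universality of deep CNNs, informal sketch]
Let $\Omega \subset \mathbb{R}^d$ be compact and let $f \in C(\Omega)$.
Then for every $\varepsilon > 0$ there exist a depth $J$ and a deep CNN
output function $f_J$ built from convolutional layers such that
\[
  \|f - f_J\|_{C(\Omega)} \;=\; \sup_{x \in \Omega} \bigl| f(x) - f_J(x) \bigr| \;\le\; \varepsilon .
\]
\end{theorem}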