
Deep Neural Networks motivated by PDEs

Formal Metadata

Title
Deep Neural Networks motivated by PDEs
Series Title
Number of Parts
22
Author
License
CC Attribution - NonCommercial - NoDerivatives 4.0 International:
You may use, copy, distribute, and make the work or its content publicly available in unchanged form for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner they have specified.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
One of the most promising areas in artificial intelligence is deep learning, a form of machine learning that uses neural networks containing many hidden layers. Recent success has led to breakthroughs in applications such as speech and image recognition. However, more theoretical insight is needed to create a rigorous scientific basis for designing and training deep neural networks, increasing their scalability, and providing insight into their reasoning. This talk bridges the gap between partial differential equations (PDEs) and neural networks and presents a new mathematical paradigm that simplifies designing, training, and analyzing deep neural networks. It shows that training deep neural networks can be cast as a dynamic optimal control problem similar to path-planning and optimal mass transport. The talk outlines how this interpretation can improve the effectiveness of deep neural networks. First, the talk introduces new types of neural networks inspired by parabolic, hyperbolic, and reaction-diffusion PDEs. Second, the talk outlines how to accelerate training by exploiting multi-scale structures or reversibility properties of the underlying PDEs. Finally, recent advances in efficient parametrizations and derivative-free training algorithms will be presented.
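The link between residual networks and differential equations that underlies this line of work can be sketched concretely: a residual block of the form y_{k+1} = y_k + h σ(K_k y_k + b_k) is a forward-Euler step of the ODE y'(t) = σ(K(t) y(t) + b(t)). Below is a minimal illustrative sketch of this viewpoint; the tanh activation, the step size h, and the random toy weights are assumptions for demonstration, not the specific architectures from the talk.

```python
import numpy as np

def resnet_forward(y0, Ks, bs, h=0.1):
    """Propagate a feature vector through residual blocks viewed as
    forward-Euler steps of the ODE  y'(t) = tanh(K(t) y + b(t)).
    Each (K, b) pair plays the role of one layer's weights."""
    y = y0.copy()
    for K, b in zip(Ks, bs):
        # one Euler step of size h = one residual block
        y = y + h * np.tanh(K @ y + b)
    return y

# toy example: three "layers" acting on a 2-dimensional state
rng = np.random.default_rng(0)
Ks = [rng.standard_normal((2, 2)) for _ in range(3)]
bs = [rng.standard_normal(2) for _ in range(3)]
y = resnet_forward(np.array([1.0, -1.0]), Ks, bs)
```

In this picture, making the depth larger while shrinking h approaches a continuous-time dynamical system, which is what allows PDE and optimal-control tools (stability analysis, multi-scale discretizations, reversibility) to be brought to bear on network design and training.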