
Statistical theory for deep neural networks - lecture 2

Formal Metadata

Title
Statistical theory for deep neural networks - lecture 2
Number of Parts
9
License
CC Attribution - NonCommercial - NoDerivatives 2.0 Generic:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Recently, substantial progress has been made in the theoretical understanding of machine learning methods, in particular deep learning. One of the most promising directions is the statistical approach, which interprets machine learning as a collection of statistical methods and builds on existing techniques in mathematical statistics to derive theoretical error bounds and to understand phenomena such as overparametrization. This lecture series surveys the field and describes future challenges.