Deep learning - what's missing?

Formal Metadata

Title
Deep learning - what's missing?
Title of Series
Number of Parts
4
Author
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
There have been some spectacular advances in machine learning over the last few years, and one has to wonder: what comes next? The speaker will discuss recent breakthroughs but also focus on their intrinsic limitations in order to make some guesses about where the frontiers might lie. For example, the current paradigm of supervised learning is an important advance – but would unsupervised learning be more interesting if we could make it work? Neural networks have become much better at modelling some aspects of complex temporal data such as human language – but what about the aspects they are ill-suited to learn? Traditional neural networks learn fixed mappings from inputs to outputs – what if they could learn to implement algorithms themselves?
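The contrast the abstract draws – a network that learns one fixed mapping from inputs to outputs, as opposed to one that learns an algorithm – can be made concrete with a toy supervised learner. The sketch below is a hypothetical illustration, not material from the talk: plain gradient descent fits the fixed mapping x -> w*x + b to labelled examples, and that single learned function is all the model ever computes.

```python
# Hypothetical sketch of supervised learning as a fixed input-to-output
# mapping: gradient descent on mean squared error recovers y = 2x + 1
# from labelled examples. The learned function never changes at test time.

def fit_linear(xs, ys, lr=0.01, steps=5000):
    """Learn weights (w, b) for the fixed mapping x -> w*x + b."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 2x + 1
w, b = fit_linear(xs, ys)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

A model that instead learned the algorithm behind the data – say, a procedure that generalises to inputs of any length or structure – is exactly the open direction the abstract points at; this sketch only shows the fixed-mapping baseline being contrasted.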