
Distributional Robustness and Regularization in Statistical Learning

Formal Metadata

Title
Distributional Robustness and Regularization in Statistical Learning
Series Title
Number of Parts
39
Author
Contributors
License
CC Attribution - NonCommercial - NoDerivatives 4.0 International:
You may use, copy, distribute, and make the work or its content publicly available in unchanged form for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Release Year
Language

Content Metadata

Subject Area
Genre
Abstract
A central problem in statistical learning is to design prediction algorithms that perform well not only on training data but also on new, unseen, yet similar data. We approach this problem by formulating a distributionally robust stochastic optimization (DRSO) problem, which seeks a solution that minimizes the worst-case expected loss over a family of distributions close to the empirical distribution, as measured by the Wasserstein distance. We establish a connection between such Wasserstein DRSO and regularization. Specifically, we identify a broad class of loss functions for which the Wasserstein DRSO is asymptotically equivalent to a regularization problem with a gradient-norm penalty. This relation provides a new interpretation for approaches that use regularization, including a variety of statistical learning problems and discrete choice models. The connection also suggests a principled way to regularize high-dimensional, non-convex problems, which is demonstrated with the training of Wasserstein generative adversarial networks (WGANs) in deep learning.
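For orientation, a hedged sketch of the asymptotic equivalence the abstract describes, with notation assumed here rather than taken from the talk (\hat{\mathbb{P}}_n the empirical distribution, \rho the Wasserstein ball radius, p the order of the Wasserstein distance, q its conjugate exponent, and \|\cdot\|_* the dual norm); the precise conditions on the loss function are stated in the talk:

\[
\min_{\theta}\; \sup_{\mathbb{P} \,:\, W_p(\mathbb{P}, \hat{\mathbb{P}}_n) \le \rho} \mathbb{E}_{\mathbb{P}}\big[\ell(\theta;\xi)\big]
\;\approx\;
\min_{\theta}\; \mathbb{E}_{\hat{\mathbb{P}}_n}\big[\ell(\theta;\xi)\big]
\;+\; \rho \,\Big(\mathbb{E}_{\hat{\mathbb{P}}_n}\big[\,\|\nabla_{\xi}\,\ell(\theta;\xi)\|_{*}^{\,q}\,\big]\Big)^{1/q}
\qquad \text{as } \rho \to 0,\quad \tfrac{1}{p} + \tfrac{1}{q} = 1.
\]

In this sketch the penalty is the L^q(\hat{\mathbb{P}}_n)-norm of the dual norm of the loss gradient taken in the data argument \xi, which is the gradient-norm regularizer referred to in the abstract.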