
Golden Ratio Algorithms for Variational Inequalities

Formal Metadata

Title
Golden Ratio Algorithms for Variational Inequalities
Title of Series
Number of Parts
30
Author
License
CC Attribution - NonCommercial - NoDerivatives 4.0 International:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
We present several novel methods for solving general (pseudo-)monotone variational inequalities. The first method uses a fixed stepsize and is similar to the projected reflected gradient method: it likewise requires only one evaluation of the operator and one prox-operator per iteration. However, its extension, the dynamic version, has a notable distinction: in every iteration it chooses a stepsize based on local information about the operator, without running any linesearch procedure. Thus, the iteration cost of this method is almost the same as that of the fixed-stepsize method, but it converges without the Lipschitz assumption on the operator. We further discuss possible generalizations of the methods, in particular for solving large-scale nonlinear saddle point problems. Some numerical experiments are reported.
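As a rough illustration of the fixed-stepsize scheme described in the abstract, the sketch below implements a golden-ratio-style iteration for a monotone variational inequality: an averaged point is formed with golden-ratio weights, then one operator evaluation and one prox step are taken per iteration. The function names, the affine test operator, and the feasible set are illustrative assumptions, not the talk's own code; the stepsize bound lam <= phi/(2L) is the standard condition for an L-Lipschitz operator in this family of methods.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def graal(F, prox, x0, lam, iters=1000):
    """Sketch of a fixed-stepsize golden ratio iteration.

    Per iteration: one evaluation of F and one prox call, as the
    abstract describes. Convergence for monotone, L-Lipschitz F
    requires roughly lam <= PHI / (2 * L).
    """
    x = x0.copy()
    x_bar = x0.copy()
    for _ in range(iters):
        # Golden-ratio averaging of the current point and the running average.
        x_bar = ((PHI - 1) * x + x_bar) / PHI
        # Single forward step with F, then a single prox step.
        x = prox(x_bar - lam * F(x))
    return x

# Demo on assumed data: affine monotone operator F(x) = A x - b with A
# positive definite, feasible set the nonnegative orthant (prox = projection).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
L = np.linalg.norm(A, 2)  # spectral norm = Lipschitz constant of F
x_star = graal(lambda x: A @ x - b,
               lambda z: np.maximum(z, 0.0),
               np.zeros(2), PHI / (2 * L))
# The solution of A x = b is [1, 1], which is feasible, so it solves the VI.
```

Because the stepsize is fixed, each iteration costs only one `F` evaluation and one projection; the dynamic variant mentioned in the abstract would instead pick `lam` at each step from local differences of `F`, avoiding both a global Lipschitz constant and a linesearch.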