In this talk, we present a new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function. Such a class encapsulates fairly general constrained minimization problems, as well as various regularized loss minimization formulations in machine learning, signal processing, and statistics. Our framework not only allows us to recover a number of existing error bound results in a unified and transparent manner but also leads to new error bounds for problems whose objective functions do not necessarily have a polyhedral epigraph. As an illustration, we show that a class of nuclear-norm regularized loss minimization problems admits an error bound under a strict complementarity-type regularity condition. We then complement this result by constructing an example to show that this error bound can fail to hold without the regularity condition. If time permits, we discuss the algorithmic implications of our results, particularly for the convergence rate analysis of a family of successive quadratic approximation methods for solving the aforementioned class of problems.
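
For concreteness, the following is a minimal sketch of the problem class described above, written in hypothetical notation not fixed by the abstract (f denotes the smooth convex loss, g the closed proper convex term, X a matrix variable, and lambda > 0 a regularization parameter):

\[
  \min_{x} \; F(x) := f(x) + g(x),
  \qquad f \text{ smooth convex}, \; g \text{ closed proper convex}.
\]

The nuclear-norm regularized loss minimization instance mentioned above would then take the form

\[
  \min_{X} \; f(X) + \lambda \|X\|_{*},
\]

where \(\|X\|_{*}\) denotes the nuclear norm (sum of singular values) of X.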