Speaker: Yangyang Xu (Rensselaer Polytechnic Institute)
Time: May 8, 2019, 16:15-17:15
Location: Conference Room 415, Hui Yuan 3#
Abstract
Many works have studied the complexity of first-order methods for solving convex-concave bilinear saddle-point problems (SPPs). These results all concern upper complexity bounds, which determine at most how many iterations are needed to guarantee a solution of the desired accuracy. In this talk, I will address the opposite direction by deriving lower complexity bounds for first-order methods on large-scale SPPs. The results apply both to methods whose iterates lie in the linear span of past first-order information and to more general methods that produce their iterates in an arbitrary manner based on first-order information. I will first show that, unlike the gradient method on unconstrained problems, first-order methods on affinely constrained problems generally cannot be accelerated from the known convergence rate $O(1/t)$ to $O(1/t^2)$; moreover, $O(1/t)$ is optimal for convex problems. I will then extend the result to general SPPs. The derived lower complexity bounds match several established upper complexity bounds in the literature; they are therefore tight and indicate the optimality of several existing first-order methods.
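For context, the convex-concave bilinear SPP referred to above is commonly written in the following standard form (a generic formulation; the precise setup used in the talk may differ):
$$
\min_{x \in X} \; \max_{y \in Y} \; f(x) + \langle A x, \, y \rangle - g(y),
$$
where $f$ and $g$ are closed convex functions, $A$ is a linear operator, and $X$, $Y$ are closed convex sets. The affinely constrained problem $\min_x \{ f(x) : A x = b \}$ arises as a special case through its Lagrangian $f(x) + \langle A x - b, \, y \rangle$ with the multiplier $y$ unconstrained, which is why lower bounds for affinely constrained problems serve as a stepping stone to those for general SPPs.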