Abstract: We establish the optimal nonergodic sublinear convergence rate of the proximal point algorithm for maximal monotone inclusion problems. First, the optimal bound is formulated via the performance estimation framework, resulting in an infinite-dimensional nonconvex optimization problem, which is then equivalently reformulated as a finite-dimensional semidefinite programming (SDP) problem. By constructing a feasible solution to the dual SDP, we obtain an upper bound on the optimal nonergodic sublinear rate. Finally, an example in two-dimensional space is constructed to provide a lower bound on the optimal nonergodic sublinear rate. Since the lower bound provided by the example matches exactly the upper bound obtained from the dual SDP, we thereby establish the worst-case nonergodic sublinear convergence rate, which is optimal in terms of both the order and the constants involved. Our result sharpens the understanding of the fundamental proximal point algorithm.
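For reference, the algorithm discussed above is the classical proximal point iteration for the inclusion 0 \in T(x) with T maximal monotone; the step sizes \lambda_k > 0 in the sketch below are illustrative notation, not taken from the abstract:

\[
  x_{k+1} = J_{\lambda_k T}(x_k) := (I + \lambda_k T)^{-1}(x_k), \qquad k = 0, 1, 2, \ldots
\]

Here J_{\lambda_k T} denotes the resolvent of T, whose fixed points coincide with the zeros of T, i.e., the solutions of the inclusion problem.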
This is joint work with Prof. Guoyong Gu from Nanjing University.
Bio: Junfeng Yang is a Professor in the Department of Mathematics at Nanjing University. He is mainly interested in designing, analyzing, and implementing robust and efficient algorithms for optimization problems arising in signal and image processing, compressive sensing, and sparse optimization. Together with collaborators, he has developed the Matlab packages FTVd for image deblurring and YALL1 for L1 problems in compressive sensing.