Riemannian Proximal Gradient Methods

We consider solving nonconvex and nonsmooth optimization problems with Riemannian manifold constraints. Such problems have received considerable attention due to many important applications, such as sparse PCA, sparse blind deconvolution, and robust matrix completion. Many of these applications yield composite objectives. In the Euclidean setting, the proximal gradient method and its variants have been viewed as excellent methods for solving nonconvex nonsmooth problems with composite cost functions. However, in the Riemannian setting, the related work is still limited. In this talk, we briefly review existing nonsmooth optimization methods on Riemannian manifolds, in particular the proximal gradient method on manifolds. We develop and analyze a Riemannian proximal gradient method and an accelerated variant. Global convergence of the Riemannian proximal gradient method is obtained under mild assumptions. An O(1/k) convergence rate is established for the method under additional assumptions, and sufficient conditions for an O(1/k^2) convergence rate of its variant are discussed. A practical algorithm is also proposed. Two sparse PCA models are used to demonstrate the performance of the proposed method. This is joint work with Ke Wei at Fudan University.
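To illustrate the flavor of such methods, here is a minimal sketch of a Riemannian proximal gradient iteration for single-component sparse PCA, minimizing f(x) + λ‖x‖₁ with f(x) = −xᵀAx over the unit sphere. This is a simplified illustration, not the algorithm analyzed in the talk: it applies the Euclidean soft-thresholding prox after a step along the Riemannian (projected) gradient and then retracts by normalization, whereas the methods discussed in the talk solve a proximal subproblem formulated on the tangent space. All function names and parameter values below are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding: the Euclidean prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_pca_rpg(A, lam=0.1, step=0.05, iters=500, seed=0):
    """Simplified proximal-gradient-type sketch on the unit sphere.

    Minimizes f(x) + lam * ||x||_1 with f(x) = -x^T A x, ||x|| = 1:
    project the Euclidean gradient onto the tangent space, take a step,
    apply the Euclidean prox, and retract to the sphere by normalization.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        egrad = -2.0 * A @ x                 # Euclidean gradient of f
        rgrad = egrad - (x @ egrad) * x      # projection onto tangent space at x
        y = soft_threshold(x - step * rgrad, step * lam)
        n = np.linalg.norm(y)
        if n == 0.0:                         # prox zeroed the iterate; stop
            break
        x = y / n                            # retraction onto the sphere
    return x
```

For a diagonal matrix with a dominant entry, the iterates concentrate on the corresponding coordinate while the ℓ₁ term drives the remaining entries to zero, which is the sparsity effect the composite model is designed to produce.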