Unifying Non-Convex Low-Rank Matrix Recovery Algorithms by Riemannian Gradient Descent

Abstract 

The problem of low-rank matrix recovery from linear samples arises in numerous practical applications in machine learning, imaging, signal processing, computer vision, and beyond. Non-convex algorithms are usually very efficient and effective for low-rank matrix recovery, often with theoretical guarantees despite the possibility of local minima. In this talk, non-convex low-rank matrix recovery algorithms are unified under the framework of Riemannian gradient descent. We show that many popular non-convex low-rank matrix recovery algorithms are special cases of Riemannian gradient descent with different Riemannian metrics and retraction operators. Moreover, by exploiting properties of the sampling operators for different tasks such as matrix completion and phase retrieval, we identify the best choice of metric and construct the most efficient non-convex algorithms for low-rank matrix recovery.
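To illustrate the flavor of the framework, here is a minimal sketch of one instance of Riemannian gradient descent for matrix completion: a Euclidean gradient step on the least-squares loss followed by a retraction onto the set of rank-r matrices via truncated SVD. This toy implementation (function names, step size, and spectral initialization are illustrative assumptions, not the speaker's specific algorithm) recovers a low-rank matrix from a random subset of its entries.

```python
import numpy as np

def svd_retract(X, r):
    # Retraction onto rank-r matrices: keep the top-r singular triplets.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def complete(M_obs, mask, r, eta=1.0, iters=1000):
    # Minimize 0.5 * ||mask * (X - M_obs)||_F^2 over rank-r matrices:
    # gradient step in the ambient space, then retract back to rank r.
    X = svd_retract(M_obs, r)  # spectral initialization (assumption)
    for _ in range(iters):
        grad = mask * (X - M_obs)           # Euclidean gradient of the loss
        X = svd_retract(X - eta * grad, r)  # gradient step + retraction
    return X

# Toy example: 30x30 rank-2 matrix with ~60% of entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) < 0.6
X = complete(mask * M, mask, r=2)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Other algorithms covered by the framework differ precisely in the two choices visible here: the metric used to form the gradient step and the retraction used to return to the rank-r set.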