
Trust-region Method Based on Inaccurate First-order Information

  • Speaker: ZHANG Zaikun (HKPU)

  • Time: May 22, 2019, 15:30-16:30

  • Location: Conference Room 415, Hui Yuan 3#

As a fundamental tool in nonlinear optimization, the trust-region method is considered classical and well understood. Its global convergence was established by Powell more than 40 years ago, and its worst-case complexity has been studied extensively in recent years. However, in the era of Data Science, we often find ourselves in scenarios not fully covered by this classical analysis. For example, the information required by the method may not be available or reliable. Worse still, we may feed the method completely wrong information without being aware of it. These scenarios urge us to take a fresh look at the old method and to understand its behavior when the classical assumptions fail.
We will discuss the behavior of the trust-region method under the assumption that the objective function is smooth yet the gradient information available to us is inaccurate or even completely wrong. Both deterministic and stochastic cases will be investigated. It turns out that the trust-region method is remarkably robust with respect to gradient inaccuracy: it converges even if the gradient is evaluated with only one correct significant digit, and even if the gradient evaluation encounters random failures with probability 1/2. Indeed, in both situations, the worst-case complexity of the method is essentially the same as when the gradient evaluation is always accurate. This talk is based on joint works with Serge Gratton (University of Toulouse, INPT/ENSEEIHT), Clément W. Royer (Wisconsin-Madison), and Luis Nunes Vicente (Lehigh University).
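To give a feel for the phenomenon discussed above, here is a minimal sketch (not the speaker's actual algorithm or analysis) of a basic trust-region iteration on a convex quadratic, driven by a corrupted gradient oracle: with probability 1/2 the oracle returns a completely wrong (random) gradient, and otherwise it returns the true gradient with up to 30% relative componentwise error, i.e. roughly one correct significant digit. The test problem, constants, and noise model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 4.0])                  # Hessian of f(x) = 0.5 x^T A x
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

def noisy_grad(x):
    """Inaccurate gradient oracle (illustrative noise model)."""
    if rng.random() < 0.5:               # total failure: a random vector
        return rng.standard_normal(x.size)
    eps = rng.uniform(-0.3, 0.3, x.size)
    return grad(x) * (1.0 + eps)         # ~one correct significant digit

x, delta = np.array([3.0, -2.0]), 1.0
for _ in range(1000):
    g = noisy_grad(x)
    # Cauchy step: minimize the quadratic model along -g inside the region
    t = min(delta / np.linalg.norm(g), (g @ g) / (g @ A @ g))
    s = -t * g
    pred = -(g @ s) - 0.5 * (s @ A @ s)  # predicted decrease (> 0)
    rho = (f(x) - f(x + s)) / pred       # actual vs. predicted decrease
    if rho > 0.1:                        # accept only sufficient decrease
        x = x + s
    delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.1 else delta)

print(f(x))   # far below f(x0) = 12.5 despite the corrupted oracle
```

The acceptance test is what makes the iteration robust: a step computed from a wrong gradient typically fails the ratio test, so it is rejected and the trust region shrinks, while steps from merely inaccurate gradients still produce enough decrease to be accepted.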