
Stochastic Collocation Algorithms Using L1-Minimization for Bayesian Solution of Inverse Problems

The Bayesian methodology has proven to be a convenient framework for solving inverse problems from available data with limited information. The main computational challenge in the Bayesian solution of inverse problems arises from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. In this talk, we present an efficient technique for constructing stochastic surrogate models to accelerate Bayesian inference for statistical inverse problems. The stochastic collocation algorithms using l1-minimization (L1-SCM), based on generalized polynomial chaos (gPC), are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. This approximation then defines a surrogate posterior probability density that is inexpensive to evaluate. A rigorous error analysis shows that the convergence rate of the posterior density is at least twice as fast as the convergence rate of the gPC expansion for the forward solution. We demonstrate our theoretical results on two nonlinear inverse problems involving the inference of parameters appearing in partial differential equations. For both examples, the L1-SCM achieves better accuracy than the sparse grid stochastic collocation method. The results also indicate that the new algorithm can provide efficiency gains of several orders of magnitude over a standard MCMC approach.
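To make the two-stage procedure concrete, the following is a minimal Python sketch, not taken from the talk or any accompanying paper: a toy scalar forward model and a uniform prior stand in for the PDE problems, the l1-minimization step is approximated with an off-the-shelf LASSO solver rather than a basis-pursuit formulation, and a plain Metropolis-Hastings sampler runs on the resulting surrogate posterior. All names, the noise level, and the forward model are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the speakers' implementation):
# stage 1 builds a sparse gPC surrogate of a toy forward model by l1-regularized
# regression; stage 2 runs Metropolis-Hastings on the surrogate posterior.
import numpy as np
from numpy.polynomial.legendre import legvander
from sklearn.linear_model import Lasso   # stands in for a basis-pursuit-type l1 solver

rng = np.random.default_rng(0)

def forward_model(theta):
    # Placeholder for an expensive forward solve (e.g., a PDE) mapping a
    # parameter to a scalar observable.
    return np.exp(-theta) * np.sin(3.0 * theta)

# Synthetic data (assumed true parameter and noise level, for illustration only).
theta_true, sigma = 0.3, 0.05
data = forward_model(theta_true) + sigma * rng.normal()

# --- Stage 1: sparse gPC surrogate over the prior support ---
# Prior: theta ~ Uniform(-1, 1), so Legendre polynomials form the gPC basis.
n_samples, degree = 20, 25                     # fewer samples than basis terms
nodes = rng.uniform(-1.0, 1.0, n_samples)
Psi = legvander(nodes, degree)                 # n_samples x (degree+1) design matrix
y = np.array([forward_model(t) for t in nodes])

lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000)
coeffs = lasso.fit(Psi, y).coef_               # sparse gPC coefficients

def surrogate(theta):
    # Cheap polynomial evaluation that replaces the forward solve inside MCMC.
    return (legvander(np.atleast_1d(theta), degree) @ coeffs)[0]

# --- Stage 2: Metropolis-Hastings on the surrogate posterior ---
def log_post(theta):
    if abs(theta) > 1.0:                       # outside the prior support
        return -np.inf
    resid = data - surrogate(theta)
    return -0.5 * (resid / sigma) ** 2         # Gaussian likelihood, flat prior

n_steps, step = 20000, 0.2
chain = np.empty(n_steps)
theta, lp = 0.0, log_post(0.0)
for i in range(n_steps):
    prop = theta + step * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

print("surrogate-posterior mean (second half of chain):", chain[n_steps // 2:].mean())
```

In this sketch every MCMC step evaluates only the polynomial surrogate rather than the forward model, which is where the orders-of-magnitude efficiency gains described in the abstract come from.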