
Failure-informed adaptive sampling for PINNs

Abstract 

Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs in a wide range of domains. Recent research has demonstrated, however, that the performance of PINNs can vary dramatically with the sampling procedure, and that using a fixed set of training points can be detrimental to the convergence of PINNs to the correct solution. In this talk, we present an adaptive approach termed failure-informed PINNs (FI-PINNs), inspired by the viewpoint of reliability analysis. The basic idea is to use the residual to define a failure probability, which represents the reliability of the PINN approximation. With the aim of placing more samples in the failure region and fewer in the safe region, FI-PINNs employs a failure-informed enrichment technique to incrementally and adaptively add new collocation points to the training set. Retraining on the enriched set then improves the accuracy of the PINN model. The failure probability, much like the error indicators of classical adaptive finite element methods, guides the refinement of the training set. Compared to the conventional PINNs method and the residual-based adaptive refinement method, the developed algorithm significantly improves accuracy, especially for low-regularity and high-dimensional problems. We prove rigorous bounds on the error incurred by the proposed FI-PINNs and illustrate its performance on several problems.
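The enrichment loop described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the `residual` function is a hypothetical stand-in for evaluating the PDE residual of a trained PINN, and the threshold `tol`, candidate count, and batch size are assumed parameters. One step draws Monte Carlo candidates, estimates the failure probability P(|r(x)| > tol), and appends the worst offenders from the failure region to the training set.

```python
import numpy as np

def residual(x):
    # Hypothetical stand-in for the PINN's PDE residual magnitude |r(x)|;
    # in practice this evaluates the trained network and the PDE operator at x.
    return np.abs(np.sin(8.0 * x[:, 0]) * np.exp(-x[:, 1]))

def fi_enrich(train_pts, tol=0.5, n_candidates=10_000, n_add=100, seed=0):
    """One failure-informed enrichment step (sketch).

    Estimates the failure probability P_f = P(|r(x)| > tol) by Monte Carlo
    and adds the highest-residual points from the failure region.
    """
    rng = np.random.default_rng(seed)
    cand = rng.uniform(0.0, 1.0, size=(n_candidates, 2))  # candidates in [0,1]^2
    r = residual(cand)
    failed = r > tol
    p_fail = failed.mean()  # Monte Carlo estimate of the failure probability
    # Rank the failed candidates by residual magnitude and keep the largest.
    worst = cand[failed][np.argsort(r[failed])[::-1][:n_add]]
    return np.vstack([train_pts, worst]), p_fail

pts = np.random.default_rng(1).uniform(size=(200, 2))
pts, p_fail = fi_enrich(pts)
```

In the full method, this step would be repeated (retraining the PINN in between) until the estimated failure probability falls below a prescribed tolerance, which is what makes the failure probability act as a stopping and refinement indicator.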