
Deep Generative Model for Out-of-distribution Detection

Abstract: Data whose distribution differs significantly from that of the in-distribution (ID) data are called out-of-distribution (OOD) data. To make reliable and safe decisions, deep learning models deployed in real-world applications must identify whether an input is OOD. However, it is well known that neural networks often identify OOD samples as ID data with overconfidence. This poses a significant challenge to the reliability and security of artificial intelligence, making OOD detection a crucial issue. Several methods based on deep classifiers have been proposed to detect OOD data. Unfortunately, these methods only address supervised models and are not suitable for unsupervised ones. Variational auto-encoders (VAEs) are an influential and widely used class of likelihood-based generative models in unsupervised learning. Likelihood-based generative models have been reported to be highly robust to OOD inputs and can serve as detectors under the assumption that the model assigns higher likelihoods to samples from the ID dataset than to those from an OOD dataset. However, recent works have reported a phenomenon in which a VAE recognizes some OOD samples as ID, assigning them higher likelihoods than it assigns to ID inputs. By designing controlled experiments on deep generative models with inputs of different image-complexity levels, some studies have shown that input complexity significantly affects the density estimates of deep generative models, and they advise that deep generative models with reliable uncertainty estimation are critical to a deep understanding of OOD inputs. Meanwhile, the noise contrastive prior (NCP) is an emerging and promising method for obtaining uncertainty, with the advantages of scalability, trainability, and compatibility with a wide range of models. In this talk, we will introduce two potential solutions to the OOD detection and uncertainty estimation issues in VAEs.
One is a VAE with a noise contrastive prior (NCP), namely the Improved Noise Contrastive Prior Variational Auto-encoder (INCPVAE); the other is a VAE with two independent priors, one for the training dataset and one for a simpler dataset, namely the Bigeminal Priors Variational Auto-encoder (BPVAE). We conducted a series of quantitative experiments to validate their performance on OOD detection and uncertainty estimation. The results suggest that INCPVAE and BPVAE outperform standard VAEs on the OOD detection task on the FashionMNIST and CIFAR10 datasets.
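The likelihood-based detection rule described above can be sketched in a few lines: score each input by its model log-likelihood and flag it as OOD when the score falls below a threshold chosen from ID validation data. The sketch below uses synthetic Gaussian log-likelihood scores as a stand-in for a trained VAE's outputs; the function name `ood_flags` and all numbers are illustrative assumptions, not part of the talk.

```python
import numpy as np

def ood_flags(loglik_id_val, loglik_test, fpr=0.05):
    """Flag test samples as OOD when their log-likelihood falls below
    the fpr-quantile of ID validation log-likelihoods (so roughly an
    `fpr` fraction of true ID inputs would be falsely flagged)."""
    threshold = np.quantile(loglik_id_val, fpr)
    return loglik_test < threshold

# Toy demonstration with synthetic scores (assumed, not from the talk):
# ID scores cluster around -100 nats, OOD scores around -130 nats.
rng = np.random.default_rng(0)
id_scores = rng.normal(loc=-100.0, scale=5.0, size=1000)
ood_scores = rng.normal(loc=-130.0, scale=5.0, size=1000)

print("OOD detection rate:", ood_flags(id_scores, ood_scores).mean())
print("ID false-positive rate:", ood_flags(id_scores, id_scores).mean())
```

Note that this simple rule silently fails in exactly the scenario the abstract highlights: if the model assigns OOD inputs higher likelihoods than ID inputs, no threshold separates them, which is what motivates the INCPVAE and BPVAE modifications to the prior.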