Abstract: Modern learning algorithms, such as deep learning, have achieved great success in real-world applications. However, some of their empirical behaviors cannot be explained within the classical statistical learning framework. For example, deep learning algorithms attain small test error even when the training error is zero, i.e., when they interpolate the training data. Another phenomenon is observed in image recognition, where a barely noticeable perturbation of the data can lead to a dramatic increase in the misclassification rate. Inspired by these observations, we attempt to provide new theoretical insights into data interpolation and adversarial testing using the very simple nearest neighbor algorithms. In particular, we prove the statistical optimality of interpolated nearest neighbor algorithms. More surprisingly, we discover that under a proper interpolation scheme, the classification performance is even better than that of the best kNN in terms of the multiplicative constant. As for adversarial testing, we demonstrate that different adversarial mechanisms lead to different phase transition phenomena in the upper bound of the misclassification rate. Additionally, the technical analysis we developed to handle adversarial samples can also be applied to other variants of kNN, e.g., pre-processed 1NN and distributed-NN.