Towards Optimal Naive Bayes Nearest Neighbor

Regis Behmo (LIAMA/École Centrale)

Paul Marcombes (LIAMA/IMAGINE, LIGM, Université Paris-Est)

Arnak Dalalyan (IMAGINE, LIGM, Université Paris-Est)

Véronique Prinet (CASIA/LIAMA)

Abstract

Naive Bayes Nearest Neighbor (NBNN) is a feature-based image classifier that achieves an impressive degree of accuracy by exploiting 'Image-to-Class' distances and by avoiding the quantization of local image descriptors [1]. It is based on the hypothesis that each local descriptor is drawn from a class-dependent probability measure. The density of this measure is estimated by a non-parametric kernel estimator, which is further simplified under the assumption that the normalization factor is class-independent. While leading to a significant simplification, the assumption underlying the original NBNN is too restrictive and considerably degrades its generalization ability.
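
For concreteness, the baseline decision rule described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released code: it uses brute-force nearest-neighbor search (in practice an approximate NN index would be used), and the names dist2, nnDist2 and classifyNBNN are ours.

#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

using Descriptor = std::vector<float>;

// Squared Euclidean distance between two descriptors of equal dimension.
double dist2(const Descriptor& a, const Descriptor& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const double diff = a[i] - b[i];
        s += diff * diff;
    }
    return s;
}

// Squared distance from d to its nearest neighbor among the training
// descriptors of one class (brute force here; an approximate NN index
// would replace this scan in any realistic setting).
double nnDist2(const Descriptor& d, const std::vector<Descriptor>& classSet) {
    double best = std::numeric_limits<double>::max();
    for (const Descriptor& q : classSet)
        best = std::min(best, dist2(d, q));
    return best;
}

// Baseline NBNN: assign the image to the class that minimizes the
// Image-to-Class distance, i.e. the sum over all local descriptors of
// their squared nearest-neighbor distances.
std::size_t classifyNBNN(const std::vector<Descriptor>& image,
                         const std::vector<std::vector<Descriptor>>& classes) {
    std::size_t best = 0;
    double bestScore = std::numeric_limits<double>::max();
    for (std::size_t c = 0; c < classes.size(); ++c) {
        double score = 0.0;
        for (const Descriptor& d : image)
            score += nnDist2(d, classes[c]);
        if (score < bestScore) {
            bestScore = score;
            best = c;
        }
    }
    return best;
}

Note that no per-class normalization appears in this score: dropping the class-dependent normalization factor is precisely the restrictive assumption discussed above.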
The goal of this paper is to address this issue. Relaxing the incriminated assumption leads to a parameter selection problem, which we solve by hinge-loss minimization. We also show that our modified formulation naturally generalizes to optimal combinations of feature types. Experiments conducted on several datasets show that the gain over the original NBNN can reach up to 20 percentage points. We also take advantage of the linearity of optimal NBNN to perform classification by detection through efficient subwindow search [2], which yields yet another performance gain. As a result, our classifier outperforms, in terms of misclassification error, methods based on support vector machines and bags of quantized features on some datasets.
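
To give a rough idea of how relaxing the assumption changes the decision rule, one can attach learned correction parameters to each class (and to each feature channel when several descriptor types are combined). The sketch below is an assumption made for illustration; the parameter names alpha and beta are ours, and in the paper the free parameters are obtained by hinge-loss minimization, much as one would train a linear classifier on the per-class distance features.

#include <cstddef>
#include <limits>
#include <vector>

// Per-class correction parameters (names assumed for illustration):
// one weight per feature channel plus a class-specific offset, both
// learned offline by hinge-loss minimization.
struct ClassParams {
    std::vector<double> alpha; // channel weights
    double beta;               // class offset
};

// channelDist[c][k] holds the Image-to-Class distance of feature channel k
// for class c, computed as in classifyNBNN above but per descriptor type.
std::size_t classifyCorrectedNBNN(
        const std::vector<std::vector<double>>& channelDist,
        const std::vector<ClassParams>& params) {
    std::size_t best = 0;
    double bestScore = std::numeric_limits<double>::max();
    for (std::size_t c = 0; c < channelDist.size(); ++c) {
        double score = params[c].beta;
        for (std::size_t k = 0; k < channelDist[c].size(); ++k)
            score += params[c].alpha[k] * channelDist[c][k];
        if (score < bestScore) {
            bestScore = score;
            best = c;
        }
    }
    return best;
}

Because the corrected score remains linear in the per-descriptor distances, it can be maximized over image sub-windows with the branch-and-bound scheme of [2], which is what enables classification by detection.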

Related publications


[1] Boiman, O., Shechtman, E., Irani, M.: In defense of nearest-neighbor based image classification. In: CVPR. (2008)
[2] Lampert, C., Blaschko, M., Hofmann, T.: Beyond sliding windows: Object localization by efficient subwindow search. In: CVPR. (2008)
         
C++ Code