Abstract: Most Nearest Neighbor (NN) based image annotation (or classification) methods fail to achieve satisfactory performance because information loss is inevitable when extracting visual features from images, for example when constructing bag-of-visual-words features. In this paper, we propose a novel NN method based on semantic distance, which improves classification performance by compensating for this information loss and narrowing the semantic gap that arises from intra-class variations and inter-class similarities. We first construct a distance metric from image semantic information, and then build an NN-based classifier that uses this metric to compute the similarity between any two images. Experimental results on the image annotation task of ImageCLEF2012 show that the proposed method outperforms traditional classifiers. More importantly, our method is extremely simple and efficient, and it is competitive with state-of-the-art learning-based image classifiers.
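
To make the overall pipeline concrete, the following is a minimal sketch of a nearest-neighbor classifier with a pluggable distance function, which is how the proposed semantic distance would be used at classification time. The semantic_distance function below is a placeholder assumption (cosine distance over hypothetical semantic descriptors), not the metric defined in this paper.

    import numpy as np

    def semantic_distance(a: np.ndarray, b: np.ndarray) -> float:
        # Placeholder distance: cosine distance over (hypothetical) semantic descriptors.
        # The paper's actual semantic distance metric would be substituted here.
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return 1.0 - float(a @ b) / denom if denom > 0 else 1.0

    def nn_classify(query: np.ndarray,
                    train_feats: np.ndarray,
                    train_labels: list,
                    dist=semantic_distance) -> str:
        # Assign the query the label of the training image that is closest
        # under the supplied distance function.
        dists = [dist(query, x) for x in train_feats]
        return train_labels[int(np.argmin(dists))]

    # Toy usage: three "semantic" descriptors and one query image descriptor.
    train = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
    labels = ["beach", "beach", "forest"]
    print(nn_classify(np.array([0.8, 0.2]), train, labels))  # -> "beach"

The key design point illustrated here is that the NN classifier itself stays unchanged; only the distance function is replaced by a semantically informed one.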