Naive Bayes Nearest Neighbor (NBNN) has been proposed as a powerful, learning-free, non-parametric approach to object classification. Its good performance is mainly due to the avoidance of a vector quantization step and the use of image-to-class comparisons, which yield good generalization. In this paper we study the replacement of the nearest neighbor part with more elaborate and robust (sparse) representations, as well as the trade-off between performance and speed for practical purposes. The representations investigated are k-Nearest Neighbors (kNN), Iterative Nearest Neighbors (INN), which solves a constrained least squares (LS) problem, Local Linear Embedding (LLE), a Sparse Representation obtained by l1-regularized LS (SRl1), and a Collaborative Representation obtained as the solution of an l2-regularized LS problem (CRl2). In particular, the NIMBLE and K-DES descriptors prove viable alternatives to SIFT, and the NBSRl1 and NBINN classifiers provide significant improvements over NBNN, obtaining competitive results on the Scene-15, Caltech-101, and PASCAL VOC 2007 datasets, while remaining learning-free (i.e., no parameters need to be learned).
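As context for the two regularized representations named above, the following is a minimal NumPy sketch, under illustrative assumptions (random synthetic data standing in for a dictionary D of class descriptors and a query y; the dimensions, regularization weights, and the ISTA solver are choices made here for illustration, not the paper's actual setup). It shows the l2-regularized LS (collaborative) coefficients computed in closed form, and the l1-regularized LS (sparse) coefficients computed by iterative soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 20))   # hypothetical dictionary: 20 descriptors, 64-D
y = rng.standard_normal(64)         # hypothetical query descriptor

# CRl2: collaborative representation, closed-form solution of
#   min_x ||y - D x||^2 + lam2 * ||x||^2
lam2 = 0.1
x_cr = np.linalg.solve(D.T @ D + lam2 * np.eye(D.shape[1]), D.T @ y)

# SRl1: sparse representation via ISTA (iterative soft-thresholding) for
#   min_x 0.5 * ||y - D x||^2 + lam1 * ||x||_1
lam1 = 0.5 * np.max(np.abs(D.T @ y))   # illustrative penalty weight
L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
x_sr = np.zeros(D.shape[1])
for _ in range(500):
    g = D.T @ (D @ x_sr - y)           # gradient of the smooth LS term
    z = x_sr - g / L                   # gradient step
    x_sr = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # shrinkage

print("CRl2 nonzeros:", np.count_nonzero(x_cr))
print("SRl1 nonzeros:", np.count_nonzero(x_sr))
```

The qualitative difference this illustrates: the l2 solution spreads weight over all dictionary atoms (all coefficients are generically nonzero), while the l1 penalty drives many coefficients exactly to zero, selecting only a few atoms per query.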