This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Quantized Kernel Learning for Feature Matching

D. Qin, X. Chen, M. Guillaumin and L. Van Gool
Conference on Neural Information Processing Systems (NIPS 2014)
December 2014, in press


Matching local visual features is a crucial problem in computer vision, and its accuracy depends heavily on the choice of similarity measure. As it is generally very difficult to hand-design a similarity or a kernel perfectly adapted to the data of interest, learning it automatically with as few assumptions as possible is preferable. However, available techniques for kernel learning suffer from several limitations, such as restrictive parametrization or poor scalability. In this paper, we introduce a simple and flexible family of non-linear kernels which we refer to as Quantized Kernels (QK). QKs are arbitrary kernels in the index space of a data quantizer, i.e., piecewise-constant similarities in the original feature space. Quantization compresses the features and keeps the learning tractable. As a result, we obtain state-of-the-art matching performance on a standard benchmark dataset with just a few bits to represent each feature dimension. QKs also have explicit non-linear, low-dimensional feature mappings that grant access to Euclidean geometry for uncompressed features.
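The core idea can be illustrated with a minimal sketch: each feature dimension is mapped through a scalar quantizer to a bin index, and the kernel is an arbitrary similarity table over those indices, hence piecewise constant in the original space. In the paper this table is learned; here we simply fix a positive semi-definite one for illustration, and recover the explicit feature map by factoring the table. All names (`n_bins`, `qk`, `phi`) are our own, not from the paper.

```python
import numpy as np

n_bins = 8
rng = np.random.default_rng(0)

# A symmetric PSD bin-similarity table K (n_bins x n_bins): the kernel
# value depends only on the bin indices, i.e. it is piecewise constant
# in the original feature space. Here K is random PSD, not learned.
A = rng.standard_normal((n_bins, n_bins))
K = A @ A.T  # PSD by construction

def quantize(x, lo=0.0, hi=1.0):
    """Uniform scalar quantizer: map each dimension to a bin index."""
    idx = np.floor((np.asarray(x) - lo) / (hi - lo) * n_bins).astype(int)
    return np.clip(idx, 0, n_bins - 1)

def qk(x, y):
    """Additive quantized kernel: sum of per-dimension table lookups."""
    return K[quantize(x), quantize(y)].sum()

# Explicit feature map: factor K = V V^T via eigendecomposition, so that
# stacking the rows V[q(x_d)] gives a low-dimensional embedding whose
# dot product reproduces the kernel exactly.
w, U = np.linalg.eigh(K)
w = np.clip(w, 0, None)   # guard against tiny negative eigenvalues
V = U * np.sqrt(w)        # K ~= V @ V.T

def phi(x):
    return V[quantize(x)].ravel()

x, y = rng.random(4), rng.random(4)
assert np.isclose(qk(x, y), phi(x) @ phi(y))
```

The explicit map `phi` is what grants access to Euclidean geometry: once features are embedded, nearest-neighbour search and linear methods apply directly, while the stored representation needs only the bin indices (a few bits per dimension).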

@inproceedings{QinNIPS2014,
  author = {D. Qin and X. Chen and M. Guillaumin and L. Van Gool},
  title = {Quantized Kernel Learning for Feature Matching},
  booktitle = {Conference on Neural Information Processing Systems (NIPS 2014)},
  year = {2014},
  month = {December},
  keywords = {},
  note = {in press}
}