arXiv:1711.04047 [cs.CV]

DeepKSPD: Learning Kernel-matrix-based SPD Representation for Fine-grained Image Recognition

Melih Engin, Lei Wang, Luping Zhou, Xinwang Liu

Published 2017-11-11 (Version 1)

Being symmetric positive-definite (SPD), the covariance matrix has traditionally been used to represent a set of local descriptors in visual recognition. A recent study shows that a kernel matrix can give a considerably better representation by modelling the nonlinearity in the local descriptor set. Nevertheless, neither the descriptors nor the kernel matrix is deeply learned. Worse, the two are considered separately, hindering the pursuit of an optimal SPD representation. This work proposes a deep network that jointly learns the local descriptors, the kernel-matrix-based SPD representation, and the classifier via an end-to-end training process. We derive the derivatives for the mapping from a local descriptor set to the SPD representation in order to carry out backpropagation. We also exploit the Daleckii-Krein formula from operator theory to give a concise and unified result on differentiating SPD matrix functions, including the matrix logarithm used to handle the Riemannian geometry of the kernel matrix. Experiments not only show the superiority of the kernel-matrix-based SPD representation built on deep local descriptors, but also verify the advantage of the proposed deep network in pursuing better SPD representations for fine-grained image recognition tasks.
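
A minimal sketch of the pipeline described above, not the authors' implementation: local descriptors are mapped to an RBF kernel matrix, and the matrix logarithm of that SPD matrix is taken via eigendecomposition so that gradients can flow back to the descriptors. The function name rbf_kernel_spd, the bandwidth sigma, and the regularizer eps are illustrative assumptions.

```python
import torch

def rbf_kernel_spd(X, sigma=1.0, eps=1e-6):
    """X: (n, d) set of local descriptors. Returns the log of their RBF kernel matrix."""
    sq_dists = torch.cdist(X, X).pow(2)            # pairwise squared distances
    K = torch.exp(-sq_dists / (2 * sigma ** 2))    # SPD kernel matrix, shape (n, n)
    K = K + eps * torch.eye(X.shape[0])            # small ridge for numerical stability
    # Matrix logarithm of an SPD matrix via eigendecomposition:
    # log(K) = U diag(log(lambda)) U^T. Autograd differentiates through eigh,
    # which is where a Daleckii-Krein-type derivative of the matrix function appears.
    lam, U = torch.linalg.eigh(K)
    return U @ torch.diag(torch.log(lam)) @ U.T

# Toy usage: 64 local descriptors of dimension 128; gradients flow back to X.
X = torch.randn(64, 128, requires_grad=True)
S = rbf_kernel_spd(X)
S.sum().backward()   # backpropagation through the kernel map and the matrix log
```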

Related articles: Most relevant | Search more
arXiv:1801.04261 [cs.CV] (Published 2018-01-12)
Deep saliency: What is learnt by a deep network about saliency?
arXiv:2001.07323 [cs.CV] (Published 2020-01-21)
Face Verification via learning the kernel matrix
arXiv:1605.06878 [cs.CV] (Published 2016-05-23)
Mask-CNN: Localizing Parts and Selecting Descriptors for Fine-Grained Image Recognition