arXiv:1810.04652 [cs.CV]

Learning Embeddings for Product Visual Search with Triplet Loss and Online Sampling

Eric Dodds, Huy Nguyen, Simao Herdade, Jack Culpepper, Andrew Kae, Pierre Garrigues

Published 2018-10-10 (Version 1)

In this paper, we propose learning an embedding function for content-based image retrieval in the e-commerce domain using the triplet loss and an online sampling method that constructs triplets from within a minibatch. We compare our method to several strong baselines as well as recent works on the DeepFashion and Stanford Online Products datasets. Our approach significantly outperforms the state of the art on the DeepFashion dataset. With a modification that favors sampling minibatches from a single product category, the same approach achieves results competitive with the state of the art on the Stanford Online Products dataset.
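The abstract does not specify the exact online sampling rule, but one common instance of constructing triplets from within a minibatch is "batch-hard" mining: for each anchor, take the farthest same-label (positive) example and the nearest different-label (negative) example in the batch, then apply a margin hinge. A minimal NumPy sketch under that assumption (the margin value and toy data are illustrative, not from the paper):

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Triplet loss with online in-batch mining (batch-hard variant):
    each anchor is paired with its hardest positive and hardest negative."""
    # Pairwise Euclidean distances between all embeddings in the minibatch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1) + 1e-12)

    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]            # same-product mask
    pos_mask = same & ~np.eye(len(labels), dtype=bool)   # exclude the anchor itself
    neg_mask = ~same

    # Hardest positive: maximum distance among same-label pairs.
    hardest_pos = np.where(pos_mask, dist, -np.inf).max(axis=1)
    # Hardest negative: minimum distance among different-label pairs.
    hardest_neg = np.where(neg_mask, dist, np.inf).min(axis=1)

    # Margin hinge, averaged over anchors.
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()

# Toy minibatch: two products, two images each; well-separated clusters
# give zero loss, interleaved labels give a positive loss.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
loss = batch_hard_triplet_loss(emb, [0, 0, 1, 1])
```

Because the triplets are formed from the current minibatch rather than precomputed offline, the hard examples adapt as the embedding changes during training; this is also why the paper's modification of sampling minibatches from a single product category matters, since it controls which negatives are available to mine.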

Related articles:
- arXiv:1703.07737 [cs.CV] (2017-03-22): In Defense of the Triplet Loss for Person Re-Identification
- arXiv:1703.07464 [cs.CV] (2017-03-21): No Fuss Distance Metric Learning using Proxies
- arXiv:1804.06061 [cs.CV] (2018-04-17): Improving Deep Binary Embedding Networks by Order-aware Reweighting of Triplets