arXiv Analytics

arXiv:1810.04652 [cs.CV]

Learning Embeddings for Product Visual Search with Triplet Loss and Online Sampling

Eric Dodds, Huy Nguyen, Simao Herdade, Jack Culpepper, Andrew Kae, Pierre Garrigues

Published 2018-10-10, Version 1

In this paper, we propose learning an embedding function for content-based image retrieval within the e-commerce domain using the triplet loss and an online sampling method that constructs triplets from within a minibatch. We compare our method to several strong baselines as well as recent works on the DeepFashion and Stanford Online Products datasets. Our approach significantly outperforms the state-of-the-art on the DeepFashion dataset. With a modification to favor sampling minibatches from a single product category, the same approach demonstrates competitive results against the state-of-the-art on the Stanford Online Products dataset.
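
The abstract describes mining triplets online from within each minibatch rather than sampling them offline. The sketch below illustrates one common way to do this in PyTorch, using "batch-hard" mining (hardest positive and hardest negative per anchor) with a Euclidean margin; the mining rule, margin value, and function names are illustrative assumptions, not the paper's exact method.

```python
# A minimal sketch of online (in-batch) triplet mining with the triplet loss.
# Assumes a PyTorch setup; the batch-hard rule and margin=0.2 are illustrative
# choices, not necessarily the configuration used in the paper.
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.2) -> torch.Tensor:
    """Construct triplets from within a minibatch: for each anchor, take the
    hardest positive (same label, farthest) and the hardest negative
    (different label, closest), then apply the hinge-based triplet loss."""
    # Pairwise Euclidean distances between all embeddings in the batch.
    dists = torch.cdist(embeddings, embeddings, p=2)

    same = labels.unsqueeze(0) == labels.unsqueeze(1)          # (B, B) same-label mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye                                     # exclude self-pairs
    neg_mask = ~same

    # Hardest positive: largest distance among same-label pairs.
    hardest_pos = (dists * pos_mask).max(dim=1).values
    # Hardest negative: smallest distance among different-label pairs
    # (non-negatives are pushed up by a large constant so they are never picked).
    hardest_neg = (dists + (~neg_mask) * 1e6).min(dim=1).values

    return F.relu(hardest_pos - hardest_neg + margin).mean()

# Usage (placeholders): sample each minibatch so that every product appears
# more than once, embed the images, and apply the loss.
# loss = batch_hard_triplet_loss(model(images), product_ids)
```

The key design point mirrored from the abstract is that no explicit triplet list is needed: any minibatch sampled with repeated product labels yields many anchor-positive-negative combinations for free, and the mining step selects the informative ones on the fly.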

Related articles: Most relevant | Search more
arXiv:1703.07737 [cs.CV] (Published 2017-03-22)
In Defense of the Triplet Loss for Person Re-Identification
arXiv:1912.08275 [cs.CV] (Published 2019-12-17)
A Probabilistic approach for Learning Embeddings without Supervision
arXiv:2009.10295 [cs.CV] (Published 2020-09-22)
Beyond Triplet Loss: Person Re-identification with Fine-grained Difference-aware Pairwise Loss