
arXiv:1804.09530 [cs.CL]

Strong Baselines for Neural Semi-supervised Learning under Domain Shift

Sebastian Ruder, Barbara Plank

Published 2018-04-25 (Version 1)

Novel neural models have been proposed in recent years for learning under domain shift. Most of these models, however, evaluate on only a single task, use proprietary datasets, or compare against weak baselines, which makes comparison between models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shift, compare them to recent neural approaches, and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks yield negative results: while our novel method establishes a new state of the art for sentiment analysis, it does not fare best consistently. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.
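
For readers unfamiliar with the baseline the abstract defends, a minimal sketch of classic tri-training (Zhou & Li, 2005) follows. It is an illustrative reconstruction under assumed scikit-learn-style interfaces, not the paper's implementation: the names tri_train and predict_vote, the fixed seed, the round count, and the choice of LogisticRegression are all assumptions. The paper's multi-task variant, which shares most parameters across the three models to cut tri-training's time and space cost, is not shown here.

    import numpy as np
    from sklearn.base import clone
    from sklearn.linear_model import LogisticRegression

    def tri_train(base_clf, X_lab, y_lab, X_unlab, rounds=5):
        """Classic tri-training: three classifiers bootstrap-trained on
        labeled source data, iteratively teaching each other on
        unlabeled target data."""
        rng = np.random.default_rng(0)
        clfs = []
        for _ in range(3):
            # Bootstrap sample of the labeled data to diversify the models.
            idx = rng.integers(0, len(X_lab), len(X_lab))
            clfs.append(clone(base_clf).fit(X_lab[idx], y_lab[idx]))
        for _ in range(rounds):
            preds = [clf.predict(X_unlab) for clf in clfs]
            for i in range(3):
                j, k = [m for m in range(3) if m != i]
                # An unlabeled example is pseudo-labeled for model i
                # only when the other two models agree on its label.
                agree = preds[j] == preds[k]
                if not agree.any():
                    continue
                X_aug = np.concatenate([X_lab, X_unlab[agree]])
                y_aug = np.concatenate([y_lab, preds[j][agree]])
                clfs[i] = clone(base_clf).fit(X_aug, y_aug)
        return clfs

    def predict_vote(clfs, X):
        # Final label by majority vote; with three voters, the per-column
        # median of integer labels equals the majority whenever two agree.
        votes = np.stack([clf.predict(X) for clf in clfs])
        return np.median(votes, axis=0).astype(int)

    # Usage sketch (hypothetical data):
    # clfs = tri_train(LogisticRegression(max_iter=1000), X_src, y_src, X_tgt)
    # y_pred = predict_vote(clfs, X_tgt_test)

The agreement filter is what distinguishes tri-training from plain self-training: a single model cannot confirm its own pseudo-labels, so errors propagate more slowly.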
