arXiv:2205.06262 [cs.CL]

FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue

Alon Albalak, Yi-Lin Tuan, Pegah Jandaghi, Connor Pryor, Luke Yoffe, Deepak Ramachandran, Lise Getoor, Jay Pujara, William Yang Wang

Published 2022-05-12 (Version 1)

Task transfer, the transfer of knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models. Dialogue understanding encompasses many diverse tasks, yet task transfer has not been thoroughly studied in conversational AI. This work explores conversational task transfer by introducing FETA: a benchmark for few-sample task transfer in open-domain dialogue. FETA contains two underlying sets of conversations annotated with 10 and 7 tasks, respectively, enabling the study of intra-dataset task transfer, i.e., task transfer without domain adaptation. We utilize three popular language models and three learning algorithms to analyze the transferability between 132 source-target task pairs and create a baseline for future work. We run experiments in the single- and multi-source settings and report valuable findings, e.g., most performance trends are model-specific, and span extraction and multiple-choice tasks benefit the most from task transfer. Beyond task transfer, FETA can be a valuable resource for future research into the efficiency and generalizability of pre-training datasets and model architectures, as well as for learning settings such as continual and multitask learning.
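The count of 132 source-target pairs follows from the two task sets: because transfer is intra-dataset, source and target must come from the same conversation set, and each ordered pair of distinct tasks is evaluated. A minimal sketch of that arithmetic (the function name is illustrative, not from the paper):

```python
def ordered_pairs(n_tasks: int) -> int:
    """Number of ordered (source, target) pairs among n_tasks distinct tasks."""
    return n_tasks * (n_tasks - 1)

# One conversation set has 10 annotated tasks, the other has 7.
# Pairs never cross sets, so the totals simply add up.
total = ordered_pairs(10) + ordered_pairs(7)  # 90 + 42
print(total)  # 132
```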

Comments: code available at
Categories: cs.CL