{ "id": "2406.06977", "version": "v1", "published": "2024-06-11T06:18:22.000Z", "updated": "2024-06-11T06:18:22.000Z", "title": "Cross-domain-aware Worker Selection with Training for Crowdsourced Annotation", "authors": [ "Yushi Sun", "Jiachuan Wang", "Peng Cheng", "Libin Zheng", "Lei Chen", "Jian Yin" ], "comment": "Accepted by ICDE 2024", "categories": [ "cs.LG", "cs.DB" ], "abstract": "Annotation through crowdsourcing draws incremental attention, which relies on an effective selection scheme given a pool of workers. Existing methods propose to select workers based on their performance on tasks with ground truth, while two important points are missed. 1) The historical performances of workers in other tasks. In real-world scenarios, workers need to solve a new task whose correlation with previous tasks is not well-known before the training, which is called cross-domain. 2) The dynamic worker performance as workers will learn from the ground truth. In this paper, we consider both factors in designing an allocation scheme named cross-domain-aware worker selection with training approach. Our approach proposes two estimation modules to both statistically analyze the cross-domain correlation and simulate the learning gain of workers dynamically. A framework with a theoretical analysis of the worker elimination process is given. To validate the effectiveness of our methods, we collect two novel real-world datasets and generate synthetic datasets. The experiment results show that our method outperforms the baselines on both real-world and synthetic datasets.", "revisions": [ { "version": "v1", "updated": "2024-06-11T06:18:22.000Z" } ], "analyses": { "keywords": [ "crowdsourced annotation", "synthetic datasets", "scheme named cross-domain-aware worker selection", "ground truth", "allocation scheme named cross-domain-aware worker" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }