{ "id": "2109.11377", "version": "v1", "published": "2021-09-23T13:47:16.000Z", "updated": "2021-09-23T13:47:16.000Z", "title": "WRENCH: A Comprehensive Benchmark for Weak Supervision", "authors": [ "Jieyu Zhang", "Yue Yu", "Yinghao Li", "Yujing Wang", "Yaming Yang", "Mao Yang", "Alexander Ratner" ], "categories": [ "cs.LG", "cs.AI", "cs.CL", "stat.ML" ], "abstract": "Recent \\emph{Weak Supervision (WS)} approaches have had widespread success in easing the bottleneck of labeling training data for machine learning by synthesizing labels from multiple potentially noisy supervision sources. However, proper measurement and analysis of these approaches remain a challenge. First, datasets used in existing works are often private and/or custom, limiting standardization. Second, WS datasets with the same name and base data often vary in terms of the labels and weak supervision sources used, a significant \"hidden\" source of evaluation variance. Finally, WS studies often diverge in terms of the evaluation protocol and ablations used. To address these problems, we introduce a benchmark platform, \\benchmark, for a thorough and standardized evaluation of WS approaches. It consists of 22 varied real-world datasets for classification and sequence tagging; a range of real, synthetic, and procedurally-generated weak supervision sources; and a modular, extensible framework for WS evaluation, including implementations for popular WS methods. We use \\benchmark to conduct extensive comparisons over more than 100 method variants to demonstrate its efficacy as a benchmark platform. The code is available at \\url{https://github.com/JieyuZ2/wrench}.", "revisions": [ { "version": "v1", "updated": "2021-09-23T13:47:16.000Z" } ], "analyses": { "keywords": [ "comprehensive benchmark", "evaluation", "benchmark platform", "multiple potentially noisy supervision sources", "procedurally-generated weak supervision sources" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }