arXiv:2408.14817 [cs.LG]

A Comprehensive Benchmark of Machine and Deep Learning Across Diverse Tabular Datasets

Assaf Shmuel, Oren Glickman, Teddy Lazebnik

Published 2024-08-27 (Version 1)

The analysis of tabular datasets is highly prevalent both in scientific research and in real-world applications of Machine Learning (ML). Unlike in many other ML tasks, Deep Learning (DL) models often do not outperform traditional methods on tabular data. Previous comparative benchmarks have shown that DL performance is frequently equivalent, or even inferior, to that of models such as Gradient Boosting Machines (GBMs). In this study, we introduce a comprehensive benchmark aimed at better characterizing the types of datasets on which DL models excel. Although several important benchmarks for tabular datasets already exist, our contribution lies in the variety and depth of our comparison: we evaluate 111 datasets with 20 different models, covering both regression and classification tasks. The datasets vary in scale and include datasets both with and without categorical variables. Importantly, our benchmark contains a sufficient number of datasets on which DL models perform best, allowing for a thorough analysis of the conditions under which DL models excel. Building on the results of this benchmark, we train a model that predicts scenarios where DL models outperform alternative methods with 86.1% accuracy (AUC 0.78). We present insights derived from this characterization and compare these findings to those of previous benchmarks.
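The abstract describes a two-stage pipeline: benchmark many model types across many tabular datasets, then train a meta-model that predicts, from dataset properties, when DL wins. The sketch below illustrates that idea only; it is not the authors' code. It uses synthetic datasets from scikit-learn's make_classification in place of the paper's 111 real datasets, a GradientBoostingClassifier and an MLPClassifier in place of the 20 benchmarked models, and a handful of illustrative meta-features (sample count, feature count, their ratio) that are assumptions, not the paper's characterization.

```python
# Minimal sketch of the benchmark-then-meta-model idea, NOT the authors'
# pipeline: compare a GBM against a simple DL-style model (an MLP) across
# several tabular datasets, then predict from dataset meta-features which
# model family wins. Datasets, meta-features, and settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
meta_X, meta_y = [], []  # per-dataset meta-features and "DL wins" labels

for seed in range(15):  # stand-in for the paper's 111 real datasets
    n_samples = int(rng.integers(200, 1500))
    n_features = int(rng.integers(5, 40))
    X, y = make_classification(n_samples=n_samples, n_features=n_features,
                               random_state=seed)

    gbm = GradientBoostingClassifier(random_state=seed)
    mlp = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 32),
                                      max_iter=300, random_state=seed))

    # Score each model family on this dataset with cross-validated AUC.
    gbm_auc = cross_val_score(gbm, X, y, cv=3, scoring="roc_auc").mean()
    mlp_auc = cross_val_score(mlp, X, y, cv=3, scoring="roc_auc").mean()

    # Illustrative meta-features; the paper's characterization is richer.
    meta_X.append([n_samples, n_features, n_samples / n_features])
    meta_y.append(int(mlp_auc > gbm_auc))

meta_X, meta_y = np.array(meta_X), np.array(meta_y)

# Meta-model: predict from dataset properties whether the DL model wins.
if 0 < meta_y.sum() < len(meta_y):  # need both outcomes to fit a classifier
    meta_model = make_pipeline(StandardScaler(), LogisticRegression())
    meta_model.fit(meta_X, meta_y)
    print("meta-model training accuracy:", meta_model.score(meta_X, meta_y))
else:
    print("one model family won on every synthetic dataset in this run")
```

The paper evaluates 20 model types over 111 real datasets and reports held-out performance for its meta-model (86.1% accuracy, AUC 0.78); the sketch collapses this to two model families and a training-set score purely to stay short and self-contained.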

Related articles:
arXiv:1706.10239 [cs.LG] (Published 2017-06-30)
Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes
arXiv:2406.03280 [cs.LG] (Published 2024-06-05, updated 2024-06-07)
FusionBench: A Comprehensive Benchmark of Deep Model Fusion
arXiv:1710.10686 [cs.LG] (Published 2017-10-29)
Regularization for Deep Learning: A Taxonomy