arXiv Analytics

arXiv:1711.02295 [cs.IR]

Quality-Efficiency Trade-offs in Machine Learning for Text Processing

Ricardo Baeza-Yates, Zeinab Liaghat

Published 2017-11-07, Version 1

Data mining, machine learning, and natural language processing are powerful techniques that can be used together to extract information from large texts. Depending on the task or problem at hand, many different approaches can be used. The methods available are continuously being optimized, but not all of them have been tested and compared on a common set of problems that can be solved with supervised machine learning algorithms. The question is what happens to the quality of the methods if we increase the training data size from, say, 100 MB to over 1 GB. Moreover, are quality gains worth it when the rate of data processing diminishes? Can we trade quality for time efficiency and recover the quality loss simply by being able to process more data? We attempt to answer these questions in a general way for text processing tasks, considering the trade-offs involving training data size, learning time, and quality obtained. We propose a performance trade-off framework and apply it to three important text processing problems: Named Entity Recognition, Sentiment Analysis, and Document Classification. These problems were also chosen because they operate at different levels of object granularity: words, paragraphs, and documents. For each problem, we selected several supervised machine learning algorithms and evaluated their trade-offs on large publicly available data sets (news, reviews, patents). To explore these trade-offs, we use data subsets of increasing size, ranging from 50 MB to several GB. We also consider the impact of the data set and the evaluation technique. We find that the results do not change significantly and that, most of the time, the best algorithm is also the fastest. However, we also show that the results for small data (say, less than 100 MB) differ from the results for big data, and in those cases the best algorithm is much harder to determine.
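The measurement loop the abstract describes, training the same supervised learner on nested subsets of increasing size and recording both training time and held-out quality, can be sketched in a few lines. The sketch below is purely illustrative: it uses a toy multinomial Naive Bayes classifier over a synthetic two-class corpus, and the subset sizes, labels, and vocabulary are assumptions for demonstration, not the paper's data or algorithms.

```python
# Sketch of a quality-efficiency trade-off measurement: train on nested
# subsets of increasing size, record training seconds and test accuracy.
import math
import random
import time
from collections import Counter

def train_nb(docs):
    """Fit per-label word counts; docs = [(tokens, label), ...]."""
    counts = {}          # label -> Counter of word frequencies
    priors = Counter()   # label -> number of documents
    for tokens, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(tokens)
    vocab = set(w for c in counts.values() for w in c)
    return counts, priors, vocab

def predict_nb(model, tokens):
    """Return the label with the highest smoothed log-likelihood."""
    counts, priors, vocab = model
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label, c in counts.items():
        n = sum(c.values())
        lp = math.log(priors[label] / total)
        for w in tokens:
            lp += math.log((c[w] + 1) / (n + len(vocab)))  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

def subset_tradeoff(train, test, sizes):
    """Return [(subset_size, train_seconds, accuracy), ...]."""
    rows = []
    for n in sizes:
        t0 = time.perf_counter()
        model = train_nb(train[:n])
        secs = time.perf_counter() - t0
        acc = sum(predict_nb(model, toks) == lab
                  for toks, lab in test) / len(test)
        rows.append((n, secs, acc))
    return rows

# Synthetic corpus: "pos" docs favor one word list, "neg" the other.
random.seed(0)
POS, NEG = ["good", "great", "fine"], ["bad", "awful", "poor"]

def make_doc(label):
    src = POS if label == "pos" else NEG
    return [random.choice(src + ["the", "a"]) for _ in range(20)], label

data = [make_doc(random.choice(["pos", "neg"])) for _ in range(400)]
train_set, test_set = data[:300], data[300:]
for n, secs, acc in subset_tradeoff(train_set, test_set, [50, 150, 300]):
    print(f"n={n:4d}  time={secs:.4f}s  acc={acc:.2f}")
```

In the paper's setting, the subsets would be on-disk text collections from 50 MB up to several GB and the learners real NER, sentiment, and classification systems; the shape of the experiment, accuracy and wall-clock time as functions of training size, is the same.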

Comments: Ten pages, long version of paper that will be presented at IEEE Big Data 2017 (8 pages)
Categories: cs.IR, cs.CL, cs.LG