arXiv Analytics

arXiv:1911.07309 [cs.LG]

Coverage Testing of Deep Learning Models using Dataset Characterization

Senthil Mani, Anush Sankaran, Srikanth Tamilselvam, Akshay Sethi

Published 2019-11-17, Version 1

Deep Neural Networks (DNNs), with their promising performance, are increasingly being used in safety-critical applications such as autonomous driving, cancer detection, and secure authentication. With the growing importance of deep learning, there is a need for a more standardized framework to evaluate and test deep learning models. The primary challenges involved in the automated generation of extensive test cases are: (i) neural networks are difficult to interpret and debug, and (ii) human annotators are required to generate specialized test points. In this research, we explain the need to measure the quality of a dataset and propose a test case generation system guided by the dataset's properties. From a testing perspective, four dataset quality dimensions are proposed: (i) equivalence partitioning, (ii) centroid positioning, (iii) boundary conditioning, and (iv) pair-wise boundary conditioning. The proposed system is evaluated on well-known image classification datasets such as MNIST, Fashion-MNIST, CIFAR10, CIFAR100, and SVHN against popular deep learning models such as LeNet, ResNet-20, and VGG-19. Further, we conduct various experiments to demonstrate the effectiveness of the systematic test case generation system for evaluating deep learning models.
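
The abstract does not define the four quality dimensions precisely, but two of them lend themselves to a simple geometric reading: "centroid positioning" can be read as how far a sample lies from its class centroid, and "boundary conditioning" as how close it lies to another class. The sketch below is a hypothetical Python illustration of selecting test points by such scores; the scoring functions and thresholds are assumptions for illustration, not the authors' actual method.

```python
# Hypothetical sketch of centroid- and boundary-based test point selection.
# The exact definitions used in the paper are not given in this abstract;
# the distance-based scores below are an assumption for illustration.
import numpy as np

def class_centroids(X, y):
    """Mean feature vector per class label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def centroid_positioning_scores(X, y, centroids):
    """Distance of each sample to its own class centroid
    (small = prototypical, large = atypical)."""
    return np.array([np.linalg.norm(x - centroids[c]) for x, c in zip(X, y)])

def boundary_conditioning_scores(X, y, centroids):
    """Margin between the nearest other-class distance and the own-class
    distance (small margin = close to a decision boundary)."""
    labels = sorted(centroids)
    scores = []
    for x, c in zip(X, y):
        d_own = np.linalg.norm(x - centroids[c])
        d_other = min(np.linalg.norm(x - centroids[k]) for k in labels if k != c)
        scores.append(d_other - d_own)
    return np.array(scores)

# Random data standing in for flattened 28x28 MNIST-style images.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 784))
y = rng.integers(0, 10, size=1000)

cents = class_centroids(X, y)
cp = centroid_positioning_scores(X, y, cents)
bc = boundary_conditioning_scores(X, y, cents)

# Pick, e.g., the 50 most atypical samples and the 50 closest to a class
# boundary as candidate test cases for the model under test.
atypical_idx = np.argsort(-cp)[:50]
boundary_idx = np.argsort(bc)[:50]
```

In practice these scores would be computed on learned feature embeddings rather than raw pixels, and "pair-wise boundary conditioning" would restrict the margin computation to a specific pair of classes; both are design choices not spelled out in this abstract.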

Related articles:
arXiv:1712.08645 [cs.LG] (Published 2017-12-22)
Dropout Feature Ranking for Deep Learning Models
arXiv:2011.06295 [cs.LG] (Published 2020-11-12)
When deep learning models on GPU can be accelerated by taking advantage of unstructured sparsity
arXiv:2111.07513 [cs.LG] (Published 2021-11-15, updated 2022-03-22)
A Comparative Study on Basic Elements of Deep Learning Models for Spatial-Temporal Traffic Forecasting