arXiv Analytics

arXiv:2408.02839 [stat.ML]

Optimizing Cox Models with Stochastic Gradient Descent: Theoretical Foundations and Practical Guidances

Lang Zeng, Weijing Tang, Zhao Ren, Ying Ding

Published 2024-08-05 (Version 1)

Optimizing Cox regression and its neural network variants poses substantial computational challenges in large-scale studies. Stochastic gradient descent (SGD), known for its scalability in model optimization, has recently been adapted to optimize Cox models. Unlike its conventional application, which typically targets a sum of independent individual losses, SGD for Cox models updates parameters based on the partial likelihood of a subset of data. Despite its empirical success, the theoretical foundation for optimizing the Cox partial likelihood with SGD is largely underexplored. In this work, we demonstrate that the SGD estimator targets an objective function that is batch-size-dependent. We establish that the SGD estimator for the Cox neural network (Cox-NN) is consistent and achieves the optimal minimax convergence rate up to a polylogarithmic factor. For Cox regression, we further prove the $\sqrt{n}$-consistency and asymptotic normality of the SGD estimator, with variance depending on the batch size. Furthermore, we quantify the impact of batch size on Cox-NN training and its effect on the asymptotic efficiency of the SGD estimator in Cox regression. These findings are validated by extensive numerical experiments and provide guidance for selecting batch sizes in SGD applications. Finally, we demonstrate the effectiveness of SGD in a real-world application where gradient descent (GD) is infeasible due to the large scale of the data.
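
To make the batch-wise objective concrete, the following is a minimal illustrative sketch (not the authors' implementation) of mini-batch SGD for a linear Cox model: each update descends the negative log partial likelihood computed with risk sets formed inside the sampled batch only, which is why the implied objective depends on the batch size. All names and hyperparameter values (batch_size, lr, epochs) are illustrative assumptions.

    import numpy as np

    def batch_cox_nll_grad(X_b, t_b, d_b, beta):
        """Gradient of the negative log partial likelihood on one batch.

        X_b: (m, p) covariates; t_b: (m,) observed times; d_b: (m,) event indicators.
        Risk sets are formed within the batch only (batch-size-dependent objective).
        """
        eta = X_b @ beta                     # linear predictors
        w = np.exp(eta - eta.max())          # stabilized exponentials (max cancels in ratios)
        grad = np.zeros_like(beta)
        m = len(t_b)
        for i in range(m):
            if d_b[i] == 1:
                at_risk = t_b >= t_b[i]      # batch-internal risk set for event i
                denom = w[at_risk].sum()
                weighted_mean = (w[at_risk][:, None] * X_b[at_risk]).sum(axis=0) / denom
                grad -= X_b[i] - weighted_mean
        return grad / m

    def sgd_cox(X, t, d, batch_size=64, lr=0.1, epochs=50, seed=0):
        """Mini-batch SGD on the batch-wise Cox partial likelihood (illustrative)."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(epochs):
            perm = rng.permutation(n)
            for start in range(0, n, batch_size):
                idx = perm[start:start + batch_size]
                beta -= lr * batch_cox_nll_grad(X[idx], t[idx], d[idx], beta)
        return beta

Replacing the linear predictor X_b @ beta with the output of a neural network (trained by backpropagating the same batch-wise loss) gives the Cox-NN setting discussed above.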

Related articles:
arXiv:2207.04922 [stat.ML] (Published 2022-07-11)
On uniform-in-time diffusion approximation for stochastic gradient descent
arXiv:1712.07424 [stat.ML] (Published 2017-12-20)
ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent
arXiv:2407.07670 [stat.ML] (Published 2024-07-10)
Stochastic Gradient Descent for Two-layer Neural Networks