arXiv Analytics

arXiv:2402.12264 [cs.LG]

Uncertainty quantification in fine-tuned LLMs using LoRA ensembles

Oleksandr Balabanov, Hampus Linander

Published 2024-02-19 (Version 1)

Fine-tuning large language models can improve task-specific performance, but a general understanding of what the fine-tuned model has learned, what it has forgotten, and how far its predictions can be trusted is still missing. We derive principled uncertainty quantification for fine-tuned LLMs with posterior approximations using computationally efficient low-rank adaptation (LoRA) ensembles. We analyze three common multiple-choice datasets using LoRA ensembles based on Mistral-7b, and draw quantitative and qualitative conclusions about their perceived complexity and about model efficacy on the different target domains during and after fine-tuning. In particular, backed by the numerical experiments, we hypothesise about signals from entropic uncertainty measures for data domains that are inherently difficult for a given architecture to learn.
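The abstract's entropic uncertainty measures rest on a standard ensemble decomposition: the entropy of the ensemble-averaged prediction (total uncertainty) splits into the mean entropy of individual members (an aleatoric proxy) and the mutual information between predictions and members (an epistemic proxy). The sketch below is not the authors' code; the array name `member_probs`, the member count, and the example probabilities are illustrative assumptions for a single multiple-choice question.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# entropy-based uncertainty measures for an ensemble of LoRA
# fine-tuned members answering one multiple-choice question.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) of a categorical distribution."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def ensemble_uncertainties(member_probs):
    """Decompose predictive uncertainty for an ensemble.

    member_probs: array of shape (n_members, n_options), softmax
    probabilities over answer options from each LoRA member.
    Returns (total entropy, expected member entropy, mutual information).
    """
    mean_probs = member_probs.mean(axis=0)     # ensemble-averaged prediction
    total = entropy(mean_probs)                # entropy of the mean (total)
    aleatoric = entropy(member_probs).mean()   # mean of member entropies
    epistemic = total - aleatoric              # mutual information (epistemic)
    return total, aleatoric, epistemic

# Hypothetical example: five LoRA members scoring a 4-option question.
member_probs = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.60, 0.20, 0.10, 0.10],
    [0.30, 0.40, 0.20, 0.10],
    [0.65, 0.15, 0.10, 0.10],
    [0.25, 0.45, 0.20, 0.10],
])
total, aleatoric, epistemic = ensemble_uncertainties(member_probs)
print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")
```

High mutual information (members disagreeing while each being individually confident) would flag epistemic uncertainty, whereas uniformly flat member predictions would point to inherent difficulty of the question for the architecture.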

Related articles:
arXiv:2306.01001 [cs.LG] (Published 2023-05-31)
DiffLoad: Uncertainty Quantification in Load Forecasting with Diffusion Model
arXiv:2311.05795 [cs.LG] (Published 2023-11-10)
Improvements on Uncertainty Quantification for Node Classification via Distance-Based Regularization
arXiv:2309.13192 [cs.LG] (Published 2023-09-22)
Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation