arXiv Analytics

arXiv:2403.04964 [cs.AI]

Tell me the truth: A system to measure the trustworthiness of Large Language Models

Carlo Lipizzi

Published 2024-03-08 (Version 1)

Large Language Models (LLMs) have taken the front seat in most of the news since November 2022, when ChatGPT was introduced. After more than one year, one of the major reasons companies are resistant to adopting them is the limited confidence they have in the trustworthiness of those systems. In a study by (Baymard, 2023), ChatGPT-4 showed an 80.1% false-positive error rate in identifying usability issues on websites. A January 2024 study in JAMA Pediatrics found that ChatGPT has an accuracy rate of 17% when diagnosing pediatric medical cases (Barile et al., 2024). But then, what is "trust"? Trust is a relative, subjective condition that can change across cultures, domains, and individuals. And then, given a domain, how can the trustworthiness of a system be measured? In this paper, I present a systematic approach to measuring trustworthiness based on a predefined ground truth, represented as a knowledge graph of the domain. The approach is a process with humans in the loop to validate the representation of the domain and to fine-tune the system. Measuring trustworthiness is essential for all entities operating in critical environments, such as healthcare, defense, and finance, but it is also very relevant for all users of LLMs.
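The abstract does not include an implementation, but the core idea of scoring a model's output against a curated knowledge-graph ground truth can be sketched. The following minimal Python sketch is illustrative only: the Triple and DomainKnowledgeGraph classes, the trust_score function, and the sample facts are hypothetical assumptions, not the author's code, and the step of extracting triples from an LLM answer is omitted.

    # Illustrative sketch only; all names and facts here are hypothetical,
    # not taken from the paper. It scores an LLM's claims against a
    # human-validated knowledge graph of the domain.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Triple:
        """A (subject, relation, object) fact, the unit of the ground truth."""
        subject: str
        relation: str
        obj: str

    class DomainKnowledgeGraph:
        """Ground truth for one domain, curated and validated by human experts."""

        def __init__(self, facts: set[Triple]):
            self.facts = facts

        def supports(self, claim: Triple) -> bool:
            """True if the claim matches a validated fact in the graph."""
            return claim in self.facts

        def trust_score(self, claims: list[Triple]) -> float:
            """Fraction of the model's claims backed by the ground truth."""
            if not claims:
                return 0.0
            return sum(self.supports(c) for c in claims) / len(claims)

    if __name__ == "__main__":
        # Toy healthcare-flavored ground truth (hypothetical facts).
        graph = DomainKnowledgeGraph({
            Triple("amoxicillin", "treats", "otitis media"),
            Triple("otitis media", "is_a", "ear infection"),
        })

        # Triples extracted from an LLM answer (extraction step not shown).
        llm_claims = [
            Triple("amoxicillin", "treats", "otitis media"),    # supported
            Triple("amoxicillin", "treats", "viral influenza"), # unsupported
        ]

        print(f"Trust score: {graph.trust_score(llm_claims):.2f}")  # 0.50

In this toy run, one of two claims is supported, giving a score of 0.50; the human-in-the-loop step the paper describes would correspond to experts validating and refining the graph's facts before such scoring.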

Related articles:
arXiv:2304.00008 [cs.AI] (Published 2023-03-27)
On the Creativity of Large Language Models
arXiv:2305.00050 [cs.AI] (Published 2023-04-28)
Causal Reasoning and Large Language Models: Opening a New Frontier for Causality
arXiv:2305.12487 [cs.AI] (Published 2023-05-21)
Augmenting Autotelic Agents with Large Language Models