arXiv:2311.10098 [cs.AI]
Automated Parliaments: A Solution to Decision Uncertainty and Misalignment in Language Models
Thomas Forster, Jonathan Ouwerx, Shak Ragoler
Published 2023-10-31 (Version 1)
As AI takes on a greater role in the modern world, it is essential to ensure that AI models can overcome decision uncertainty and remain aligned with human morality and interests. This paper proposes a method for improving the decision-making of language models (LMs) via Automated Parliaments (APs): constructs composed of AI delegates, each representing a particular perspective. Each delegate consists of three AI models: a generator, a modifier, and an evaluator. We specify two mechanisms for producing optimal solutions: the Simultaneous Modification mechanism for response creation and an evaluation mechanism for fairly assessing solutions. The overall process begins when each generator creates a response aligned with its delegate's theory. The modifiers then alter all other delegates' responses to make them more aligned with their own delegate's perspective. The evaluators collectively assess the candidates and determine the best final response. Finally, the modifiers and generators learn from the evaluators' feedback. In our experiments, we tested the evaluation mechanism, comparing single-value zero-shot prompting against AP few-shot prompting on morally contentious scenarios. The AP architecture achieved a 57.3% reduction in loss relative to the baseline. We conclude by discussing potential applications of APs, and in particular their impact when implemented as Automated Moral Parliaments.
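For readers who prefer code, the sketch below illustrates one Automated Parliament round as described in the abstract: generators draft perspective-aligned responses, each delegate's modifier revises the other delegates' drafts, and the evaluators collectively pick the best candidate. The `Delegate` structure, the toy string-based models, the averaged scoring rule, and the `automated_parliament` function are illustrative assumptions rather than the authors' implementation, and the feedback-learning step for generators and modifiers is omitted.

```python
# Minimal sketch of one Automated Parliament round (illustrative assumptions,
# not the paper's implementation). Each delegate bundles a generator, a
# modifier, and an evaluator, here modeled as plain callables.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Delegate:
    """One parliament member: a perspective plus its three models."""
    perspective: str
    generate: Callable[[str], str]          # prompt -> response aligned with the perspective
    modify: Callable[[str, str], str]       # (prompt, another delegate's response) -> revised response
    evaluate: Callable[[str, str], float]   # (prompt, candidate response) -> score from this perspective


def automated_parliament(prompt: str, delegates: List[Delegate]) -> str:
    """Run Simultaneous Modification, then a collective evaluation, for one prompt."""
    # 1. Each generator drafts a response aligned with its delegate's theory.
    drafts = [d.generate(prompt) for d in delegates]

    # 2. Simultaneous Modification: every delegate's modifier revises the
    #    drafts of all *other* delegates to pull them toward its own view.
    candidates = list(drafts)
    for i, d in enumerate(delegates):
        for j, draft in enumerate(drafts):
            if i != j:
                candidates.append(d.modify(prompt, draft))

    # 3. Evaluation: average the per-delegate scores (an assumed aggregation
    #    rule) and return the highest-scoring candidate as the end response.
    def collective_score(candidate: str) -> float:
        return sum(d.evaluate(prompt, candidate) for d in delegates) / len(delegates)

    return max(candidates, key=collective_score)


if __name__ == "__main__":
    # Toy delegates standing in for LM-backed generators, modifiers, and evaluators.
    def make_delegate(name: str) -> Delegate:
        return Delegate(
            perspective=name,
            generate=lambda p, n=name: f"[{n}] answer to: {p}",
            modify=lambda p, r, n=name: f"{r} (revised by {n})",
            evaluate=lambda p, c, n=name: float(n in c),
        )

    parliament = [make_delegate("utilitarian"), make_delegate("deontological")]
    print(automated_parliament("Should the model disclose uncertain advice?", parliament))
```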