Academic Paper

Evaluating Language Model Agency through Negotiations
Document Type
Working Paper
Source
Subject
Computer Science - Computation and Language
Computer Science - Artificial Intelligence
Computer Science - Machine Learning
Language
English
Abstract
We introduce an approach to evaluating language model (LM) agency using negotiation games. This approach better reflects real-world use cases and addresses some of the shortcomings of alternative LM benchmarks. Negotiation games enable us to study multi-turn and cross-model interactions, modulate complexity, and side-step accidental evaluation data leakage. We use our approach to test six widely used and publicly accessible LMs, evaluating performance and alignment in both self-play and cross-play settings. Noteworthy findings include: (i) only the closed-source models tested here were able to complete these tasks; (ii) cooperative bargaining games proved most challenging for the models; and (iii) even the most powerful models sometimes "lose" to weaker opponents.
Comment: Accepted to ICLR 2024, code and link to project data are made available at https://github.com/epfl-dlab/LAMEN