Academic Article

Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System
Document Type
Working Paper
Source
Subject
Computer Science - Machine Learning
Computer Science - Computation and Language
Computer Science - Cryptography and Security
Language
Abstract
Neural language models are increasingly deployed into APIs and websites that allow a user to pass in a prompt and receive generated text. Many of these systems do not reveal generation parameters. In this paper, we present methods to reverse-engineer the decoding method used to generate text (i.e., top-$k$ or nucleus sampling). Our ability to discover which decoding strategy was used has implications for detecting generated text. Additionally, the process of discovering the decoding strategy can reveal biases caused by selecting decoding settings which severely truncate a model's predicted distributions. We perform our attack on several families of open-source language models, as well as on production systems (e.g., ChatGPT).
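The decoding strategies the abstract names, top-$k$ and nucleus (top-$p$) sampling, both truncate the model's next-token distribution before sampling; the truncation artifacts are what make the strategy recoverable from outputs. The sketch below (illustrative only, not code from the paper) shows how each rule filters and renormalizes a probability vector:

```python
def top_k_filter(probs, k):
    """Keep the k most probable tokens, zero the rest, renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:k])
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]

def nucleus_filter(probs, p):
    """Keep the smallest set of top tokens whose cumulative mass >= p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in order:
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    kept = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(kept)
    return [q / total for q in kept]

# A toy next-token distribution over a 5-token vocabulary.
dist = [0.5, 0.25, 0.15, 0.07, 0.03]
print(top_k_filter(dist, 2))      # tokens beyond the top 2 get probability 0
print(nucleus_filter(dist, 0.9))  # the kept set adapts to the distribution's shape
```

Note that top-$k$ always keeps a fixed number of tokens, while the nucleus set grows or shrinks with the entropy of the distribution; tokens outside either set can never appear in the output, which is the kind of severe truncation bias the abstract refers to.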
Comment: 6 pages, 4 figures, 3 tables, plus a 5-page appendix. Accepted to INLG 2023