Academic Paper

Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models
Document Type
Working Paper
Source
In Proceedings of the RANLP 2023 Student Research Workshop
Subject
Computer Science - Computation and Language
Computer Science - Machine Learning
Language
English
Abstract
A wide variety of natural language tasks are currently being addressed with large-scale language models (LLMs). These models are usually trained on very large amounts of unsupervised text data and adapted to a downstream natural language task using methods such as fine-tuning, calibration, or in-context learning. In this work, we propose an approach to adapt the prior class distribution for text classification tasks without the need for labelled samples, using only a few in-domain sample queries. The proposed approach treats the LLM as a black box, adding a stage in which the model posteriors are calibrated to the task. Results show that these methods outperform the un-adapted model for different numbers of training shots in the prompt, as well as a previous approach in which calibration is performed without using any adaptation data.
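As a rough illustration of the kind of black-box prior adaptation the abstract describes, the sketch below re-estimates class priors from the model's posteriors on a handful of unlabelled in-domain queries and re-weights those posteriors accordingly. The EM-style update (in the spirit of Saerens et al., 2002) and all function and variable names are assumptions for illustration only, not the paper's exact procedure.

```python
import numpy as np

def adapt_priors(posteriors, train_priors, n_iter=50, tol=1e-6):
    """Unsupervised prior adaptation of black-box posteriors.

    posteriors:   (N, C) array of model posteriors P(c | x) over C classes
                  for N unlabelled in-domain queries.
    train_priors: (C,) array of the class priors implied by the model
                  (e.g. estimated from its zero-/few-shot behaviour).
    Returns the calibrated posteriors and the adapted priors.
    """
    new_priors = train_priors.copy()
    for _ in range(n_iter):
        # E-step: re-weight each posterior by the ratio of new to old priors,
        # then renormalize per example.
        weighted = posteriors * (new_priors / train_priors)
        weighted /= weighted.sum(axis=1, keepdims=True)
        # M-step: update the priors as the mean of the adjusted posteriors.
        updated = weighted.mean(axis=0)
        if np.abs(updated - new_priors).max() < tol:
            new_priors = updated
            break
        new_priors = updated
    # Final posteriors calibrated to the adapted prior.
    calibrated = posteriors * (new_priors / train_priors)
    calibrated /= calibrated.sum(axis=1, keepdims=True)
    return calibrated, new_priors

# Example usage with synthetic posteriors for a 3-class task:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_posteriors = rng.dirichlet(alpha=[2.0, 1.0, 1.0], size=20)
    uniform_prior = np.full(3, 1.0 / 3.0)
    cal, priors = adapt_priors(fake_posteriors, uniform_prior)
    print("adapted priors:", priors)
```

Because the model is treated as a black box, only its output posteriors are needed; no gradients, labels, or access to internals are assumed.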