Academic Paper

Parameter-Efficient Instruction Tuning of Large Language Models For Extreme Financial Numeral Labelling
Document Type
Working Paper
Subject
Computer Science - Computation and Language
Computer Science - Computational Engineering, Finance, and Science
Computer Science - Machine Learning
Language
English
Abstract
We study the problem of automatically annotating relevant numerals (GAAP metrics) occurring in financial documents with their corresponding XBRL tags. Unlike prior works, we investigate the feasibility of solving this extreme classification problem using a generative paradigm through instruction tuning of Large Language Models (LLMs). To this end, we leverage metric metadata information to frame our target outputs, and propose a parameter-efficient solution for the task using LoRA. We perform experiments on two recently released financial numeric labelling datasets. Our proposed model, FLAN-FinXC, achieves a new state of the art on both datasets, outperforming several strong baselines. We explain the better scores of our proposed model by demonstrating its capability on zero-shot as well as the least frequently occurring tags. Moreover, even when we fail to predict the XBRL tags correctly, our generated output has substantial overlap with the ground truth in the majority of cases.
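The parameter-efficient solution mentioned in the abstract is based on LoRA, which freezes the pretrained weights and learns a low-rank update ΔW = BA. Below is a minimal NumPy sketch of that reparameterisation; the dimensions, rank, and scaling factor are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Hypothetical dimensions for a single weight matrix (illustrative only).
d, k, r = 768, 768, 8           # full dimensions; low rank r << min(d, k)
alpha = 16                      # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # trainable; zero init, so ΔW = 0 at start

# Effective weight during fine-tuning: only A and B receive gradients.
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4f}")  # → 0.0208
```

With these illustrative dimensions, only about 2% of the matrix's parameters are trained, which is the source of the memory and compute savings the abstract refers to.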
Comment: This work has been accepted to appear at the North American Chapter of the Association for Computational Linguistics (NAACL), 2024