Scholarly Article

Improving antibody language models with native pairing
Document Type: Working Paper
Subject: Quantitative Biology - Biomolecules
Language: English
Abstract
Current antibody language models are limited by their use of unpaired antibody sequence data and by the biases in publicly available antibody sequence datasets, which are skewed toward antibodies against a relatively small number of pathogens. A recently published dataset (Jaffe et al.) of approximately 1.6 × 10^6 natively paired human antibody sequences from healthy donors is by far the largest dataset of its kind and offers a unique opportunity to evaluate how antibody language models can be improved by training with natively paired antibody sequence data. We trained two baseline antibody language models (BALM) using natively paired (BALM-paired) or unpaired (BALM-unpaired) sequences from the Jaffe dataset. We provide evidence that training with natively paired sequences substantially improves model performance and that this improvement results from the model learning immunologically relevant features that span the heavy and light chains. We also show that ESM-2, a state-of-the-art general protein language model, learns similar cross-chain features when fine-tuned with natively paired antibody sequence data.
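To make the paired-training setup concrete, the sketch below shows one plausible way to present a natively paired antibody sequence to a masked language model such as ESM-2: the heavy and light chains are concatenated with a separator token so that masked-residue prediction can attend across both chains. This is a minimal illustration, not the authors' documented pipeline; the checkpoint, the separator choice, the example sequences, and the 15% masking rate are all assumptions.

```python
# Minimal sketch (assumptions, not the authors' exact preprocessing) of
# masked-language-model training on a natively paired antibody sequence.
from transformers import AutoTokenizer, EsmForMaskedLM
import torch

# Small ESM-2 checkpoint chosen for illustration only.
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t12_35M_UR50D")
model = EsmForMaskedLM.from_pretrained("facebook/esm2_t12_35M_UR50D")

heavy = "EVQLVESGGGLVQPGGSLRLSCAAS"  # truncated heavy-chain example
light = "DIQMTQSPSSLSASVGDRVTITC"    # truncated light-chain example

# Join the chains with the tokenizer's EOS token so the model sees a single
# input spanning both chains (hypothetical pairing scheme).
paired = heavy + tokenizer.eos_token + light
inputs = tokenizer(paired, return_tensors="pt")

# Standard MLM objective: mask ~15% of residues and predict them,
# leaving special tokens (CLS, EOS) untouched.
labels = inputs["input_ids"].clone()
mask = torch.rand(labels.shape) < 0.15
special = torch.tensor(
    tokenizer.get_special_tokens_mask(
        labels[0].tolist(), already_has_special_tokens=True
    )
).bool()
mask &= ~special
inputs["input_ids"][mask] = tokenizer.mask_token_id
labels[~mask] = -100  # ignore unmasked positions in the loss

loss = model(**inputs, labels=labels).loss
```

Because both chains share one input, self-attention can relate light-chain residues to heavy-chain residues during masked prediction, which is the mechanism by which cross-chain features could be learned under this scheme.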