Academic Paper

Making the Most of Text Semantics to Improve Biomedical Vision--Language Processing
Document Type
Working Paper
Source
Computer Vision - ECCV 2022, LNCS vol 13696, pp 1-21
Subject
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Computation and Language
Language
English
Abstract
Multi-modal data abounds in biomedicine, such as radiology images and reports. Interpreting this data at scale is essential for improving clinical care and accelerating clinical research. Biomedical text with its complex semantics poses additional challenges in vision--language modelling compared to the general domain, and previous work has used insufficiently adapted models that lack domain-specific language understanding. In this paper, we show that principled textual semantic modelling can substantially improve contrastive learning in self-supervised vision--language processing. We release a language model that achieves state-of-the-art results in radiology natural language inference through its improved vocabulary and novel language pretraining objective leveraging semantics and discourse characteristics in radiology reports. Further, we propose a self-supervised joint vision--language approach with a focus on better text modelling. It establishes new state-of-the-art results on a wide range of publicly available benchmarks, in part by leveraging our new domain-specific language model. We release a new dataset with locally-aligned phrase grounding annotations by radiologists to facilitate the study of complex semantic modelling in biomedical vision--language processing. A broad evaluation, including on this new dataset, shows that our contrastive learning approach, aided by textual-semantic modelling, outperforms prior methods in segmentation tasks, despite only using a global-alignment objective.
Comment: To appear in ECCV 2022. Code: https://aka.ms/biovil-code Dataset: https://aka.ms/ms-cxr Demo Notebook: https://aka.ms/biovil-demo-notebook
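The "global-alignment objective" mentioned in the abstract refers to contrastively aligning a whole-image embedding with a whole-report embedding. The following is a minimal illustrative sketch only, not the released BioViL implementation: the function name, temperature value, and random stand-in encoder outputs are assumptions made for the example.

import torch
import torch.nn.functional as F


def global_alignment_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric image--text contrastive (InfoNCE-style) loss.

    image_emb, text_emb: (batch_size, dim) projected embeddings of the image
    and its paired report. Matching pairs share a row index; all other rows
    in the batch serve as negatives.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Cosine-similarity logits between every image and every report in the batch.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for encoder outputs.
    imgs, txts = torch.randn(8, 128), torch.randn(8, 128)
    print(global_alignment_loss(imgs, txts).item())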