Academic Article

Memory Guided Transformer With Spatio-Semantic Visual Extractor for Medical Report Generation
Document Type
Periodical
Source
IEEE Journal of Biomedical and Health Informatics, 28(5):3079-3089, May 2024
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Signal Processing and Analysis
Transformers
Radiology
Semantics
Feature extraction
Visualization
Decoding
Medical diagnostic imaging
Medical report generation
Deformable network
Semantic network
Spatio-semantic visual extractor
Language
English
ISSN
2168-2194 (Print)
2168-2208 (Electronic)
Abstract
Medical imaging-based report writing for effective diagnosis in radiology is time-consuming and can be error-prone, particularly for inexperienced radiologists. Automatic report generation helps radiologists avoid missed diagnoses and saves valuable time. Recently, transformer-based medical report generation has become prominent because the attention mechanism captures long-term dependencies in sequential data. Nevertheless, the input features obtained from the traditional visual extractor of a conventional transformer do not capture the spatial and semantic information of an image, so the transformer cannot capture fine-grained details and may not produce detailed, descriptive reports of radiology images. We therefore propose a spatio-semantic visual extractor (SSVE) to capture multi-scale spatial and semantic information from radiology images. We incorporate two networks into a ResNet-101 backbone: (i) a deformable network at an intermediate layer of ResNet-101, which uses deformable convolutions to obtain spatially invariant features, and (ii) a semantic network at the final layer of the backbone, which uses dilated convolutions to extract rich multi-scale semantic information. These network representations are then fused to encode fine-grained details of radiology images. Our proposed model outperforms existing works on two radiology report datasets, IU X-ray and MIMIC-CXR.
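
The abstract's description of the SSVE (a deformable branch at an intermediate ResNet-101 layer plus a dilated-convolution branch at the final layer, with the two representations fused) lends itself to a short sketch. The following is a minimal PyTorch sketch, assuming torchvision's DeformConv2d and plain dilated Conv2d layers; the class names, channel sizes, offset-prediction convolution, dilation rates, and 1x1-convolution fusion are illustrative assumptions, not the authors' exact design.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d
from torchvision.models import resnet101

class DeformableBlock(nn.Module):
    """Deformable convolution whose sampling offsets are predicted from the input."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Predict 2 offsets (x, y) per kernel location, as DeformConv2d expects.
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):
        return self.deform(x, self.offset(x))

class SemanticBlock(nn.Module):
    """Parallel dilated convolutions to gather multi-scale semantic context."""
    def __init__(self, channels, dilations=(1, 2, 4)):  # assumed dilation rates
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations)
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class SSVE(nn.Module):
    """Spatio-semantic visual extractor sketch on a ResNet-101 backbone."""
    def __init__(self):
        super().__init__()
        net = resnet101(weights=None)  # torchvision >= 0.13 API
        # Intermediate features after layer2: 512 channels at 1/8 resolution.
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool,
                                  net.layer1, net.layer2)
        self.deformable = DeformableBlock(512)
        # Final features after layer4: 2048 channels at 1/32 resolution.
        self.tail = nn.Sequential(net.layer3, net.layer4)
        self.semantic = SemanticBlock(2048)
        # Fusion (assumed scheme): project and downsample the spatial branch
        # to match the final feature map, then merge with a 1x1 convolution.
        self.proj = nn.Conv2d(512, 2048, 1, stride=4)
        self.fusion = nn.Conv2d(2048 * 2, 2048, 1)

    def forward(self, images):
        mid = self.deformable(self.stem(images))   # spatial branch
        fin = self.semantic(self.tail(mid))        # semantic branch
        spatial = self.proj(mid)                   # align 1/8 -> 1/32 resolution
        return self.fusion(torch.cat([spatial, fin], dim=1))

feats = SSVE()(torch.randn(1, 3, 224, 224))  # -> (1, 2048, 7, 7)

In a full report-generation pipeline, the fused feature map would be flattened into a sequence of visual tokens and fed to the transformer encoder-decoder that produces the report text; that stage is omitted here.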