Academic Paper

A Joint Spatial and Magnification Based Attention Framework for Large Scale Histopathology Classification
Document Type
Conference
Source
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 3771-3779, Jun. 2021
Subject
Computing and Processing
Training
Deep learning
Histopathology
Microscopy
Tools
Probability distribution
Pattern recognition
Language
English
ISSN
2160-7516
Abstract
Deep learning has achieved great success in processing large medical images such as histopathology slides. However, conventional deep learning methods cannot handle these enormous image sizes directly; instead, they split the image into patches which are exhaustively processed, usually through multi-instance learning approaches. Moreover, and especially in histopathology, determining the most appropriate magnification at which to generate these patches is also exhaustive: a model needs to traverse all possible magnifications to select the optimal one. These limitations make the application of deep learning to large medical images, and in particular histopathological images, markedly inefficient. To tackle these problems, we propose a novel spatial and magnification based attention sampling strategy. First, we use a down-sampled large image to estimate an attention map that represents a spatial probability distribution of informative patches at different magnifications. Then a small number of patches are cropped from the large medical image at certain magnifications based on the obtained attention. The final label of the large image is predicted solely from these patches using an end-to-end training strategy. Our experiments on two different histopathology datasets, the publicly available BACH and a subset of the TCGA-PRAD dataset, demonstrate that the proposed method runs 2.5 times faster in training with automatic magnification selection, and at least 1.6 times faster in inference than processing all patches as most state-of-the-art methods do, without losing performance.
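The core sampling step described in the abstract can be illustrated with a minimal sketch: treat the attention map estimated from the down-sampled image as a spatial probability distribution, draw a small number of grid cells from it, and crop the corresponding patches from the full-resolution image. This is a hypothetical simplification for illustration (a single magnification, NumPy instead of a deep learning framework, and a made-up `sample_patches` helper), not the authors' implementation.

```python
import numpy as np

def sample_patches(image, attention, num_patches=4, patch_size=32, seed=0):
    """Crop patches from `image` at locations drawn from `attention`.

    `attention` is a coarse 2D map over the down-sampled image; each cell
    is mapped back to a top-left corner in the full-resolution image.
    """
    rng = np.random.default_rng(seed)
    # Normalize the attention map into a probability distribution.
    probs = attention.flatten().astype(float)
    probs /= probs.sum()
    # Sample a small number of distinct grid cells proportionally to attention.
    idx = rng.choice(probs.size, size=num_patches, replace=False, p=probs)
    gh, gw = attention.shape
    stride_h = image.shape[0] // gh  # grid cell -> pixel stride
    stride_w = image.shape[1] // gw
    patches = []
    for i in idx:
        r, c = divmod(i, gw)
        y, x = r * stride_h, c * stride_w
        patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# Toy full-resolution "slide" and an 8x8 attention grid over it.
image = np.arange(256 * 256, dtype=float).reshape(256, 256)
attention = np.random.default_rng(1).random((8, 8))
patches = sample_patches(image, attention, num_patches=4, patch_size=32)
```

In the paper's full method the attention additionally spans magnification levels, so the sampled index would select both a spatial location and a magnification; here only the spatial part is sketched.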