Academic Paper

On Guiding Visual Attention with Language Specification
Document Type
Conference
Source
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18071-18081, Jun. 2022
Subject
Computing and Processing
Visualization
Computer vision
Image recognition
Training data
Feature extraction
Pattern recognition
Numerical models
Vision + language; Computer vision for social good; Explainable computer vision; Transparency, fairness, accountability, privacy and ethics in vision; Visual reasoning
Language
ISSN
2575-7075
Abstract
While real-world challenges typically define visual categories with language words or phrases, most visual classification methods define categories with numerical indices. However, the language specification of the classes provides an especially useful prior for biased and noisy datasets, where it can help disambiguate what features are task-relevant. Recently, large-scale multimodal models have been shown to recognize a wide variety of high-level concepts from a language specification even without additional image training data, but they are often unable to distinguish classes for more fine-grained tasks. CNNs, in contrast, can extract subtle image features that are required for fine-grained discrimination, but will overfit to any bias or noise in datasets. Our insight is to use high-level language specification as advice for constraining the classification evidence to task-relevant features, instead of distractors. To do this, we ground task-relevant words or phrases with attention maps from a pretrained large-scale model. We then use this grounding to supervise a classifier's spatial attention away from distracting context. We show that supervising spatial attention in this way improves performance on classification tasks with biased and noisy data, including ~3-15% worst-group accuracy improvements and ~41-45% relative improvements on fairness metrics.
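To make the abstract's approach concrete, below is a minimal PyTorch sketch of the kind of training step it describes: a language-grounded relevance map (here a random stand-in for what a pretrained vision-language model such as CLIP would produce) supervises the classifier's Grad-CAM-style spatial attention alongside the usual classification loss. The backbone choice, the thresholded "outside-the-relevant-region" penalty, and the 0.1 loss weight are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models


def gradcam_map(features, logits, labels):
    """Grad-CAM-style spatial attention for each sample's ground-truth class."""
    score = logits.gather(1, labels.view(-1, 1)).sum()
    grads = torch.autograd.grad(score, features, create_graph=True)[0]  # (B, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)                      # channel importance
    cam = F.relu((weights * features).sum(dim=1))                       # (B, H, W)
    return cam / (cam.flatten(1).sum(dim=1).view(-1, 1, 1) + 1e-8)      # normalize to sum 1


# Classifier backbone; its last conv block provides the spatial attention we supervise.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 2)

feats = {}
backbone.layer4.register_forward_hook(lambda m, i, o: feats.update(conv=o))

images = torch.randn(4, 3, 224, 224)   # toy batch
labels = torch.randint(0, 2, (4,))

# Stand-in for the language-grounded relevance map: in the paper this comes from
# grounding task-relevant words with a pretrained vision-language model; here it
# is random, at the 7x7 resolution of the backbone's last feature map.
lang_map = torch.rand(4, 7, 7)
lang_map = lang_map / lang_map.flatten(1).sum(dim=1).view(-1, 1, 1)

logits = backbone(images)
cls_loss = F.cross_entropy(logits, labels)

cam = gradcam_map(feats["conv"], logits, labels)

# One plausible supervision term: suppress classifier attention that falls
# outside the language-relevant region (threshold and weight are arbitrary).
relevant = (lang_map > lang_map.flatten(1).mean(dim=1).view(-1, 1, 1)).float()
att_loss = (cam * (1.0 - relevant)).flatten(1).sum(dim=1).mean()

loss = cls_loss + 0.1 * att_loss
loss.backward()
```

Because the Grad-CAM map is built with `create_graph=True`, the attention penalty backpropagates through the classifier's own gradients, which is what lets the language grounding steer which image regions the model uses as evidence.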