Academic Article

IOSL: Incremental Open Set Learning
Document Type
Periodical
Author
Source
IEEE Transactions on Circuits and Systems for Video Technology, 34(4):2235-2248, Apr. 2024
Subject
Components, Circuits, Devices and Systems
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Task analysis
Training
Prototypes
Robots
Adaptation models
Feature extraction
Extraterrestrial measurements
Open set recognition
class incremental learning
Language
ISSN
1051-8215
1558-2205
Abstract
Class incremental learning (CIL) has drawn wide attention in academic research. However, most existing methods cannot be applied to practical scenarios in which unknown classes occur during the inference stage. To solve this problem, we target a more challenging and realistic setting: Incremental Open Set Learning (IOSL), which must reject unknown classes in test data while incrementally learning new classes. IOSL poses two coupled key challenges: 1) overcoming the catastrophic forgetting of old classes when learning new classes incrementally, given the scarcity of old training samples; and 2) minimizing the empirical classification risk on known classes and the open space risk on unknown classes. To address these challenges, we propose an incremental open-set learning method with a "future-look" ability. This ability reserves embedding space for both incrementally arriving new classes and potential unknown classes, which indirectly alleviates catastrophic forgetting and enables reliable recognition of unknown classes. Specifically, a normalized prototype learning strategy is designed to minimize the empirical classification risk and implicitly reserve some space. Moreover, we design an extra classes synthesizing module to explicitly reserve more suitable space, further minimizing the empirical classification risk while reducing the open space risk. Furthermore, we develop an adaptive metric learning loss to mitigate the class imbalance between old and new classes, which fully exploits exemplars and selects an adaptive margin for pairs of old and new classes. Extensive experiments on representative classification datasets validate the superiority of our method.
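The abstract's core mechanism (normalized class prototypes with open-set rejection) can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function names, the cosine-similarity scoring, and the fixed rejection threshold are all illustrative assumptions: a sample is assigned to its nearest normalized prototype, or rejected as unknown when no prototype is similar enough.

```python
import numpy as np

def build_prototypes(features, labels):
    """Compute one L2-normalized prototype (mean embedding) per known class.
    Illustrative sketch only -- not the paper's actual method."""
    protos = {}
    for c in np.unique(labels):
        mean = features[labels == c].mean(axis=0)
        protos[c] = mean / np.linalg.norm(mean)
    return protos

def classify_open_set(x, protos, threshold=0.5):
    """Assign x to the nearest prototype by cosine similarity, or
    reject it as 'unknown' if the best similarity falls below threshold
    (threshold value is an arbitrary assumption here)."""
    x = x / np.linalg.norm(x)
    sims = {c: float(x @ p) for c, p in protos.items()}
    best = max(sims, key=sims.get)
    return best if sims[best] >= threshold else "unknown"
```

In this toy setup, a test embedding close to a known-class prototype is accepted, while one far from every prototype falls into the reserved open space and is rejected; incremental learning would then add new prototypes without retraining old ones.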