Academic paper

Improving Self-Supervised Learning for Out-Of-Distribution Task via Auxiliary Classifier
Document Type
Conference
Source
2022 IEEE International Conference on Image Processing (ICIP), pp. 3036-3040, Oct. 2022
Subject
Computing and Processing
Signal Processing and Analysis
Training
Head
Codes
Image processing
Semantics
Self-supervised learning
Multitasking
out of distribution
auxiliary classifier
Language
English
ISSN
2381-8549
Abstract
In real-world scenarios, out-of-distribution (OOD) datasets may exhibit a large distributional shift from the training datasets. This phenomenon generally occurs when a trained classifier is deployed in varying, dynamic environments, causing a significant drop in performance. To tackle this issue, we propose an end-to-end deep multi-task network. Observing a strong relationship between rotation prediction (self-supervised) accuracy and semantic classification accuracy on OOD tasks, we introduce an additional auxiliary classification head in our multi-task network alongside the semantic classification and rotation prediction heads. To observe the influence of this auxiliary classifier in improving the rotation prediction head, the proposed learning method is framed as a bi-level optimisation problem in which the upper level updates the parameters of the semantic classification and rotation prediction heads. In the lower-level optimisation, only the auxiliary classification head is updated, with the parameters of the semantic classification head held fixed. The proposed method has been validated on three unseen OOD datasets, where it exhibits a clear improvement in semantic classification accuracy over the two baseline methods. Our code is available on GitHub: https://github.com/harshita-555/OSSL
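The abstract's alternating bi-level update (upper level: shared encoder plus semantic and rotation heads; lower level: auxiliary head only, semantic head frozen) can be sketched structurally as follows. This is an illustrative toy only: the scalar parameters, quadratic stand-in losses, and finite-difference gradients are assumptions for clarity, not the authors' implementation from the linked repository.

```python
# Toy structural sketch of the bi-level alternating update (assumption:
# scalar parameters and quadratic losses stand in for the real network).

params = {"shared": 1.0, "sem_head": 1.0, "rot_head": 1.0, "aux_head": 1.0}
LR = 0.1  # learning rate used at both levels

def upper_loss(p):
    # Upper level: semantic classification + rotation prediction losses
    # (toy quadratics standing in for the real task losses).
    sem = (p["shared"] * p["sem_head"] - 2.0) ** 2
    rot = (p["shared"] * p["rot_head"] - 1.0) ** 2
    return sem + rot

def lower_loss(p):
    # Lower level: auxiliary head trained against the (frozen) semantic head.
    return (p["aux_head"] - p["shared"] * p["sem_head"]) ** 2

def grad(loss, p, key, eps=1e-6):
    # Central finite-difference gradient of `loss` w.r.t. p[key].
    hi, lo = dict(p), dict(p)
    hi[key] += eps
    lo[key] -= eps
    return (loss(hi) - loss(lo)) / (2 * eps)

for _ in range(200):
    # Upper-level step: update shared encoder + semantic and rotation heads.
    for key in ("shared", "sem_head", "rot_head"):
        params[key] -= LR * grad(upper_loss, params, key)
    # Lower-level step: update ONLY the auxiliary head; the semantic
    # classification head's parameters are held fixed.
    params["aux_head"] -= LR * grad(lower_loss, params, "aux_head")

print({k: round(v, 3) for k, v in params.items()})
```

In a full-scale implementation the same structure would typically be realised by freezing the semantic head's parameters (e.g. excluding them from the lower-level optimiser) while the auxiliary head is trained.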