
e-Article

PnP-GA+: Plug-and-Play Domain Adaptation for Gaze Estimation Using Model Variants
Document Type
Periodical
Source
IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(5):3707-3721, May 2024
Subject
Computing and Processing
Bioengineering
Estimation
Adaptation models
Task analysis
Solid modeling
Data models
Feature extraction
Training
Gaze estimation
unsupervised
domain adaptation
group assembling
Language
English
ISSN
0162-8828 (print)
2160-9292 (CD-ROM)
1939-3539 (electronic)
Abstract
Appearance-based gaze estimation has garnered increasing attention in recent years. However, deep learning-based gaze estimation models still suffer from suboptimal performance when deployed in new domains, e.g., unseen environments or individuals. In our previous work, we addressed this challenge for the first time by introducing a plug-and-play method (PnP-GA) that adapts gaze estimation models to new domains. The core idea of PnP-GA is to leverage the diversity of a group of model variants to enhance adaptability to diverse environments. In this article, we propose PnP-GA+, which extends our approach by exploring the impact of assembling model variants along three additional perspectives: color space, data augmentation, and model structure. Moreover, we propose an intra-group attention module that dynamically optimizes pseudo-labeling during adaptation. Experimental results demonstrate that by directly plugging several existing gaze estimation networks into the PnP-GA+ framework, it outperforms state-of-the-art domain adaptation approaches on four standard gaze domain adaptation tasks on public datasets. Our method consistently improves cross-domain performance, and assembling the model group in various ways further broadens its applicability.
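The abstract describes assembling a group of model variants and an intra-group attention module that combines their predictions into pseudo-labels for adaptation. The following is a minimal, hypothetical PyTorch sketch of that aggregation step; the class name IntraGroupAttention, the tensor shapes, and the linear scoring head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class IntraGroupAttention(nn.Module):
    """Hypothetical sketch: attention-weight the gaze predictions of a
    group of model variants to form a pseudo-label for an unlabeled
    target-domain sample. All names/shapes here are assumptions."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Scores each group member's features; higher score -> larger weight.
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor, gazes: torch.Tensor) -> torch.Tensor:
        # feats: (G, B, feat_dim) features from G model variants
        # gazes: (G, B, 2) yaw/pitch predictions from the same variants
        weights = torch.softmax(self.score(feats), dim=0)  # (G, B, 1), sums to 1 over the group
        return (weights * gazes).sum(dim=0)                # (B, 2) weighted pseudo-label

# Usage: assemble variants differing in, e.g., color space, augmentation,
# or backbone; run them on target images; aggregate their outputs.
G, B, D = 4, 8, 64
attn = IntraGroupAttention(D)
pseudo = attn(torch.randn(G, B, D), torch.randn(G, B, 2))
print(pseudo.shape)  # torch.Size([8, 2])
```

In this reading, learning the weights (rather than simply averaging the group) lets the adaptation step downweight variants whose predictions are unreliable on a given target sample.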