Academic Paper

Domain Mixture: An Overlooked Scenario in Domain Adaptation
Document Type
Conference
Source
2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2019, pp. 22-27
Subject
Computing and Processing
Engineering Profession
Robotics and Control Systems
Signal Processing and Analysis
Training
Standards
Feature extraction
Robots
Machine learning
Task analysis
Optimization
Domain Adaptation
Domain Mixture
Convolutional Neural Network
Gradient Reversal
Abstract
An image-based object classification system trained on one domain usually shows decreased performance on other domains if the data distributions differ significantly. Various domain adaptation approaches exist that improve generalization between domains. However, those approaches consider only the restricted setting in which supervised samples of all competing classes are available from the source domain. We investigate here the more open and so far overlooked scenario in which, during training, only a subset of all competing classes is shown in one domain and another subset in another domain. We show the unexpected tendency of a deep learning classifier to use the domain origin as a prominent feature, resulting in poor performance when testing on samples of unseen domain-class combinations. With an existing domain adaptation method this issue can be overcome, while additional unsupervised data of all unseen domain-class combinations is not essential. First results for this overlooked scenario are extensively discussed on a modified MNIST benchmark.
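To make the described scenario concrete, the following is a minimal sketch (not from the paper; the function name, domain labels "A"/"B", and the specific class split are illustrative assumptions) of how such a domain-mixture train/test partition could be constructed: each class is observed in only one domain during training, and every unseen domain-class combination is held out for testing.

```python
# Hypothetical sketch of the "domain mixture" split: during training each
# class appears in only one domain, so a classifier can exploit domain
# origin as a shortcut feature. All names here are illustrative.

def domain_mixture_split(samples, classes_in_domain_a):
    """Partition (class_label, domain) pairs into train/test sets.

    Train: samples whose class is assigned to their domain.
    Test: the unseen domain-class combinations.
    """
    train, test = [], []
    for label, domain in samples:
        assigned_to_a = label in classes_in_domain_a
        if (domain == "A") == assigned_to_a:
            train.append((label, domain))
        else:
            test.append((label, domain))
    return train, test

# Example: MNIST-like labels 0-9, each available in two domains
# (e.g. original vs. modified digits, as on the paper's benchmark).
samples = [(label, domain) for label in range(10) for domain in ("A", "B")]
train, test = domain_mixture_split(samples, classes_in_domain_a={0, 1, 2, 3, 4})

# Digits 0-4 are trained only in domain A and 5-9 only in domain B;
# every test sample is an unseen domain-class combination.
```

Under this split, a classifier that latches onto domain cues can score well on the training distribution while failing on the held-out combinations, which is the failure mode the abstract highlights.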