Academic Paper

1-to-N Large Margin Classifier
Document Type
Conference
Source
2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pp. 316-323, Nov. 2020
Subject
Computing and Processing
Neural networks
Robustness
Mathematical model
Entropy
Training
Support vector machines
Task analysis
Margin classifier
Noise label data
Adversarial attacks
Language
English
ISSN
2377-5416
Abstract
Cross entropy with softmax is the standard loss function for classification in neural networks. However, this function can suffer from limited discriminative power, poor generalization, and a propensity to overfitting. In order to address these limitations, several approaches propose to enforce a margin at the top of the neural network, specifically at the softmax function. In this work, we present a novel formulation that aims to improve generalization and label-noise robustness not only by imposing a margin at the top of the neural network, but also by using the entire structure of the mini-batch data. Based on the distance used by SVMs to obtain a maximal margin, we propose a broader distance definition called the 1-to-N distance and an approximated probability function as the basis for our proposed loss function. We perform empirical experiments on the MNIST, CIFAR-10, and ImageNet32 datasets to demonstrate that our loss function has better generalization and label-noise robustness than the traditional cross-entropy method, showing improvements in generalization, robustness to label noise, and robustness against adversarial example attacks.
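The abstract contrasts the standard cross entropy with softmax against approaches that impose a margin at the softmax. The sketch below illustrates that contrast with a generic additive-margin variant; it is an assumed, simplified formulation for illustration only, not the paper's proposed 1-to-N loss (which the abstract does not define), and the margin value 0.35 is an arbitrary example.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Standard cross entropy with softmax for a single example.

    logits: (C,) raw class scores from the network's last layer.
    label:  integer index of the true class.
    """
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())   # log-softmax
    return -log_probs[label]

def margin_softmax_cross_entropy(logits, label, margin=0.35):
    """Generic additive-margin variant: subtract a fixed margin from the
    true-class logit before the softmax, so the true class must exceed the
    others by at least `margin` (in logit space) to reach the same loss.

    NOTE: illustrative margin-at-softmax example only; this is NOT the
    paper's 1-to-N loss, and `margin=0.35` is an arbitrary choice.
    """
    adjusted = logits.copy()
    adjusted[label] -= margin
    return softmax_cross_entropy(adjusted, label)

# The margin loss penalizes the same prediction more strongly, so minimizing
# it pushes the true-class score further above the remaining classes.
logits = np.array([2.0, 0.5, -1.0])
print(softmax_cross_entropy(logits, 0))         # ~0.241
print(margin_softmax_cross_entropy(logits, 0))  # ~0.327
```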