Academic Paper

Interpretable Neural Networks based classifiers for categorical inputs
Document Type: Working Paper
Subject: Computer Science - Machine Learning; Statistics - Machine Learning; I.5
Abstract
Because of the pervasive use of Neural Networks in human-sensitive applications, their interpretability is becoming an increasingly important topic in machine learning. In this work we introduce a simple way to interpret the output function of a neural network classifier that takes categorical variables as input. By exploiting a mapping between a neural network classifier and a physical energy model, we show that in this case each layer of the network, and the logits layer in particular, can be expanded as a sum of terms that account for the contribution of each input pattern to the classification. At first order, for instance, the expansion considers only the linear relation between input features and output, while at second order pairwise dependencies between input features are also accounted for. The analysis of the contributions of each pattern, after an appropriate gauge transformation, is presented in two cases that illustrate the effectiveness of the method.
Comment: 11 pages, 4 figures
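
The expansion described in the abstract can be sketched with a worked formula; the notation below (one-hot indicators x_i^a, bias h_c, and coupling coefficients J) is illustrative and not taken from the paper itself. Writing x_i^a = 1 when categorical feature i takes the value a and 0 otherwise, the logit of class c would take the form

f_c(x) \approx h_c
        + \sum_{i} \sum_{a} J^{(1)}_{c}(i, a) \, x_i^a
        + \sum_{i<j} \sum_{a, b} J^{(2)}_{c}(i, j, a, b) \, x_i^a \, x_j^b
        + \dots

where the first-order coefficients J^{(1)} capture the linear contribution of each feature value to the classification, and the second-order coefficients J^{(2)} capture pairwise dependencies between feature values, matching the first- and second-order terms mentioned in the abstract.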