Academic Paper

Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy
Document Type
Conference
Source
2024 58th Annual Conference on Information Sciences and Systems (CISS), pp. 1-6, March 2024
Subject
Aerospace
Communication, Networking and Broadcast Technologies
Robotics and Control Systems
Signal Processing and Analysis
Training
Geometry
Neural networks
Supervised learning
Estimation
Predictive models
Entropy
entropy
mutual information
neural network manifold
information theory
diffusion geometry
Language
English
ISSN
2837-178X
Abstract
Entropy and mutual information in neural networks provide rich information on the learning process, but they have proven difficult to compute reliably in high dimensions. Indeed, in noisy and high-dimensional data, traditional estimates in ambient dimensions approach a fixed entropy and are prohibitively hard to compute. To address these issues, we leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures. Specifically, we define diffusion spectral entropy (DSE) in neural representations of a dataset as well as diffusion spectral mutual information (DSMI) between different variables representing data. First, we show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data that outperform classic Shannon entropy, nonparametric estimation, and mutual information neural estimation (MINE). We then study the evolution of representations in classification networks with supervised learning, self-supervision, or overfitting. We observe that (1) DSE of neural representations increases during training; (2) DSMI with the class label increases during generalizable learning but stays stagnant during overfitting; (3) DSMI with the input signal shows differing trends: on MNIST it increases, while on CIFAR-10 and STL-10 it decreases. Finally, we show that DSE can be used to guide better network initialization and that DSMI can be used to predict downstream classification accuracy across 962 models on ImageNet.
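To make the DSE construction concrete, the following is a minimal Python sketch, assuming (as an interpretation of the abstract, not the paper's published implementation) that the diffusion operator is built from a Gaussian affinity kernel with bandwidth sigma, and that DSE is the Shannon entropy of the normalized eigenvalue magnitudes of that operator raised to a diffusion time t. The function names, defaults, and normalization here are illustrative assumptions.

# Hypothetical DSE sketch; the paper's exact kernel, normalization,
# and eigenvalue handling may differ.
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_matrix(X, sigma=10.0):
    # Row-stochastic diffusion operator from a Gaussian affinity kernel.
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def diffusion_spectral_entropy(X, t=1, sigma=10.0):
    # Shannon entropy over the normalized |lambda_i|^t diffusion spectrum.
    P = diffusion_matrix(X, sigma)
    lam = np.abs(np.linalg.eigvals(P)) ** t
    p = lam / lam.sum()
    p = p[p > 0]  # drop zero eigenvalues before taking logs
    return float(-(p * np.log(p)).sum())

# Example: DSE of random 64-dimensional representations of 500 samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
print(diffusion_spectral_entropy(X, t=1))

Under this reading, a spectrum dominated by a single eigenvalue gives low DSE (representations concentrated on few diffusion modes), while a flatter spectrum gives high DSE, which matches the abstract's framing of DSE as a noise-resistant measure of intrinsic dimensionality.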