Academic Article

Revealing interpretable object representations from human visual cortex and artificial neural networks
Document Type
Conference
Source
2023 11th International Winter Conference on Brain-Computer Interface (BCI), pp. 1-3, Feb. 2023
Subject
Bioengineering
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Visualization
Brain
Semantics
Artificial neural networks
Learning (artificial intelligence)
Predictive models
Brain-computer interfaces
representation learning
object representations
similarity
interpretability
behavior
fMRI
deep learning
Language
English
ISSN
2572-7672
Abstract
Predictive models are often limited by their strong focus on prediction accuracy, which leaves them prone to shortcut learning and limits out-of-set generalization. Recent interpretability methods have focused primarily on understanding the contribution of individual features or image regions to classification performance, placing less emphasis on the larger set of representational motifs that predictive models learn. In this talk, I will highlight recent work from our group aimed at revealing interpretable object representations from human behavior, patterns of brain activity, and artificial neural networks. Our approach operates at the level of triplet similarities and yields low-dimensional, human-interpretable embeddings with excellent reconstruction accuracy, capturing both perceptual and semantic representational dimensions. By trading off complexity, interpretability, and performance, this approach can reveal important contributions to prediction performance that may be useful for improving future predictive models.
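One way to read the triplet-similarity approach described in the abstract is as learning a low-dimensional, non-negative embedding whose pairwise similarities predict odd-one-out choices in triplets of objects. The sketch below illustrates that reading in PyTorch; the object count, dimensionality, sparsity penalty, optimizer settings, and random triplets are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of learning an interpretable object embedding from
# triplet odd-one-out judgments. All settings are illustrative assumptions.
import torch
import torch.nn.functional as F

n_objects = 1854      # assumed number of object concepts
n_dims = 49           # assumed target dimensionality of the embedding
l1_weight = 1e-3      # assumed sparsity penalty encouraging interpretability

# Non-negative embedding: each row holds one object's dimension weights.
embedding = torch.nn.Parameter(torch.rand(n_objects, n_dims) * 0.01)
optimizer = torch.optim.Adam([embedding], lr=1e-3)

def triplet_loss(emb, triplets):
    """triplets: LongTensor of shape (B, 3) with columns (i, j, odd_one_out).

    The model treats the odd one out as the object least similar to the
    other two; similarity is the dot product of non-negative embeddings.
    """
    x = F.relu(emb)                      # enforce non-negativity
    i, j, k = triplets.T
    sim_ij = (x[i] * x[j]).sum(-1)       # similarity of the remaining pair
    sim_ik = (x[i] * x[k]).sum(-1)
    sim_jk = (x[j] * x[k]).sum(-1)
    # Probability that (i, j) is the most similar pair, i.e. k is the odd one out.
    logits = torch.stack([sim_ij, sim_ik, sim_jk], dim=-1)
    nll = -F.log_softmax(logits, dim=-1)[:, 0].mean()
    return nll + l1_weight * x.mean()    # sparse, interpretable dimensions

# Toy training step on random triplets (real data would be behavioral choices).
triplets = torch.randint(0, n_objects, (256, 3))
loss = triplet_loss(embedding, triplets)
loss.backward()
optimizer.step()
```

In this sketch, sparsity and non-negativity are what make the learned dimensions candidates for human interpretation; reconstruction accuracy would be assessed by how well the embedding predicts held-out triplet choices.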