Academic Article

Population encoding of stimulus features along the visual hierarchy.
Document Type
article
Source
Proceedings of the National Academy of Sciences of the United States of America. 121(4)
Subject
computational neuroscience
deep networks
encoding manifold
retina
visual cortex
Animals
Mice
Visual Cortex
Visual Perception
Neural Networks, Computer
Neurons
Retina
Photic Stimulation
Abstract
The retina and primary visual cortex (V1) both exhibit diverse neural populations sensitive to a wide range of visual features. Yet it remains unclear how neural populations in each area partition stimulus space to span these features. One possibility is that neural populations are organized into discrete groups of neurons, with each group signaling a particular constellation of features. Alternatively, neurons could be continuously distributed across feature-encoding space. To distinguish these possibilities, we presented a battery of visual stimuli to the mouse retina and V1 while measuring neural responses with multi-electrode arrays. Using machine learning approaches, we developed a manifold embedding technique that captures how neural populations partition feature space and how visual responses correlate with physiological and anatomical properties of individual neurons. We show that retinal populations discretely encode features, while V1 populations provide a more continuous representation. Applying the same analysis approach to convolutional neural networks that model visual processing, we demonstrate that they partition features much more similarly to the retina, indicating that they are more like big retinas than little brains.