University of Cambridge > Isaac Newton Institute Seminar Series

Artificial neurons meet real neurons: pattern selectivity in V4 via deep learning



SNAW01 - Graph limits and statistics

Co-authors: Yuansi Chen (UCB), Reza Abbasi-Asl (UCB), Adam Bloniarz (UCB), Jack Gallant (UCB)

Vision in humans and non-human primates is mediated by a constellation of hierarchically organized visual areas. One important area is V4, a large retinotopically organized area intermediate between primary visual cortex and high-level areas in the inferior temporal lobe. V4 neurons have highly nonlinear response properties, which has made it difficult to construct quantitative models that accurately describe how visual information is represented in V4. To better understand the filtering properties of V4 neurons, we recorded from 71 well-isolated cells stimulated with natural images. We fit predictive models of neuron spike rates using transformations of natural images learned by a convolutional neural network (CNN) trained for image classification on the ImageNet dataset. To derive a model for each neuron, we first propagate each stimulus image forward to an inner layer of the CNN. We use the activations of the inner layer as the feature (predictor) vector in a high-dimensional regression, where the response rate of the V4 neuron is taken as the response vector. Thus, the final model for each neuron consists of a multilayer nonlinear transformation provided by the CNN and one final linear layer of weights provided by regression. We find that models using the first two layers of three well-known CNNs predict the responses of V4 neurons better than a conventional Gabor-like wavelet model. To characterize the spatial and pattern selectivity of each V4 neuron, we both explicitly optimize the input image to maximize the predicted spike rate and visualize the selected filters of the CNN. We also perform dimensionality reduction by sparse PCA to visualize the population of neurons. Finally, we show the stability of our analysis across the three CNNs and conclude that V4 neurons are tuned to a remarkable diversity of shapes, such as curves, blobs, checkerboard patterns, and V1-like gratings.
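The regression readout described above can be sketched as follows. This is a minimal illustration, not the authors' code: random features stand in for the CNN inner-layer activations (the actual network outputs are not available here), and the dataset sizes, sparsity level, and ridge penalty are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat = 300, 100, 80  # assumed sizes for illustration

# Stand-in "CNN activations" (rows: stimuli, cols: flattened inner-layer units)
X = rng.normal(size=(n_train + n_test, n_feat))
w_true = np.zeros(n_feat)
w_true[:15] = rng.normal(size=15)       # sparse "true" tuning of the neuron
# Stand-in spike rates: linear readout of the features plus noise
y = X @ w_true + 0.1 * rng.normal(size=n_train + n_test)

X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

# Ridge regression, w = (X'X + lam*I)^{-1} X'y, gives the final linear layer
lam = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_feat), X_tr.T @ y_tr)

# Held-out correlation between predicted and observed rates (a common fit metric)
r = float(np.corrcoef(y_te, X_te @ w)[0, 1])
```

In the actual pipeline the feature matrix would come from propagating each stimulus image to a chosen inner CNN layer; only the final linear layer of weights is fit per neuron.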
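The "optimize the input image to maximize the predicted spike rate" step amounts to gradient ascent on the fitted model. The toy sketch below uses a one-layer tanh network as a stand-in for the CNN-plus-readout model so the gradient can be written by hand; all dimensions, the step size, and the norm constraint are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 64, 32                    # "pixels" and hidden units (assumed)
A = rng.normal(size=(d_hid, d_in)) / np.sqrt(d_in)
w = rng.normal(size=d_hid)

def predict(x):
    """Stand-in predicted spike rate for input 'image' x."""
    return float(w @ np.tanh(A @ x))

def grad(x):
    """Gradient of predict with respect to x (chain rule, tanh' = 1 - tanh^2)."""
    h = np.tanh(A @ x)
    return A.T @ (w * (1.0 - h ** 2))

x = 0.01 * rng.normal(size=d_in)        # start near a blank image
r0 = predict(x)
for _ in range(200):                    # projected gradient ascent
    x += 0.1 * grad(x)
    n = np.linalg.norm(x)
    if n > 5.0:                         # keep the "image" bounded
        x *= 5.0 / n
r1 = predict(x)
```

With the real model, the gradient would be obtained by backpropagation through the CNN; the resulting image visualizes the pattern the neuron's model is most selective for.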

This talk is part of the Isaac Newton Institute Seminar Series.




© 2006-2023, University of Cambridge.