End-to-end topographic networks as models of cortical map formation and human visual behaviour

If you have a question about this talk, please contact Adam Triabhall.

This week we will discuss and debate a very recent paper by Lu and colleagues (2025).

Abstract: “A prominent feature of the primate visual system is its topographic organization. For understanding its origins, its computational role and its behavioural implications, computational models are of central importance. Yet, vision is commonly modelled using convolutional neural networks, which are hard-wired to learn identical features across space and thus lack topography. Here we overcome this limitation by introducing all-topographic neural networks (All-TNNs). All-TNNs develop several features reminiscent of primate topography, including smooth orientation and category selectivity maps, and enhanced processing of regions with task-relevant information. In addition, All-TNNs operate on a low energy budget, suggesting a metabolic benefit of smooth topographic organization. To test our model against behaviour, we collected a dataset of human spatial biases in object recognition and found that All-TNNs significantly outperform control models. All-TNNs thereby offer a promising candidate for modelling primate visual topography and its role in downstream behaviour” (Lu et al., 2025).

Reference: Lu, Z., Doerig, A., Bosch, V., Krahmer, B., Kaiser, D., Cichy, R. M., & Kietzmann, T. C. (2025). End-to-end topographic networks as models of cortical map formation and human visual behaviour. Nature Human Behaviour. https://doi.org/10.1038/s41562-025-02220-7
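To make the core idea in the abstract concrete before the discussion: a standard convolution shares one set of filter weights across all spatial positions, which is what rules out topography, whereas an All-TNN-style layer gives every spatial location its own weights and adds a penalty that pushes neighbouring units toward similar tuning, from which smooth maps can emerge. The sketch below is illustrative only and not the authors' code; the layer sizes, the exact smoothness formulation, and the loss weight `lambda_smooth` are assumptions for demonstration.

```python
# Minimal sketch (not the paper's implementation) of a locally connected
# layer without weight sharing, plus a spatial smoothness penalty that
# encourages neighbouring units to learn similar weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConnected2d(nn.Module):
    """Conv-like layer where every output location has its own weights."""
    def __init__(self, in_ch, out_ch, in_size, kernel_size, stride=1):
        super().__init__()
        self.kernel_size, self.stride = kernel_size, stride
        self.out_size = (in_size - kernel_size) // stride + 1
        n_loc = self.out_size * self.out_size
        # One weight tensor per spatial location (no sharing across space).
        self.weight = nn.Parameter(
            torch.randn(n_loc, out_ch, in_ch * kernel_size ** 2) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_loc, out_ch))

    def forward(self, x):
        # Extract local input patches: (batch, in_ch*k*k, n_loc)
        patches = F.unfold(x, self.kernel_size, stride=self.stride)
        # Per-location matrix multiply: result is (n_loc, batch, out_ch)
        out = torch.einsum('bpl,lop->lbo', patches, self.weight)
        out = out + self.bias.unsqueeze(1)
        batch = x.shape[0]
        return out.permute(1, 2, 0).reshape(
            batch, -1, self.out_size, self.out_size)

    def smoothness_loss(self):
        # Penalise weight differences between spatially adjacent units,
        # so nearby units develop similar (smoothly varying) tuning.
        w = self.weight.view(self.out_size, self.out_size, -1)
        dh = (w[1:, :] - w[:-1, :]).pow(2).mean()
        dv = (w[:, 1:] - w[:, :-1]).pow(2).mean()
        return dh + dv

# Training objective: task loss plus the smoothness penalty.
layer = LocallyConnected2d(in_ch=3, out_ch=8, in_size=32, kernel_size=5)
x = torch.randn(4, 3, 32, 32)
y = layer(x)                      # (4, 8, 28, 28)
lambda_smooth = 0.1               # illustrative value, not from the paper
loss = y.pow(2).mean() + lambda_smooth * layer.smoothness_loss()
loss.backward()
```

Note the trade-off the abstract alludes to: without weight sharing the parameter count scales with the number of spatial locations, and it is the smoothness term that supplies the structure a convolution would otherwise hard-wire.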

This talk is part of The Craik Journal Club series.
