
Cortical mechanisms underlying integration of local visual cues to form global representations


If you have a question about this talk, please contact John Mollon.

Human and non-human primates effortlessly see both global and local features of objects in great detail. However, how the cortex integrates local visual cues into global representations along the visual hierarchy remains mysterious, particularly in light of a long-standing paradox in vision: as neurally encoded complexity increases along the visual hierarchy, the known acuity, or resolving power, dramatically decreases. Put simply, how do we recognize the face of our child while simultaneously resolving the individual hairs of his or her eyelashes? Many models of visual processing, including cutting-edge deep learning models, follow the idea that low-level resolution and position information is discarded to yield high-level representations. These themes are fundamental to understanding how the brain transforms sensory information, and current ideas require hypothetical, complicated recurrent loops to connect global and local information across the visual hierarchy.

Combining large-scale brain imaging, which records the transformation of information across multiple visual areas simultaneously, with electrophysiological single-unit recordings (V1, V2, V4, and MT/MST), we demonstrated a bottom-up cascade of cortical processing mechanisms underlying the integration of local information into global representations in the primate ventral and dorsal visual streams. Our local-global stimuli are tyre-type text bars, illusory contours, and the Pinna-Brelstaff concentric rings. Recently, we revealed an unexpected neural clustering that preserves visual acuity from V1 into V4, enabling a detailed spatiotemporal separation of local and global features along the object-processing hierarchy and suggesting that high acuity is retained at later stages, where more detailed cognitive behaviour occurs. The study reinforces the point that neurons in V4 (and most likely also in infero-temporal cortex) need not have only low visual acuity, which may begin to resolve the long-standing paradox concerning fine visual discrimination. Thus, our research should prompt further studies probing how the preservation of low-level information serves higher-level vision, and may provide new ideas to inspire the next generation of deep neural network architectures.

This talk is part of the Craik Club series.



© 2006-2024 Talks.cam, University of Cambridge.