
Multitasking and compositionality in brain and in neural networks



This talk will synthesize findings from two key papers that investigate the modular and compositional nature of neural computation. We will explore two distinct perspectives on how neural networks achieve flexible behavior in multitask settings by reusing learned computational primitives.

First, we will review Ito et al. (2022) (“Compositional generalization through abstract representations in human and artificial neural networks”), who used fMRI and a highly compositional task to identify abstract, orthogonalized representations as a neural substrate for compositional generalization in humans. They further demonstrated that pretraining artificial neural networks (ANNs) on basic task “primitives” induces similarly abstract representations, enabling zero-shot generalization and human-like performance.
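To make the idea of abstract, orthogonalized rule representations concrete, here is a minimal NumPy sketch. It is not code from the paper; the task structure, dimensionality, and noise level are invented for illustration. Each task pairs a sensory rule with a motor rule, each rule is assigned an orthonormal direction, and a task’s representation is the sum of its rule vectors. A linear decoder for the sensory rule, trained on a subset of rule combinations, then transfers zero-shot to held-out combinations:

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
dim = 50

# Hypothetical setup: each task pairs one of 4 sensory rules with one of 4
# motor rules; each rule gets an orthonormal direction in a 50-d "neural" space.
Q = np.linalg.qr(rng.standard_normal((dim, 8)))[0].T  # 8 orthonormal rows
sens_vecs, motor_vecs = Q[:4], Q[4:]

def represent(s, m, n_trials=20, noise=0.2):
    """Abstract code: a task's representation is the sum of its rule vectors."""
    base = sens_vecs[s] + motor_vecs[m]
    return base + noise * rng.standard_normal((n_trials, dim))

# Train a linear decoder for the *sensory* rule on tasks using motor rules 0-1,
# then test zero-shot on tasks with the held-out motor rules 2-3.
X_tr, y_tr, X_te, y_te = [], [], [], []
for s in range(4):
    for m in range(4):
        R = represent(s, m)
        if m < 2:
            X_tr.append(R); y_tr += [s] * len(R)
        else:
            X_te.append(R); y_te += [s] * len(R)
X_tr, X_te = np.vstack(X_tr), np.vstack(X_te)
y_tr, y_te = np.array(y_tr), np.array(y_te)

W = lstsq(X_tr, np.eye(4)[y_tr], rcond=None)[0]  # one-vs-rest least squares
acc = np.mean((X_te @ W).argmax(axis=1) == y_te)
print(f"zero-shot decoding accuracy: {acc:.2f}")  # well above chance (0.25)
```

Because the rule directions are orthogonal, the motor component of a test input lies outside the decoder’s learned subspace, so generalization to unseen rule combinations comes essentially for free; an entangled (non-orthogonal) code would not transfer this way.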

Next, we’ll turn to Driscoll et al. (2024) (“Flexible multitask computation in recurrent networks utilizes shared dynamical motifs”). In this work, the authors show that primitives of neural computation, such as attractors and decision boundaries, can arise naturally in a single monolithic recurrent neural network without being explicitly engineered. They support this by locating and tracking attractors in the network’s phase space, clustering tasks by their neural activity, and demonstrating that simulated “lesions” selectively disrupt tasks in line with those clusters. We will take a critical look at these findings in our journal club, discussing the methods’ applicability as well as their strengths and limitations.
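The attractor-location step rests on a standard numerical trick (fixed-point finding in the style of Sussillo and Barak): treat the squared update residual q(x) = ½‖F(x) − x‖² as a loss and minimize it starting from states sampled along simulated trajectories. Below is a minimal sketch on a small rate network with random weights; the weights, sizes, and hyperparameters are placeholders, not the trained multitask networks analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
# Hypothetical small RNN: x_{t+1} = tanh(J x + b). Random contractive J stands
# in for trained weights purely to illustrate the search procedure.
J = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)
b = 0.1 * rng.standard_normal(N)

def step(x):
    return np.tanh(J @ x + b)

def find_fixed_point(x0, lr=0.1, iters=2000):
    """Minimize q(x) = 0.5 * ||F(x) - x||^2 by gradient descent."""
    x = x0.copy()
    for _ in range(iters):
        h = J @ x + b
        r = np.tanh(h) - x                       # residual F(x) - x
        Jac = (1 - np.tanh(h) ** 2)[:, None] * J  # Jacobian dF/dx
        x -= lr * (Jac - np.eye(N)).T @ r         # gradient of q
    return x, np.linalg.norm(step(x) - x)

# Seed the search from several states sampled after running the dynamics,
# so candidate fixed points lie near the regions trajectories actually visit.
fps = []
for _ in range(5):
    x = rng.standard_normal(N)
    for _ in range(100):
        x = step(x)
    fp, resid = find_fixed_point(x)
    if resid < 1e-6:
        fps.append(fp)
print(len(fps), "candidate fixed point(s) found")
```

In the paper’s setting the same machinery is run per task context, and linearizing the dynamics at each located fixed point (via the Jacobian above) classifies it as attracting or saddle-like, which is what lets shared dynamical motifs be tracked across tasks.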

This talk is part of the Computational Neuroscience series.


