
From the olfactory cocktail party to markerless tracking


If you have a question about this talk, please contact Dr Máté Lengyel.

The olfactory system, like other sensory systems, can detect specific stimuli of interest amid complex, varying backgrounds. To gain insight into the neural mechanisms underlying this ability, we estimated a model of mixture responses that incorporates nonlinear interactions and trial-to-trial variability, and explored decoding mechanisms that can mimic mouse performance when given glomerular responses as input. We found that a linear decoder could match mouse performance using just a small subset of the glomeruli. However, when such a decoder is trained only on single odors, it generalizes poorly to mixture stimuli. We show that mice similarly fail to generalize, suggesting that they learn this segregation task discriminatively by adjusting task-specific decision boundaries, without taking advantage of a demixed representation of odors (Mathis et al. 2016). I will present ongoing experiments designed to challenge this model, for which mice were trained in a semisupervised fashion.
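The generalization failure described above can be illustrated with a minimal sketch: a linear (logistic) decoder is fit on synthetic single-odor glomerular responses (target present vs. absent on a clean baseline) and then tested on mixtures where the same target is embedded in random backgrounds. All response profiles, noise levels, and dimensions here are invented for illustration and are not the model or data from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_glomeruli = 20

# Hypothetical glomerular response profiles (arbitrary units).
target = rng.uniform(0, 1, n_glomeruli)
backgrounds = rng.uniform(0, 1, (5, n_glomeruli))

def single_odor_trials(present, n_trials):
    """Noisy single-odor trials: target present or blank baseline."""
    base = target if present else np.zeros(n_glomeruli)
    return base + rng.normal(0, 0.1, (n_trials, n_glomeruli))

def mixture_trials(present, n_trials):
    """Target embedded in (or absent from) random additive backgrounds."""
    bg = backgrounds[rng.integers(0, 5, n_trials)]
    base = bg + (target if present else 0)
    return base + rng.normal(0, 0.1, (n_trials, n_glomeruli))

# Train a linear decoder on single-odor trials only.
X = np.vstack([single_odor_trials(True, 200), single_odor_trials(False, 200)])
y = np.concatenate([np.ones(200), np.zeros(200)])

w, b = np.zeros(n_glomeruli), 0.0
for _ in range(500):                       # plain gradient descent on log-loss
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

def accuracy(X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Backgrounds shift the decoder's input, so the single-odor decision
# boundary no longer separates target-present from target-absent mixtures.
Xm = np.vstack([mixture_trials(True, 200), mixture_trials(False, 200)])
ym = np.concatenate([np.ones(200), np.zeros(200)])
print("single-odor accuracy:", accuracy(X, y))
print("mixture accuracy:", accuracy(Xm, ym))
```

Because the decoder's bias is calibrated to a blank baseline, added backgrounds push most absent-target mixtures past the decision boundary, so mixture accuracy drops well below single-odor accuracy — the same kind of failure to generalize that the mice exhibit.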

Motivated by the weak constraints available for elucidating the neural mechanisms in this olfactory perception task, we are increasingly turning our attention to more challenging behaviors, such as trail tracking. Mice naturally follow odor trails, and one can easily gather large amounts of video data. However, reliably extracting particular aspects of a behavior, like the position of the snout, can be difficult. In motor control studies, reflective markers are often used to assist with computer-based tracking, yet markers are highly intrusive for smaller animals. I will present a deep-learning-based method for markerless tracking and demonstrate the versatility of this framework in three different tasks: trail tracking, social behaviors, and skilled forelimb reaching. The algorithm is trained end-to-end on training data labeled with specific anatomical points of interest. Crucially, only a small set of frames is required for training, and the algorithm generalizes to test data in a way quantitatively comparable to human annotators.
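The evaluation criterion mentioned above — that the tracker matches human annotators quantitatively — can be sketched as a comparison between inter-annotator variability and tracker-vs-annotator error on held-out frames. The positions, image size, and noise magnitudes below are synthetic placeholders, not results from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth snout positions (pixels) on 50 held-out frames.
true_xy = rng.uniform(0, 640, (50, 2))

# Two human annotators and the tracker, each deviating from ground truth
# by a few pixels (assumed comparable labeling noise, for illustration).
human_a = true_xy + rng.normal(0, 3, (50, 2))
human_b = true_xy + rng.normal(0, 3, (50, 2))
tracker = true_xy + rng.normal(0, 3, (50, 2))

def mean_error(p, q):
    """Mean Euclidean distance between two sets of keypoints (pixels)."""
    return np.linalg.norm(p - q, axis=1).mean()

human_gap = mean_error(human_a, human_b)    # inter-annotator variability
tracker_gap = mean_error(tracker, human_a)  # tracker vs. a human labeler
print(f"human-human: {human_gap:.1f} px, tracker-human: {tracker_gap:.1f} px")
```

If the tracker-to-human distance is on the same order as the human-to-human distance, the tracker is performing as well as can be measured given annotator variability.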

This talk is part of the Computational Neuroscience series.



