
Measuring Alignment Between Perceptual Systems: An Analysis Through The Lens of Shared Invariances


If you have a question about this talk, please contact Adrian Weller.

Learning the right invariances is key to learning meaningful representations of data. In this talk I will present two of our recent works on measuring whether the invariances learned by one perceptual system (e.g., a neural network or a human) "align" with those of another.

The first part of the talk concerns alignment of invariances between a neural network and a human. I will discuss the challenges and pitfalls in measuring this alignment, and show some intriguing results on how different choices in the deep learning pipeline (architecture, data augmentation, loss function, and training paradigm) lead to varying levels of alignment. The second part concerns measuring alignment of invariances between two neural networks. Existing measures that might appear suited to this task (e.g., representation similarity measures) are narrowly focused on comparing two representations, and fail to meaningfully capture the invariances shared by the models that generate those representations. I will present our proposal for repurposing existing representation similarity methods to faithfully measure shared invariance, and show how this measure varies with the choice of network architecture, loss function, random weight initialization, and training dataset. I will close by discussing directions for future work, including using our proposed measure as a constraint during training.
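To make the idea concrete, here is a minimal illustrative sketch (not the speaker's actual method) of how a representation similarity measure such as linear CKA can be repurposed to probe invariance: a model is approximately invariant to a transformation T if its representations of X and T(X) score as highly similar, and two models share that invariance if both score highly. The toy "model" and perturbation below are hypothetical stand-ins.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_samples, n_features). Returns a value in [0, 1]."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                   # toy inputs
T_X = X + rng.normal(scale=0.01, size=X.shape)   # candidate invariance: tiny perturbation

# Hypothetical model: random ReLU features (stands in for a trained network)
W = rng.normal(size=(10, 32))
f = lambda Z: np.maximum(Z @ W, 0)

# High similarity between representations of X and T(X) indicates the
# model is (approximately) invariant to T; comparing such scores across
# two models probes whether the invariance is shared.
score = linear_cka(f(X), f(T_X))
```

Note that this compares a model's representations across transformed inputs, rather than comparing two models' representations on the same inputs, which is one way the usual representation-similarity setup can be redirected toward invariances.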

This talk is part of the Machine Learning @ CUED series.


