Learning Symmetries in Neural Networks

If you have a question about this talk, please contact Isaac Reid.

Zoom link available upon request (it is sent out on our mailing list, eng-mlg-rcc [at] lists.cam.ac.uk). Sign up to the mailing list via lists.cam.ac.uk to receive reminders.

While the importance of incorporating symmetries into neural networks has been well understood for some time, the standard approach has until recently been to build these inductive biases into the architecture itself (e.g., CNNs are constructed to respect translations, and some architectures extend this to rotations). Unfortunately, this requires knowing in advance which symmetries are present in the dataset. To see why this is a problem, consider a model trained to classify handwritten digits. If the model is fully rotationally invariant, it cannot distinguish some 6s from 9s; yet handwriting varies naturally, so the classifier should be invariant to small rotations. The right degree of rotation invariance is therefore something to be learned rather than fixed in advance. This reading group will explore methods for learning such invariances directly from the data, tackling invariance learning in both the supervised and unsupervised settings.
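To make the idea of learning a degree of invariance concrete, here is a minimal PyTorch sketch in the spirit of distribution-based invariance learning (e.g., Augerino; Benton et al., 2020): the classifier's predictions are averaged over rotations drawn from a learnable angle range, so the learned width of that range controls how invariant the model is. The class name, parameterization, and sample count are illustrative assumptions, not a method endorsed by the talk.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnedRotationInvariance(nn.Module):
        """Wraps a base image classifier and averages its predictions over
        rotations sampled from a learnable range [-width, width].
        A width near 0 means no rotation invariance; a width near pi
        approaches full rotation invariance."""

        def __init__(self, base_model, n_samples=4):
            super().__init__()
            self.base_model = base_model
            self.n_samples = n_samples
            # Unconstrained parameter; softplus keeps the width positive.
            self.raw_width = nn.Parameter(torch.tensor(0.1))

        def rotate(self, x, angles):
            # Build a 2x3 affine rotation matrix per example and apply it
            # with a differentiable sampling grid.
            cos, sin = torch.cos(angles), torch.sin(angles)
            zeros = torch.zeros_like(cos)
            theta = torch.stack(
                [torch.stack([cos, -sin, zeros], dim=-1),
                 torch.stack([sin, cos, zeros], dim=-1)], dim=1)  # (B, 2, 3)
            grid = F.affine_grid(theta, x.shape, align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)

        def forward(self, x):
            width = F.softplus(self.raw_width)
            logits = 0.0
            for _ in range(self.n_samples):
                # Reparameterized sampling: angles = u * width, u ~ U(-1, 1),
                # so gradients flow back into the learnable width.
                u = torch.rand(x.shape[0], device=x.device) * 2 - 1
                logits = logits + self.base_model(self.rotate(x, u * width))
            return logits / self.n_samples

Usage would be as simple as wrapping an existing classifier, e.g. `model = LearnedRotationInvariance(cnn)`, and training with the usual classification loss. Note that in methods of this kind the averaging is typically paired with a regularizer that rewards a wider range, since the classification loss alone tends to shrink the learned width toward zero.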

Required reading: None

This talk is part of the Machine Learning Reading Group @ CUED series.
