
On Loss Function Landscapes of Neural Networks


If you have a question about this talk, please contact Lisa Masters.

First Year PhD Report

Machine learning is one of the most widely used statistical methods of our time. Yet machine learning models are commonly viewed as black boxes, whose decision-making is opaque and hard for humans to interpret. This view can be partially attributed to the way machine learning is applied in practice: usually, some at best locally optimal set of weights is found and accepted as the solution to the learning problem. Our work challenges this procedure and shows how, by examining large regions of the loss function landscape (LFL) rather than a single minimum, machine learning can be made more transparent and interpretable. Our aim is to better understand why machine learning works so well, and we believe this is best achieved by understanding the surface of the function that is optimised during learning. This talk will introduce machine learning in general and the current state of our research in particular. Firstly, it will be shown how properties of the LFL can be exploited to increase the accuracy and interpretability of machine learning tasks by over 20% by combining the expressive power of multiple minima in a single classifier. Secondly, it will be shown how geometric properties of the LFL can be employed to guide loss function selection in neural networks. Lastly, some of the most interesting and promising future directions of this field will be outlined.
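As a rough illustration of the idea of combining several minima into one classifier: one simple scheme (an assumption for illustration, not necessarily the method presented in the talk) is to average the class probabilities predicted by networks that have converged to different minima of the loss landscape, then take the most probable class.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Subtract the row-wise max for numerical stability before exponentiating.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in logits from three networks, each trained to a different minimum,
# evaluated on a batch of 5 inputs with 4 classes. In practice these would
# come from separately converged copies of the same architecture.
logits_per_minimum = [rng.normal(size=(5, 4)) for _ in range(3)]

# Ensemble over minima: average the per-minimum class probabilities,
# then predict the class with the highest averaged probability.
probs = np.mean([softmax(l) for l in logits_per_minimum], axis=0)
ensemble_prediction = probs.argmax(axis=1)  # one predicted label per input
```

Averaging probabilities (rather than raw logits) keeps each minimum's contribution on a common scale; other combination rules, such as majority voting, are equally possible.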

This talk is part of the Theory - Chemistry Research Interest Group series.


© 2006-2021 Talks.cam, University of Cambridge.