Soft Constraints and Uncertainty Representation as a Principle for Intelligent Systems


RCL - Representing, calibrating & leveraging prediction uncertainty from statistics to machine learning

Deep neural networks are often seen as different from other model classes, defying conventional notions of generalization. Popular examples of anomalous generalization behaviour include benign overfitting, double descent, and the success of overparametrization. We argue that these phenomena are neither unique to neural networks nor particularly mysterious. Moreover, this generalization behaviour can be intuitively understood, and rigorously characterized, using long-standing generalization frameworks such as PAC-Bayes and countable hypothesis bounds. We present soft inductive biases and uncertainty representation as a key unifying principle in explaining these phenomena: rather than restricting the hypothesis space to avoid overfitting, embrace a flexible hypothesis space with a soft preference for simpler solutions that are consistent with the data. This principle can be encoded in many model classes, and thus deep learning is not as mysterious or different as it might seem. However, we also highlight how deep learning is relatively distinct in other ways, such as its capacity for representation learning, phenomena such as mode connectivity, and its relative universality.
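To make the "soft preference for simpler solutions" concrete, a minimal sketch of the kind of result the abstract appeals to is the countable hypothesis (Occam) bound; the statement below is the standard textbook form and is illustrative, not taken from the talk itself. For a countable hypothesis space H, a prior p(h) over H, and n i.i.d. training samples, with probability at least 1 − δ, simultaneously for every h in H:

\[
R(h) \;\le\; \hat{R}(h) \;+\; \sqrt{\frac{\log\frac{1}{p(h)} + \log\frac{1}{\delta}}{2n}}
\]

where R(h) is the true risk and \hat{R}(h) the empirical risk. The hypothesis space can be arbitrarily large: generalization is controlled not by restricting H but by the soft complexity penalty \log(1/p(h)), which is small precisely for the hypotheses the prior prefers. PAC-Bayes bounds extend this idea from a single hypothesis to a posterior distribution over H, which is one way the flexible-space-plus-soft-bias principle accommodates overparametrized models.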

This talk is part of the Isaac Newton Institute Seminar Series.
