Overconfidence in Neural Networks: Fixing Uncertainty with Structured Priors and Post-hoc Calibration

RCLW02 - Calibrating prediction uncertainty: statistics and machine learning perspectives

Modern neural networks often produce overconfident predictions, particularly when facing out-of-distribution data or under computational constraints. In this talk, I will explore recent advances in uncertainty-aware modelling that tackle this issue from two complementary angles. First, I present how periodic activation functions can induce stationary priors in Bayesian neural networks, drawing a direct connection to Gaussian process models and enabling models that better “know what they don’t know”. Second, I discuss a lightweight, post-hoc method to correct overconfidence in dynamic neural networks—models that adapt their computational effort based on input complexity—by probabilistically modelling their final layers to account for uncertainty. Together, these contributions provide principled tools to improve calibration, reliability, and resource-aware decision-making in modern deep learning systems.
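The first idea has a compact illustration. A single-hidden-layer Bayesian network with sinusoidal activations and Gaussian priors on the input weights induces an approximately stationary prior covariance, which is the random-Fourier-feature view of Bochner's theorem. The sketch below is illustrative rather than taken from the talk; the feature count, lengthscale, and the choice of the RBF kernel as the stationary target are all assumptions made here for the demo.

```python
import numpy as np

# Minimal sketch: cosine hidden units with Gaussian-distributed input
# weights act as random Fourier features, so the network's prior
# covariance approximates a stationary (here, RBF) kernel.

rng = np.random.default_rng(0)
D = 5000           # number of hidden units (random features); assumption
lengthscale = 0.7  # kernel lengthscale; assumption

x = np.linspace(-3, 3, 50)[:, None]  # 1-D inputs, shape (50, 1)

# Frequencies drawn from the spectral density of the RBF kernel,
# phases uniform on [0, 2*pi).
W = rng.normal(0.0, 1.0 / lengthscale, size=(1, D))
b = rng.uniform(0.0, 2 * np.pi, size=D)

phi = np.sqrt(2.0 / D) * np.cos(x @ W + b)  # hidden activations, (50, D)

K_net = phi @ phi.T                                     # network's prior covariance
K_rbf = np.exp(-0.5 * ((x - x.T) / lengthscale) ** 2)   # analytic stationary target

print("max abs deviation:", np.abs(K_net - K_rbf).max())  # shrinks as D grows
```

Because the induced covariance depends only on the distance between inputs, the prior variance does not collapse far from the training data, which is the mechanism behind models that better "know what they don't know".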
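The second idea can also be sketched. The sketch below assumes a simple isotropic Gaussian posterior over the final layer's weights and averages softmax outputs over Monte Carlo samples; the scale `sigma`, the sample count, and the synthetic features are stand-ins, not the method's actual posterior construction. In a dynamic (early-exit) network, the same treatment would be applied at each exit head.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a post-hoc, last-layer treatment of overconfidence:
# leave the trained network untouched, place a Gaussian over the final
# layer's weights, and average softmax outputs over weight samples.

torch.manual_seed(0)
n_features, n_classes, n_samples = 64, 10, 256
sigma = 0.1  # assumed posterior scale; a stand-in, not the talk's choice

W_map = torch.randn(n_classes, n_features)  # trained last-layer weights (MAP)
b_map = torch.zeros(n_classes)
h = torch.randn(5, n_features)              # penultimate features for 5 inputs

# Deterministic prediction: a single softmax, often overconfident.
p_det = F.softmax(h @ W_map.T + b_map, dim=-1)

# Probabilistic prediction: Monte Carlo average over sampled last layers.
probs = []
for _ in range(n_samples):
    W_s = W_map + sigma * torch.randn_like(W_map)
    probs.append(F.softmax(h @ W_s.T + b_map, dim=-1))
p_mc = torch.stack(probs).mean(dim=0)

entropy = lambda p: -(p * p.clamp_min(1e-12).log()).sum(-1)
print("deterministic entropy:", entropy(p_det).mean().item())
print("MC-averaged entropy:  ", entropy(p_mc).mean().item())  # typically higher
```

Averaging softmax probabilities over weight samples flattens the predictive distribution where the last layer is uncertain, raising predictive entropy without retraining the network, which is what makes the correction lightweight.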

This talk is part of the Isaac Newton Institute Seminar Series.
