Reading group: Underpinning techniques of most widely used DNN architectures

If you have a question about this talk, please contact Nicolai Baldin.

In the first session we will look more closely at common techniques used in widely deployed NN architectures, such as batch normalisation, dropout, and stochastic optimisers. We shall also touch upon regularisation ideas and various activation functions.

It will be roughly based upon the following papers:

1. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

2. Dropout: A Simple Way to Prevent Neural Networks from Overfitting

3. Adam: A Method for Stochastic Optimization
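As a rough orientation before reading, the three techniques the papers cover can be sketched in a few lines of NumPy. This is a simplified illustration, not the papers' exact formulations (e.g. it shows only the batch-norm forward pass at training time, and a single Adam step on given gradients):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalise each feature over the mini-batch, then scale and shift
    # with learnable gamma/beta (Ioffe & Szegedy).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def dropout(x, p, rng, train=True):
    # Inverted dropout: zero each unit with probability p at training time
    # and rescale survivors so the expected activation is unchanged
    # (Srivastava et al.); at test time it is the identity.
    if not train:
        return x
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update: exponential moving averages of the gradient and its
    # square, with bias correction for the first steps (Kingma & Ba).
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Example: normalise a small batch of 4 samples with 3 features each.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=(4, 3))
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
```

After batch normalisation each feature column of `y` has (approximately) zero mean and unit variance, regardless of the scale of the inputs; dropout and the Adam step plug into a training loop in the obvious places.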

The first session will be given by the organisers, but participants are expected to be familiar with the papers. More information about the reading group can be found at

This talk is part of the Mathematics and Machine Learning series.


© 2006-2024, University of Cambridge.