
Inverse consistency and global convergence of ResNets


If you have a question about this talk, please contact Willem Diepeveen.

In this talk, I will discuss two very different applications that are linked by the simple idea of invertible transformations.

- The first topic is an analysis of the inverse consistency penalty in image matching when used in conjunction with neural networks. We show that neural networks favour the emergence of smooth transformations under the inverse consistency penalty. Experimentally, we show that this behaviour is fairly stable with respect to the chosen architecture. This is joint work with H. Greer, R. Kwitt and M. Niethammer.
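As a rough illustration (our sketch, not the authors' implementation), an inverse consistency penalty measures how far the composition of the forward and backward transformations is from the identity. Here `phi_ab` and `phi_ba` are hypothetical toy affine maps standing in for learned registration networks:

```python
import numpy as np

# Toy illustration: in image registration, phi_ab and phi_ba would be
# learned transformations; here they are simple affine maps so that the
# penalty can be evaluated exactly.
def phi_ab(x):
    return 1.1 * x + 0.05          # hypothetical forward map A -> B

def phi_ba(x):
    return (x - 0.05) / 1.1        # its exact inverse, B -> A

def inverse_consistency_penalty(phi_fwd, phi_bwd, points):
    """Mean squared deviation of phi_bwd(phi_fwd(x)) from the identity."""
    residual = phi_bwd(phi_fwd(points)) - points
    return float(np.mean(residual ** 2))

pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 2))
penalty = inverse_consistency_penalty(phi_ab, phi_ba, pts)
print(penalty)  # essentially zero, since phi_ba is the exact inverse of phi_ab
```

In training, this penalty would be added to the matching loss, pushing the two networks toward being mutual inverses.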

- The second topic is an analysis of the global convergence of residual networks when the residual block is parametrized via a reproducing kernel Hilbert space (RKHS) vector field. We prove that the resulting problem satisfies the so-called Polyak-Łojasiewicz property, which in particular ensures global convergence whenever the iterates remain bounded. We show that this property holds both in the continuous limit and in the fully discrete setting. This is joint work with R. Barboni and G. Peyré.
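For intuition, the Polyak-Łojasiewicz (PL) property asks that ||∇f(x)||² ≥ 2μ(f(x) − f*): the gradient cannot vanish away from the global minimum value, so gradient descent converges globally even without convexity. A minimal sketch on a standard non-convex PL function (our illustrative example, not the ResNet objective from the talk):

```python
import numpy as np

# f(x) = x^2 + 3 sin^2(x) is non-convex but satisfies the PL inequality
# with global minimum f* = 0 at x = 0, so every stationary point is a
# global minimizer and gradient descent converges at a linear rate.
def f(x):
    return x**2 + 3.0 * np.sin(x)**2

def grad_f(x):
    return 2.0 * x + 3.0 * np.sin(2.0 * x)

x, step = 2.0, 0.1                 # step < 2/L, with L = max f'' = 8
for _ in range(200):
    x -= step * grad_f(x)

print(f(x))  # close to the global minimum f* = 0
```

The talk's result establishes a property of this kind for the RKHS-parametrized ResNet training problem, which is what yields global convergence of the training dynamics.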

Join Zoom Meeting

Meeting ID: 948 1221 9444 Passcode: 485548

This talk is part of the CCIMI Seminars series.

