
An Introduction to Transformer Neural Processes


If you have a question about this talk, please contact Isaac Reid.

Zoom link available upon request: it is sent out on our mailing list (eng-mlg-rcc [at] …). Sign up to the mailing list for reminders.

Neural processes (NPs) have improved significantly since their inception. A principal driver of this progress has been advances in architectures for permutation-invariant set functions, a notable example being transformer-based architectures. In this reading group session, we will introduce participants to Transformer Neural Processes (TNPs). No prior knowledge of NPs or transformers is assumed.
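To give a flavour of the key idea before the session: the NP encoder must treat its context set as an unordered collection, and attention provides this property for free, since each target point takes a weighted sum over context points that does not depend on their order. Below is a minimal NumPy sketch (not code from the papers; all names are illustrative) demonstrating that cross-attention from targets to a context set is invariant to permuting the context.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # scaled dot-product attention: each query (target point)
    # attends to the whole context set
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
context = rng.normal(size=(5, 4))  # 5 context points, feature dim 4
targets = rng.normal(size=(3, 4))  # 3 target points

out = cross_attention(targets, context, context)

# shuffle the context set and recompute: the output is unchanged,
# i.e. the encoder is permutation-invariant in the context
perm = rng.permutation(5)
out_perm = cross_attention(targets, context[perm], context[perm])
assert np.allclose(out, out_perm)
```

In a full TNP, the queries/keys/values would be learned embeddings of (x, y) context pairs and x-only target inputs, but the invariance argument is exactly this one.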

Useful background reading:

1) Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modelling. Nguyen and Grover (2022).
2) Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks. Lee et al. (2018).
3) Latent Bottlenecked Attentive Neural Processes. Feng et al. (2022).
4) Attentive Neural Processes. Kim et al. (2019).

This talk is part of the Machine Learning Reading Group @ CUED series.



© 2006-2024, University of Cambridge.