
An Introduction to Transformer Neural Processes


If you have a question about this talk, please contact Isaac Reid.

The Zoom link is available upon request; it is sent out on our mailing list, eng-mlg-rcc [at] lists.cam.ac.uk. Sign up via lists.cam.ac.uk to receive reminders.

Neural processes (NPs) have improved significantly since their inception. A principal factor in their effectiveness has been advances in architectures for permutation-invariant set functions, a notable example being transformer-based architectures. In this reading group session, we will introduce participants to Transformer Neural Processes (TNPs). We do not assume prior knowledge of NPs or transformers.
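For a concrete picture ahead of the session, the sketch below shows one way a TNP-style predictor can be assembled from standard PyTorch modules: context (x, y) pairs and target inputs are embedded as tokens, a transformer encoder with no positional encoding (so the model behaves as a permutation-invariant set function over the context) lets every token attend to the context, and a Gaussian head reads off a mean and log-variance at each target. All names, dimensions, and the exact masking scheme here are illustrative assumptions, not the reference implementation from the Nguyen and Grover (2022) paper.

```python
# A minimal sketch of a Transformer Neural Process in the style of
# Nguyen & Grover (2022). Hyperparameters and the masking scheme are
# illustrative assumptions, not the paper's reference code.
import torch
import torch.nn as nn


class TinyTNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Context points embed (x, y); target points embed x with y zeroed.
        self.embed = nn.Linear(x_dim + y_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Gaussian predictive head: mean and log-variance per target point.
        self.head = nn.Linear(d_model, 2 * y_dim)
        self.y_dim = y_dim

    def forward(self, xc, yc, xt):
        # xc: (B, Nc, x_dim), yc: (B, Nc, y_dim), xt: (B, Nt, x_dim)
        B, nc, nt = xc.shape[0], xc.shape[1], xt.shape[1]
        ctx = self.embed(torch.cat([xc, yc], dim=-1))
        pad = torch.zeros(B, nt, self.y_dim, device=xt.device, dtype=xt.dtype)
        tgt = self.embed(torch.cat([xt, pad], dim=-1))
        # No positional encoding is added, so the encoder treats the
        # tokens as a set: predictions are invariant to context ordering.
        tokens = torch.cat([ctx, tgt], dim=1)
        n = nc + nt
        allowed = torch.zeros(n, n, dtype=torch.bool, device=xt.device)
        allowed[:, :nc] = True        # every token may attend to the context
        allowed.fill_diagonal_(True)  # and to itself
        h = self.encoder(tokens, mask=~allowed)  # True entries are blocked
        mean, log_var = self.head(h[:, nc:]).chunk(2, dim=-1)
        return mean, log_var


if __name__ == "__main__":
    # A batch of eight 1-D regression tasks: 10 context and 5 target points.
    xc, yc, xt = torch.randn(8, 10, 1), torch.randn(8, 10, 1), torch.randn(8, 5, 1)
    mean, log_var = TinyTNP()(xc, yc, xt)
    print(mean.shape, log_var.shape)  # torch.Size([8, 5, 1]) for each
```

The mask above lets target tokens condition on the context but not on one another, mirroring the conditional (non-latent) TNP variant; permuting the context points leaves the target predictions unchanged, which is exactly the set-function property the abstract highlights.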

Useful background reading:
1) Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling. Nguyen and Grover (2022). https://arxiv.org/abs/2207.04179
2) Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks. Lee et al. (2018). https://arxiv.org/abs/1810.00825
3) Latent Bottlenecked Attentive Neural Processes. Feng et al. (2022). https://arxiv.org/abs/2211.08458
4) Attentive Neural Processes. Kim et al. (2019). https://arxiv.org/abs/1901.05761

This talk is part of the Machine Learning Reading Group @ CUED series.
