Posterior sampling via autoregressive generation

If you have a question about this talk, please contact Qingyuan Zhao.

Uncertainty quantification remains a critical challenge when using deep learning models, particularly in complex decision-making settings. We propose a new framework for learning bandit algorithms from massive historical data by combining classical ideas from multiple imputation with autoregressive generative sequence modeling. We demonstrate our approach on a cold-start recommendation problem: first, we use historical data to pretrain an autoregressive model to predict sequences of repeated feedback/rewards (e.g., responses to news articles shown to different users over time). In learning to make accurate predictions, the model implicitly learns both an informed prior based on rich action features (e.g., article headlines) and how to sharpen beliefs as more rewards are gathered (e.g., clicks as each article is recommended). At decision time, the algorithm autoregressively samples (imputes) a hypothetical sequence of rewards for each action and chooses the action with the largest average imputed reward. Far from a heuristic, our approach is an implementation of Thompson sampling (with a learned prior), a prominent active exploration algorithm. We prove that our pretraining sequence loss directly controls online decision-making performance, and we demonstrate the framework on a news recommendation task, integrating end-to-end fine-tuning of a pretrained language model that processes article headline text to improve performance.
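To make the decision-time procedure concrete, below is a minimal Python sketch of imputation-based action selection. It is illustrative only: the function and variable names are hypothetical, and a Beta-Bernoulli (Polya-urn) predictive stands in for the pretrained autoregressive sequence model. That stand-in is chosen because sampling rewards sequentially from its one-step predictive is, in the long-horizon limit, equivalent to drawing the click rate from the Beta posterior, so the rule below really does behave like Thompson sampling rather than greedy selection.

import numpy as np

rng = np.random.default_rng(0)

def sample_next_reward(prior_a, prior_b, rewards):
    # Stand-in for the pretrained model's one-step predictive
    # p(y_{t+1} = 1 | action features, y_1, ..., y_t). A Beta-Bernoulli
    # (Polya-urn) predictive plays that role here: it sharpens as rewards
    # accrue, and autoregressive sampling from it matches Thompson sampling
    # with a Beta(prior_a, prior_b) prior as the horizon grows.
    p_next = (prior_a + sum(rewards)) / (prior_a + prior_b + len(rewards))
    return int(rng.random() < p_next)

def impute_and_choose(observed, priors, horizon=500):
    # Decision-time rule from the abstract: for each action, autoregressively
    # impute a hypothetical sequence of future rewards, then pick the action
    # whose (observed plus imputed) rewards have the largest average.
    scores = {}
    for action, rewards in observed.items():
        seq = list(rewards)
        for _ in range(horizon):
            seq.append(sample_next_reward(*priors[action], seq))
        scores[action] = float(np.mean(seq))
    return max(scores, key=scores.get), scores

# Toy usage: two articles. In the full framework the priors would be implicit
# in a model pretrained on headline features; here they are uniform Beta(1, 1).
observed = {"article_a": [0, 0], "article_b": [1, 0]}
priors = {"article_a": (1.0, 1.0), "article_b": (1.0, 1.0)}
chosen, scores = impute_and_choose(observed, priors)
print(chosen, scores)

With a long imputation horizon, each action's average imputed reward concentrates on a single posterior draw of its click rate, which is why maximizing it implements posterior (Thompson) sampling with a learned prior.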

This talk is part of the Statistics series.

