
Backprop through the Void: Optimizing Control Variates for Black-Box Gradient Estimation.


  • Speaker: Geoff Roeder (University of Toronto)
  • Time: Monday 27 November 2017, 11:00-12:00
  • Venue: CBL Seminar Room

If you have a question about this talk, please contact .

Gradient-based optimization is the foundation of deep learning and reinforcement learning. Even when the mechanism being optimized is unknown or not differentiable, optimization using high-variance or biased gradient estimates is still often the best strategy. We introduce a general framework for learning low-variance, unbiased gradient estimators for black-box functions of random variables. Our method uses gradients of a neural network trained jointly with model parameters or policies, and is applicable in both discrete and continuous settings. We demonstrate this framework for training discrete latent-variable models. We also give an unbiased, action-conditional extension of the advantage actor-critic reinforcement learning algorithm.
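To give a sense of the construction in the continuous case, the sketch below shows a LAX-style estimator on a toy problem: REINFORCE with a learned control variate c_phi whose parameters are trained jointly to reduce the estimator's variance, while the estimator itself stays unbiased for any phi. The toy objective, the Gaussian with unit variance, the quadratic surrogate, and all names (f, theta, phi, c) are illustrative assumptions for this sketch, not the authors' released code.

    # Illustrative sketch only: a LAX-style gradient estimator with a learned
    # control variate, trained to minimize the estimator's variance.
    import torch

    torch.manual_seed(0)

    def f(b):
        # Black-box objective: only its value is queried, never its gradient.
        return (b - 0.5) ** 2

    theta = torch.tensor(0.0, requires_grad=True)   # mean of the Gaussian p(b | theta)
    phi = torch.nn.Parameter(torch.zeros(3))        # parameters of a quadratic surrogate c_phi(b)
    opt_theta = torch.optim.SGD([theta], lr=0.05)
    opt_phi = torch.optim.Adam([phi], lr=0.01)

    def c(b):
        return phi[0] + phi[1] * b + phi[2] * b ** 2

    for step in range(2001):
        eps = torch.randn(())
        b = theta + eps                             # reparameterized sample, b ~ N(theta, 1)
        logp = -0.5 * (b.detach() - theta) ** 2     # log p(b | theta), up to a constant

        # Score function d/dtheta log p(b|theta) and reparameterization gradient of
        # the surrogate; create_graph=True keeps both differentiable w.r.t. phi.
        dlogp = torch.autograd.grad(logp, theta, create_graph=True)[0]
        dc = torch.autograd.grad(c(b), theta, create_graph=True)[0]

        fb = f(b.detach())                          # black-box evaluation of f
        # Unbiased for any phi: the two c_phi terms cancel in expectation.
        g_hat = (fb - c(b.detach())) * dlogp + dc

        # Train phi to shrink Var[g_hat]: E[g_hat] does not depend on phi, so
        # minimizing the single-sample second moment g_hat^2 minimizes the variance.
        opt_phi.zero_grad()
        (g_hat ** 2).backward()
        opt_phi.step()

        # Update theta with the gradient estimate itself.
        opt_theta.zero_grad()
        theta.grad = g_hat.detach()
        opt_theta.step()

        if step % 500 == 0:
            print(f"step {step:4d}  theta = {theta.item():+.3f}")

In this toy problem theta drifts toward 0.5, the minimizer of E[f(b)]. For discrete latent variables, the talk's method applies the same idea through a continuous relaxation of the discrete sample, with an additional conditioned term that keeps the estimator unbiased.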

This talk is part of the Machine Learning @ CUED series.


