Integrating Combinatorial Solvers and Neural Models

If you have a question about this talk, please contact Richard Diehl Martinez.

Neural models, including language models such as ChatGPT, can exhibit remarkable abilities; paradoxically, they also struggle with algorithmic tasks where much simpler models excel. To address these limitations, we propose Implicit Maximum Likelihood Estimation (IMLE), a framework for end-to-end learning of models that combine combinatorial solvers with differentiable neural components. IMLE lets us incorporate planning and reasoning algorithms into neural architectures simply by adding a decorator to the solver [1, 2].

[1] Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions. NeurIPS 2021. https://arxiv.org/abs/2106.01798
[2] Adaptive Perturbation-Based Gradient Estimation for Discrete Latent Variable Models. AAAI 2023. https://arxiv.org/abs/2209.04862
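
To illustrate the idea, below is a minimal, self-contained sketch of an IMLE-style layer in PyTorch. It is not the authors' released implementation; the names imle_layer and argmax_solver, and the noise_scale and target_scale parameters, are illustrative assumptions. The sketch wraps a black-box MAP solver and uses a perturbation-based surrogate gradient (the difference between the solver's solutions for the original and a loss-nudged input), so that gradients can flow through the discrete step.

import torch

def imle_layer(solver, noise_scale=1.0, target_scale=1.0):
    """Hypothetical decorator: turns a non-differentiable MAP solver
    z = solver(theta) into a layer with an IMLE-style surrogate gradient."""
    class IMLEFunction(torch.autograd.Function):
        @staticmethod
        def forward(ctx, theta):
            # Perturb-and-MAP forward pass: solve on a noisy copy of theta.
            noise = noise_scale * torch.randn_like(theta)
            z = solver(theta + noise)
            ctx.save_for_backward(theta, noise, z)
            return z

        @staticmethod
        def backward(ctx, grad_output):
            theta, noise, z = ctx.saved_tensors
            # Nudge theta against the incoming loss gradient (an implicit
            # target distribution), re-solve with the same noise, and use
            # the difference of solutions as the gradient w.r.t. theta.
            theta_target = theta - target_scale * grad_output
            z_target = solver(theta_target + noise)
            return (z - z_target) / target_scale

    return IMLEFunction.apply

# Toy stand-in for a combinatorial solver: a one-hot argmax.
def argmax_solver(theta):
    return torch.nn.functional.one_hot(
        theta.argmax(dim=-1), num_classes=theta.shape[-1]
    ).float()

theta = torch.randn(4, 10, requires_grad=True)
z = imle_layer(argmax_solver)(theta)
loss = ((z - torch.ones_like(z)) ** 2).mean()
loss.backward()  # theta.grad is populated despite the discrete solver

In practice the toy argmax_solver would be replaced by a real combinatorial solver (for example, a shortest-path or ILP solver); see [1, 2] for the actual estimator and its adaptive variant.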

Join Zoom Meeting

https://cam-ac-uk.zoom.us/j/86071371348?pwd=OVlqdDhZNHlGbzV5RUZrSzM1cUlhUT09

Meeting ID: 860 7137 1348

Passcode: 387918

This talk is part of the NLIP Seminar Series.
