
Task Alignment



Zoom link available upon request; it is sent out on our mailing list, eng-mlg-rcc [at] . Sign up to the mailing list for reminders.

Imagine the implications of submitting an essay or a code routine generated by an LLM without double-checking it. Deploying a task-specific model without verification or modification is even more consequential. Like any other software, ML models cannot be perfect and need constant monitoring and patching. Yet the problem of making targeted bug fixes to ML models has received little attention. We will discuss representative papers from four broad families of solutions and their limitations, and conclude with a critical evaluation of current progress and future directions. The attached image is a non-exhaustive summary of related work. The papers we will (likely) cover are listed below.


Parameter editing approaches:
- Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, and Aleksander Madry. Editing a classifier by rewriting its prediction rules. Advances in Neural Information Processing Systems, 34:23359–23373, 2021.

Transparent model approaches:
- Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. Concept bottleneck models. In International Conference on Machine Learning, pages 5338–5348. PMLR, 2020.
- Bhavana Dalvi Mishra, Oyvind Tafjord, and Peter Clark. Towards teachable reasoning systems: Using a dynamic memory of user feedback for continual system improvement. arXiv preprint arXiv:2204.13074, 2022.

Dense data annotation approaches:
- Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. Right for the right reasons: Training differentiable models by constraining their explanations. arXiv preprint arXiv:1703.03717, 2017.
- Sukrut Rao, Moritz Böhle, Amin Parchami-Araghi, and Bernt Schiele. Studying how to efficiently and effectively guide models with explanations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1922–1933, 2023.

Data augmentation approaches:
- Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.
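To give a flavor of the last family above, group DRO (Sagawa et al., 2019) replaces the average training loss with the worst per-group loss, so that a known failure group dominates the objective. A minimal NumPy sketch of that objective (the function name and toy data are illustrative, not from the paper's code):

```python
import numpy as np

def worst_group_loss(losses, groups, n_groups):
    """Return the largest mean loss over the given groups.

    Group DRO minimizes this worst-group risk instead of the
    average risk, trading a little average accuracy for
    robustness on the hardest group.
    """
    group_means = []
    for g in range(n_groups):
        mask = groups == g
        if mask.any():  # skip empty groups
            group_means.append(losses[mask].mean())
    return max(group_means)

# toy example: samples 2 and 3 form the harder group,
# so its mean loss is what the optimizer would see
losses = np.array([0.1, 0.2, 0.9, 1.1])
groups = np.array([0, 0, 1, 1])
print(worst_group_loss(losses, groups, 2))
```

In a training loop one would compute per-sample losses from the model, apply this reduction, and backpropagate through it (Sagawa et al. additionally reweight groups with an online exponential update rather than taking a hard max).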

This talk is part of the Machine Learning Reading Group @ CUED series.



© 2006-2024, University of Cambridge.