Interpretability in NLP: Moving Beyond Vision
If you have a question about this talk, please contact Huiyuan Xie.

Note unusual time.

Join Zoom Meeting: https://cl-cam-ac-uk.zoom.us/j/92766937414?pwd=bHJ1TDRqbHRHN0l0eEdPUkxHNlVYQT09
Meeting ID: 927 6693 7414
Passcode: 751190

Deep neural network models have been extremely successful in natural language processing (NLP) applications in recent years, but a frequent complaint is their lack of interpretability. Computer vision, on the other hand, has charted its own path to improving the interpretability of deep learning models, most notably with post-hoc interpretation methods such as saliency. In this talk, we investigate the possibility of deploying these interpretation methods in NLP applications. Our study covers common NLP tasks such as language modeling and neural machine translation, and we stress the necessity of quantitative evaluation of interpretations alongside qualitative evaluation. We show that this adaptation is feasible in at least some scenarios, while also pointing out shortcomings of current practice that may shed light on future research directions.

This talk is part of the NLIP Seminar Series.
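For readers unfamiliar with the post-hoc saliency methods the abstract refers to, the sketch below illustrates one common variant (gradient × input over token embeddings) on a toy PyTorch classifier. Every detail here (the model, vocabulary size, and example tokens) is an illustrative assumption for exposition, not the speaker's actual setup or evaluation protocol.

import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 100, 16, 2

class ToyClassifier(nn.Module):
    """Hypothetical embed -> mean-pool -> linear classifier."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.fc = nn.Linear(EMBED_DIM, NUM_CLASSES)

    def forward(self, token_ids):
        emb = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        emb.retain_grad()                  # keep gradients w.r.t. embeddings
        logits = self.fc(emb.mean(dim=1))  # mean-pool over tokens, then classify
        return logits, emb

model = ToyClassifier()
tokens = torch.tensor([[5, 17, 42, 8]])    # one toy "sentence" of token ids

logits, emb = model(tokens)
# Backpropagate from the predicted class's logit to the embeddings.
logits[0, logits[0].argmax()].backward()

# Gradient x input, summed over the embedding dimension, yields one
# saliency score per token; larger magnitude = more influential token.
saliency = (emb.grad * emb).sum(dim=-1).abs().squeeze(0).detach()
print(saliency)

The quantitative-evaluation question the abstract raises is about exactly such scores: whether, for example, deleting the highest-scoring tokens degrades model predictions more than deleting random ones, rather than judging the highlighted tokens by eye alone.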