## NAACL practice talks

- Simon Baker (LTL) & Marek Rei (NLIP), University of Cambridge
- Friday 25 May 2018, 12:00-13:00
- FW26, Computer Laboratory.
If you have a question about this talk, please contact Andrew Caines.
**Marek Rei & Anders Søgaard**

Can attention- or gradient-based visualization techniques be used to infer token-level labels for binary sequence tagging problems, using networks trained only on sentence-level labels? We construct a neural network architecture based on soft attention, train it as a binary sentence classifier, and evaluate it against token-level annotation on four different datasets. Inferring token labels from such a network provides a method for quantitatively evaluating what the model is learning, along with generating useful feedback in assistance systems. Our results indicate that attention-based methods predict token-level labels more accurately than gradient-based methods, sometimes even rivaling the supervised oracle network.
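The idea in the abstract above — reusing the attention weights of a sentence-level binary classifier as token-level scores — can be illustrated with a minimal sketch. This is not the authors' architecture (their model is a trained neural tagger); the function names, random weights, and thresholding rule below are illustrative assumptions only, showing how one attention pass yields both a sentence prediction and per-token scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_and_classify(token_vecs, w_att, w_cls):
    """Soft-attention sentence classifier.

    token_vecs: (T, d) token representations.
    Returns the sentence-level probability and the (T,) attention
    distribution, which doubles as a token-level score vector.
    """
    scores = token_vecs @ w_att                 # (T,) unnormalised attention
    alpha = softmax(scores)                     # attention over tokens
    sent_vec = alpha @ token_vecs               # weighted sum -> sentence vector
    p_sent = 1.0 / (1.0 + np.exp(-(sent_vec @ w_cls)))  # binary prediction
    return p_sent, alpha

d = 8
tokens = rng.normal(size=(5, d))    # stand-in for learned token embeddings
w_att = rng.normal(size=d)          # attention parameters (untrained here)
w_cls = rng.normal(size=d)          # classifier parameters (untrained here)

p, alpha = attend_and_classify(tokens, w_att, w_cls)
# One simple decision rule: tokens attended to more than uniformly
# are inferred as positive, despite training only on sentence labels.
token_preds = alpha > (1.0 / len(alpha))
```

In the paper's setting the parameters are learned from sentence-level supervision; the point of the sketch is only that the attention distribution is a free by-product of sentence classification that can be read off as token labels.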
**Yiannos A. Stathopoulos, Simon Baker, Marek Rei & Simone Teufel**

Information about the meaning of mathematical variables in text is useful in NLP/IR tasks such as symbol disambiguation, topic modeling, and mathematical information retrieval (MIR). We introduce variable typing, the task of assigning one mathematical type (a multi-word technical term referring to a mathematical concept) to each variable in a sentence of mathematical text. As part of this work, we also introduce a new annotated dataset composed of 33,524 data points extracted from scientific documents published on arXiv. Our intrinsic evaluation demonstrates that our dataset is sufficient to successfully train and evaluate current classifiers from three different model architectures. The best-performing model is evaluated on an extrinsic task, MIR, by producing a typed formula index. Our results show that the best-performing MIR models are those that use our typed index rather than a formula index containing only raw symbols, demonstrating the usefulness of variable typing.

This talk is part of the NLIP Seminar Series.
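To make the variable-typing task concrete, here is a deliberately toy illustration (not the authors' system, which uses trained classifiers over a large type inventory): each variable in a sentence is assigned the type, drawn from a small hand-written inventory of multi-word terms, whose keyword set overlaps the variable's sentence context most. The inventory and keyword sets are invented for this example.

```python
# Hypothetical type inventory: multi-word terms mapped to context keywords.
TYPE_KEYWORDS = {
    "prime number": {"prime", "number", "divisor", "dividing"},
    "real number": {"real", "continuous", "interval"},
    "random variable": {"random", "probability", "distribution"},
}

def type_variable(sentence, variable):
    """Assign `variable` the type whose keywords best overlap its context.

    The context is simply the other (lower-cased, de-punctuated) words in
    the sentence containing the variable.
    """
    context = {w.strip(".,").lower() for w in sentence.split() if w != variable}
    return max(TYPE_KEYWORDS, key=lambda t: len(TYPE_KEYWORDS[t] & context))

print(type_variable("Let p be a prime number dividing n .", "p"))
# -> prime number
```

A real system must pick one of thousands of candidate types per variable and handle sentences with several variables of different types, which is why the paper evaluates learned classifiers rather than keyword matching; the sketch only fixes the input/output shape of the task.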