University of Cambridge > Talks.cam > NLIP Seminar Series > Latent TAG Derivations for Semantic Role Labeling
Latent TAG Derivations for Semantic Role Labeling
If you have a question about this talk, please contact Laura Rimell.

(Joint work with Yudong Liu and Gholamreza Haffari)

Semantic Role Labeling (SRL) is the natural language processing task of identifying and labeling all the arguments of each predicate occurring in a sentence. SRL is difficult because syntactic alternations allow arguments to appear in different syntactic positions relative to the predicate, and complex syntactic embedding can create long-distance dependencies between predicate and argument. As in other natural language learning tasks, identifying discriminative features is crucial, and all state-of-the-art SRL systems use high-quality statistical parsers as a source of features for identifying and classifying semantic roles.

In statistical parsing, the use of latent information (such as state-splitting of non-terminals in a context-free grammar) has led to substantial improvements in parsing accuracy. However, apart from the sentence-simplification approach of Vickrey and Koller (2008), latent information has not been exploited for semantic role labeling. In our work, we take the output of a statistical parser and decompose the phrase-structure tree into a large number of hidden Tree-Adjoining Grammar (TAG) derivations. Each hidden, or latent, TAG derivation captures a different way of representing the structural dependency between the predicate and the argument. We hypothesize that positive and negative examples of individual semantic roles can be reliably distinguished by (possibly different) latent TAG features. Motivated by this insight, we show that latent support vector machines (LSVMs) can be applied to the SRL task by exploiting these latent TAG features. In experiments on the PropBank CoNLL-2005 data set, our method significantly outperforms the state of the art, even compared to models using global constraints or global inference over multiple parses.
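The core idea behind a latent SVM classifier can be sketched in a few lines: the latent variable is the TAG derivation, and a candidate argument is scored by maximizing a linear score over all of its candidate derivations. This is a minimal illustrative sketch, not the authors' implementation; the function names, the dict-based feature representation, and the decision threshold are all assumptions made for the example.

```python
# Minimal sketch of a latent SVM decision rule, where the latent
# variable h is a TAG derivation:  f(x) = max_h  w . phi(x, h)
# All names here (score, classify, featurize) are illustrative
# stand-ins, not the API of any real SRL system.

def score(w, derivations, featurize):
    """Score an argument candidate by maximizing over its latent derivations.

    w           -- weight vector as a dict: feature name -> weight
    derivations -- candidate TAG derivations for one predicate-argument pair
    featurize   -- maps a derivation to a dict of feature values
    """
    def dot(features):
        # Linear score w . phi(x, h); unseen features contribute 0.
        return sum(w.get(f, 0.0) * v for f, v in features.items())

    # The classifier commits to the single best-scoring derivation.
    return max(dot(featurize(d)) for d in derivations)

def classify(w, derivations, featurize, threshold=0.0):
    """Predict that the candidate bears the role iff the max score is positive."""
    return score(w, derivations, featurize) > threshold
```

Training such a model typically alternates between inferring the best derivation for each positive example and retraining the weights with a standard SVM objective, which is what makes the formulation "latent".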
We show that latent SVMs offer an interesting new framework for NLP tasks, and through experimental analysis we examine how and why the method is effective at exploiting latent TAG features to improve the precision of identifying and classifying semantic roles.

This talk is part of the NLIP Seminar Series.