BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Latent TAG Derivations for Semantic Role Labeling - Anoop Sarkar\,
  Simon Fraser University
DTSTART:20100312T120000Z
DTEND:20100312T130000Z
UID:TALK23616@talks.cam.ac.uk
CONTACT:Laura Rimell
DESCRIPTION:(Joint work with Yudong Liu and Gholamreza Haffari) \n\nSemant
 ic Role Labeling (SRL) is a natural language processing task that aims to 
 identify and label all the arguments for each predicate occurring in a sen
 tence. SRL is difficult because arguments can appear in different syntacti
 c positions relative to the predicate due to syntactic alternations. Furth
 ermore\, complex syntactic embedding can create long-distance dependencies
  between predicate and argument. As in other natural language learning tas
 ks\, identifying discriminative features plays an important role and all s
 tate-of-the-art SRL systems use high-quality statistical parsers as a sour
 ce of features in order to identify and classify semantic roles. \n\nIn st
 atistical parsing the use of latent information (such as state-splitting o
 f non-terminals in a context-free grammar) has led to substantial improvem
 ents in parsing accuracy. However\, apart from the sentence simplification
  approach of Vickrey and Koller (2008)\, latent information has not been e
 xploited for semantic role labeling. In our work\, we take the output of a
  statistical parser and then decompose the phrase structure tree into a la
rge number of hidden Tree-Adjoining Grammar (TAG) derivations. Each hidden
 or latent TAG derivation represents a different way of encoding the struc
 tural dependency relationship between the predicate and argument. \n\n
 We hypothesize that positive and negative examples of individual semantic 
 roles can be reliably distinguished by possibly different latent TAG featu
 res. Motivated by this insight we show that latent support vector machines
  (LSVMs) can be used for the SRL task by exploiting these latent TAG featu
 res. In experiments on the PropBank-CoNLL 2005 data set\, our method signi
 ficantly outperforms the state of the art (even compared to models using g
 lobal constraints or global inference over multiple parses). We show that 
 latent SVMs offer an interesting new framework for NLP tasks\, and using e
 xperimental analysis we examine how and why the method is effective at exp
 loiting the latent TAG features in order to improve the precision of ident
 ifying and classifying semantic roles.
LOCATION:SW01\, Computer Laboratory
END:VEVENT
END:VCALENDAR
