BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:EACL potpourri - NLIP PhDs and postdocs
DTSTART:20170224T120000Z
DTEND:20170224T130000Z
UID:TALK71217@talks.cam.ac.uk
CONTACT:Kris Cao
DESCRIPTION:Three 15-minute presentations on accepted EACL short papers:\n\n--
 ------------------------------------------------\n\nLearning to Negate Adj
 ectives with Bilinear Models\n\nLaura Rimell\, Amandla Mabona\, Luana Bula
 t\, Douwe Kiela\n\nWe learn a mapping that negates adjectives\nby predicti
 ng an adjective’s antonym in\nan arbitrary word embedding model. We\nsho
 w that both linear models and neural\nnetworks improve on this task when t
 hey\nhave access to a vector representing the semantic\ndomain of the inpu
 t word\, e.g. a\ncentroid of temperature words when predicting\nthe antony
 m of ‘cold’. We introduce\na continuous class-conditional bilinear\nne
 ural network which is able to negate\nadjectives with high precision.\n\n-
 -------------------------------------------------\n\nModelling metaphor wi
 th attribute-based semantics\n\nLuana Bulat\, Ekaterina Shutova\, Stephen 
 Clark\n\n\nOne of the key problems in computational metaphor modelling is 
 finding the optimal level of abstraction of semantic representations\, suc
 h that these are able to capture and generalise metaphorical mechanisms. I
 n this paper we present the first metaphor identification method that uses
  representations constructed from property norms. Such norms have been pre
 viously shown to provide a cognitively plausible representation of concept
 s in terms of semantic properties. Our results demonstrate that such prope
 rty-based semantic representations provide a superior model of cross-domai
 n knowledge projection in metaphors\, outperforming standard distributiona
 l models on a metaphor identification task. \n\n--------------------------
 ------------------------\n\nLatent Variable Dialogue Models and their Dive
 rsity\n\nKris Cao and Stephen Clark\n\nWe present a dialogue generation mo
 del that directly captures the variability in possible responses to a give
 n input\, which reduces the 'boring output' issue of deterministic dialogu
 e models. Experiments show that our model generates more diverse outputs t
 han baseline models\, and also generates more consistently acceptable outp
 ut than sampling from a deterministic encoder-decoder model.
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
