BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Text-to-text Generation Beyond Machine Translation - Shashi Naraya
 n\, University of Edinburgh
DTSTART:20171006T110000Z
DTEND:20171006T120000Z
UID:TALK84631@talks.cam.ac.uk
CONTACT:Amandla Mabona
DESCRIPTION:In recent years we have witnessed the achievements of\nsequenc
 e-to-sequence encoder-decoder models for machine translation.\nIt is no su
 rprise that these models are also setting a trend in\nvarious other genera
 tion tasks such as dialogue generation\, image\ncaption generation\, sente
 nce compression\, paraphrase generation\,\nsentence simplification and doc
 ument summarization. Yet\, despite their\nimpressive results\, these deep 
 learning sequence models are often\napplied off-the-shelf to these text-to
 -text generation tasks.\n\nIn this talk I will discuss two examples\, sent
 ence simplification and\ndocument summarization\, that explore the hypothe
 sis that tailoring the\nmodel with knowledge of the task structure and lin
 guistic requirements\nleads to better performance. In the first part\, I w
 ill propose a new\nsentence simplification task (split-and-rephrase) where
 the aim is to\nsplit a complex sentence into a meaning-preserving sequen
 ce of shorter\nsentences. I will show that the semantically-motivated spl
 it model is\na key factor in generating fluent and meaning-preserving rep
 hra
 sings.\nIn the second part\, I will discuss the shortcomings of\nsequence-
 to-sequence abstractive methods for document summarization\nand show that 
 an extractive summarization system trained to globally\noptimize a common 
 summarization evaluation metric outperforms\nstate-of-the-art extractive a
 nd abstractive systems in both automatic\nand extensive human evaluations.
 \n\nBIO: Shashi Narayan is a postdoctoral researcher in the School of\nInf
 ormatics at the University of Edinburgh. He obtained his PhD in\nComputer 
 Science at the University of Lorraine\, INRIA under Claire\nGardent in 201
 4. His research focuses on natural language generation\nand understanding
 with the aim of developing general frameworks for\ngeneration from underl
 ying meaning representations or for text\nrewriting such as summarization
 \, text simplification and paraphrase\ngeneration. He also has experience
 with parsing and other structured\nprediction problems.
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
