Inducing Meaning from Text
- Speaker: Dan Jurafsky (Stanford University)
- Date & Time: Tuesday 06 May 2008, 13:00 - 14:00
- Venue: LR5, Engineering Department, Baker Building
Abstract
Online models of word meaning (like dictionaries and thesauri) or world knowledge (like scripts or narratives) are crucial for natural language understanding. Could we learn these meanings automatically from text? I first report on joint work with Rion Snow and Andrew Ng on inducing the meaning of words from text on the Web, in the context of augmenting WordNet, a large online thesaurus of English. This work includes a semi-supervised method for learning when a new word is a ‘hypernym’ of (in the ‘is-a’ relation with) another word, a new probabilistic algorithm for combining evidence from multiple relation detectors, and an algorithm for clustering the induced word senses. I then report on joint work with Nate Chambers on inducing ‘narratives’, script-like sequences of events that follow a protagonist. This work includes inducing the relations between events, ordering the relations, and clustering them into prototype narratives.
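To make the hypernym-learning idea concrete: one classic source of ‘is-a’ evidence is lexico-syntactic (Hearst-style) surface patterns, which the Snow, Jurafsky & Ng line of work builds on. The sketch below is purely illustrative; the patterns and example sentences are assumptions for demonstration, not the talk's actual method or detectors.

```python
import re

# Two classic Hearst-style patterns (illustrative, not exhaustive):
#   "X such as Y"      -> Y is-a X
#   "X and other Y"    -> X is-a Y
HEARST_PATTERNS = [
    (re.compile(r"(\w+) such as (\w+)"), "such_as"),
    (re.compile(r"(\w+) and other (\w+)"), "and_other"),
]

def extract_hypernym_pairs(text):
    """Return (hyponym, hypernym) pairs suggested by surface patterns."""
    pairs = []
    for pattern, name in HEARST_PATTERNS:
        for m in pattern.finditer(text):
            if name == "such_as":      # "animals such as dogs"
                pairs.append((m.group(2), m.group(1)))
            else:                      # "cats and other pets"
                pairs.append((m.group(1), m.group(2)))
    return pairs

print(extract_hypernym_pairs("animals such as dogs"))
# -> [('dogs', 'animals')]
print(extract_hypernym_pairs("cats and other pets"))
# -> [('cats', 'pets')]
```

Individual patterns like these are noisy, which is why combining evidence from multiple relation detectors probabilistically, as the abstract describes, matters in practice.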
Series: This talk is part of the Machine Intelligence Laboratory Speech Seminars series.
Included in Lists
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- CUED Speech Group Seminars
- Guy Emerson's list
- Information Engineering Division seminar list
- LR5, Engineering Department, Baker Building
- Machine Intelligence Laboratory Speech Seminars
- PhD related
- Trust & Technology Initiative - interesting events
- yk449