Senses can help vector space models of lexical substitution
If you have a question about this talk, please contact Dimitri Kartsaklis.
The role of senses in NLP applications has been questioned due to the high performance of vector space models on semantic tasks. These models deliver state-of-the-art performance without explicitly accounting for senses; indeed, senses have even been shown to be harmful for some tasks. In this talk, I will show how sense representations tailored to the task can improve the results of vector-based lexical substitution models. I will discuss two aspects of paraphrase substitution, namely the clusterability of paraphrases into senses and their substitutability in context. Finally, I will present preliminary results on core sense detection through a multi-view approach to paraphrase semantic analysis.
This talk is part of the Language Technology Lab Seminars series.