Improving & Better Understanding Word Vector Representations
If you have a question about this talk, please contact Tamara Polajnar.
Data-driven learning of distributional word vector representations is a technique of central importance in natural language processing. In this talk, we will explore several questions, and their solutions, aimed at improving and better understanding distributional word vectors. Can word vectors benefit from the information stored in semantic lexicons? Can these word vectors be made to resemble the features typically used in NLP? Do the vector dimensions have identifiable meanings associated with them, or are they uninterpretable? Is it necessary to construct word vectors from distributional context at all?
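The first question above concerns injecting lexicon knowledge into corpus-derived vectors. As a rough illustration only (not the speaker's own method), the sketch below implements a simple retrofitting-style post-processing step: each vector is iteratively pulled toward the average of its lexicon neighbours while staying anchored to its original value. The retrofit function, the toy vectors, and the alpha/beta weights are illustrative assumptions.

import numpy as np

def retrofit(vectors, lexicon, iterations=10, alpha=1.0, beta=1.0):
    # vectors: dict word -> 1-D numpy array (original embeddings)
    # lexicon: dict word -> list of semantically related words
    # alpha weighs fidelity to the original vector; beta weighs neighbours.
    new_vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new_vecs]
            if word not in new_vecs or not nbrs:
                continue
            # Weighted average of the original vector and neighbour vectors.
            total = alpha * vectors[word]
            for n in nbrs:
                total += beta * new_vecs[n]
            new_vecs[word] = total / (alpha + beta * len(nbrs))
    return new_vecs

# Toy example: a tiny synonym lexicon over 2-D vectors.
vecs = {"happy": np.array([1.0, 0.0]),
        "glad": np.array([0.0, 1.0]),
        "sad": np.array([-1.0, 0.0])}
lex = {"happy": ["glad"], "glad": ["happy"]}
print(retrofit(vecs, lex)["happy"])

With this toy data, "happy" drifts toward "glad" over a few iterations while retaining most of its original direction; "sad", which has no lexicon neighbours, is left unchanged.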
This talk is part of the NLIP Seminar Series.