Language in 3D: semantic tensor space
If you have a question about this talk, please contact Thomas Lippincott.
Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. Until now, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise: one can easily imagine situations where it is desirable to investigate co-occurrences across three modes or more. In this presentation, we investigate the use of tensors (the generalization of matrices to higher orders) for the induction of language models based on multi-way co-occurrences. Using tensors, combined with appropriate factorization models, we are able to build semantically richer language models that are useful in applications such as selectional preference induction and word sense discrimination.
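As a rough illustration of the general idea (not the speaker's actual models), the sketch below builds a toy three-mode verb–subject–object co-occurrence tensor in NumPy and factorizes it with a plain CP/PARAFAC alternating-least-squares routine. The vocabulary, counts, rank, and the selectional-preference read-out at the end are all invented for illustration.

```python
import numpy as np

# Toy three-way (verb, subject, object) co-occurrence counts.  In a real
# system these would be extracted from a dependency-parsed corpus; the
# words and counts here are purely illustrative.
verbs    = ["eat", "drink", "drive"]
subjects = ["person", "dog"]
objects  = ["apple", "water", "car"]

X = np.zeros((len(verbs), len(subjects), len(objects)))
X[0, 0, 0] = 12.0   # (eat,   person, apple)
X[0, 1, 0] = 7.0    # (eat,   dog,    apple)
X[1, 0, 1] = 9.0    # (drink, person, water)
X[1, 1, 1] = 4.0    # (drink, dog,    water)
X[2, 0, 2] = 15.0   # (drive, person, car)


def cp_als(T, rank, n_iter=500, seed=0):
    """Rank-`rank` CP (PARAFAC) decomposition of a 3-mode tensor by
    alternating least squares: T ~= sum_r a_r (outer) b_r (outer) c_r."""
    rng = np.random.default_rng(seed)
    A = rng.random((T.shape[0], rank))
    B = rng.random((T.shape[1], rank))
    C = rng.random((T.shape[2], rank))
    for _ in range(n_iter):
        # Each step solves a linear least-squares problem for one factor
        # matrix while the other two are held fixed (the einsum computes
        # the matricized-tensor-times-Khatri-Rao product directly).
        A = np.einsum("ijk,jr,kr->ir", T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum("ijk,ir,kr->jr", T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum("ijk,ir,jr->kr", T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C


A, B, C = cp_als(X, rank=2)

# Each latent component couples a verb profile (rows of A) with the
# subjects (B) and objects (C) it prefers -- a crude stand-in for
# selectional preference induction.
for r in range(A.shape[1]):
    top_verb = verbs[int(np.argmax(np.abs(A[:, r])))]
    top_subj = subjects[int(np.argmax(np.abs(B[:, r])))]
    top_obj  = objects[int(np.argmax(np.abs(C[:, r])))]
    print(f"component {r}: {top_verb} / {top_subj} / {top_obj}")
```

The same three-mode tensor could instead be handed to an off-the-shelf factorization library; the hand-rolled ALS loop is used here only to keep the sketch self-contained.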
This talk is part of the NLIP Seminar Series.