Reading and Reasoning with Vector Representations
In recent years, vector representations of knowledge have become popular in NLP and beyond. They offer at least two core benefits: reasoning with (low-dimensional) vectors tends to generalise better, and it usually scales very well. But they raise their own set of questions: What types of inference do they support? How can they capture asymmetry? How can explicit background knowledge be injected into vector-based architectures? How can we provide “proofs” that justify predictions? In this talk, I will sketch initial answers to some of these questions, based on our recent work. In particular, I will illustrate how a vector space can simulate the workings of logic.
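To make the claim that a vector space can simulate logic concrete, here is a minimal sketch, not taken from the talk itself: it assumes a toy “possible worlds” encoding in which each proposition is a 0/1 vector over a fixed set of worlds, so that entailment reduces to componentwise comparison. All names in the snippet (rain, wet, entails, and so on) are hypothetical illustrations.

```python
import numpy as np

# Hypothetical toy encoding: each proposition is a 0/1 vector over
# four "possible worlds". A 1 in position i means the proposition is
# true in world i.
rain = np.array([1, 1, 0, 0])  # worlds where "it rains" is true
wet  = np.array([1, 1, 1, 0])  # worlds where "the street is wet" is true

def entails(p, q):
    """p entails q iff q is true in every world where p is true."""
    return bool(np.all(p <= q))

def conj(p, q):
    """Logical AND: true only in worlds where both are true."""
    return np.minimum(p, q)

def disj(p, q):
    """Logical OR: true in worlds where either is true."""
    return np.maximum(p, q)

def neg(p):
    """Logical NOT: flip truth in every world."""
    return 1 - p

print(entails(rain, wet))             # True: rain => wet in this model
print(entails(wet, rain))             # False: entailment is asymmetric
print(entails(conj(rain, wet), wet))  # True: (p AND q) => q always holds
```

Note how entails(rain, wet) and entails(wet, rain) differ: the componentwise ordering is inherently directional, which is one simple way a vector representation can capture the asymmetry the abstract asks about.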
This talk is part of the Language Technology Lab Seminars series.