University of Cambridge, Language Technology Lab Seminars

Computational Models of the Influence of Context on Sentence Acceptability


If you have a question about this talk, please contact Qianchu Liu.

We study the influence of context on sentence acceptability. First we compare crowdsourced acceptability ratings of sentences judged in isolation, in a relevant context, and in an irrelevant context. Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings. Moreover, in relevant contexts we observe a discourse coherence effect which uniformly raises acceptability. We then test unidirectional and bidirectional neural language models on their ability to predict acceptability ratings. The bidirectional models give very promising results, with the best model achieving a new state of the art for unsupervised acceptability prediction. The two sets of experiments provide insights into the cognitive aspects of sentence processing and into central issues in the computational modelling of text and discourse. (Joint work with Jey Han Lau, The University of Melbourne; Carlos Armendariz, Queen Mary University of London; Matthew Purver, Queen Mary University of London; and Chang Shu, University of Nottingham Ningbo China.)
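The abstract does not give the scoring details, but unsupervised acceptability prediction in this line of work typically converts a language model's sentence probability into an acceptability score normalised for length and word frequency (e.g. the SLOR measure). The sketch below is only illustrative: it uses a toy add-one-smoothed bigram model as a stand-in for the neural language models in the talk, and the corpus and sentences are invented for the example.

```python
import math
from collections import Counter

# Toy training corpus; the actual experiments use large neural LMs,
# not a bigram model -- this is purely a stand-in for illustration.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat saw the dog".split(),
]

unigrams = Counter(w for s in corpus for w in s)
bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
vocab = len(unigrams)
total = sum(unigrams.values())

def logp_unigram(sent):
    # Add-one smoothed unigram log-probability of the sentence.
    return sum(math.log((unigrams[w] + 1) / (total + vocab)) for w in sent)

def logp_model(sent):
    # Add-one smoothed bigram log-probability (stand-in for an LM score).
    lp = math.log((unigrams[sent[0]] + 1) / (total + vocab))
    for prev, w in zip(sent, sent[1:]):
        lp += math.log((bigrams[(prev, w)] + 1) / (unigrams[prev] + vocab))
    return lp

def slor(sent):
    # SLOR: subtract the unigram log-prob (removing word-frequency effects)
    # and normalise by sentence length, so scores are comparable across
    # sentences of different lengths and lexical frequencies.
    return (logp_model(sent) - logp_unigram(sent)) / len(sent)

natural = "the cat sat on the mat".split()
scrambled = "mat the on sat cat the".split()
# Both sentences contain the same words, so the unigram terms are equal;
# the model term rewards the natural word order.
assert slor(natural) > slor(scrambled)
```

Because the two test sentences share the same words, the unigram correction cancels and the comparison isolates the model's sensitivity to word order, which is the kind of signal an acceptability predictor needs.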

This talk is part of the Language Technology Lab Seminars series.




© 2006-2023, University of Cambridge.