University of Cambridge > Natural Language Processing Reading Group

Automatic Evaluation of Linguistic Quality in Multi-Document Summarization


If you have a question about this talk, please contact Marek Rei.

Helen will present the following paper:

Emily Pitler, Annie Louis, and Ani Nenkova. Automatic Evaluation of Linguistic Quality in Multi-Document Summarization. ACL 2010.

To date, few attempts have been made to develop and validate methods for automatic evaluation of linguistic quality in text summarization. We present the first systematic assessment of several diverse classes of metrics designed to capture various aspects of well-written text. We train and test linguistic quality models on consecutive years of NIST evaluation data in order to show the generality of results. For grammaticality, the best results come from a set of syntactic features. Focus, coherence and referential clarity are best evaluated by a class of features measuring local coherence on the basis of cosine similarity between sentences, coreference information, and summarization-specific features. Our best results are 90% accuracy for pairwise comparisons of competing systems over a test set of several inputs and 70% for ranking summaries of a specific input.

This talk is part of the Natural Language Processing Reading Group series.


