Exploiting Large Corpora for Parsing
If you have a question about this talk, please contact Tamara Polajnar.
Parsers have come a long way since the advent of research in NLP, with much of the field settling on supervised techniques trained on linguistically annotated corpora. However, in recent years accuracy improvements have stalled at around 92% to 94%, and parsing mistakes account for a substantial proportion of the errors made in downstream applications.
This talk will discuss some of my previous work in n-best CCG parsing and reranking, and then turn to potential methods of overcoming this parsing bottleneck using large external corpora. Much of this work, part of my PhD at the University of Sydney, is in progress and speculative, and I welcome any feedback or suggestions.
This talk is part of the NLIP Seminar Series.