
Neural Architectures for Sequence Labelling


If you have a question about this talk, please contact Anita Verő.

Many NLP tasks, including named entity recognition (NER), part-of-speech (POS) tagging, shallow parsing and error detection, can be framed as types of sequence labelling. The development of accurate and efficient sequence labelling models is therefore useful for a wide range of downstream applications. Work in this area has traditionally involved task-specific feature engineering: for example, integrating gazetteers for named entity recognition, or using features from a morphological analyser for POS-tagging. Recent developments in neural architectures and representation learning have opened the door to models that can discover useful features automatically from the data. Such sequence labelling systems are applicable to many tasks, using only the surface text as input, yet are able to achieve competitive results.
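As an illustration of the framing (not taken from the talk itself), NER can be cast as sequence labelling by assigning one BIO tag per token; the entity spans are then recovered from the tag sequence. The tokens, labels and helper below are hypothetical:

```python
# Hypothetical example: NER framed as per-token BIO labelling.
# Each token gets a tag: B-TYPE (begins an entity), I-TYPE (inside one), O (outside).
tokens = ["John", "Smith", "visited", "Cambridge", "yesterday"]
labels = ["B-PER", "I-PER", "O", "B-LOC", "O"]

def extract_entities(tokens, labels):
    """Collect (entity_text, entity_type) spans from a BIO-labelled sentence."""
    entities, current, etype = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:  # close the previous entity before starting a new one
                entities.append((" ".join(current), etype))
            current, etype = [tok], lab[2:]
        elif lab.startswith("I-") and current:
            current.append(tok)
        else:  # "O" tag ends any open entity
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities

print(extract_entities(tokens, labels))
# [('John Smith', 'PER'), ('Cambridge', 'LOC')]
```

The same per-token scheme carries over directly to POS-tagging, chunking and error detection, which is what makes a single labelling architecture reusable across all four tasks.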

In this talk, we investigate various methods for further improving neural sequence labelling models. We start with a sequence labelling model that combines bidirectional LSTMs and CRFs, and then explore two extensions: 1) character-based representations, for capturing sub-word features and character patterns; 2) semi-supervised multitask objectives, providing the network with additional training signals for learning useful general-purpose features. We evaluate the impact of these architectures on datasets that cover four different tasks: NER, POS-tagging, chunking and error detection in learner texts.
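The CRF layer mentioned above scores whole label sequences rather than making independent per-token decisions; at test time the highest-scoring sequence is found with Viterbi decoding. A minimal sketch of that decoding step, with toy scores rather than the talk's actual model:

```python
# Viterbi decoding for a linear-chain CRF (toy numbers, illustrative only).
# emissions[t][y]: the encoder's score for label y at position t.
# transitions[p][y]: the learned score for moving from label p to label y.
def viterbi(emissions, transitions):
    n_labels = len(emissions[0])
    best = [emissions[0][:]]  # best[t][y]: score of best path ending in y at t
    back = []                 # backpointers for recovering the path
    for t in range(1, len(emissions)):
        scores, pointers = [], []
        for y in range(n_labels):
            cands = [best[-1][p] + transitions[p][y] for p in range(n_labels)]
            p_star = max(range(n_labels), key=lambda p: cands[p])
            scores.append(cands[p_star] + emissions[t][y])
            pointers.append(p_star)
        best.append(scores)
        back.append(pointers)
    # Backtrack from the best final label
    y = max(range(n_labels), key=lambda y: best[-1][y])
    path = [y]
    for pointers in reversed(back):
        y = pointers[y]
        path.append(y)
    return list(reversed(path))

# Two labels (0 and 1), three tokens; the transition score penalises 1 -> 1.
emissions = [[0.5, 2.0], [1.9, 2.0], [0.5, 2.0]]
transitions = [[0.0, 0.0], [0.0, -3.0]]
print(viterbi(emissions, transitions))
# [1, 0, 1]
```

Here the penalty on the 1 → 1 transition makes the decoder return [1, 0, 1] even though label 1 has the highest per-token emission score at every position; greedy per-token prediction would give [1, 1, 1]. This is the kind of sequence-level constraint (e.g. an I- tag should not follow O) that motivates pairing the bidirectional LSTM with a CRF output layer.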

This talk is part of the NLIP Seminar Series.



© 2006-2022, University of Cambridge.