Strong Structural Priors for Neural Network Architectures
If you have a question about this talk, please contact Kris Cao.
Many current state-of-the-art methods in natural language processing and information extraction rely on representation learning. Despite the success and wide adoption of neural networks in the field, we still face major challenges such as (i) efficiently estimating model parameters in domains where annotation is costly and only a few training examples are available, (ii) learning interpretable representations that allow inspection and debugging of deep neural networks, and (iii) incorporating commonsense and task-specific prior knowledge. To tackle these issues, advanced neural network architectures such as differentiable memory, attention, differentiable data structures, and even differentiable Turing machines, program interpreters and theorem provers have recently been proposed. In this talk I will give an overview of our work on such strong structural priors for sequence modeling, knowledge base completion and program induction.
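For readers unfamiliar with the mechanisms named above, the following is a minimal NumPy sketch of scaled dot-product attention, one representative structural prior from the abstract. It is an illustrative toy, not the speaker's implementation; the function names and the random example data are assumptions made here for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(query, keys, values):
    """Scaled dot-product attention over a sequence.

    query:  (d,)    current state vector
    keys:   (T, d)  one key per input position
    values: (T, v)  one value per input position
    Returns a weighted summary of `values` and the attention
    weights, which can be inspected for interpretability.
    """
    scores = keys @ query / np.sqrt(keys.shape[-1])  # (T,) similarity scores
    weights = softmax(scores)                        # distribution over positions
    context = weights @ values                       # (v,) weighted summary
    return context, weights

# Toy usage: attend over 5 positions with 8-dimensional keys/values.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
context, weights = dot_product_attention(q, K, V)
print(weights)  # sums to 1; shows which positions the model "reads"
```

Because the weights form an explicit distribution over input positions, they can be visualized directly, which is one way such structural priors support the inspection and debugging goal mentioned in the abstract.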
This talk is part of the NLIP Seminar Series.