Lagrangian relaxation for inference in natural language processing
If you have a question about this talk, please contact Bill Byrne. There will be sandwiches.

Combinatorial optimization has a long history of methods that exploit structure in complex problems, such as dual decomposition and Lagrangian relaxation. These methods leverage the observation that complex inference problems can often be decomposed into efficiently solvable sub-problems. Thus far, however, they have not been widely used in NLP.

In this talk I’ll describe recent work on inference algorithms for NLP based on Lagrangian relaxation. In the first part of the talk I’ll describe work on non-projective parsing. In the second part I’ll describe an exact decoding algorithm for syntax-based statistical translation. If time permits, I’ll also briefly describe algorithms for dynamic programming intersections (e.g., the intersection of a PCFG and an HMM) and for phrase-based translation.

For all of the problems we consider, the resulting algorithms produce exact solutions, with certificates of optimality, on the vast majority of examples. The algorithms are efficient both for problems that are NP-hard (as is the case for non-projective parsing and phrase-based translation) and for problems that are solvable in polynomial time by dynamic programming but where the traditional exact algorithms are far too expensive to be practical.

While the focus of this talk is on NLP problems, there are close connections to inference methods for graphical models, in particular belief propagation. Our work was inspired by recent work that has used dual decomposition as an alternative to belief propagation in Markov random fields.

This is joint work with Yin-Wen Chang, Tommi Jaakkola, Terry Koo, Sasha Rush, and David Sontag.
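To make the decomposition idea concrete, here is a minimal, illustrative Python sketch of Lagrangian relaxation with subgradient updates; it is not code from the talk. The two sub-problems are toy per-position tagging models that must agree on a tag sequence; in the work described above they would instead be structured decoders (e.g., CKY parsing or Viterbi tagging), and all names here (decode, dual_decomposition, f_scores, g_scores) are hypothetical.

import numpy as np

def decode(scores, u, sign):
    # Solve one sub-problem exactly: independently at each position,
    # pick the tag maximising the model score plus the Lagrangian term.
    # In the applications above this step would be a combinatorial
    # decoder (CKY, Viterbi, minimum spanning tree, ...).
    return np.argmax(scores + sign * u, axis=1)

def dual_decomposition(f_scores, g_scores, iterations=50, step=1.0):
    n, k = f_scores.shape
    u = np.zeros((n, k))  # multipliers for the agreement constraint y = z
    y = decode(f_scores, u, +1.0)
    for t in range(iterations):
        y = decode(f_scores, u, +1.0)   # max_y f(y) + u.y
        z = decode(g_scores, u, -1.0)   # max_z g(z) - u.z
        if np.array_equal(y, z):
            # The sub-problems agree, so y is provably optimal for the
            # combined problem: a certificate of optimality.
            return y, True
        # Subgradient step on the dual: u <- u - alpha_t (y - z), with
        # the solutions written as one-hot indicator matrices.
        alpha = step / (t + 1)
        u -= alpha * (np.eye(k)[y] - np.eye(k)[z])
    return y, False  # no certificate within the iteration budget

# Toy usage: two models over 4 positions and 3 tags that must agree.
rng = np.random.default_rng(0)
f, g = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
tags, certified = dual_decomposition(f, g)
print(tags, "certified optimal:", certified)

In this toy setting the combined problem could of course be solved directly position by position; the sketch only illustrates the machinery. Its value in the settings above is that each decode call stays efficient even when the joint problem is NP-hard, and agreement between the sub-problems yields the certificate of optimality mentioned in the abstract.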
This talk is part of the Speech Seminars series.