From the Information Extraction Pipeline to Global Models, and Back
If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.
This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image/voice recorded, please consider this before attending.
Decisions in information extraction (IE), such as determining the types and relations of entities mentioned in text, depend on each other. To remain efficient, most systems make decisions in a sequential pipeline fashion, even if later decisions could help earlier ones. In this talk I will show how we used Conditional Random Fields to make these decisions jointly, substantially outperforming less global approaches and ranking first in several international IE competitions. I will then present relaxation methods we developed and applied to scale up (exact) inference in such models. In the final part of my talk I will argue why we should not dismiss the pipeline, and present an exact beam-search algorithm, based on column generation, that overcomes the pipeline's greedy nature.
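To illustrate why pipelined decisions can hurt, here is a minimal Python sketch of two coupled IE decisions: an entity's type and its relation to another mention. It is not the speaker's model; the labels, relations, and scores are invented for illustration. A greedy pipeline commits to the best type first, while joint inference searches over both decisions together and can recover a higher-scoring combination.

# Toy illustration (hypothetical scores, not the speaker's system):
# joint vs. pipeline decoding over two coupled IE decisions.
from itertools import product

type_score = {"PER": 1.0, "ORG": 1.2}                  # type of mention A
relation_score = {"works_for": 0.9, "located_in": 0.4}  # relation A -> B
compatibility = {                                        # joint factor coupling the two decisions
    ("PER", "works_for"): 1.0,
    ("PER", "located_in"): -0.5,
    ("ORG", "works_for"): -1.0,
    ("ORG", "located_in"): 0.6,
}

def pipeline_decode():
    """Greedy pipeline: commit to the best type, then pick the best relation."""
    t = max(type_score, key=type_score.get)
    r = max(relation_score,
            key=lambda r: relation_score[r] + compatibility[(t, r)])
    return t, r

def joint_decode():
    """Joint inference: maximise the combined score over both decisions at once."""
    return max(
        product(type_score, relation_score),
        key=lambda tr: type_score[tr[0]] + relation_score[tr[1]] + compatibility[tr],
    )

print("pipeline:", pipeline_decode())  # ('ORG', 'located_in'), total score 2.2
print("joint:   ", joint_decode())     # ('PER', 'works_for'), total score 2.9

With these made-up scores, the pipeline locks in the locally best type and ends up with a lower overall score than the joint search, which is the kind of error that global models, and the exact beam search discussed in the final part of the talk, are designed to avoid.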
This talk is part of the Microsoft Research Machine Learning and Perception Seminars series.