BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:NLIP Seminar Series
SUMMARY:GPstruct: Bayesian non-parametric structured prediction model
  - Novi Quadrianto\, Machine Learning\, University of Cambridge
DTSTART;TZID=Europe/London:20140117T120000
DTEND;TZID=Europe/London:20140117T130000
UID:TALK49110AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/49110
DESCRIPTION:In this talk\, I will introduce a conceptually novel
  structured prediction model\, GPstruct\, which is kernelised\,
  non-parametric\, and supports Bayesian posterior inference. GPstruct
  can be instantiated for a wide range of structured objects such as
  linear chains\, trees\, grids\, and other general graphs. As a first
  proof of concept\, the model is benchmarked on segmentation\,
  chunking\, and named entity recognition tasks from text processing\,
  and on a gesture segmentation task from video processing\, all
  involving a linear chain structure. One of the practical issues with
  GPstruct is its memory demand\, which is quadratic in the number of
  latent variables\, and its training runtime\, which scales cubically.
  This prevents GPstruct from being applied to problems involving grid
  factor graphs\, which are prevalent in computer vision applications.
  In the second part of the talk\, I will describe a scaling trick
  based on ensemble learning\, with weak learners (predictors) trained
  on subsets of the latent variables and bootstrap data\, which can
  easily be distributed. We show experiments with 2 million latent
  variables on image segmentation. Our method outperforms widely used
  conditional random field models trained with pseudo-likelihood.
  Moreover\, it improves over recent state-of-the-art marginal
  optimization methods in terms of predictive performance and
  uncertainty calibration. Finally\, it generalizes well across all
  training set sizes.\n\nJoint work with Sebastien Bratieres\, Zoubin
  Ghahramani\, and Sebastian Nowozin.
LOCATION:FW26\, Computer Laboratory
CONTACT:Tamara Polajnar
END:VEVENT
END:VCALENDAR