BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning @ CUED
SUMMARY:Beam Sampling for Infinite Hidden Markov Models -
Jurgen Van Gael
DTSTART;TZID=Europe/London:20080402T140000
DTEND;TZID=Europe/London:20080402T150000
UID:TALK11310@talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/11310
DESCRIPTION:The Infinite Hidden Markov Model (iHMM) [1\,2] is
an extension of the\nclassical Hidden Markov Model
widely used in machine learning and\nbioinformati
cs. As a tool to model sequential data\, Hidden Ma
rkov\nModels suffer from the need to specify the n
umber of hidden states.\nAlthough model selection
and model averaging are widely used in this\nconte
xt\, the Infinite Hidden Markov Model offers a non
parametric\nalternative. The core idea of the iHMM
is to use Dirichlet Processes\nto define the dist
ribution of the rows of a Markov Model transition\
nmatrix. As such\, the number of states used can a
utomatically be\nadapted during learning\, or can
be integrated over for prediction.\nUntil now\, th
e Gibbs sampler was the only known inference algor
ithm\nfor the iHMM. This is unfortunate as the Gib
bs sampler is known to be\nweak for strongly corre
lated data\, which is often the case in\nsequentia
l or time series data. Moreover\, it is surprising
that we have\npowerful inference algorithms for fi
nite HMMs (the forward-backward\nor Baum-Welch dy
namic programming algorithms) but cannot apply the
se\nmethods for the iHMM. In this work\, we propos
e a method called the\n"Beam Sampler" which combin
es ideas from slice sampling and dynamic\nprogramm
ing for inference in the iHMM. We show that the be
am sampler\nhas some interesting properties such a
s: (1) it is less susceptible to\nstrong correlati
ons in the data than the Gibbs sampler\, (2) it ca
n\nhandle non-conjugacy in the model more easily t
han the Gibbs sampler.\nWe also show that the scop
e of the beam sampler idea goes beyond\ntraining t
he Infinite Hidden Markov Model\; it can also be 
used to\nefficiently train finite HMMs.
LOCATION:Engineering Department\, CBL Room 438
CONTACT:Carl Edward Rasmussen
END:VEVENT
END:VCALENDAR