BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Statistics Reading Group
SUMMARY:The EM algorithm and applications - Robert Gramacy
\, University of Cambridge
DTSTART;TZID=Europe/London:20090311T163000
DTEND;TZID=Europe/London:20090311T173000
UID:TALK17239@http://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/17239
DESCRIPTION:An expectation-maximization (EM) algorithm is used
in statistics for\nfinding maximum likelihood est
imates of parameters in probabilistic\nmodels\, wh
ere the model depends on unobserved latent variabl
es. EM\nalternates between performing an expectati
on (E) step\, which computes\nan expectation of th
e likelihood by including the latent variables as\
nif they were observed\, and a maximization (M) st
ep\, which computes the\nmaximum likelihood estima
tes of the parameters by maximizing the\nexpected
likelihood found on the E step. The parameters fou
nd on the M\nstep are then used to begin another E
step\, and the process is\nrepeated.\n\nThe EM al
gorithm was explained and given its name in a clas
sic 1977\npaper by Arthur Dempster\, Nan Laird\, a
nd Donald Rubin in the Journal\nof the Royal Stati
stical Society (see link below). They pointed out\
nthat the method had been "proposed many times in
special\ncircumstances" by other authors\, but the
1977 paper generalized the\nmethod and developed
the theory behind it.\n\nEM is frequently used for
data clustering in machine learning and\ncomputer
vision. In natural language processing\, two prom
inent\ninstances of the algorithm are the Baum-Wel
ch algorithm (also known as\nforward-backward) and
the inside-outside algorithm for unsupervised\nin
duction of probabilistic context-free grammars.\nI
n psychometrics\, EM is almost indispensable for e
stimating item\nparameters and latent abilities of
 item response theory models. With\nits ability to han
dle missing data and unobserved\nvariables\, EM is be
coming a useful tool for pricing\nand managing the ri
sk of a portfolio. The EM algorithm
is also widely used in medical image\nreconstructi
on\, especially in Positron Emission Tomography an
d Single\nPhoton Emission Computed Tomography. See
below for other faster\nvariants of EM.\n\nWe wil
l go through the algorithm in general\, prove an i
mportant\nconvergence property\, comment on histor
ical context\, illustrate on a\nfamous application
to clustering\, and talk about extensions includi
ng\nMCEM and ECM\, which can be used when the E-step an
d M-step\,\nrespectively\, are not analytically
tractable.\n\nhttp://www.jstor.org/stable/2984875
\n
LOCATION:MR5\, CMS
CONTACT:Richard Samworth
END:VEVENT
END:VCALENDAR