BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning @ CUED
SUMMARY:Structure in tensor-variate data: a trivial byproduct of simpler
 phenomena? - John P. Cunningham
DTSTART;TZID=Europe/London:20180306T110000
DTEND;TZID=Europe/London:20180306T120000
UID:TALK102463AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/102463
DESCRIPTION:As large tensor-variate data become increasingly common
 across machine learning and statistics\, complex analysis methods for
 these data similarly increase in prevalence. Such a trend offers the
 opportunity to understand subtler and more meaningful features of the
 data that\, ostensibly\, could not be studied with simpler datasets or
 simpler methodologies. While promising\, these advances are also
 perilous: novel analysis techniques do not always consider the
 possibility that their results are in fact an expected consequence of
 some simpler\, already-known feature of simpler data. For example\,
 suppose one fits a time series model (e.g. Kalman Filter or
 multivariate GARCH) to data indexed by time\, measurement dimension\,
 and experimental sample. Was a particular model fit achieved simply
 because the data was temporally smooth\, and/or had correlated
 dimensions (or samples)? I will present two works that address this
 growing problem\, the first of which uses Kronecker algebra to derive
 a tensor-variate maximum entropy distribution that has user-specified
 moments along each mode. This distribution forms the basis of a
 statistical hypothesis test\, and I will use this test to answer two
 active debates in the neuroscience community over the triviality of
 certain observed structure in data. In the second part\, I will
 discuss how to extend this maximum entropy formulation to arbitrary
 constraints using deep neural network architectures in the flavor of
 implicit generative modeling\, and I will use this method in a texture
 synthesis application.\n\nJohn P. Cunningham is an associate professor
 in the Department of Statistics at Columbia University. He received a
 B.A. in computer science from Dartmouth College\, and a M.S. and
 Ph.D. in electrical engineering from Stanford University\, and he
 completed postdoctoral work in the Machine Learning Group at the
 University of Cambridge. His research group at Columbia investigates
 several areas of machine learning and statistical neuroscience.
 http://stat.columbia.edu/~cunningham/
LOCATION:CBL Seminar Room
CONTACT:Dr R.E. Turner
END:VEVENT
END:VCALENDAR