BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning @ CUED
SUMMARY:Random Function Classes for Machine Learning - Prof Alexander Smola\, Carnegie Mellon University
DTSTART;TZID=Europe/London:20150630T110000
DTEND;TZID=Europe/London:20150630T120000
UID:TALK60022AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/60022
DESCRIPTION:Random function classes offer an extremely versatile tool for describing nonlinearities\, as they are commonly employed in machine learning. This ranges from compact summaries of distributions to nonlinear function expansions. We show that Bloom Filters\, the Count-Min sketch\, and a new family of Semidefinite Sketches can all be viewed as attempts at finding the most conservative solution of a convex optimization problem (and with matching guarantees) when querying properties of a distribution. Moreover\, the sketches themselves prove useful\, e.g. when representing high-dimensional functions\, thus leading to the hash kernel for generalized linear models and recommender systems. Next we discuss random kitchen sinks and their accelerated variants\, fastfood\, a-la-carte and deep-fried convnets. They offer memory-efficient alternatives to incomplete matrix factorization and decomposition for kernel functions. Finally\, we combine this approach with sketches\, using Pagh's compressed matrix multiplication construction\, yielding computationally efficient two-sample and independence tests.
LOCATION:Cambridge University Engineering Department\, LT1
CONTACT:
END:VEVENT
END:VCALENDAR