BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Isaac Newton Institute Seminar Series
SUMMARY:Multi-agent learning: Implicit regularization and order-optimal gossip - Patrick Rebeschini (University of Oxford)
DTSTART;TZID=Europe/London:20180614T100000
DTEND;TZID=Europe/London:20180614T110000
UID:TALK107269AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/107269
DESCRIPTION:In distributed machine learning\, data are stored and processed in multiple locations by different agents. Each agent is represented by a node in a graph\, and communication is allowed between neighbours. In the decentralised setting typical of peer-to-peer networks\, there is no central authority that can aggregate information from all the nodes. A typical task involves agents cooperating with their peers to learn models that can perform better on new\, unseen data. In this talk\, we present the first results on the generalisation capabilities of distributed stochastic gradient descent methods. Using algorithmic stability\, we derive upper bounds for the test error and provide a principled approach to implicit regularization\, tuning the learning rate and the stopping time as a function of the graph topology. We also present a new gossip protocol for the aggregation step in distributed methods that can yield order-optimal communication complexity. Based on non-reversible Markov chains\, our protocol is local and does not require global routing\, improving on existing methods. (Joint work with Dominic Richards)
LOCATION:Seminar Room 2\, Newton Institute
CONTACT:INI IT
END:VEVENT
END:VCALENDAR