
Optimal algorithms for smooth and strongly convex distributed optimization in networks


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

Please note that this event may be recorded. Microsoft will own the copyright of any recording and reserves the right to distribute it as required.

In this work, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communication over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov’s accelerated gradient descent is optimal and achieves a precision in time that depends on the condition number of the (global) function to optimize, the diameter of the network, and the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, which achieves a precision that depends on the condition number of the local functions and the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression. (Joint work with Kevin Scaman, Sébastien Bubeck, Yin Tat Lee, and Laurent Massoulié.)
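For illustration only, below is a minimal sketch (not the speakers' implementation) of the centralized master/slave setting described above: each simulated worker holds a local least-squares objective, sends its local gradient to the master each round, and the master runs Nesterov's accelerated gradient descent on the average objective with step size 1/L and momentum based on the global condition number. All data sizes, variable names (e.g. local_grad, n_workers), and iteration counts are assumptions made for the example.

    import numpy as np

    # Master/slave distributed Nesterov accelerated gradient descent on
    # f(x) = (1/n) * sum_i f_i(x), where worker i holds a local least-squares
    # objective f_i(x) = ||A_i x - b_i||^2 / (2 m). Data and sizes are illustrative.
    rng = np.random.default_rng(0)
    n_workers, m, d = 5, 20, 10
    A = [rng.standard_normal((m, d)) for _ in range(n_workers)]
    b = [rng.standard_normal(m) for _ in range(n_workers)]

    def local_grad(i, x):
        # Gradient of f_i, computed locally at worker i.
        return A[i].T @ (A[i] @ x - b[i]) / m

    # Global smoothness L and strong convexity mu of the average objective.
    H = sum(Ai.T @ Ai for Ai in A) / (m * n_workers)
    eigs = np.linalg.eigvalsh(H)
    mu, L = eigs[0], eigs[-1]
    kappa = L / mu                                      # global condition number
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum for strongly convex AGD

    x = y = np.zeros(d)
    for _ in range(300):
        # Each worker evaluates its local gradient at y and sends it to the master;
        # the master averages them (one communication round per iteration).
        g = sum(local_grad(i, y) for i in range(n_workers)) / n_workers
        x_new = y - g / L                       # gradient step with step size 1/L
        y = x_new + beta * (x_new - x)          # Nesterov extrapolation at the master
        x = x_new

    print("final gradient norm:",
          np.linalg.norm(sum(local_grad(i, x) for i in range(n_workers)) / n_workers))

In the decentralized setting covered in the second part of the talk, the averaging step above would instead be replaced by gossip communication with neighbors, whose speed is governed by the eigengap of the gossip matrix.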

This talk is part of the Frontiers in Artificial Intelligence Series.
