Cutting edge topics: distributed asynchronous learning

If you have a question about this talk, please contact Damon Wischik.

For models that involve structured input, e.g. a different graph structure for each data instance, standard deep learning technology is inefficient. This talk describes Asynchronous Model-Parallel (AMP) training, in which multiple cores or devices each learn different parts of the network asynchronously, making much more efficient use of the hardware.
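The core idea of asynchronous model-parallel training can be sketched in a few lines: each worker owns one partition of the model, consumes activations from its own queue, and applies local parameter updates without any global synchronization barrier. The partitioning, the scalar weights, and the toy update rule below are all illustrative assumptions, not the talk's actual algorithm.

```python
import queue
import threading

def layer_worker(weight, inbox, outbox, n_messages, lr=0.1):
    """Run one model partition: consume activations, update locally, forward."""
    for _ in range(n_messages):
        x = inbox.get()               # activation from the upstream layer
        y = weight[0] * x             # this partition's forward pass
        weight[0] -= lr * (y - x)     # toy local update, applied asynchronously
        outbox.put(y)                 # send the activation downstream

n = 8
inputs, between, outputs = queue.Queue(), queue.Queue(), queue.Queue()
w0, w1 = [2.0], [0.5]                 # each worker owns its own parameters

workers = [
    threading.Thread(target=layer_worker, args=(w0, inputs, between, n)),
    threading.Thread(target=layer_worker, args=(w1, between, outputs, n)),
]
for t in workers:
    t.start()
for _ in range(n):
    inputs.put(1.0)                   # stream of data instances; no barrier
for t in workers:
    t.join()

results = [outputs.get() for _ in range(n)]
print(len(results))                   # 8 activations flowed through the pipeline
```

Because neither worker waits for the other to finish a step, hardware utilization stays high even when different data instances (e.g. different graph structures) take different amounts of work per partition.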

This talk is part of the Mathematics and Machine Learning series.


© 2006-2017, University of Cambridge.