DISCO: Dynamical Integration Systems for Convergence Optimisation in Distributed Low-Communication Training

If you have a question about this talk, please contact Sally Matthews.

Distributed training faces fundamental challenges from client heterogeneity in compute, memory, and network conditions. Existing approaches use staleness-dependent decay, per-client adjustments, or distance-weighted averaging, but often lack rigorous convergence guarantees. I present DISCO, a control-theoretic framework that recasts federated learning as a linear time-delay system. Using Lyapunov stability analysis, DISCO derives lightweight online adjustments with verifiable convergence bounds. Deployed on a Raspberry Pi 4 cluster, DISCO achieves 3.0–4.0× faster time-to-accuracy across text classification benchmarks. This work demonstrates how dynamical-systems theory enables provably efficient federated learning on commodity hardware.
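The abstract does not give DISCO's actual update rule, but the staleness-dependent decay baseline it contrasts with can be sketched. Below is a minimal Python illustration of asynchronous aggregation with a staleness-weighted mixing step; the comment notes how the recursion reads as a linear system with delayed state feedback, the kind of model Lyapunov analysis can bound. The decay exponent, mixing rule, and toy simulation are illustrative assumptions, not the speaker's method.

import numpy as np

def staleness_weight(tau: int, alpha: float = 0.6) -> float:
    # Polynomial staleness decay, a common heuristic in asynchronous
    # federated averaging: staler updates get smaller weights.
    # alpha is a hypothetical tuning knob, not from the talk.
    return (1.0 + tau) ** (-alpha)

def apply_client_update(global_w, client_w, tau, lr=1.0):
    # Mix a stale client model into the global model with a
    # staleness-dependent weight. Viewed as a recursion,
    # w[t+1] = (1 - m) w[t] + m w_client[t - tau] is a linear
    # time-delay system, which is the kind of formulation the
    # abstract says DISCO analyses with Lyapunov methods.
    m = lr * staleness_weight(tau)
    return (1.0 - m) * global_w + m * client_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.zeros(4)                    # global model parameters
    target = np.ones(4)                # pretend optimum
    for step in range(50):
        tau = int(rng.integers(0, 5))  # simulated client staleness
        # Simplified client step: move toward the target with noise
        # (a real client would train on a tau-steps-old model).
        client_w = w + 0.5 * (target - w) + 0.01 * rng.normal(size=4)
        w = apply_client_update(w, client_w, tau)
    print("distance to target:", float(np.linalg.norm(w - target)))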

Bio: Finn is an Engineer at Cedana AI with an MSc in Computer Science from UCL. His background spans machine learning and financial mathematics, with a focus on distributed training systems. His research explores homomorphic encryption, hardware acceleration, and distributed algorithms for efficient training at scale. Finn is particularly passionate about hardware-aware optimization and designing training workloads for commodity devices.

This talk is part of the Cambridge ML Systems Seminar Series.
