# "Delay-independent incremental global convergence in monotone systems of delay differential equations" and "(Non)-convergence of the gradient method to saddle points of concave-convex functions"

This week we have a pair of short talks that will be given at the 53rd IEEE Conference on Decision and Control later this year.

Talk 1: Delay-independent incremental global convergence in monotone systems of delay differential equations (Eoin Devane)

Monotone systems generated by delay differential equations with explicit time-variation are important in modelling a number of significant practical problems, including the analysis of communication systems and consensus protocols. In such problems, it is often important to be able to guarantee delay-independent incremental global convergence, whereby all solutions converge towards each other asymptotically. Such guarantees allow the asymptotic properties of all trajectories of the system to be determined by simply studying those of some particular convenient solution. A class of these systems was recently studied in the context of wireless networks through the imposition of a scalability condition. In this work, we seek to weaken the notions of monotonicity and system structure that were employed in that setting, so as to extend the analysis to a more general context and make explicit exactly which system properties are required. We show that a generalised two-sided scalability condition, which is weaker than the separate assumptions of quasimonotonicity and scalability, can be used to express the system properties that are needed to prove delay-independent global convergence. Furthermore, we obtain as a corollary a guarantee of convergence of all solutions to a quantifiable invariant set, enabling time-invariant asymptotic bounds to be obtained for the trajectories even if the precise values of time-varying parameters are unknown.
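As a rough illustration of the incremental-convergence phenomenon (a toy example of my own, not the model from the talk): consider the scalar delay equation x'(t) = -x(t) + a·tanh(x(t - τ)) with |a| < 1, where the contractive nonlinearity plays the role of a scalability condition. Simulating two solutions from different constant initial histories with forward Euler, their distance shrinks regardless of the delay τ.

```python
import math

def simulate(x0, a=0.5, tau=1.0, h=0.01, T=30.0):
    """Forward-Euler simulation of x'(t) = -x(t) + a*tanh(x(t - tau)),
    started from the constant initial history x(s) = x0 on [-tau, 0]."""
    d = int(tau / h)            # number of Euler steps spanning the delay
    hist = [x0] * (d + 1)       # stored trajectory, including the history
    for _ in range(int(T / h)):
        x = hist[-1]            # current state x(t)
        x_delayed = hist[-(d + 1)]  # delayed state x(t - tau)
        hist.append(x + h * (-x + a * math.tanh(x_delayed)))
    return hist[-1]

# Two solutions with very different initial histories end up close together:
x_a = simulate(2.0)
x_b = simulate(-1.0)
diff = abs(x_a - x_b)
```

Here both trajectories approach the same asymptotic behaviour, so the long-run properties of every solution can be read off from any single convenient one, in the spirit of the result described above.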

Talk 2: (Non)-convergence of the gradient method to saddle points of concave-convex functions (Tom Holding)

The problem of finding the saddle point of a concave (in x for fixed y), convex (in y for fixed x) function f(x,y) is ubiquitous in optimization and numerical analysis. Such problems arise, for example, from the Lagrangian of an optimization problem. In the 1950s, Arrow and Hurwicz proposed a first-order gradient ascent/descent method to solve this problem, and proved convergence to a saddle point in the case of strict concavity. In the past two decades this simple method has been rediscovered in the context of network optimization, as it gives rise to localised update rules. However, in many of these problems the function is not strictly concave, and oscillatory behaviour has been observed in practice. While higher-order methods are available, these are more complicated to implement, and, in the case of network optimization problems, can prevent distributed computation by requiring additional information transfer between nodes.
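A minimal sketch of the oscillatory failure mode (a standard textbook example, not taken from the talk): for the bilinear function f(x, y) = x·y, which is concave in x and convex in y but not strictly so, the gradient dynamics x' = ∂f/∂x = y, y' = -∂f/∂y = -x trace circles around the saddle point (0, 0), and a forward-Euler discretisation actually spirals outwards.

```python
import math

def gradient_method(x, y, h=0.01, steps=1000):
    """Arrow-Hurwicz gradient ascent in x / descent in y for f(x, y) = x*y,
    discretised with forward Euler steps of size h."""
    for _ in range(steps):
        # df/dx = y (ascent direction), df/dy = x (descent direction)
        x, y = x + h * y, y - h * x
    return x, y

r0 = math.hypot(1.0, 0.0)        # initial distance to the saddle (0, 0)
x, y = gradient_method(1.0, 0.0)
r1 = math.hypot(x, y)            # distance after 1000 iterations
# The distance to the saddle point has not decreased: no convergence.
```

The same calculation with a strictly concave-convex f (e.g. subtracting x²/2 and adding y²/2) does converge, which is the distinction the classical Arrow-Hurwicz result turns on.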

In this talk I shall discuss my recent work on classifying the asymptotic behaviour of the gradient method on an arbitrary concave-convex function. Despite the non-linearity of the dynamics, it is shown that solutions of the gradient method converge to solutions of an explicit linear ODE specified by only local information at the saddle point. An exact characterisation is given for special cases of optimization problems, and modification methods are presented. The proof uses simple geometric arguments which will be illustrated with pictures.

This talk is part of the Cambridge Analysts' Knowledge Exchange series.