
The tensor graphical lasso (Teralasso)



VMVW02 - Generative models, parameter learning and sparsity

Co-authors: Kristjan Greenewald (Harvard University), Shuheng Zhou (University of Michigan), Alfred Hero (University of Michigan)

We propose a new ultrasparse graphical model for representing multiway data, based on a Kronecker sum representation of the process inverse covariance matrix: the inverse covariance decomposes into a Kronecker sum of sparse factors, one per mode of the data.
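As an illustrative sketch (the function name and example factor values are our own, not from the talk), the Kronecker sum of two factor precision matrices, Ψ1 ⊕ Ψ2 = Ψ1 ⊗ I + I ⊗ Ψ2, can be formed as:

```python
import numpy as np

def kron_sum(psi1, psi2):
    """Kronecker sum psi1 ⊕ psi2 = psi1 ⊗ I + I ⊗ psi2."""
    d1, d2 = psi1.shape[0], psi2.shape[0]
    return np.kron(psi1, np.eye(d2)) + np.kron(np.eye(d1), psi2)

# Hypothetical sparse factors for a 2 x 3 multiway process
psi1 = np.array([[ 2.0, -0.5],
                 [-0.5,  2.0]])
psi2 = np.array([[ 3.0,  0.0, -1.0],
                 [ 0.0,  3.0,  0.0],
                 [-1.0,  0.0,  3.0]])

omega = kron_sum(psi1, psi2)  # 6 x 6 inverse covariance
```

Note that the nonzero pattern of `omega` is determined entirely by the nonzeros of the two small factors, which is the source of the "ultrasparse" structure.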

Under the assumption that the multiway observations are matrix-normal, the l1 sparsity-regularized log-likelihood function is convex and admits significantly faster statistical rates of convergence than other sparse matrix-normal algorithms such as the graphical lasso or the Kronecker graphical lasso.
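A minimal sketch of this penalized negative log-likelihood for the two-factor case (the notation is our own: `S` is the sample covariance and `lam` the l1 penalty weight; only off-diagonal factor entries are penalized here, which is one common convention):

```python
import numpy as np

def kron_sum(psi1, psi2):
    d1, d2 = psi1.shape[0], psi2.shape[0]
    return np.kron(psi1, np.eye(d2)) + np.kron(np.eye(d1), psi2)

def objective(psi1, psi2, S, lam):
    """-log det(Omega) + tr(S Omega) + lam * (off-diagonal l1 of each factor),
    with Omega = psi1 ⊕ psi2 (Kronecker sum)."""
    omega = kron_sum(psi1, psi2)
    sign, logdet = np.linalg.slogdet(omega)
    pen = sum(np.abs(p - np.diag(np.diag(p))).sum() for p in (psi1, psi2))
    return -logdet + np.trace(S @ omega) + lam * pen

# With identity factors and S = I, Omega = 2 I_4:
val = objective(np.eye(2), np.eye(2), np.eye(4), lam=0.0)  # 8 - 4 log 2 ≈ 5.227
```

The smooth part is the usual Gaussian log-likelihood in the joint precision matrix, so it is convex over the positive definite cone; the l1 penalty is applied to the small factors rather than to the full matrix.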

We specify a scalable composite gradient descent method for minimizing the objective function and analyze both the statistical and the computational convergence rates, showing that the composite gradient descent algorithm is guaranteed to converge at a geometric rate to the global minimizer. We illustrate the method on several real multiway datasets, showing that it recovers sparse graphical structures in high-dimensional data.
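A composite gradient update alternates a gradient step on the smooth log-likelihood term with soft-thresholding of the factors' off-diagonal entries. The following is a generic proximal-gradient sketch for two factors (the step size `eta`, the partial-trace gradients, and all function names are our own illustration, not the authors' exact implementation):

```python
import numpy as np

def kron_sum(psi1, psi2):
    d1, d2 = psi1.shape[0], psi2.shape[0]
    return np.kron(psi1, np.eye(d2)) + np.kron(np.eye(d1), psi2)

def soft_threshold_offdiag(A, t):
    """Soft-threshold off-diagonal entries; the diagonal is unpenalized."""
    D = np.diag(np.diag(A))
    off = A - D
    return D + np.sign(off) * np.maximum(np.abs(off) - t, 0.0)

def prox_grad_step(psi1, psi2, S, lam, eta):
    """One composite gradient step on the two-factor objective."""
    d1, d2 = psi1.shape[0], psi2.shape[0]
    # Gradient of -log det(Omega) + tr(S Omega) w.r.t. Omega is S - Omega^{-1};
    # the factor gradients are its partial traces over the other mode.
    R = S - np.linalg.inv(kron_sum(psi1, psi2))
    R4 = R.reshape(d1, d2, d1, d2)
    g1 = np.einsum('ikjk->ij', R4)  # partial trace over the second mode
    g2 = np.einsum('kikj->ij', R4)  # partial trace over the first mode
    psi1 = soft_threshold_offdiag(psi1 - eta * g1, eta * lam)
    psi2 = soft_threshold_offdiag(psi2 - eta * g2, eta * lam)
    return psi1, psi2
```

Iterating such a step is the composite gradient scheme the talk refers to; the geometric convergence guarantee is a property established in the paper, not something this sketch demonstrates.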


This talk is part of the Isaac Newton Institute Seminar Series.


