
A new metric on kernel matrices with applications to matrix means


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image or voice recorded, please consider this before attending.

I will talk about some recent but fundamental work on distance metrics for the manifold of kernel matrices, including a bit about the original application in nearest-neighbor search that motivated our work.

Symmetric positive definite (spd) matrices are remarkably pervasive, especially in machine learning, statistics, and optimization. We consider the fundamental task of measuring the distance between two spd matrices, a task that is nontrivial whenever an application requires distance functions that respect the non-Euclidean geometry of spd matrices. Unfortunately, typical non-Euclidean distance measures such as the Riemannian metric are computationally demanding and complicated to use. To ameliorate these difficulties, we introduce a new metric on spd matrices: it not only respects non-Euclidean geometry, it is also faster to compute and simpler to use than the Riemannian metric. We support our claims theoretically, via a series of theorems relating our metric to the Riemannian metric, and experimentally, by studying the problem of computing matrix geometric means. Remarkably, although this problem is nonconvex, we show that it can be solved efficiently to global optimality.
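The abstract does not define the new metric. As a hedged sketch of the kind of comparison involved (assuming NumPy/SciPy), the snippet below computes the standard affine-invariant Riemannian distance between spd matrices alongside a log-determinant ("Stein"-type) divergence, a well-known cheaper alternative from the literature — shown purely for illustration, not as the metric of the talk:

```python
import numpy as np
from scipy.linalg import eigh

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between spd matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F,
    computed via the generalized eigenvalues of (B, A)."""
    w = eigh(B, A, eigvals_only=True)  # eigenvalues of A^{-1} B, all > 0
    return np.sqrt(np.sum(np.log(w) ** 2))

def stein_distance(A, B):
    """Illustrative log-det (Stein-type) divergence, whose square root
    is known to be a metric on spd matrices:
    d(A, B)^2 = log det((A + B) / 2) - (1/2) log det(A B).
    Only determinants are needed, so it avoids matrix logarithms."""
    _, ld_mid = np.linalg.slogdet((A + B) / 2)
    _, ld_A = np.linalg.slogdet(A)
    _, ld_B = np.linalg.slogdet(B)
    val = ld_mid - 0.5 * (ld_A + ld_B)
    return np.sqrt(max(val, 0.0))  # guard against tiny negative round-off
```

Both functions are symmetric in their arguments and vanish when A equals B; the log-det variant needs only determinant evaluations, which hints at why such metrics can be much cheaper than the Riemannian one.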

Brief Bio:

Suvrit Sra is a Senior Research Scientist at the Max Planck Institute for Intelligent Systems (formerly Biological Cybernetics) in Tübingen, Germany. He obtained his M.S. and Ph.D. in Computer Science from the University of Texas at Austin in 2007, and a B.E. (Hons.) in Computer Science from BITS Pilani (India) in 1999. His primary research focus is large-scale optimization (both convex and nonconvex) with applications to problems in machine learning, statistics, and scientific computing. He has a strong interest in several flavors of analysis, most notably matrix analysis.

His research has won awards at leading international conferences on machine learning and data mining (ICML, ECML, SIAM DM). His work on "The Metric Nearness Problem" was selected for the SIAM Outstanding Paper Prize (2011, awarded triennially). He regularly organizes the Neural Information Processing Systems (NIPS) workshops on "Optimization for Machine Learning," and recently co-edited a book of the same title (MIT Press, 2011).

This talk is part of the Microsoft Research Cambridge, public talks series.


© 2006-2024 Talks.cam, University of Cambridge.