
Local Deep Kernel Learning for Efficient Non-linear SVM Prediction


If you have a question about this talk, please contact Dr Jes Frellsen.


The time an algorithm takes to make predictions is of critical importance as machine learning becomes a service available on the cloud. Algorithms that predict efficiently can service more calls while using fewer cloud resources, and thereby generate more revenue. They can also be used in real-time applications where predictions need to be made in micro- or milliseconds.

Non-linear SVMs have defined the state-of-the-art on multiple benchmark tasks. Unfortunately, they are slow at prediction with costs that are linear in the number of training points. This reduces the attractiveness of non-linear SVMs trained on large amounts of data in cloud scenarios.
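To see why kernel SVM prediction cost is linear in the number of training points, recall that the decision function sums one kernel evaluation per support vector. The sketch below is illustrative only (the function names and data are made up, not from the talk):

```python
import numpy as np

def rbf_svm_predict(x, support_vectors, alphas, b, gamma=1.0):
    """Evaluate an RBF-SVM decision function f(x) = sum_i alpha_i k(sv_i, x) + b.

    Cost is O(n_sv * d): one kernel evaluation per support vector, so
    prediction time grows linearly with the number of support vectors,
    which itself typically grows with the training set size.
    """
    diffs = support_vectors - x                      # shape (n_sv, d)
    kernels = np.exp(-gamma * np.sum(diffs ** 2, axis=1))  # one RBF value per SV
    return float(alphas @ kernels + b)

# Toy usage: 1000 support vectors means 1000 kernel evaluations per query.
rng = np.random.default_rng(0)
svs = rng.standard_normal((1000, 5))
alphas = rng.standard_normal(1000)
score = rbf_svm_predict(np.zeros(5), svs, alphas, b=0.0)
```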

In this talk, we develop LDKL, an efficient non-linear SVM classifier with prediction costs that grow logarithmically with the number of training points. We generalize Localized Multiple Kernel Learning so as to learn a deep primal feature embedding which is high dimensional and sparse. Primal based classification decouples prediction costs from the number of support vectors and our tree-structured features efficiently encode non-linearities while speeding up prediction exponentially over the state-of-the-art. We develop routines for optimizing over the space of tree-structured features and efficiently scale to problems with millions of training points. Experiments on benchmark data sets reveal that LDKL can reduce prediction costs by more than three orders of magnitude over RBF-SVMs in some cases. Furthermore, LDKL leads to better classification accuracies as compared to leading methods for speeding up non-linear SVM prediction.
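The logarithmic cost of tree-structured prediction can be illustrated with a simplified sketch. The class below is an assumption-laden toy, not LDKL itself: it uses hard sign-based routing and randomly initialized (untrained) parameters, whereas LDKL learns a smooth deep kernel embedding. It only shows why a root-to-leaf traversal makes prediction cost depend on tree depth rather than on the number of support vectors:

```python
import numpy as np

class TreeLinearClassifier:
    """Toy depth-D binary tree of linear classifiers (illustrative, not LDKL).

    A test point follows a single root-to-leaf path, accumulating the score
    of each node's local linear classifier, so prediction costs O(D * d):
    logarithmic in the number of leaf regions, independent of training size.
    """

    def __init__(self, depth, dim, rng):
        n_nodes = 2 ** (depth + 1) - 1                     # complete binary tree
        self.depth = depth
        self.theta = rng.standard_normal((n_nodes, dim))   # routing hyperplanes
        self.w = rng.standard_normal((n_nodes, dim))       # local classifiers

    def predict(self, x):
        node, score = 0, 0.0
        for _ in range(self.depth + 1):
            score += self.w[node] @ x             # accumulate along the path
            go_right = self.theta[node] @ x > 0   # hard routing decision
            node = 2 * node + (2 if go_right else 1)
        return np.sign(score)

# Usage: depth 3 visits only 4 nodes per prediction, however large the data.
rng = np.random.default_rng(0)
clf = TreeLinearClassifier(depth=3, dim=5, rng=rng)
label = clf.predict(rng.standard_normal(5))
```

LDKL's actual embedding is learned jointly with the tree structure and is smoother than the hard sign-based routing above; the sketch captures only the cost argument.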

LDKL is available on AzureML, Microsoft’s cloud machine learning platform, and I will briefly discuss how it can be used to develop a highly performant virus and malware classifier which needs to predict potential threats every time files are opened, saved or executed on hundreds of millions of machines on a daily basis.

Speaker’s brief biography

Manik Varma is a researcher at Microsoft Research India where he helps manage the Machine Learning and Optimization area. Manik received a bachelor’s degree in Physics from St. Stephen’s College, University of Delhi in 1997 and another one in Computation from the University of Oxford in 2000 on a Rhodes Scholarship. He then stayed on at Oxford on a University Scholarship and obtained a DPhil in Engineering in 2004. Before joining Microsoft Research, he was a Post-Doctoral Fellow at the Mathematical Sciences Research Institute, Berkeley. He has been an Adjunct Professor at the Indian Institute of Technology (IIT) Delhi in the Computer Science and Engineering Department since 2009 and jointly in the School of Information Technology since 2011. His research interests lie in the areas of machine learning, computational advertising and computer vision. He has served as an Area Chair for machine learning and computer vision conferences such as NIPS, ICCV and CVPR. He has been awarded the Microsoft Gold Star award and has won the PASCAL VOC Object Detection Challenge.

This talk is part of the Machine Learning @ CUED series.




© 2006-2023, University of Cambridge.