
Universal Adversarial Perturbations: Fooling Deep Networks with a Single Image


If you have a question about this talk, please contact Frank Kelly.

Robustness to small perturbations of the data points is a highly desirable property of a classifier deployed in real, and possibly hostile, environments. Despite their excellent performance on recent visual benchmarks, state-of-the-art deep neural networks are, as I will show in this talk, highly vulnerable to universal, image-agnostic perturbations: a single perturbation that fools the network on most natural images. After demonstrating how such universal perturbations can be constructed, I will analyse the implications of this vulnerability and provide a geometric explanation for the existence of such perturbations via an analysis of the curvature of the decision boundaries.
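The construction mentioned in the abstract can be illustrated with a toy sketch: iterate over the data points, and whenever the accumulated perturbation v fails to change a point's prediction, add the minimal extra shift that pushes that point across the decision boundary, then project v back onto a norm ball of radius eps. This sketch uses a hand-picked linear classifier (`w`, `b`) as a stand-in for a deep network, so the minimal shift has a closed form; all names and parameter values here are illustrative assumptions, not the speaker's actual method or data.

```python
import numpy as np

# Assumed toy setup: a linear binary classifier f(x) = sign(w.x + b)
# standing in for a deep network, so the example stays self-contained.
rng = np.random.default_rng(0)
w = np.array([1.0, -2.0])
b = 0.0

def predict(x):
    return np.sign(w @ x + b)

def minimal_shift(x):
    """Smallest l2 perturbation moving x across the hyperplane w.x + b = 0
    (the role a method like DeepFool plays for deep networks)."""
    overshoot = 1e-4  # nudge slightly past the boundary
    r = -(w @ x + b) * w / (w @ w)
    return (1 + overshoot) * r

def universal_perturbation(X, eps, n_passes=5):
    """Aggregate per-point minimal shifts into one image-agnostic v,
    projecting onto the l2 ball of radius eps after each update."""
    v = np.zeros(X.shape[1])
    for _ in range(n_passes):
        for x in X:
            if predict(x + v) == predict(x):   # v does not yet fool x
                v = v + minimal_shift(x + v)   # push x + v over the boundary
                norm = np.linalg.norm(v)
                if norm > eps:                 # keep v imperceptibly small
                    v = v * (eps / norm)
    return v

# Synthetic data clustered mostly on one side of the boundary.
X = rng.normal(size=(50, 2)) + np.array([2.0, 0.0])
v = universal_perturbation(X, eps=3.0)
fooled = np.mean([predict(x + v) != predict(x) for x in X])
```

Because a single v cannot simultaneously fool points on both sides of a linear boundary, the fooling rate here is capped by the majority side; for deep networks, the talk's geometric analysis ties the surprising effectiveness of one shared perturbation to the curvature of the decision boundaries.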

This talk is part of the Mathematics and Machine Learning series.



© 2006-2018 Talks.cam, University of Cambridge.