Universal Adversarial Perturbations: Fooling Deep Networks with a Single Image
If you have a question about this talk, please contact Frank Kelly.

The robustness of classifiers to small perturbations of the data points is a highly desirable property when a classifier is deployed in real and possibly hostile environments. In this talk I will show that, despite their excellent performance on recent visual benchmarks, state-of-the-art deep neural networks are highly vulnerable to universal, image-agnostic perturbations. After demonstrating how such universal perturbations can be constructed, I will analyse the implications of this vulnerability and provide a geometric explanation for the existence of such perturbations via an analysis of the curvature of the decision boundaries.

This talk is part of the Mathematics and Machine Learning series.
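The abstract does not spell out the construction itself, but a minimal sketch of the kind of iterative procedure it alludes to might look as follows. This is an illustrative assumption, not the speaker's method: the names `model`, `images`, `labels`, `eps`, `step`, and `epochs` are hypothetical, and a simple gradient-sign ascent step stands in for the minimal (DeepFool-style) per-image perturbation used in the published algorithm. The key idea is to accumulate small per-image updates into one shared perturbation and repeatedly project it back onto a small norm ball, so that a single quasi-imperceptible perturbation changes the network's prediction on most images.

```python
import torch
import torch.nn.functional as F


def universal_perturbation(model, images, labels, eps=0.04, step=0.005, epochs=5):
    """Sketch: build one image-agnostic perturbation for `model`.

    `images` is a sequence of (C, H, W) tensors and `labels` a matching list of
    integer class labels; `eps`, `step`, and `epochs` are illustrative
    hyper-parameters. Whenever the current perturbation fails to change the
    prediction on an image, a gradient-sign step increases the loss there and
    the result is re-projected onto an L-infinity ball of radius `eps`.
    """
    model.eval()
    v = torch.zeros_like(images[0])  # the universal perturbation, shared by all images
    for _ in range(epochs):
        for x, y in zip(images, labels):
            x = x.unsqueeze(0)
            if model(x + v).argmax(dim=1).item() == y:  # not yet fooled on this image
                x_adv = (x + v).detach().requires_grad_(True)
                loss = F.cross_entropy(model(x_adv), torch.tensor([y]))
                loss.backward()
                # ascend the loss on this image, then project back onto the norm ball
                v = (v + step * x_adv.grad.sign().squeeze(0)).clamp(-eps, eps).detach()
    return v
```

The projection step is what keeps the perturbation small and image-agnostic at the same time; in the sketch the "fooled" test compares against the given label, whereas the original construction compares against the network's clean prediction.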