Realistic Adversarial Machine Learning

If you have a question about this talk, please contact Jack Hughes.

While the vulnerability of machine learning is extensively studied, most work considers security or privacy in academic settings. This talk studies three aspects of recent work on realistic adversarial machine learning, focusing on the “black box” threat model, where the adversary has only query access to a remote classifier and never sees the model itself.
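To make the threat model concrete, here is a minimal sketch in Python. The classifier here is a hypothetical linear model standing in for the remote service, and the attack is a deliberately crude label-only search, illustrative only and not the method presented in the talk: the adversary may submit inputs and observe predicted labels, nothing more.

```python
import numpy as np

# Hypothetical stand-in for a remote classifier: the adversary may call
# predict_label() but cannot inspect weights, gradients, or architecture.
rng = np.random.default_rng(0)
_secret_w = rng.normal(size=10)  # hidden from the adversary

def predict_label(x: np.ndarray) -> int:
    """Black-box oracle: returns only the predicted class label."""
    return int(_secret_w @ x > 0)

def label_only_evasion(x: np.ndarray, budget: int = 5000):
    """Crude query-based evasion: try random perturbations of growing
    magnitude until the oracle's decision flips."""
    original = predict_label(x)
    for i in range(budget):
        scale = 0.05 * (1 + i / 50)  # widen the search radius over time
        candidate = x + scale * rng.normal(size=x.shape)
        if predict_label(candidate) != original:
            return candidate  # adversarial example found
    return None  # query budget exhausted

x = rng.normal(size=10)
print("adversarial example found:", label_only_evasion(x) is not None)
```

Even this naive search succeeds against an undefended model; the question the talk raises is whether restricting the adversary to such query access yields real robustness or only apparent robustness.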

I first study whether this black-box threat model can provide apparent robustness to adversarial examples (i.e., test-time evasion attacks). Second, I turn to the question of privacy and examine to what extent adversaries can extract sensitive data from classifiers trained on private datasets. Finally, I ask to what extent the black-box threat model can be relied upon at all, and study “model extraction”: attacks that allow an adversary to recover a model’s approximate parameters using only queries.
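As an illustration of the last point, the following sketch shows why query access alone can suffice. It assumes a hypothetical oracle that happens to be linear and returns real-valued scores (not the setting or attack from the talk): in that simplified case, the hidden parameters are recovered exactly with d + 1 queries.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
_secret_w = rng.normal(size=d)  # hidden weights
_secret_b = rng.normal()        # hidden bias

def query_score(x: np.ndarray) -> float:
    """Black-box oracle returning the model's real-valued score."""
    return float(_secret_w @ x + _secret_b)

# Extraction: query the zero vector to read off the bias, then each
# standard basis vector to read off one weight (d + 1 queries total).
b_hat = query_score(np.zeros(d))
w_hat = np.array([query_score(np.eye(d)[i]) - b_hat for i in range(d)])

print("bias recovered:   ", np.isclose(b_hat, _secret_b))
print("weights recovered:", np.allclose(w_hat, _secret_w))
```

Real deployed models are neither linear nor score-revealing, which is precisely why approximate extraction from realistic classifiers is a research question rather than an exercise.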

This talk is part of the Computer Laboratory Security Seminar series.
