Realistic Adversarial Machine Learning
If you have a question about this talk, please contact Jack Hughes.

While the vulnerability of machine learning is extensively studied, most work considers security or privacy in academic settings. This talk examines three aspects of recent work on realistic adversarial machine learning, focusing on the "black box" threat model, in which the adversary has only query access to a remote classifier and cannot inspect the model itself. I first study whether this black-box threat model can provide apparent robustness to adversarial examples (i.e., test-time evasion attacks). Second, I turn to the question of privacy and examine to what extent adversaries can leak sensitive data out of classifiers trained on private data. Finally, I ask to what extent the black-box threat model can be relied upon, and study "model extraction": attacks that allow an adversary to recover a model's approximate parameters using only queries.

This talk is part of the Computer Laboratory Security Seminar series.
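To make the model-extraction idea concrete, here is a minimal sketch (not from the talk itself; the `query` oracle and the linear-model setup are illustrative assumptions) of how an adversary with only query access can recover the exact parameters of a linear scoring function with d+1 queries:

```python
import numpy as np

# Hypothetical "remote" model: the adversary can only call query(),
# never inspect true_w or true_b directly. Setup is illustrative only.
rng = np.random.default_rng(0)
d = 5
true_w = rng.normal(size=d)
true_b = 0.7

def query(x):
    """Black-box access: return the model's raw score for input x."""
    return x @ true_w + true_b

# Extraction: query the origin to get the bias, then each unit
# vector to get bias + one weight; subtract to isolate the weights.
X = np.vstack([np.zeros(d), np.eye(d)])
y = np.array([query(x) for x in X])
stolen_b = y[0]
stolen_w = y[1:] - stolen_b

print(np.allclose(stolen_w, true_w), np.isclose(stolen_b, true_b))
```

Real models are of course nonlinear and return quantized labels or confidences rather than raw scores, which is what makes extraction against deployed classifiers a genuine research problem rather than a linear-algebra exercise.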