Adversarial Machine Learning
If you have a question about this talk, please contact Matthew Ireland.

Machine learning models, including neural networks, have been shown to be vulnerable to malicious inputs designed to compromise their integrity. These adversarial examples manipulate system behaviour in order to cause undesirable outputs. This talk will discuss the problem and its ramifications, explain how adversarial examples are generated, and give an overview of the methods used to defend against them.

This talk is part of the Churchill CompSci Talks series.