Adversarial Machine Learning
If you have a question about this talk, please contact Matthew Ireland.

An adversarial example is an input that has been modified in such a way that a human observer would not notice the difference, but a Machine Learning model would be tricked into misclassifying it. In this talk, we will see how such examples can compromise the integrity of Neural Network models. For example, a person could ‘paint’ a STOP sign in such a way that a self-driving car interprets it as something completely different. We will explore how easy it is to generate images that are misclassified by state-of-the-art architectures. Afterwards, we will look at currently available defences and how a transferability attack can be used to bypass them. The talk will conclude by comparing testing with verification, and how each is used to assess the security of a Neural Network architecture.

This talk is part of the Churchill CompSci Talks series.
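For readers curious how such perturbed images can be produced, below is a minimal sketch of one common technique, the Fast Gradient Sign Method (FGSM). The talk does not name a specific method, so this is an illustrative assumption rather than a description of its content; the model, input image, label and epsilon value are placeholders.

# Minimal FGSM sketch (assumption: PyTorch; not necessarily the method covered in the talk).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` so that `model` is more likely to misclassify it,
    while keeping the change small enough to be hard for a human to notice."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded per pixel by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

The same idea underlies a transferability attack: an adversarial image crafted against one (surrogate) model often fools a different target model, which is one way defences that hide gradients can be bypassed.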