Decision Boundary Geometries and Robustness of Neural Networks
If you have a question about this talk, please contact Adrià Garriga Alonso.

Adversarial examples are small perturbations to an input point that cause a neural network (NN) to misclassify it. Recent research shows the existence of "universal adversarial perturbations" which, unlike earlier adversarial examples, are not specific to individual data points or network architectures. We will also discuss results that link this behaviour to the geometry of the decision boundaries learned by neural networks (a minimal code sketch of how a per-input perturbation is constructed appears at the end of this announcement).

Adversarial inputs by themselves are not the main concern for the value alignment problem. However, the insight they give into NN internals will matter if future AI systems rely on NNs at all.

Relevant readings:
- The Robustness of Deep Networks: A Geometrical Perspective (http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8103145&tag=1)
- Adversarial Spheres (https://arxiv.org/abs/1801.02774)

This talk is part of the Engineering Safe AI series.
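As a concrete illustration of the adversarial examples described above, here is a minimal sketch of the Fast Gradient Sign Method (Goodfellow et al., 2014), one standard way to construct a small per-input perturbation that increases a classifier's loss. This sketch is not from the talk materials; the function name fgsm_perturbation and the epsilon budget are illustrative choices.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon=0.03):
    """One-step adversarial perturbation, bounded by epsilon in
    the L-infinity norm, that moves x in the direction that most
    increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The sign of the input gradient gives the steepest-ascent
    # direction under an L-infinity constraint.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (hypothetical model and data):
#   x_adv = fgsm_perturbation(model, x, y, epsilon=0.03)
#   model(x_adv).argmax(dim=1)  # often differs from the true label y
```

Universal adversarial perturbations, by contrast, are built by aggregating such per-input steps over many data points, yielding a single perturbation that fools the network on most inputs.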