Towards secure and efficient DNNs
If you have a question about this talk, please contact Andrew Caines.
On the efficiency side, I will present our recent progress on neural network compression methods that drastically reduce both the size and the compute cost of a given neural network. I will also briefly touch on the custom hardware accelerator we have built for DNN inference. On the security side, I will demonstrate the transferability of adversarial samples and present an alternative detection method that rejects cheap adversarial samples at minimal computational cost.
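The talk does not specify which compression methods are covered, but magnitude-based weight pruning is one common technique for reducing a network's size. The sketch below is illustrative only and not taken from the talk; the function name and sparsity level are assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights.

    A minimal sketch of one common compression technique
    (magnitude pruning); not necessarily the method from the talk.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

W = np.random.randn(128, 128)
W_pruned = magnitude_prune(W, sparsity=0.9)
print(float(np.mean(W_pruned == 0)))  # roughly 0.9
```

Pruned weight matrices can then be stored in sparse formats, which is where the reduction in model size and compute comes from.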
This talk is part of the NLIP Seminar Series.