
Achieving Verified Robustness to Adversarial NLP Inputs


If you have a question about this talk, please contact Guy Aglionby.

Note later start

Neural networks are part of many contemporary NLP systems, yet their empirical success comes at the price of vulnerability to adversarial attacks, e.g. synonym replacements or adversarial text deletion. While much previous work uses adversarial training or data augmentation to partially mitigate such brittleness, these methods are unlikely to actually find worst-case inputs, because discrete text perturbations induce a combinatorially large search space. In this talk, I will introduce an approach that tackles the problem of adversarial robustness from the opposite direction: we formally verify a system's robustness against pre-defined classes of adversarial attacks. To this end we adopt Interval Bound Propagation and bound the effect that input changes can have on model predictions, thus establishing bounds on worst-case adversarial attacks. We furthermore modify the conventional log-likelihood training objective to train models that can be efficiently verified in constant time; naive verification would otherwise incur exponential search complexity. The resulting models have much improved verified accuracy, and come with an efficiently computable formal guarantee on worst-case adversarial attacks.
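To make the mechanism concrete, here is a minimal NumPy sketch of Interval Bound Propagation through a toy feed-forward classifier. All names and the perturbation model (an axis-aligned box of radius `eps` around a continuous input) are illustrative assumptions for exposition; the talk's setting instead bounds discrete perturbations such as synonym substitutions in embedding space.

```python
import numpy as np

def ibp_linear(l, u, W, b):
    """Propagate an interval [l, u] through the affine map x -> W @ x + b."""
    mu, r = (l + u) / 2.0, (u - l) / 2.0   # centre and radius of the box
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r                  # radius grows by |W| in each output dim
    return mu_out - r_out, mu_out + r_out

def ibp_relu(l, u):
    """ReLU is elementwise monotone, so bounds map through directly."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def verified_margin(l_logits, u_logits, true_class):
    """Worst-case margin: lower bound of the true logit minus the largest
    upper bound of any other logit. Positive => prediction verified robust."""
    others = np.delete(u_logits, true_class)
    return l_logits[true_class] - others.max()

# Hypothetical two-layer network and a perturbation box around one input.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x, eps = rng.normal(size=4), 0.1

l, u = ibp_linear(x - eps, x + eps, W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_linear(l, u, W2, b2)
margin = verified_margin(l, u, true_class=0)  # positive => verified for this box
```

The key property is soundness: every input inside the box yields logits inside the final `[l, u]`, so a positive worst-case margin certifies that no perturbation in the box flips the prediction. The cost is one extra forward pass over bounds, which is the constant-time verification the abstract refers to, as opposed to enumerating the exponentially many discrete perturbations.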

This talk is part of the NLIP Seminar Series.




© 2006-2024, University of Cambridge.