
On the Effectiveness of Generating Adversarial Examples for Evading Blackbox Malware Classifiers


If you have a question about this talk, please contact Jack Hughes.

Recent advances in adversarial attacks have shown that machine learning classifiers based on static analysis are vulnerable to adversarial examples. However, real-world antivirus systems do not rely on static classifiers alone, so many of these static evasions are detected by dynamic analysis once the malware runs. The real question is to what extent these adversarial attacks actually harm real users. In this paper, we propose a systematic framework to create and evaluate realistic adversarial malware that evades real-world systems. We propose new adversarial attacks against real-world antivirus systems based on code randomization and binary manipulation, and use our framework to run the attacks on 1,000 malware samples against 4 commercial antivirus products and 1 open-source classifier. We demonstrate that the static detectors of real-world antivirus products can be evaded by changing only 1 byte in some malware samples, and that many of the adversarial attacks transfer between different antivirus products. We also tested the efficacy of the complete (i.e. static + dynamic) classifiers in protecting users. While most commercial antivirus products use their dynamic engines to protect the user's device when the static classifiers are evaded, we are the first to demonstrate that, for one commercial antivirus product, static evasions can also evade the offline dynamic detectors and infect users' machines. We thus uncover a new attack surface for adversarial examples that can cause harm to real users.
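To build intuition for why a single-byte change can defeat a static detector, consider a toy sketch (not the paper's actual attack, which uses code randomization and binary manipulation): any detector keyed on an exact fingerprint of the file, such as a cryptographic hash, misses a variant that differs in one byte, even though the program's behavior can be unchanged.

```python
import hashlib

# Illustrative sketch only: shows why exact-hash (signature-style) matching
# fails against a one-byte mutation. The byte offsets and sample bytes here
# are hypothetical, not from the paper.

def flip_one_byte(data: bytes, offset: int) -> bytes:
    """Return a copy of `data` with the byte at `offset` XORed with 0x01."""
    mutated = bytearray(data)
    mutated[offset] ^= 0x01
    return bytes(mutated)

sample = b"MZ" + b"\x00" * 62 + b"payload"   # stand-in for a PE binary
variant = flip_one_byte(sample, 40)          # mutate a byte in an unused field

# The two files now have completely different SHA-256 digests,
# so a detector that matches on the exact hash misses the variant.
print(hashlib.sha256(sample).hexdigest())
print(hashlib.sha256(variant).hexdigest())
```

Real static classifiers use richer features than a whole-file hash, but the talk's point is that even those features can be brittle enough for tiny, semantics-preserving edits to flip the verdict.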


Sadia Afroz is a research scientist at the International Computer Science Institute (ICSI) and Avast Software. Her work focuses on anti-censorship, anonymity, and adversarial learning. Her work on adversarial authorship attribution received the 2013 Privacy Enhancing Technology (PET) award, the best student paper award at the 2012 Privacy Enhancing Technology Symposium (PETS), and the 2014 ACM SIGSAC dissertation award (runner-up). More about her research can be found on her website.

This talk is part of the Computer Laboratory Security Seminar series.



