
Analysis and Classification of Android Malware


If you have a question about this talk, please contact Alexander Vetterl.

Mobile devices and their application marketplaces drive the economy of today's mobile landscape. The Android platform alone has produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, in this talk I first introduce CopperDroid, an automatic VMI-based dynamic analysis system that reconstructs the behaviors of Android malware, developed within the Systems Security Research Lab at Royal Holloway, University of London.

The novelty of CopperDroid lies in its agnostic approach to identifying interesting OS- and high-level Android-specific behaviors, which are often expressed through complex inter-component interactions involving Android objects. CopperDroid's analysis generates detailed behavioral profiles that abstract a large stream of low-level (and often uninteresting) events into concise, high-level semantics, well suited to providing insightful behavioral traits and opening up further research directions.
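The shape of this abstraction step can be sketched as follows. The mapping table and function names below are purely illustrative assumptions; CopperDroid's actual reconstruction is far richer, parsing Binder transactions and their marshalled Android objects rather than matching bare syscall names.

```python
from collections import Counter

# Hypothetical mapping from low-level system calls to high-level
# behaviors -- an illustrative stand-in, not CopperDroid's real logic.
BEHAVIOR_MAP = {
    ("connect",): "network access",
    ("open", "write"): "file write",
    ("sendto",): "SMS/network send",
}

def abstract_profile(syscall_trace):
    """Collapse a low-level syscall trace into counts of high-level behaviors."""
    profile = Counter()
    for call in syscall_trace:
        for pattern, behavior in BEHAVIOR_MAP.items():
            if call in pattern:
                profile[behavior] += 1
    return profile

trace = ["connect", "open", "write", "sendto", "connect"]
print(abstract_profile(trace))
# e.g. Counter({'network access': 2, 'file write': 2, 'SMS/network send': 1})
```

The point of the abstraction is that many distinct low-level traces collapse onto the same concise behavioral profile, which is what makes the profiles useful for comparison and classification.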

To this end, I then present our research into how effectively behavioral profiles at different levels of abstraction differentiate between families of Android malware. In a significant departure from traditional classification techniques, we further apply a statistical classification approach that includes samples with low behavior counts, and show how near-perfect accuracy can be achieved by considering a prediction set of the top few matches rather than a single choice. Despite the promising results, malware evolves rapidly, and it thus becomes hard, if not impossible, for learning models to generalize to future, previously unseen behaviors.
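The prediction-set idea can be illustrated with a minimal sketch: instead of committing to a single predicted family, report the top-k most likely families and count a sample as correctly classified if its true family appears in that set. The family names and scores below are made up for illustration; the talk's work applies a statistical classifier over behavioral profiles.

```python
def prediction_set(family_scores, k=3):
    """Return the k malware families with the highest scores."""
    ranked = sorted(family_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [family for family, _ in ranked[:k]]

def topk_accuracy(samples, k=3):
    """Fraction of samples whose true family lies in the top-k prediction set."""
    hits = sum(true in prediction_set(scores, k) for true, scores in samples)
    return hits / len(samples)

# Illustrative (true_family, classifier_scores) pairs.
samples = [
    ("DroidKungFu", {"DroidKungFu": 0.4, "BaseBridge": 0.5, "Geinimi": 0.1}),
    ("Geinimi",     {"DroidKungFu": 0.2, "BaseBridge": 0.1, "Geinimi": 0.7}),
]
print(topk_accuracy(samples, k=1))  # 0.5 -- single choice misses one sample
print(topk_accuracy(samples, k=2))  # 1.0 -- the set of top two recovers it
```

Widening the prediction set trades a small amount of specificity for a large gain in accuracy, which matters most for samples whose sparse behavior counts make a single confident choice unreliable.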

I conclude my talk by introducing Transcend, a framework to identify aging classification models in vivo during deployment, well before the machine learning model's performance starts to degrade. Our approach statistically compares samples seen during deployment with those used to train the model, thereby building metrics for prediction quality. I show how Transcend identifies concept drift in two separate case studies, on Android and on Windows malware, raising a red flag before the model starts making consistently poor decisions due to out-of-date training.
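The statistical comparison at the heart of this idea can be sketched in a few lines. Everything here is an assumption-laden toy: one-dimensional "samples", a nearest-neighbour nonconformity score, and a hand-picked threshold. Transcend's actual machinery is conformal evaluation over real feature spaces; this only shows the shape of the computation, where a deployed sample that looks unlike the training data receives a low p-value and gets flagged.

```python
def nonconformity(sample, train_points):
    """Distance from a sample to its nearest training point (toy score)."""
    return min(abs(sample - p) for p in train_points)

def p_value(score, calibration_scores):
    """Fraction of calibration scores at least as extreme as this one."""
    return sum(s >= score for s in calibration_scores) / len(calibration_scores)

# Illustrative training data and held-out calibration samples.
train = [1.0, 1.2, 0.9, 1.1]
calib = [nonconformity(x, train) for x in [1.05, 0.95, 1.3]]

def drifted(sample, threshold=0.34):
    """Flag a deployment-time sample whose prediction looks unreliable."""
    return p_value(nonconformity(sample, train), calib) < threshold

print(drifted(1.0))  # False -- close to the training data
print(drifted(5.0))  # True  -- unlike anything seen in training
```

Monitoring the stream of these per-sample quality scores during deployment is what lets the framework raise a red flag while the model still appears to be working.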


Lorenzo Cavallaro is a Reader (Associate Professor) of Information Security in the School of Mathematics and Information Security at Royal Holloway, University of London. In 2014, he established, and has since led, the Systems Security Research Lab (S2Lab), whose underpinning research builds on program analysis and machine learning to address threats against the security of computing systems. Prior to joining Royal Holloway, University of London in 2012 as a Lecturer (Assistant Professor), Lorenzo held post-doctoral (UC Santa Barbara, Vrije Universiteit Amsterdam) and visiting scholar (Stony Brook University) positions; he was awarded a PhD in Computer Science by the University of Milan in 2008. He sits on the technical program committees of, and has published in, top-tier and well-known venues (e.g., ACM CCS, NDSS, IEEE TIFS, ACSAC, RAID, USENIX WOOT), and has been PI on a number of research projects primarily funded by the UK EPSRC, the EU, Royal Holloway, and McAfee. Lorenzo teaches Malicious Software (undergraduate) and Software Security (graduate), a passion he has also nurtured through participation in (e.g., DEF CON 2008-09) and co-organization of (e.g., DIMVA 2011, UCSB iCTF 2008-09, ISG Open Day 2016) CTF-like computer security exercises.

This talk is part of the Computer Laboratory Security Seminar series.



© 2006-2024, University of Cambridge.