
PETs, POTs, and pitfalls: rethinking the protection of users against machine learning


If you have a question about this talk, please contact ss2138.

Abstract: In a machine-learning dominated world, users’ digital interactions are monitored and scrutinized in order to enhance services. These enhancements, however, do not always have users’ benefit and preferences as their primary goal. Machine learning can, for instance, infer users’ demographics and interests to fuel targeted advertising regardless of people’s privacy rights, or learn bank customers’ behavioural patterns to maximize the profitability of loans with disregard for discrimination. In other words, machine learning models may be adversarial in their goals and operation. Therefore, adversarial machine learning techniques that are usually considered undesirable can be turned into robust protection mechanisms for users. In this talk we discuss two protective uses of adversarial machine learning, and the challenges for protection that arise from the biases implicit in many machine learning models.
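To make the idea of "protective" adversarial machine learning concrete, the sketch below shows the simplest textbook instance: an FGSM-style evasion perturbation applied by the user to flip a linear profiling model's decision. This is an illustrative assumption, not the speaker's method; all weights, features, and function names are hypothetical.

```python
# Hypothetical sketch: a user perturbs their feature vector to evade
# a linear profiling classifier (FGSM-style, i.e. move each feature
# against the sign of the model's gradient, which for a linear model
# is simply its weight vector). All numbers here are made up.

def predict(w, x, b=0.0):
    """Linear score; positive => class 1 (e.g. 'profile as interested')."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(w, x, eps):
    """Shift each feature by eps against the gradient's sign."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.6, -0.4, 0.8]   # hypothetical profiling weights
x = [1.0, 0.5, 1.0]    # a user's feature vector
x_adv = evade(w, x, eps=1.0)

print(predict(w, x) > 0)      # original score 1.2 -> classified positive
print(predict(w, x_adv) > 0)  # perturbed score -0.6 -> decision flipped
```

The same mechanics that make adversarial examples a threat to deployed models here serve the user: a small, bounded perturbation prevents an unwanted inference.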

Bio: Carmela Troncoso is an Assistant Professor at EPFL, where she leads the Security and Privacy Engineering (SPRING) Laboratory. Her research focuses on privacy protection, in particular on developing systematic means to build privacy-preserving systems and to evaluate these systems’ information leakage.

This talk is part of the Centre for Mobile, Wearable Systems and Augmented Intelligence Seminar Series.


© 2006-2020 Talks.cam, University of Cambridge.