Explaining Non-Linear Classifier Decisions with application to Deep Learning
If you have a question about this talk, please contact Zoubin Ghahramani.
Understanding and interpreting the classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods solve a plethora of tasks very successfully, in most cases they have the disadvantage of acting as a black box, providing no information about what led them to a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of non-linear classifiers. We introduce a methodology for visualizing the contributions of single pixels to the predictions of kernel-based classifiers over Bag of Words features and of deep neural networks. Applications in computer vision and beyond are provided.
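The abstract does not spell out the propagation rules used in the talk. Purely as a rough illustration of what a pixel-wise decomposition can look like, the sketch below applies an epsilon-stabilised relevance rule to a small fully connected ReLU network; the function and parameter names (lrp_epsilon, weights, biases, eps) are placeholders and are not taken from the talk.

```python
import numpy as np

def lrp_epsilon(weights, biases, x, target, eps=1e-6):
    """Redistribute the score of one output class back onto the input pixels
    of a ReLU multilayer perceptron (illustrative sketch, not the talk's
    exact formulation)."""
    # Forward pass: store the activations of every layer.
    activations = [x]
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, W @ x + b)      # ReLU layer
        activations.append(x)

    # Start from the score of the class of interest only.
    relevance = np.zeros_like(activations[-1])
    relevance[target] = activations[-1][target]

    # Backward pass: redistribute relevance layer by layer.
    for W, b, a in zip(reversed(weights), reversed(biases),
                       reversed(activations[:-1])):
        z = W @ a + b                            # upper-layer pre-activations
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabiliser avoids division by zero
        s = relevance / z                        # ratio of relevance to pre-activation
        relevance = a * (W.T @ s)                # share relevance among lower-layer inputs

    return relevance                             # one relevance value per input pixel
```

For example, with `weights = [W1, W2]` and `biases = [b1, b2]` of a trained two-layer network and a flattened image `x`, `lrp_epsilon(weights, biases, x, target=c)` returns a vector that can be reshaped into a heatmap showing which pixels contributed to class `c`.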
This talk is part of the Machine Learning @ CUED series.