Relevance Forcing: More Interpretable Neural Networks through Prior Knowledge
If you have a question about this talk, please contact Rachel Furner.

Neural networks reach high accuracy across many different classification tasks. However, these 'black-box' models suffer from one drawback: it is generally difficult to assess how the network reached its classification decision. Nevertheless, through different relevance measures it is possible to determine which parts of a given input contribute to the resulting output. By imposing certain penalties on this relevance, through which we can encode prior information about the problem domain, we can train models that take this information into account. If we view these relevance measures as discretized dynamical systems, we may gain some insight into the reliability of their explanations.

This talk is part of the CCIMI Seminars series.
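To make the relevance-penalty idea in the abstract concrete, below is a minimal sketch of one common way such a penalty can be implemented: using input gradients as the relevance measure and penalising relevance that falls on regions a prior-knowledge mask marks as irrelevant. The names (`irrelevant_mask`, `lambda_rel`) and the choice of input gradients are illustrative assumptions, not necessarily the speaker's exact formulation.

```python
# Hedged sketch: cross-entropy loss plus a penalty on input-gradient relevance
# in regions that prior knowledge says should not influence the decision.
import torch
import torch.nn as nn


def relevance_penalised_loss(model, x, y, irrelevant_mask, lambda_rel=1.0):
    """Classification loss plus a relevance penalty.

    irrelevant_mask: tensor shaped like x, 1 where the prior says the input
    should NOT matter for the decision, 0 elsewhere.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = nn.functional.cross_entropy(logits, y)

    # Input-gradient relevance: how strongly each input element influences
    # the summed log-probabilities (one simple, differentiable choice).
    log_probs = nn.functional.log_softmax(logits, dim=1)
    grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)

    # Penalise squared relevance placed where the prior says it should be zero.
    penalty = (irrelevant_mask * grads).pow(2).sum() / x.shape[0]
    return ce + lambda_rel * penalty


# Minimal usage example with a toy model and random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
mask = torch.zeros_like(x)
mask[..., :, :5] = 1.0  # e.g. the left border is assumed irrelevant
loss = relevance_penalised_loss(model, x, y, mask)
loss.backward()
```

Because the penalty is differentiable, it can simply be added to the training objective, so the network is steered toward basing its decisions on the regions the prior deems relevant.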