University of Cambridge > Talks.cam > Machine Learning @ CUED

Targeted Disclosure to Support Auditing and Accountability for Automated Decision-making
If you have a question about this talk, please contact Adrian Weller.

Audits of automated decision-making systems promise to identify problems with the correctness, fairness, and accountability of those systems. However, audits alone cannot provide sufficient information to verify properties of interest. Audits are a type of black-box testing, the least powerful testing scenario available for computer systems; even internal audits may not have sufficient information to conclude whether or not a statement about a computer system is true. All auditing relies on at least some disclosure about the internals of the system under examination.

The evidence necessary to establish properties of interest for any particular system will depend strongly on the context of that system and its deployment. Disclosures need not rise to the level of full transparency, but must only constitute evidence that a system satisfies properties of interest. Further, such evidence must be robust, convincing, and verifiable. It must also tolerate underspecification of the task for which a system was designed and avoid lending credence to incorrect solutions or low-confidence guesses. The purpose of evidence is to establish properties of interest in the context of a particular system and its deployment; explanations, interpretations, or justifications alone are not evidence of correctness, robustness, or any other property, and should not be treated as such.

This talk describes necessary evidence and disclosures for effective auditing and outlines practical steps and a research agenda in targeted, partial disclosure to facilitate accountability. It focuses on the requirements for understanding and governing software systems, especially machine-learning systems. Specifically, it questions the value of human-interpretable machine learning systems in fulfilling these requirements. Finally, it outlines open research questions in the area of building human-governable data-driven systems.
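A minimal sketch of the abstract's black-box limitation. All names here (the two toy models, the field names, the audit set) are invented for illustration, not taken from the talk: two scoring functions agree on every input an auditor happens to probe, so the audit cannot distinguish them, yet they differ on an unprobed input.

```python
# Hypothetical illustration: a finite black-box audit cannot
# distinguish a compliant scoring model from one that misbehaves
# on inputs the audit never samples.

def compliant_model(application):
    # Scores applications by income alone.
    return application["income"] > 50_000

def noncompliant_model(application):
    # Identical behaviour, except it secretly rejects one postcode.
    if application["postcode"] == "EVIL1":
        return False
    return application["income"] > 50_000

# A black-box audit: probe both models on a finite test set.
audit_set = [
    {"income": 60_000, "postcode": "CB2"},
    {"income": 40_000, "postcode": "CB3"},
    {"income": 80_000, "postcode": "OX1"},
]

audit_passes = all(
    compliant_model(a) == noncompliant_model(a) for a in audit_set
)
print(audit_passes)  # True: the audit cannot tell the models apart

# Yet the models differ on an input the audit never probed, so any
# property "verified" for one model need not hold for the other.
hidden_input = {"income": 90_000, "postcode": "EVIL1"}
print(compliant_model(hidden_input) == noncompliant_model(hidden_input))  # False
```

This is why the abstract argues that audits rely on at least some disclosure about system internals: without it, no finite set of probes can establish that a property holds on all inputs.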
This talk is part of the Machine Learning @ CUED series.