
Targeted Disclosure to Support Auditing and Accountability for Automated Decision-making


If you have a question about this talk, please contact Adrian Weller.

This talk has been cancelled.

Audits of automated decision-making systems promise to identify problems with the correctness, fairness, and accountability of those systems. However, audits alone cannot provide sufficient information to verify properties of interest: an audit is a form of black-box testing, the least powerful testing scenario available for computer systems, and even internal audits may lack the information needed to conclude whether a statement about a system is true. All auditing therefore relies on at least some disclosure about the internals of the system under examination, and the evidence required to establish properties of interest for a particular system depends strongly on the context of that system and its deployment.

Such disclosures need not rise to the level of full transparency; they need only constitute evidence that a system satisfies the properties of interest. That evidence must be robust, convincing, and verifiable. It must also tolerate underspecification of the task for which the system was designed, and it must avoid lending credence to incorrect solutions or low-confidence guesses. The purpose of evidence is to establish properties of interest in the context of a particular system and its deployment; explanations, interpretations, or justifications alone are not evidence of correctness, robustness, or any other property, and should not be treated as such.

This talk describes the evidence and disclosures necessary for effective auditing and outlines practical steps and a research agenda in targeted, partial disclosure to facilitate accountability. It focuses on the requirements for understanding and governing software systems, especially machine-learning systems, and questions the value of human-interpretable machine-learning systems in fulfilling those requirements. Finally, it outlines open research questions in building human-governable, data-driven systems.

This talk is part of the Machine Learning @ CUED series.



© 2006-2024, University of Cambridge.