
The model is simple until proven otherwise.


If you have a question about this talk, please contact Dr Sobia Hamid.

Further info: Registration from 7:00pm; the talk starts at 7:30pm.

Machine learning and AI have enjoyed an unprecedented rise in popularity. In academia as well as industry, they are often viewed as the future solution to all problems. However, systems have become so complex that it is no longer humanly comprehensible how an algorithm arrives at an answer; see, for example, "AAAS: Machine learning 'causing science crisis'". In some cases, companies refuse to disclose the proprietary algorithm. This has led to controversies such as the COMPAS algorithm, which produces scores for the likelihood of re-offending. The organisation ProPublica claims that the software exhibits racial bias, a claim the company disputes ( 989/images/ProPublica_Commentary_Final_070616.pdf). Another example is Amazon's gender-biased recruitment tool.

Partly to blame is the data used to train algorithms: if the data is biased, then the algorithm will be too. More seriously, the algorithm may exacerbate the bias, since algorithms distil the essential distinguishing features; if these turn out to be highly correlated with black vs. white or male vs. female, we have a problem. While humans can also be biased, they are at least capable of realising that their world view is too simplistic. The talk presents work in progress on increasing the complexity of a model when the data suggests more features are necessary to model it. This approach helps to understand the "black magic" inside the "black box".
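The speaker's actual method is not described in this abstract. As a generic, minimal sketch of the underlying idea of letting the data decide how complex the model should be, one can compare candidate models of increasing complexity on held-out data and keep the simplest one whose error is close to the best (the data below and the 10% tolerance are illustrative assumptions, not details from the talk):

```python
# Sketch: start with a simple model and add complexity only when
# held-out data says more is needed. Here "complexity" is polynomial
# degree; the talk's notion of "more features" is not specified.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a cubic signal with a little noise.
x = rng.uniform(-1, 1, 200)
y = 0.5 * x - 2.0 * x**3 + rng.normal(0, 0.05, x.size)

# Train/validation split.
x_tr, x_va = x[:150], x[150:]
y_tr, y_va = y[:150], y[150:]

def val_error(degree):
    """Fit a polynomial of the given degree on the training split
    and return its mean squared error on the validation split."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    pred = np.polyval(coeffs, x_va)
    return float(np.mean((pred - y_va) ** 2))

# Validation error for each candidate complexity level.
errors = {d: val_error(d) for d in range(1, 7)}
best = min(errors, key=errors.get)

# "Simple until proven otherwise": keep the lowest degree whose
# error is within 10% of the best one found.
chosen = min(d for d in errors if errors[d] <= 1.1 * errors[best])

print(chosen, errors[chosen])
```

With this synthetic cubic data the procedure settles on a cubic model: degrees 1 and 2 fit poorly, so the data itself "proves otherwise" and forces the extra terms, while degrees above 3 bring no clear gain and are rejected as unnecessary complexity.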

This talk is part of the Data Insights Cambridge series.




© 2006-2023, University of Cambridge.