
The model is simple until proven otherwise


If you have a question about this talk, please contact Chris Richardson.

Machine learning and AI have enjoyed an unprecedented rise in popularity. In academia as well as industry, they are often viewed as the future solution to all problems. However, systems have become so complex that it is no longer humanly comprehensible how an algorithm arrives at an answer; see, for example, “AAAS: Machine learning ‘causing science crisis’” (https://www.bbc.co.uk/news/science-environment-47267081).

In some cases, companies refuse to disclose their proprietary algorithms. This has led to controversies such as the COMPAS algorithm, which scores the likelihood that a defendant will re-offend. The organisation ProPublica claims that the software exhibits racial bias (https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm), which the company disputes (http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf).

Another example is Amazon’s recruitment tool, which exhibited gender bias (https://www.bbc.co.uk/news/technology-45809919). The data used to train such algorithms is partly to blame: if the data is biased, then the algorithm will be too. Worse, it may exacerbate the bias, since algorithms distil the essential distinguishing features. If these features are highly correlated with attributes such as race or gender, we have a problem.

While humans can have biases too, they are capable of realising that their world view is too simplistic. The talk presents work in progress on increasing the complexity of a model when the data suggest that more features are necessary to model them. This approach helps to understand the “black magic” inside the “black box”.
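The abstract does not describe the speaker's method, but the idea of starting simple and adding features only when the data demand it can be sketched with plain forward selection: grow a linear model one feature at a time, and stop as soon as no candidate feature improves held-out error. Everything below (the synthetic data, the validation split, the stopping rule) is an invented illustration, not the approach presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target truly depends on features 0 and 2 only;
# the remaining features are pure noise.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=n)

# Simple train/validation split.
X_tr, X_val = X[:150], X[150:]
y_tr, y_val = y[:150], y[150:]

def val_mse(features):
    """Fit least squares on the chosen features, score on held-out data."""
    if not features:
        # Simplest possible model: predict the training mean.
        pred = np.full_like(y_val, y_tr.mean())
        return float(np.mean((y_val - pred) ** 2))
    coef, *_ = np.linalg.lstsq(X_tr[:, features], y_tr, rcond=None)
    pred = X_val[:, features] @ coef
    return float(np.mean((y_val - pred) ** 2))

# Forward selection: the model stays simple until the data prove otherwise.
selected, best = [], val_mse([])
while True:
    candidates = [f for f in range(d) if f not in selected]
    if not candidates:
        break
    scores = {f: val_mse(selected + [f]) for f in candidates}
    f_best = min(scores, key=scores.get)
    if scores[f_best] >= best:  # no candidate improves held-out error: stop
        break
    selected.append(f_best)
    best = scores[f_best]

print(sorted(selected))
```

Because the stopping rule is tied to held-out error rather than training fit, the model only grows when an extra feature genuinely improves generalisation; with the synthetic data above, the two informative features are recovered while the noise features are (usually) rejected.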

This talk is part of the RSE Seminars series.


© 2006-2019 Talks.cam, University of Cambridge.