Causal Machine Learning

If you have a question about this talk, please contact James Allingham.

The Zoom link is available upon request (it is sent out on our mailing list, eng-mlg-rcc [at] lists.cam.ac.uk). Sign up to the mailing list via lists.cam.ac.uk to receive reminders automatically.

In the first portion of the talk, we introduce fundamental notions in causal inference that serve as a foundation for causal machine learning. We discuss the Markov and faithfulness assumptions, which together establish a correspondence between the conditional independencies encoded in a causal graph and those found in observational data. We then illustrate how these principles find direct application in machine learning through two examples: half-sibling regression and invariant causal prediction.
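To make half-sibling regression concrete, below is a minimal Python sketch (the linear simulation and the variable names are illustrative assumptions, not taken from the talk). A measurement y of a signal of interest q is corrupted by systematic noise that also drives a second observed variable x; since x is independent of q, subtracting the regression estimate E[y | x] removes the shared noise component.

    # Minimal half-sibling regression sketch; the data-generating process is
    # an illustrative assumption.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 5000
    q = rng.normal(size=n)                       # signal of interest (unobserved)
    noise = rng.normal(size=n)                   # systematic noise shared by both sensors
    y = q + 2.0 * noise                          # measurement of q, corrupted by the noise
    x = 1.5 * noise + 0.1 * rng.normal(size=n)   # "half-sibling": driven by the noise, not by q

    # Since x is independent of q, E[y | x] captures only the noise component.
    model = LinearRegression().fit(x.reshape(-1, 1), y)
    q_hat = y - model.predict(x.reshape(-1, 1))  # denoised estimate of q

    print("corr(y, q)     =", round(np.corrcoef(y, q)[0, 1], 3))
    print("corr(q_hat, q) =", round(np.corrcoef(q_hat, q)[0, 1], 3))

On this toy data, the correlation between q_hat and q is substantially higher than between y and q.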

In the second portion of the talk, we discuss causal structure learning, which aims to recover the causal relations between variables from observational data, or from a mixture of observational and experimental data, with the goal of improving robustness and generalizability. We introduce three categories of causal structure learning methods and dive deep into ‘neural’ causal structure learning, which uses gradient-based optimization for scalable causal discovery.
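As a taste of the gradient-based approach, the sketch below (our own toy example) implements the key device from the NOTEARS paper listed in the references: a differentiable function h(W) = tr(exp(W ∘ W)) − d that is zero exactly when the weighted adjacency matrix W is acyclic, turning the combinatorial DAG constraint into a smooth equality constraint that standard continuous optimizers can handle.

    # The NOTEARS acyclicity measure (Zheng et al., 2018); the example
    # matrices are illustrative assumptions.
    import numpy as np
    from scipy.linalg import expm

    def notears_h(W: np.ndarray) -> float:
        """h(W) = tr(exp(W * W)) - d, zero iff the weighted graph W is acyclic."""
        d = W.shape[0]
        return float(np.trace(expm(W * W)) - d)  # W * W is the elementwise square

    W_dag = np.array([[0.0, 1.2,  0.0],
                      [0.0, 0.0, -0.7],
                      [0.0, 0.0,  0.0]])  # strictly upper triangular, hence acyclic
    W_cyc = W_dag.copy()
    W_cyc[2, 0] = 0.5                     # closes the cycle 0 -> 1 -> 2 -> 0

    print(notears_h(W_dag))  # ~0.0
    print(notears_h(W_cyc))  # > 0, and differentiable in the entries of W

In the full method, h(W) = 0 is enforced with an augmented Lagrangian while a score such as the least-squares reconstruction error of the data is minimized over W.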

In real-world problems with structured data, the symbolic variables connected in a causal graph are not provided a priori. In the third portion of the talk, we introduce causal representation learning, which aims to learn from structured data the symbolic variables required by causal inference and causal discovery; in this sense, machine learning moves beyond symbolic AI. We discuss why unsupervised causal representation learning is challenging and present a recently proposed causal representation learning method based on identifiable deep generative models.
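To give a flavour of what an identifiable deep generative model looks like in code, here is a heavily simplified PyTorch sketch in the spirit of iVAE-style models (our own illustration, not the specific method presented in the talk): the latent prior p(z | u) is conditioned on an observed auxiliary variable u, such as an environment label, and it is this conditioning that yields identifiability of the latent representation up to simple indeterminacies.

    # iVAE-style VAE with a conditional prior p(z | u); architecture sizes,
    # toy data, and training settings are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ConditionalPriorVAE(nn.Module):
        def __init__(self, x_dim=10, z_dim=2, u_dim=3, hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, 2 * z_dim))
            self.decoder = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, x_dim))
            # Conditional prior network: maps u to the mean and log-variance of p(z | u).
            self.prior = nn.Sequential(nn.Linear(u_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 2 * z_dim))

        def elbo(self, x, u):
            q_mu, q_logvar = self.encoder(x).chunk(2, dim=-1)
            z = q_mu + torch.randn_like(q_mu) * (0.5 * q_logvar).exp()  # reparameterisation
            recon = -((x - self.decoder(z)) ** 2).sum(-1)  # Gaussian log-likelihood up to a constant
            p_mu, p_logvar = self.prior(u).chunk(2, dim=-1)
            # Closed-form KL(q(z | x) || p(z | u)) between diagonal Gaussians.
            kl = 0.5 * (p_logvar - q_logvar
                        + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                        - 1.0).sum(-1)
            return (recon - kl).mean()

    # Toy usage: x has an environment-dependent shift, u is the environment label.
    torch.manual_seed(0)
    u = torch.nn.functional.one_hot(torch.randint(0, 3, (256,)), 3).float()
    x = torch.randn(256, 10) + 2.0 * u @ torch.randn(3, 10)
    model = ConditionalPriorVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = -model.elbo(x, u)
        loss.backward()
        opt.step()
    print("final negative ELBO:", loss.item())

Without the auxiliary variable, the latents of such a model are in general not identifiable, which is one way to read the negative result of Locatello et al. (2019) in the reading list below.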

References (recommended reading, not required):

Peters, J., Bühlmann, P., & Meinshausen, N. (2015). Causal inference using invariant prediction: Identification and confidence intervals. arXiv. https://doi.org/10.48550/arXiv.1501.01332

Zheng, X., Aragam, B., Ravikumar, P., & Xing, E. P. (2018). DAGs with NO TEARS: Continuous optimization for structure learning. In Advances in Neural Information Processing Systems 31.

Vowels, M. J., Camgoz, N. C., & Bowden, R. (2022). D’ya like DAGs? A survey on structure learning and causal discovery. ACM Computing Surveys, 55(4), 1-36.

Schölkopf, B., Locatello, F., Bauer, S., Ke, N. R., Kalchbrenner, N., Goyal, A., & Bengio, Y. (2021). Toward causal representation learning. Proceedings of the IEEE, 109(5), 612-634.

Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., & Bachem, O. (2019). Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning (pp. 4114-4124). PMLR.

Lu, C., Wu, Y., Hernández-Lobato, J. M., & Schölkopf, B. (2022). Invariant causal representation learning for out-of-distribution generalization. In International Conference on Learning Representations.

This talk is part of the Machine Learning Reading Group @ CUED series.
