Learning non-DAGs by learning DAGs
- đ¤ Speaker: James Cussens (University of Bristol)
- đ Date & Time: Friday 06 March 2026, 09:15 - 10:00
- đ Venue: Seminar Room 1, Newton Institute
Abstract
Although learning directed graphical (DAG) models (aka “Bayesian networks”) from data is known to be an NP-hard problem, even with no latent or selection variables, it remains substantially easier than learning many other model classes (e.g. DAGs with latent and selection variables). However, if our ultimate goal is data-driven suggestion of plausible causal models then restricting to ‘vanilla’ DAG models may lead us to miss good causal models. One option, explored in this talk, is to learn a set of vanilla DAG models which are well supported by the data, and look for models from a more general class which are ‘near’ to members of this set. Studený’s imset representation of conditional independence models will be used to frame this investigation. Imset representation does not make difficult problems of model evaluation and model search go away, but provides a helpful uniform representation of models.
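As background to the abstract: Studený's standard imset of a DAG G over node set N is the integer-valued function on subsets of N given by u_G = δ_N − δ_∅ + Σ_i (δ_{pa(i)} − δ_{pa(i) ∪ {i}}), and two DAGs are Markov equivalent exactly when their standard imsets coincide. The sketch below (my illustration, not code from the talk) computes this formula directly:

```python
def standard_imset(nodes, parents):
    """Standard imset of a DAG (Studeny):
    u_G = d_N - d_{} + sum_i (d_{pa(i)} - d_{pa(i) u {i}}),
    stored as a dict from frozenset to nonzero integer coefficient."""
    u = {}

    def bump(subset, value):
        key = frozenset(subset)
        u[key] = u.get(key, 0) + value
        if u[key] == 0:
            del u[key]  # keep only nonzero coefficients

    bump(nodes, +1)   # delta_N
    bump((), -1)      # -delta_emptyset
    for i in nodes:
        pa = parents.get(i, ())
        bump(pa, +1)              # +delta_{pa(i)}
        bump(set(pa) | {i}, -1)   # -delta_{pa(i) union {i}}
    return u

# Markov-equivalent chains a->b->c and a<-b<-c share one imset;
# the collider a->b<-c does not.
nodes = {'a', 'b', 'c'}
chain1 = standard_imset(nodes, {'b': {'a'}, 'c': {'b'}})
chain2 = standard_imset(nodes, {'a': {'b'}, 'b': {'c'}})
collider = standard_imset(nodes, {'b': {'a', 'c'}})
print(chain1 == chain2)    # True
print(chain1 == collider)  # False
```

Because Markov-equivalent DAGs collapse to a single imset, the imset acts as the "uniform representation of models" mentioned in the abstract, with more general independence models living in the same lattice of set functions.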
Series
This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 1, Newton Institute
Note: Ex-directory lists are not shown.
