Learning Directed Acyclic Graphs (DAGs) With Continuous Optimization
If you have a question about this talk, please contact Isaac Reid. The Zoom link is available upon request (it is sent out on our mailing list, eng-mlg-rcc [at] lists.cam.ac.uk). Sign up to our mailing list via lists.cam.ac.uk for easier reminders.

Estimating the structure of directed acyclic graphs (DAGs) is a challenging problem, since the search space of DAGs is combinatorial and scales super-exponentially with the number of nodes. Traditional approaches rely on various local heuristics to enforce the acyclicity constraint. Recent advances have introduced a fundamentally different strategy that formulates DAG learning as a purely continuous optimisation problem over real matrices, achieved by capitalising on differentiable acyclicity characterisation functions of DAGs. By eliminating the need for combinatorial constraints, this strategy admits efficient solutions through standard numerical algorithms. Notably, it exhibits several advantages, including the detection of large cycles, improved gradient behaviour, and faster runtime performance.

This talk will introduce a few representative acyclicity characterisations: the trace of the matrix exponential proposed in the NOTEARS paper (based on the idea that powers of an adjacency matrix encode information about walks and cycles), and the log-determinant (log-det) function introduced in the DAGMA paper (which leverages the nilpotency of DAG adjacency matrices and properties of M-matrices). These works open possibilities for more effective and efficient DAG learning. A short illustrative sketch of both functions appears after the reading suggestions.

Reading suggestions:

Zheng, Xun, Bryon Aragam, Pradeep K. Ravikumar, and Eric P. Xing. "DAGs with NO TEARS: Continuous optimization for structure learning." Advances in Neural Information Processing Systems 31 (2018).

Bello, Kevin, Bryon Aragam, and Pradeep Ravikumar. "DAGMA: Learning DAGs via M-matrices and a log-determinant acyclicity characterization." Advances in Neural Information Processing Systems 35 (2022): 8226-8239.

This talk is part of the Machine Learning Reading Group @ CUED series.
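As a rough illustration of the two characterisations above, here is a minimal NumPy/SciPy sketch. It is not the papers' reference implementation; the function names and example matrices are illustrative choices. Both functions return 0 exactly when W is the weighted adjacency matrix of a DAG, and a strictly positive value otherwise.

```python
import numpy as np
from scipy.linalg import expm

def h_notears(W: np.ndarray) -> float:
    """NOTEARS: h(W) = tr(exp(W ∘ W)) - d.

    The k-th power of W ∘ W records weighted closed walks of length k,
    so the trace of the matrix exponential accumulates cycles of every
    length; it equals d exactly when W is acyclic.
    """
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

def h_dagma(W: np.ndarray, s: float = 1.0) -> float:
    """DAGMA: h_s(W) = -log det(sI - W ∘ W) + d log s, for s > 0.

    Defined where sI - W ∘ W is an M-matrix. A DAG's adjacency matrix
    is nilpotent, so det(sI - W ∘ W) = s^d and h_s(W) = 0.
    """
    d = W.shape[0]
    sign, logabsdet = np.linalg.slogdet(s * np.eye(d) - W * W)
    assert sign > 0, "sI - W ∘ W must lie in the M-matrix domain"
    return float(-logabsdet + d * np.log(s))

# A 3-node chain (a DAG) versus a 2-node cycle (weights are made up):
W_dag = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0]])
W_cyc = np.array([[0.0, 0.5],
                  [0.5, 0.0]])
print(h_notears(W_dag), h_dagma(W_dag))  # both 0 (up to float error)
print(h_notears(W_cyc), h_dagma(W_cyc))  # both strictly positive
```

In a structure-learning loop, one of these functions would serve as a differentiable penalty or constraint on the candidate weight matrix, so that a standard continuous optimiser is driven towards acyclic solutions.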